
Posts Tagged ‘Goals’

Innovation with a Purpose

March 7th, 2009 · jonmott

George Siemens recently blogged about a model for evaluating & implementing technology. He dubs it the “IRIS” model, flowing from Innovation to Research to Implementation to Systematization. His point is that we need to think about technology differently at each stage of the process:

When we encounter a new tool or a new concept, we are experiencing technology at the innovation level. We’re focused on “what is possible”, not what can be implemented. We’re more concerned about how a new idea/tool/process differs from existing practices. After we’ve had the joy of a shift in thinking and perspective about what is possible, we begin to research and implement. This is a cyclical process. Attention is paid to “how does it work” and “what is the real world impact”. At this level, our goal is to see how our new (innovative) views align with current reality. If a huge disconnect exists, reform mode kicks in and we attempt to alter the system. Most often, that’s a long process. I’m not focused on that option here. I’m making the assumption that many tools can be implemented within the existing system. Finally, once we’ve experimented with options and we have a sense of what works in our organization, we begin the process of systematizing the innovation

I think this is a great model that can guide our level of rigor and attention to detail at each phase of technology emergence. As I noted in a comment to his post, I think an additional dimension to the model ought to be considered at the innovation phase. Not only should we ask “What is possible?” but also “Why would we want to do what this new technology makes possible?” Given the enormous amount of time, resources and political capital required to move through the next three phases so elegantly summarized in George’s model, I’m increasingly inclined to spend more time on this question when evaluating new technology.

While I’m sensitive to George’s concern that too much focus on the “why” question might throw a wet blanket on creativity and discovery during the innovation stage, I’m still inclined to ask the question as early as possible. Unless you have a budget set aside purely for research and development (something that seems increasingly unlikely in the current economic situation), it seems prudent to justify even the most innovative technology investigations in terms of the value they might add to teaching & learning. So I’m inclined to ask the “why” question early and often. How would the world be a better place with new technology x, y, or z? Maybe my penchant for asking this question is driven by where I sit at my institution (with responsibility for broad, campus-wide technology implementations). But in the end, I think academic technologists of all stripes have to strike the right balance between wide-open, blue-sky innovation and exploration and the more mundane work of aligning resources with priorities and demonstrating the value of our technology investments.

I guess I’m in favor of innovation with a purpose. Is that too restrictive?

Demonstrating a Significant Difference

October 31st, 2008 · jonmott

Larry Seawright and I made our presentation this morning at Educause 2008. Our slides are available here.

Together with Stephanie Allen and Whitney Ransom McGowan, Larry and I have been working on an alternative approach to evaluating the effectiveness of teaching & learning technology. Traditionally, evaluation takes the form of comparative-media studies in which one group of students learns via standard methods (control) and others learn with new, experimental methods (test). Over and over (and over) again, these kinds of studies have found differences that are not statistically significant.

The so-called “NSD” (no significant difference) problem is the bane of teaching & learning evaluators around the world. A growing group of influential scholars has rejected the comparative-media studies approach in favor of design-based research. Borrowing elements of this approach, we have implemented a goal-driven model of instructional design, technology integration, and evaluation at BYU.

Our approach to evaluating the impact of teaching & learning technology (and getting beyond the NSD problem) begins with the end in mind. The first and essential step in this approach is to begin any teaching & learning technology project with a carefully articulated goal. Without such a goal, there is no clear, shared understanding of what “success” looks like. Hence, evaluation is virtually impossible: if you don’t know what success looks like, i.e., what should be better as a result of a project, what should you evaluate?

Measuring the impact of teaching & learning technology depends on a clear articulation of learning goals, strategies for accomplishing those goals and tactics for implementing those strategies. The goals can then be re-formulated as teaching & learning “problems” and strategies and tactics become “solutions.” Evaluation is then simply the process of measuring the results of implemented solutions, as illustrated below:

[Figure goals1.jpg: goals reframed as problems, strategies and tactics as solutions, and evaluation as measuring the results.]

To facilitate the consistent articulation of teaching & learning goals, we’ve adopted Sloan-C’s Five Pillars: (1) Student Learning Outcomes, (2) Cost Effectiveness (Scalability), (3) Access, (4) Student Satisfaction, and (5) Faculty Satisfaction. By choosing to explicitly focus on one or more of these goals in every teaching and learning project, we identify what success should look like and, at the same time, establish an evaluation plan for each project.
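
To make the idea concrete, here is a minimal sketch (in Python, with entirely hypothetical names and targets) of what declaring a pillar-based goal and its success criterion up front might look like, so that the evaluation plan exists before any implementation work begins:

```python
# Hypothetical sketch: a project must declare at least one Five Pillars goal,
# each with a measurable definition of success, before work begins.

PILLARS = {
    "student_learning_outcomes",
    "cost_effectiveness",
    "access",
    "student_satisfaction",
    "faculty_satisfaction",
}

def define_project(name, goals):
    """Create a project record, refusing any goal that isn't pillar-based
    or that lacks a measurable target (i.e., a definition of success)."""
    for pillar, target in goals.items():
        if pillar not in PILLARS:
            raise ValueError(f"{pillar!r} is not one of the Five Pillars")
        if not target:
            raise ValueError(f"goal {pillar!r} has no measurable target")
    return {"name": name, "goals": goals}

# Example: an access-focused project with an explicit success criterion.
project = define_project(
    "lecture-capture pilot",
    goals={"access": "90% of lectures viewable online within 24 hours"},
)
print(project["goals"])
```

Because success is defined at the outset, evaluation later reduces to checking each declared target rather than hunting for significant differences after the fact.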

As the examples in our slides suggest, there are often serendipitous results of teaching & learning technology implementation efforts. For example, a project aimed at improving access might also improve student learning outcomes and student satisfaction. However, by articulating and staying focused on a clear, shared rationale (and funding justification) for projects, we have been able to consistently measure and demonstrate the impact of our teaching & learning technology projects and get beyond the NSD problem.

It all begins by starting with the end in mind.

Beginning with the End in Mind

When I think about technology, I always think about problems. Not problems with the technology itself, but problems that technology can be used to solve. Far too often we technologists get enamored with a cool new technology (“It’s shiny!”) and we immediately begin looking for something to do with it. “Hey, this cool new hammer would work as a bottle opener if you hold it just right!”

But alas, a very long list of bad teaching & learning technology implementations can be cited to demonstrate why the “solution looking for a problem” approach is a bad idea. When I was an instructional designer at BYU’s Center for Instructional Design (now the Center for Teaching & Learning), I constantly fought this battle with faculty members who would come to the Center and announce that they needed a DVD, a website, an immersive 3D simulation, or some other such instructional technology creation. I would politely nod and we’d have a conversation that went something like this:

ME: Okay, so we can build the best (fill in the blank) possible. But can you explain to me exactly what it is about your course that’s not going the way you want?

FACULTY: What do you mean?

ME: Well, once we’ve built (fill in the blank), what should be better about your course? Better yet, what should your students know or be able to do that they currently don’t?

FACULTY: Well, that’s a good question. What they really lack is . . .

From there, we’d spend some time talking about student learning and what the faculty member could do to better facilitate it. Then we’d explore how (fill in the blank) would help students learn better or faster. As often as not, we’d decide not to build (fill in the blank). Instead, we’d tweak some things in the course and build something else more appropriate to the problem the faculty member was trying to solve.

Based on experiences like this, I’ve developed a very simple approach to teaching and learning technology, academic technology and technology in general. Simply put, a technology is only as useful as the problems it solves. The trick for technologists, then, is to always begin with the end in mind. We have to get focused on the problem we’re trying to solve and stay focused on it. We have to constantly ask ourselves, “What am I trying to fix or improve? How will I be able to tell when I’ve succeeded?”

Once we know what we’re trying to improve, we have an objective. Let’s call it a “goal.” With our goal firmly in mind, we can then move to strategy formulation. I think of a strategy as a long-term plan of action focused on achieving a goal. From strategy we can then move to specific tactics or operational activities aimed at implementing the strategy.

Let me make this more concrete. Let’s say that the faculty and administrators on a campus are concerned that they’re using up too much classroom time on administrivia, e.g. collecting and returning papers, administering quizzes, etc. A GOAL aimed at addressing this problem would be to reduce time spent on administrivia during class time. One possible STRATEGY for accomplishing this GOAL would be to move most class-administrative activities to an online environment where they could be completed outside of regular class time. One possible TACTIC for implementing this STRATEGY would be to make a particular online course management system (CMS) available for faculty members and students.

The beauty of this approach is that it drives both goal-driven technology implementations (tactics) AND straightforward evaluations of those implementations. Was a technology implementation effective? That question can now be answered, first and foremost, by answering another question: Was the goal accomplished? If the problem is less severe or even non-existent after the strategy and tactics were implemented, you can declare success. If the goal wasn’t accomplished, at least you learned that the tactic (and perhaps the strategy) you picked didn’t work.
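
As a sketch of how this chain might be represented (in Python, with hypothetical names and made-up numbers for the administrivia example above), note that once the goal carries its own success criterion, evaluation collapses to a single comparison:

```python
from dataclasses import dataclass, field

# Toy sketch (all names and numbers hypothetical) of the
# goal -> strategy -> tactic chain, with evaluation reduced
# to one question: was the goal accomplished?

@dataclass
class Tactic:
    description: str            # e.g., deploy a CMS for out-of-class administrivia

@dataclass
class Strategy:
    description: str
    tactics: list = field(default_factory=list)

@dataclass
class Goal:
    description: str
    baseline: float             # measured before implementation
    target: float               # what success looks like
    strategies: list = field(default_factory=list)

    def accomplished(self, measured: float) -> bool:
        # Success was defined before the project started, so evaluation
        # is a straightforward comparison, not a post-hoc fishing expedition.
        return measured <= self.target

goal = Goal(
    description="Reduce class time spent on administrivia",
    baseline=15.0,   # minutes per class session, hypothetical baseline
    target=5.0,
    strategies=[Strategy(
        "Move class-administrative activities online",
        tactics=[Tactic("Make a CMS available to faculty and students")],
    )],
)

print(goal.accomplished(measured=4.0))   # True: the tactic and strategy worked
```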

Admittedly, this is a simplistic approach to technology planning and evaluation. But it has served me well and I’ll continue to rely on it until something better comes along. The bottom line? Technology should be used to make the world a better place. If we’re not able to demonstrate exactly how and to what extent technology is improving things, all we’re left with is, “It’s shiny!”