Larry Seawright and I made our presentation this morning at Educause 2008. Our slides are available here.
Together with Stephanie Allen and Whitney Ransom McGowan, Larry and I have been working on an alternative approach to evaluating the effectiveness of teaching & learning technology. Traditionally, evaluation takes the form of comparative-media studies in which one group of students learns via standard methods (control) and others learn with new, experimental methods (test). Over and over (and over) again, these kinds of studies have found differences that are not statistically significant.
The so-called “NSD” (no significant difference) problem is the bane of teaching & learning evaluators around the world. A growing group of influential scholars has rejected the comparative-media studies approach in favor of design-based research. Borrowing elements of this approach, we have implemented a goal-driven model of instructional design, technology integration, and evaluation at BYU.
Our approach to evaluating the impact of teaching & learning technology (and getting beyond the NSD problem) begins with the end in mind. The first and essential step in this approach is to begin any teaching & learning with technology project with a carefully articulated goal. Without such a goal, there is no clear, shared understanding of what “success” looks like. Hence, evaluation is virtually impossible: if you don’t know what success looks like, i.e., what should be better as the result of a project, what should you evaluate?
Measuring the impact of teaching & learning technology depends on a clear articulation of learning goals, strategies for accomplishing those goals, and tactics for implementing those strategies. The goals can then be re-formulated as teaching & learning “problems,” and strategies and tactics become “solutions.” Evaluation is then simply the process of measuring the results of the implemented solutions, as illustrated below:
To facilitate the consistent articulation of teaching & learning goals, we’ve adopted the Sloan-C’s Five Pillars: (1) Student Learning Outcomes, (2) Cost Effectiveness (Scalability), (3) Access, (4) Student Satisfaction, and (5) Faculty Satisfaction. By choosing to explicitly focus on one or more of these goals in every teaching and learning project, we identify what success should look like and, at the same time, establish an evaluation plan for each project.
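To make the idea concrete, the pillar-driven planning described above could be sketched as a simple data check: every project goal must map to one of the Five Pillars, and every goal must name a metric up front, so “success” is defined before the project begins. This is a hypothetical illustration only, not our actual tooling; the project name and metrics below are invented for the example.

```python
# Hypothetical sketch of pillar-driven evaluation planning.
# Pillar names follow the Sloan-C Five Pillars; all other details are invented.

SLOAN_C_PILLARS = {
    "learning_effectiveness": "Student Learning Outcomes",
    "cost_effectiveness": "Cost Effectiveness (Scalability)",
    "access": "Access",
    "student_satisfaction": "Student Satisfaction",
    "faculty_satisfaction": "Faculty Satisfaction",
}

def make_evaluation_plan(project, goals):
    """Build an evaluation plan: each goal must map to a known pillar
    and name a metric, so success is defined before work starts."""
    plan = []
    for pillar, metric in goals.items():
        if pillar not in SLOAN_C_PILLARS:
            raise ValueError(f"Unknown pillar: {pillar}")
        plan.append({
            "project": project,
            "goal": SLOAN_C_PILLARS[pillar],
            "metric": metric,  # what to measure to judge success
        })
    return plan

# Invented example project with two explicit pillar goals.
plan = make_evaluation_plan(
    "Lecture capture pilot",
    {
        "access": "percent of students viewing recordings remotely",
        "student_satisfaction": "end-of-term survey rating",
    },
)
for item in plan:
    print(item["goal"], "->", item["metric"])
```

The point of the check is the same as the prose argument: a project whose goals don’t name a pillar and a metric is rejected before it starts, which is exactly what makes evaluation possible later.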
As the examples in our slides suggest, there are often serendipitous results of teaching & learning technology implementation efforts. For example, a project aimed at improving access might also improve student learning outcomes and student satisfaction. However, by articulating and staying focused on a clear, shared rationale (and funding justification) for projects, we have been able to consistently measure and demonstrate the impact of our teaching & learning technology projects and get beyond the NSD problem.
It all begins by starting with the end in mind.