Archive for June, 2009

Reliability, Validity and Loosely-Coupled Assessment

June 25th, 2009 jonmott


Last week Jeremy Brown wrote a thoughtful response to my post about “PLNs, Portfolios, and a Loosely-Coupled Gradebook.” Jeremy expressed concern that my notion of “loosely-coupled” assessment doesn’t adequately address the issues of validity and reliability. He also issued a “warning” about the “assessment minefield into which” I am marching.

While fully appreciating Jeremy’s evaluation bona fides, my working definitions of “reliability” and “validity” are slightly more straightforward (and conventional?) than those he uses. Simply put, I take validity to mean the accuracy with which I am measuring a variable. In layman’s terms, I ask, “Am I actually measuring what I think I’m measuring?” Reliability, on the other hand, refers to the consistency of those measurements: “Am I measuring the same variable consistently over time (at various points of observation) and across multiple subjects?” The old bathroom scale example has always helped me keep these two concepts straight. If a person weighs 200 pounds and the bathroom scale says they weigh 200 pounds, then the measurement is valid. However, if the scale indicates different weights each time a person steps on it (even though their weight hasn’t changed), the measurement isn’t reliable. On the other hand, if the scale consistently indicates that a 200 pound person weighs only 150 pounds every time they weigh themselves, the measure is reliable (consistent) but not valid (accurate).
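The scale analogy maps cleanly onto two statistics: validity is low bias (the average reading is near the true value) and reliability is low spread (the readings agree with each other). A minimal sketch in Python makes the distinction concrete; the function name and the one-pound tolerance are my own illustrative choices, not from the post:

```python
import statistics

def assess_scale(readings, true_weight, tolerance=1.0):
    """Classify a set of scale readings as valid and/or reliable.

    Validity: the mean reading is close to the true value (low bias).
    Reliability: the readings are consistent with each other (low spread).
    """
    bias = abs(statistics.mean(readings) - true_weight)
    spread = statistics.pstdev(readings)
    return {"valid": bias <= tolerance, "reliable": spread <= tolerance}

# A scale that consistently reads 50 pounds low: reliable but not valid.
print(assess_scale([150.2, 149.9, 150.1], true_weight=200))
# → {'valid': False, 'reliable': True}

# A scale that wanders around the true weight: valid on average,
# but not reliable, since each reading differs.
print(assess_scale([195.0, 206.0, 199.5], true_weight=200))
# → {'valid': True, 'reliable': False}
```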

It’s important to be clear about what we mean by validity and reliability because Jeremy’s central concern seems to be that greater student ownership of and responsibility for portfolios will degrade the “reliability of the judgments passed” on individual student work. He posits two reasons that this would be the case:

  1. The “degree of difficulty” versus “relative facility” of the work completed and submitted via a portfolio.
  2. Selection of artifacts: a student might inadvertently include artifacts which under-represent her or his actual expertise or skill level.

I concur that the variability in student facility with various digital technologies might result in unreliable measures of student ability and skill. While part of being digitally literate in 2009 means being able to create and publish content online at some minimum level of professional acceptability, evaluators should be careful not to conflate portfolio design prowess with content area expertise. The same is true when evaluators read papers: students are expected to write at a basic minimum level of professional acceptability, but eloquent prose is not the same as subject matter achievement. Consequently, it is critically important that those who require portfolios be abundantly clear about the purposes of the portfolio “assignment” and how the portfolio will be assessed. Portfolios should then be assessed accordingly.
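One way to keep design prowess from bleeding into judgments of content expertise is to score them as separate rubric criteria with explicit weights, so each rating survives for feedback. A small illustrative sketch; the criteria names, weights, and 0-4 scale are hypothetical, not drawn from any actual program:

```python
# Hypothetical rubric: content mastery and presentation are rated
# separately, so a slick portfolio cannot mask weak subject expertise.
RUBRIC = {
    "content_mastery": 0.7,  # subject-matter achievement
    "presentation": 0.3,     # baseline digital-literacy expectation
}

def score_portfolio(ratings, rubric=RUBRIC):
    """Combine per-criterion ratings (0-4 scale) into a weighted score,
    preserving the individual ratings for student feedback."""
    overall = sum(rubric[criterion] * ratings[criterion] for criterion in rubric)
    return {"overall": round(overall, 2), "by_criterion": dict(ratings)}

# Strong content with plain presentation (overall 3.4) outscores
# a polished portfolio with weak content (overall 2.6).
print(score_portfolio({"content_mastery": 4, "presentation": 2}))
print(score_portfolio({"content_mastery": 2, "presentation": 4}))
```

The design choice worth noting is that the weighted total is secondary: keeping `by_criterion` intact is what lets evaluators tell a student whether the gap is in the work itself or merely in how it was packaged.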

When thinking about the purposes of portfolios, administrators and faculty members should be careful to distinguish between the various goals they might have for student portfolio creation and evaluation. On the one hand, portfolios might be encouraged or assigned to help students reflect on their learning, to engage with others about what they’ve learned and how they’ve learned it, to present their work to various audiences, and to develop essential digital literacy and communication skills. When pursuing such goals, validity and reliability concerns are secondary to the process of creating and maintaining a portfolio, so highly student-centric, student-owned and operated portfolios are desirable.

On the other hand, if the purpose of a portfolio is to provide a consistent, aggregated view of a student’s performance through their time at an institution or in an academic program, reliability and validity are central concerns and the cautions Jeremy offers are more immediately relevant. It is my belief, however, that we should strenuously avoid assigning portfolios for purely institutional or program assessment purposes. If our programs aren’t designed in such a way that examples of student work (whether compiled into portfolios or not) are assessed (and possibly collected and aggregated) along the way, then it seems appropriate to redesign the programs instead of bolting an artificial evidence-gathering requirement on at the back end.

Once again, we need to begin with the end in mind. What is it we want our students to become? What experiences do they need to become such? What artifacts are the natural result of (or natural extensions of) these experiences? How will we consistently evaluate these artifacts to give individual students feedback about their performance and growth? How will we aggregate these evaluations to determine our institutional or program-level performance? These are the questions that should drive our portfolio and assessment strategies, not external accreditation requirements. If we focus on these student-centric questions, meeting even the most stringent accreditation requirements will be a relatively simple afterthought.

PLNs, Portfolios, and a Loosely-Coupled Gradebook

June 16th, 2009 jonmott

Note: In this post, I reference articles from the Winter 2009 edition of Peer Review, the theme of which was “Assessing Learning Outcomes.” I highly recommend the entire issue if you’re interested in student learning assessment and portfolios. I recently provided an overview of BYU’s loosely-coupled gradebook strategy at TTIX 2009. (We are currently building a standalone gradebook in partnership with Agilix, a startup based in Orem, Utah.) As part of my presentation at TTIX, I also described our plans to leverage the same technology we’re building into the gradebook to implement a loosely-coupled portfolio assessment tool.

The driving purpose behind these efforts is to bridge the gap between our institutional network and the cloud, between the predominant “course management system” (CMS) paradigm and the emerging model of personal learning networks (PLNs), between student-centered and institution-centered portfolios, etc. I maintain that bridging this gap is a necessary condition for the significant transformation of learning via technology. Until learning tools and content become more malleable (i.e. open, modular, and interoperable) we will not realize the full potential of an interconnected, networked world in education.

Student Learning Portfolios & Institutional Assessment

Portfolios are increasingly at the nexus of student learning, institutional assessment, PLNs, CMSs, and assorted other aspects of the higher ed milieu. Student learning portfolios are essential in the movement toward more valid and authentic assessment in higher education. However, the focus on institutional and program assessment has, at least in some instances, diverted our attention from our primary objective of improving student learning. As Trent Batson observed in 2007, the initial effort to enhance student learning with portfolios has been “hijacked by the need for accountability” to boards of education and accrediting bodies.

This trend is worrisome. If the focus on portfolios shifts primarily to institutional and program assessment, we will have missed out on the essential value of portfolios. Portfolios have the potential to galvanize and enhance student learning. As Miller and Morgaine have observed: “E-portfolios provide a rich resource for both students and faculty to learn about achievement of important outcomes over time, make connections among disparate parts of the curriculum, gain insights leading to improvement, and develop identities as learners or as facilitators of learning.”

Given the potential benefits of portfolios, I believe that student learning portfolios should, first and foremost, belong to and be maintained by individual learners. Gary Brown supports this view, maintaining that portfolios should be student (and not institutionally) operated: “A real student-centered model would put the authority, or ownership, of [ePortfolios] in the hands of the students: They could share evidence of their learning for review with peers, and offer that evidence to instructors for grading and credentialing.” Such an approach increases student ownership and responsibility for learning. It also affords portability and longevity since students are not dependent upon a particular institution (or set of institutions) to provide them with portfolio technology and storage space.

Clark & Enyon have argued that we have to get past this tension between student and institutional portfolios: “The e-portfolio movement must find ways to address [institutional assessment] needs without sacrificing its focus on student engagement, student ownership, and enriched student learning.”

Bridging the Gap

So exactly how can we bridge the gap between student-centered learning portfolios and institutional assessment needs? I propose that the loosely-coupled gradebook strategy we’re pursuing can be leveraged to provide a viable solution to this problem.

Here are the dimensions of a loosely-coupled portfolio assessment strategy:

  1. Institutions of higher learning should focus on what they do best and on what only they can do. Namely, they should admit and register students, manage course enrollments and degree program rosters, and maintain secure records and communications tools for faculty and students engaged in the learning process.
  2. We can then leverage the best online, third-party applications for student publishing, networking, and portfolio creation. Individual institutions (or even institutions working together) would be hard pressed to produce applications comparable in quality and stability to Google Docs, YouTube, Blogger, Acrobat Online, MS Office Live, Wikispaces, and WordPress.
  3. Teachers and learners should embrace the power of the network to enhance, extend and improve learning. Even if institutions could develop and deploy better tools than those freely available online, it would be a bad idea to do so. The fundamental value proposition of these apps is that, since they live in the cloud, they’re accessible anytime, anywhere, by anyone. The openness this affords promotes greater transparency and expanded opportunities for collaboration.
  4. Students should be encouraged to be effective, technologically literate, digital citizens who are proficient using a variety of online tools. As they participate in the learning process, they should regularly save and aggregate their work, packaging and repackaging it for various audiences. One of these audiences might be those responsible for degree program review and assessment.
  5. Once students have assembled their collections of learning artifacts, metacognitive commentary, and portfolios, they might then be required to simply “register” the URLs of their portfolios and artifacts with their institution. Those conducting program and institutional assessment would then use a lightweight “overlay” tool to review and assess submitted student work.
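The registration step in points 4 and 5 above can be sketched as a minimal registry: the institution stores only identities and pointers to externally hosted work, and assessors pull registrations back by program for review. Every class and method name here is a hypothetical illustration of the idea, not BYU’s or Agilix’s actual implementation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PortfolioRegistration:
    """One student's submitted portfolio/artifact URLs for a program."""
    student_id: str
    program: str
    artifact_urls: List[str] = field(default_factory=list)

class PortfolioRegistry:
    """Students register externally hosted URLs; assessors retrieve
    them by program for review in a lightweight overlay tool."""

    def __init__(self):
        self._registrations: List[PortfolioRegistration] = []

    def register(self, student_id: str, program: str, urls: List[str]) -> None:
        self._registrations.append(
            PortfolioRegistration(student_id, program, list(urls)))

    def for_program(self, program: str) -> List[PortfolioRegistration]:
        """Everything an assessment team needs to review one program."""
        return [r for r in self._registrations if r.program == program]

registry = PortfolioRegistry()
registry.register("s001", "English", ["https://example.wordpress.com/portfolio"])
registry.register("s002", "History", ["https://example.blogspot.com/capstone"])
print([r.student_id for r in registry.for_program("English")])  # ['s001']
```

The point of the design is that the institution never hosts the content itself: the registry holds only student identity and URLs, which is what keeps the coupling loose while still giving program reviewers a consistent list to assess.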

The various aspects of this approach and the associated workflow might look something like this: [Portfolio Assessment Diagram]

The Power of a Loosely-Coupled Strategy

The “open learning network” (OLN) strategy I’ve written about from time to time is based on several value propositions. One is that institutions should do what they do best (manage student data, facilitate secure communication between teachers and learners) while leveraging third-party, cloud-based applications for such things as personal publishing and collaboration. The loosely-coupled gradebook strategy described in previous posts is a key component of this larger idea.

Another central value proposition of the OLN is the connectedness it facilitates between teachers and learners both within and without institutional boundaries. When learners not only consume online content, but also refine, improve, remix, mashup, and create new content themselves during the learning process, their learning is more authentic, meaningful, and enduring. And they build deeper, more profound connections between facts, concepts, and the other human beings they interact with. This notion of connectedness is a fundamental aspect of what we consider education and literacy today.

George Siemens blogged today that:

“Not only are we socially connected in our learning, but the concepts that form our understanding of a subject also reveal network attributes. Understanding is a certain constellation (pattern) of connections between concepts. . . . being a literate person is not so much about what you know, but about how you know things are connected.”

I concur. As we continue to pursue the OLN vision, it is essential that we facilitate openness in the learning process to promote greater interaction and connections with content, people, cultures, and places. It is with this end in mind that we ought to promote personal learning networks (PLNs), OLNs, and loosely-coupled gradebooks. We want our students to be literate, connected, and efficacious life-long learners who make their homes, their communities, their workplaces and the world better places than they found them.

Loosely Coupled Gradebook Presentation @ TTIX 2009

The slides for my presentation at TTIX 2009 about BYU’s loosely coupled gradebook project.

And a link to the UStream capture of the presentation.