
Posts Tagged ‘technology’

Jim Groom is Watching Me

February 12th, 2010 · jonmott

Jim Groom, aka Rorschach, is watching me. He apparently took umbrage at my ELI presentation in which I, very much tongue-in-cheek, suggested that he and Michael Chasen could live together in harmony, perhaps even sitting down to sing Kumbaya.

Jim appears to be concerned that I’m advocating a “middle-of-the-road” approach that validates the LMS paradigm. Lest anyone else be confused, let me state that nothing could be further from the truth. If you listen to my entire presentation, I hope it’s clear that I’m not advocating the perpetuation of the single, vertical, integrated technology stack that is the LMS. Rather, the AND that I’m really advocating is the blending of the secure university network for private, proprietary data (e.g., student records) and the open, read-write Web.

As David Wiley and I recently argued, the “open learning network” model is “revolutionary primarily in its refusal to be radical in either direction.” There is value in both the LMS and PLE paradigms. However, blending the best aspects of both does not mean keeping either or both in their current forms. It means leveraging the best of each and mashing them up into something completely new and different. By doing so we can create a learning network that is both private AND public, secure AND open, reliable AND flexible, integrated AND modular, and that is supportive of both teachers AND learners.

Tinkering, Playing, and Learning

November 6th, 2009 · jonmott

John Seely Brown visited the BYU campus today and gave a compelling talk about Homo sapiens, Homo faber, and Homo ludens.

Essentially, he argues that formal education focuses almost exclusively on the Homo sapiens notion of “man as knower,” attempting to fill students with information and facts. This is gradually, in some quarters, giving way to Homo faber, or “man as maker,” students as creators of new knowledge and ideas represented in learning artifacts. But what Brown argues is almost completely missing is the notion of Homo ludens, or “man as player,” students as tinkerers, playing with ideas and concepts, trying new ways to put things together to express ideas and solve problems.

While this is somewhat akin to what Gee has said about learning through playing games, Brown is suggesting, I think, something deeper and more profound. When we play and tinker, he says, we get into systems and figure out how they work, why they work, and what the rules are that underlie them. Having done so, we can begin hacking them, changing conditions in the system to get results we want but that the system wasn’t explicitly designed to produce. (This is a process Paul Buchheit richly terms “applied philosophy.”)

Playing and tinkering can be casual, simply messing around. But when we move (and help our students move) to deep tinkering, we “soak and poke” around systems to see what can be pushed around, what can be rearranged, what can be repurposed, and what can be modified to what result. This yields what Brown terms an “intimate familiarity” with the material at hand and an “embodied immersion” in a system. Deep tinkering results in the sort of deeply situated understanding Polanyi calls “indwelling.”

This kind of knowing and learning facilitates the transformation from learning to know, to learning to be, to learning to become. And becoming requires repeated effort, frequently followed by failure, followed by refined effort, followed by incrementally improved performance. Over time, this process of failing, over and over and over again, yields success (and even, in many instances, perfect performance).

So how do we get to this kind of learning in our schools?

Brown offers two suggestions. The first is that we need to strike the right balance between Homo sapiens, Homo faber, and Homo ludens with a sense of awe and zest for life at the core. This directly leads to the next suggestion—dramatically improving learning will require more institutional innovation than it will technological innovation. Technological affordances already outstrip practice. What we need to do now is rethink the ways we organize to facilitate learning—authentic, deeply situated learning—and use the technology available to us to make authentic differences in the lives of our students.

This reminds me, once again, that while the technology is important, dramatically improving teaching and learning practice is at least as much a cultural challenge as it is a technological undertaking.

Deja Vu All Over Again – Blackboard Still Stuck in the Innovator’s Dilemma

July 24th, 2009 · jonmott

It’s been a week (or so) since BbWorld. I’ve had the chance now to ruminate about what I saw and heard there. I wanted to let things rattle around in my head a bit before passing judgment on Blackboard’s message this year, but it has become increasingly clear to me that my assessment of BbWorld 2009 is virtually the same as my assessment from 2008: Blackboard is improving at the margins, but is not addressing the fundamental weaknesses of the CMS (I use this term generically, lumping Blackboard together with Moodle, Sakai, D2L, etc.).

Improvement at the Margins

Given that BbWorld was in Washington, D.C. this year, it seems appropriate to use a political analogy here. In 1990, George C. Edwards III published At the Margins: Presidential Leadership of Congress. In this now classic study of the president’s ability to lead Congress and enact significant policy change, Edwards concluded that the presidency is so hemmed in by countervailing pressures and influences (in the form of 535 members of Congress, cabinet members, bureaucrats, interest groups, voters, the media, etc.) that presidents should not be expected, save under dramatic and rare circumstances, to effect significant change. Rather, Edwards concludes, presidents have historically been most effective when they have sought to influence change “at the margins,” moving policy incrementally in their preferred direction.

Blackboard, much like the President of the United States, is hemmed in by a large client base (variously represented by university administrators, IT staff, faculty, and students), competing visions within the company, a board of directors and stockholders, etc. It shouldn’t be surprising to anyone, then, that Blackboard’s innovations and improvements are “at the margins.” It’s exceedingly rare for a company with Blackboard’s inertia to make dramatic, revolutionary changes in the products they offer.

BbWorld Announcements & Observations

So what kinds of innovations and improvements did Blackboard announce at BbWorld 2009? I catalogued and reacted to what I heard via Twitter during the conference. For what it’s worth, I begin with a selection of my tweets, first from the keynote (I’ve opted to keep these in reverse-chronological order, as is the convention for a Twitter feed):

Then from the Listening Session:

Deja vu All Over Again

While I’m very encouraged by Blackboard’s announcement (reiteration) of its intent to fully support the Common Cartridge standard and to move toward opening up its database schema for administrators, I’m left wanting much, much more. All of the major announcements–the partnership with Echo360, the acquisition of Terribly Clever, the integration with Wimba (not really new news), the Kindle integration, the focus on closing tickets faster–had little to do with the core concerns about the CMS I have been blogging about for the last year. All of Blackboard’s announcements were about improvements at the margin.

As a checkpoint, I went back and re-read my post from one year ago, written just after BbWorld 2008. Forgive me for regurgitating large portions of it here, but it’s striking how similar my reactions to this year’s BbWorld are to those of last year:

As Clayton Christensen has famously observed, the producers of innovative products gradually lose their creative, innovative edge as they acquire and then seek to protect market share. When a company’s innovations result in significant profits, managers generally find themselves face to face with the innovator’s dilemma. To remain successful, Christensen argues that companies need to listen “responsively to their customers and [invest] aggressively in the technology, products, and manufacturing capabilities that [satisfy] their customers’ next-generation needs.” However, these very same behaviors can create blind spots for innovators. By simply providing incremental improvements to existing products, companies run the risk of missing major, paradigm-shifting innovations in their market spaces. Likewise, they’re in danger of focusing too much on their existing customer bases instead of new potential customers who currently don’t use their products (non-consumers). These twin dangers leave erstwhile market leaders susceptible to disruptive technologies, provided by firms who aren’t stuck in current paradigms or too narrowly focused on pre-defined customer segments.

Blackboard finds itself squarely in the midst of this classic problem. They have a large and fairly stable customer base. Incremental feature enhancements, improved customer service, and product stability are likely to keep most of their customers satisfied for the time being. But what of the disrupters in the marketplace? If one considers open source CMS alternatives like Sakai and Moodle to be the most disruptive players in the market, Blackboard’s strategies appear to be on the right track.

Perhaps not surprisingly (I suppose I predicted this), Blackboard did this year exactly what they did last year and exactly what Christensen cites as the pattern of incumbent market leaders: they announced new feature enhancements and indicated continued attention to and investment in customer service and product stability. As I wrote one year ago:

While I applaud these innovations as good steps in the right direction, there remain fundamental flaws with Blackboard’s (and virtually every other CMS provider’s) underlying infrastructure. For all of the new window dressing, Blackboard remains first and foremost a semester-based, content-delivery oriented, course management system. The software is not (at least noticeably) evolving to become a student-centered learning management system. And while the addition of wikis and blogs inside the Blackboard system is a welcome improvement, there is still little or no integration between student learning tools “inside the moat” and those outside of it “in the cloud.”

It is for these reasons that I don’t count Sakai, Moodle, D2L or Angel [which Bb acquired since BbWorld 2008] amongst the biggest, long-term threats to Blackboard. Disruption will, I believe, come from another direction.

From whence will disruption come? More from my post last year:

In Christensen’s newest book, Disrupting Class, he and his co-authors argue that the real disruption in educational technology will come (and is already coming) via learner-centered technologies and networking tools. A rapidly growing number of people are creating their own personal learning environments with tools freely available to them, without the benefit of a CMS. As Christensen would say, they have hired different technologies to do the job of a CMS for them. But the technologies they’re hiring are more flexible, accessible and learner-centered than today’s CMSs. This is not to say that CMSs are about to disappear. Students enrolled in institutions of higher learning will certainly continue to participate in CMS-delivered course sites, but since these do not generally persist over time, the really valuable learning technologies will increasingly be in the cloud.

Open learning networks have the potential to bring the world of the CMS (or better yet, “institutional learning networks”) and the world of PLEs together. The next big challenge ahead of us is to figure out ways to create autonomous, institution-independent “learner spaces” that provide home bases for learners that can bridge the two worlds. In these spaces, learners would ideally aggregate relationships, artifacts, and content from ALL of their learning activities, be they digital or analog, online or offline, synchronous or asynchronous, from one institution or many.

I heard virtually nothing at BbWorld this year which would suggest that Blackboard is actively engaged in adapting and evolving to address this challenge. Rather, they continue to innovate at the margins, maintaining their status quo, market-leading position.

What’s Broken?

@z_rose recently blogged that the problem with the “one-stop-shop” Virtual Learning Environment (VLE) (another frequently-used term for the CMS) is that it is aimed at both learning administration and learning facilitation. These are not, she astutely notes, the same things and VLEs end up doing both poorly.

Both administration and pedagogy are necessary in schools. They are also completely different in what infrastructure they require. This (in my opinion) has been the great failing of VLEs – they all try to squeeze the round pedagogy peg into the square administration hole.

It hasn’t worked very well. Trying to coax collaboration in what is effectively an administrative environment, without the porous walls that social media thrives on, hasn’t worked. The ‘walled garden’ of the VLE is just not as fertile as the juicy jungle outside, and not enough seeds blow in on the wind.

That’s why I’m always cautious of the ‘one-stop-shop’ approach in education – administration and pedagogy are very, very different shops. It’s like having a fishmongers and a haberdashers sharing the same store – no discernible upsides, but a LOT of downsides (stinky fabric springs to mind).

Blackboard and every other CMS / VLE have become exceedingly efficient course content and course administrivia management tools. If data from BYU’s Blackboard usage surveys can be taken as a reasonable guide, most faculty members use Blackboard for administrative, not teaching and learning, purposes, i.e., content dissemination, announcements, e-mail, and gradebooking (70% plus use Bb for these purposes). Dramatically smaller portions (less than 30%) use the teaching and learning tools provided by Blackboard (e.g., quizzes, discussion boards, groups, etc.). Increasingly, they’re going to the cloud to use tools that are far better and more flexible than those provided natively inside the CMS.

In 2004, George Siemens wrote, “The real issue is that LMS vendors are attempting to position their tools as the center-point for elearning – removing control from the system’s end-users: instructors and learners.” This is still all-too-true five years later. In most CMS implementations, it is exceedingly difficult (if not impossible) for teachers and especially individual learners to take control of the learning environment and shape it to their particular needs. For example, by default, students are generally not able to start their own discussion threads in CMS-delivered courses. Siemens elaborated on these end-user roadblocks, noting that LMSs roundly fail in three significant ways:

  • The rigidity and underlying design of the tool “drives/dictates the nature of interaction (instructors-learner, learner-learner, learner-content).”
  • The interface is too focused on “What do the designers/administrators want/need to do?” rather than on “What does the end user want/need to do?”
  • “Large, centralized, mono-culture tools limit options. Diversity in tools and choices are vital to learners and learning ecology.”

Notably, the absence of LMS-integrated synchronous conferencing and collaboration tools has been largely remedied within the various CMSs. But these other three substantial shortcomings have yet to be addressed. And I would add a critically important fourth weakness–today’s CMSs do not support continuous, cumulative learning throughout a student’s career at an institution, let alone throughout their life after they exit our institutions. As I have written previously, students are completely at the mercy of the institution when it comes to their “presence” and participation in a CMS. They are placed in arbitrarily organized sections of courses for 15-week periods and then “deleted,” as if they never existed in the system. As David Wiley has pondered, how many of us would use Facebook if Facebook deleted our friend connections and pictures every four months?

The fundamental dilemma with the CMS as we know it today is that it is largely a course-centric, lecture-model reinforcing technology with its center of gravity in institutional efficiency and convenience. As such, it is a technology that inclines instructors and students to “automate the past,” replicating previous practice using new, more efficient and more expensive tools instead of innovating around what really matters–authentic teaching, learning, and assessment behaviors.

Blackboard’s Opportunity?

Lest I come across as a Blackboard/CMS naysayer or doomsayer, I should note that, in their early days, Blackboard and other CMSs were the disruptive technology. They were the source of innovation and new thinking about how we organize to teach and learn. However, roughly a decade after the inception of the CMS, the academic community finds itself again ripe for disruption, not only of the technology we use to “manage courses” but in the very system itself. While many will continue to innovate at the margins, there are large crowds of non-consumers out there clamoring for something that meets their needs. At BYU, for example, 25% of our faculty members opt to use a blog, a wiki, or a custom-built course website instead of Blackboard or another CMS. These are the non-consumers Christensen reminds us that we need to figure out how to serve. The same goes for the “non-traditional” students who either aren’t wired to learn the way we’re organized to teach them (in course-sized chunks, bundled in units of time we call semesters) or who, for a variety of reasons, don’t have access to our institutions.

Blackboard still has the opportunity to facilitate discussion and innovation around these critical issues. The company took an important step in this direction by organizing the “Pipeline Matters” session the day before BbWorld, bringing together educational leaders from K12, community colleges, traditional “higher ed” institutions, and educational associations. As a fortunate participant in this conversation, I recognize and appreciate Blackboard’s unique position (with its roughly 3000 client institutions) in the educational space to bring together a broad and diverse set of educational players to address issues such as the one we tackled last week–how can we improve our efforts to keep students in school and to help them easily re-enter and succeed when they, despite our best efforts, leak out of the “pipeline”?

These sorts of questions are critically important not only to educators, but also to Blackboard’s immediate future and direction because they can compel the company to get outside its comfort zone and rethink how it does what it does and why it does it.

Blackboard can still play a leading role in education. But it needs to think more about end-users and about non-consumers, not just about the university administrators who purchase and implement its products. That’s an admittedly tall order for a publicly-traded corporation to take on. But, as Christensen argues, they have to figure out a way to do so if they’re to remain relevant. That’s precisely the innovator’s dilemma.

As I concluded last year, if Blackboard doesn’t innovate, someone else will.

And it won’t be long.

I’ve Seen the Future and the Future is Us (Using Google)

The past couple of weeks have been full of new technology announcements. Three in particular are notable because of the splash they received as “pre-releases” and how different one of them is from the other two. I’ll readily admit that in these observations I have a particular bias, or at least a very narrow focus–I’m looking at the potential of these new technologies to transform and dramatically improve learning. By my count, one of the releases has the potential to do so. The other two? Not so much.

First, WolframAlpha launched amid buzz that it was the “Google killer.” It promised to revolutionize search by beginning to deliver on the much-awaited “semantic web.” In their own words, Wolfram’s objective is “to make all systematic knowledge immediately computable and accessible to everyone.” This ambitious vision is intended to make better sense of the data accessible via the web to help us answer questions, solve problems, and make decisions more effectively. In the end, though, the tool is in reality “a variation on the Semantic Web vision . . . more like a giant closed database than a distributed Web of data.” It’s still early to be drawing final conclusions, but Google is not dead. Life goes on much the same as it did before. And, most importantly from my perspective, it does not appear that learning will be dramatically transformed, improved, or even impacted by the release of WolframAlpha. (BTW, I loved @courosa’s observation that “the Wolfram launch was like that group that claimed they were going to reveal evidence of UFOs a few yrs back.” After the hoopla died down, there wasn’t all that much “there” there.)

The second big product announcement, and the month’s next contestant for “Google Killer” status, was Microsoft’s Bing, Redmond’s latest attempt to reclaim its Web relevance. Microsoft is spinning Bing as a “decision engine,” also aimed at dramatically improving search. Beginning with the assertion that “nearly half of all searches don’t result in the answer that people are seeking,” the Bing team promises to deliver a tool that will yield “knowledge that leads to action” and help you “make smarter, faster decisions.” (By the way, is it still 2009? A product release website with text embedded in Silverlight that I can’t copy & paste? Really?) Again, it’s early–Bing isn’t even publicly available yet–but even if MS delivers on its lofty claims, this sort of technology doesn’t seem poised to transform learning.

Maybe I’m missing something, but better search doesn’t seem to be our biggest barrier to dramatically improved learning. And both WolframAlpha and Bing are coming at the problem of data overload with better search algorithms, better data processing tools, and intelligent data sorting / presentation tools. I think all of this is great. But neither of these approaches touches the fundamental, core activity of learning–making sense out of the world’s complexities in communities of learners. In my “Post-LMS Manifesto” earlier this month, I observed:

Technology alone cannot save us or help us solve our most daunting societal problems. Only we, as human beings working together, can do that. And while many still long for the emergence of the virtual Aristotle, I do not. For I believe that learning is a fundamentally human endeavor that requires personal interaction and communication, person to person. We can extend, expand, enhance, magnify, and amplify the reach and effectiveness of human interaction with technology and communication tools, but the underlying reality is that real people must converse with each other in the process of “becoming.”

Having seen WolframAlpha and Bing, I’m even more firm in this belief. Advanced, improved, more sophisticated search and data sorting technology is much needed and wanted. I’ll be the first in line to use the search engine that proves itself more effective than Google Search. But, as the MLIS students at UBC understand, “without the person, the information is just data.”

A People Problem, Not a Data Problem

Enter Google Wave. In striking contrast to the Wolfram & MS attempts to surpass Google’s search predominance, Google itself announced that it had reinvented e-mail. It’s no coincidence that the reigning king of search hasn’t been spending all of its time and resources on reinventing search (although I don’t doubt that the Google brain trust is spending at least a few cycles doing that). Google has instead focused substantial energy on improving the tools we use to collaborate and communicate around data and content.

In its Bing press release, MS notes that there are 4.5 new websites created every second. Two years ago, Michael Wesch noted that the world was on pace to create 40 exabytes (40 billion gigabytes) of new data. And the rate of data creation is only accelerating. More recently, Andreas Weigend contends that “in 2009, more data will be generated by individuals than in the entire history of mankind through 2008.”

On the surface, this might seem less substantive than improved data search and analysis tools and, therefore, less relevant to the business of learning. But dealing with data overload and making sense out of it all is a fundamentally human problem. Again, we can extend, expand, enhance, magnify, and amplify the reach and effectiveness of our access to and analysis of data, but making sense out of it all requires individuals, groups, and crowds to have conversations about the origins, interpretations, and meanings of that data. That is the essence of being human. We can outsource our memories to Google, but we cannot (should not!) outsource our judgment, critical analysis, and interpretive capacities to any mechanical system.

The Future of Learning and Learning Technology

<melodrama>I’ve seen the future. And the future is us.</melodrama> As we use–and, even more importantly, appropriate, adapt, and repurpose–tools like Google Wave, we can leverage technology to preserve and enhance that which is most fundamentally human about ourselves. I appreciated Luke’s reminder that teachers and learners should “take ownership of online teaching and learning tools” and, accordingly, “not be shy about reminding our users of their responsibilities, and our users shouldn’t be shy about asking for help, clarification, or if something is possible.” This is precisely what many users of Twitter have done. My startling realization about my Twitter activity is that it has become an indispensable component of my daily learning routine. It’s become a social learning tool for me, giving me access to people and content in a way I never imagined.

Based on the hour-and-20-minute-long video, Google Wave appears poised to dramatically improve on the Twitter model. Accordingly, the possibilities for enhanced interactions between learners are encouraging. And the ripples of the Wave (sorry, couldn’t resist) have profound implications. With Wave, entire learning conversations are captured and shared with dynamic communities of learners. Lars Rasmussen (co-creator of Wave) noted: “We think of the entire conversation object as being a shared object hosted on a server somewhere” (starting at about 6:22 into the presentation). The ability for late-joining participants to “play back” the conversation and get caught up is particularly intriguing (a toy sketch of that idea follows the reactions below). Elsewhere:

  • Jim Groom asserts that Google Wave will make the LMS “all but irrelevant by re-imagining email and integrating just about every functionality you could possibly need to communicate and manage a series of course conversations through an application as familiar and intimate as email.”
  • David Wiley wonders if Wave might “completely transform the way we teach and learn.”
  • Tim O’Reilly observes that the emergence of Wave has created a “kind of DOS/Windows divide in the era of cloud applications. Suddenly, familiar applications look as old-fashioned as DOS applications looked as the GUI era took flight. Now that the web is the platform, it’s time to take another look at every application we use today.”
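
To make the “playback” idea concrete, here is a toy sketch of my own (not Wave’s actual data model, which was built on operational transformation): if a conversation is stored as an ordered log of contributions on a server, a late-joining participant can replay it up to any point in time.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Edit:
    """One contribution to a shared conversation: who, when, what."""
    author: str
    timestamp: float
    text: str

@dataclass
class Conversation:
    """A conversation stored as an ordered log of edits on a server."""
    edits: List[Edit] = field(default_factory=list)

    def add(self, edit: Edit) -> None:
        self.edits.append(edit)

    def playback(self, up_to: float) -> List[Edit]:
        """Replay the conversation as it stood at a given moment,
        so a latecomer can catch up step by step."""
        return [e for e in sorted(self.edits, key=lambda e: e.timestamp)
                if e.timestamp <= up_to]

# A latecomer replays everything posted before they joined.
convo = Conversation()
convo.add(Edit("alice", 1.0, "Here's the assignment prompt."))
convo.add(Edit("bob", 2.0, "I think option B is stronger because..."))
for edit in convo.playback(up_to=1.5):
    print(edit.author, edit.text)
```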

All of this continues to point to the demise of the LMS as we know it. However, I agree with Joshua Kim’s observation that the LMS’s “future needs to be different from its past.” As he notes, he’s anxious to use Wave for group projects, but he wants his course rosters pre-loaded and otherwise integrated with institutional systems. This is, as I have previously noted, the most likely evolutionary path for learning technology environments–a hybrid between open, flexible cloud-based tools like Wave and institutionally managed systems that provide student data integration and keep assessment data secure. And this is bound to look a lot more like an open learning network than a traditional course management system.

As we adopt and adapt tools like Twitter and Google Wave to our purposes as learning technologists, we have to change the way we think about managing and facilitating learning conversations. We can no longer be satisfied with creating easy-to-manage course websites that live inside moated castles. We have to open up the learning process and experience to leverage the vastness of the data available to us and the power of the crowd, all the while remembering that learning is fundamentally about individuals conversing with each other about the meaning and value of the data they encounter and create. Technologies like Google Wave are important, not in and of themselves, but precisely because they force us to remember this reality and realign our priorities and processes to match it.

I’ve seen the future of learning technology, and the future is us.

A Post-LMS Manifesto

In the wake of the announcement of Blackboard’s acquisition of ANGEL, the blogosphere has been buzzing about Learning Management Systems (LMSs) and their future (or lack thereof). The announcement came at an interesting time for me. A BYU colleague (Mike Bush) and I recently published a piece in Educational Technology Magazine with the unassuming title “The Transformation of Learning with Technology.” (If you read this article, you’ll recognize that much of my thinking in this post is influenced by my work with Mike.) I’ve also been working on a strategy document to guide our LMS and LMS-related decisions and resource allocations here at BYU.

These ongoing efforts and my thoughts over the last twenty-four hours about the Bb-ANGEL announcement have come together in the form of a “post-LMS manifesto” (if I dare use such a grandiose term for a blog post). In the press release about Blackboard’s acquisition of ANGEL, Michael Chasen asserted that the move would “accelerate the pace of innovation and interoperability in e-learning.” As a Blackboard client, I certainly hope that’s true. However, more product innovation and interoperability, while desirable, aren’t going to make Blackboard fundamentally different than it is today—a “learning management system” or “LMS.” And that worries me because I continue to have serious concerns about the future of the LMS paradigm itself, a paradigm that I have critiqued extensively on this blog.

Learning and Human Improvement

Learning is fundamentally about human improvement. Students flock to college and university campuses because they want to become something they are not. That “something” they want to become ranges from the loftiest of intellectual ideals to the most practical and worldly goals of the marketplace. For those of us who work in academe, our duty and responsibility is to do right by those who invest their time, their energy, and their futures in us and our institutions. It is our job to help them become what they came to us to become—people who are demonstrably, qualitatively, and practically different than the individuals they were before.

Technology has been and always will be an integral part of what we do to help our students “become.” But helping someone improve, to become a better, more skilled, more knowledgeable, more confident person is not fundamentally a technology problem. It’s a people problem. Or rather, it’s a people opportunity. Philosophers and scholars have wrestled with the challenge and even the paradox of education and learning for centuries. In ancient Greece, Plato formulated what we have come to call “Meno’s paradox” in an attempt to get at the underlying difficulties associated with teaching someone a truth they do not already know. The solution in that age was to pair each student with an informed tutor—as Alexander the Great was paired with Aristotle—to guide the learner through the stages of progressive enlightenment and understanding.

More than two millennia later, United States President James Garfield underscored the staying power of this one-to-one approach: “Give me a log hut, with only a simple bench, Mark Hopkins [a well-known educator and lecturer of the day] on one end and I on the other, and you may have all the buildings, apparatus, and libraries without him.” I suppose President Garfield, were he alive today, would include LMSs and other educational technology on the list of things he would give up in favor of a skilled, private tutor.

The problem with one-to-one instruction is that it simply doesn’t scale. Historically, there simply haven’t been enough tutors to go around if our goal is to educate the masses, to help every learner “become.” Another century later, Benjamin Bloom formalized this dilemma, dubbing it the “2 Sigma Problem.” Through experimental investigation, Bloom found that “the average student under tutoring was about two standard deviations above the average” of students who studied in a traditional classroom setting with 30 other students (“The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring,” Educational Researcher, 13(6), 4–16). In other words, the average tutored student outperformed roughly 98% of conventionally taught students. Notwithstanding this enormous gap, Bloom was optimistic that continued focus on mastery learning would allow us to eventually narrow the distance between individually-tutored and group-instructed students.

Moving Beyond the LMS

There is, at its very core, a problem with the LMS paradigm. The “M” in “LMS” stands for “management.” This is not insignificant. The word heavily implies that the provider of the LMS, the educational institution, is “managing” student learning. Since the dawn of public education and the praiseworthy societal undertaking to “educate the masses,” management has become an integral part of learning. And this is exactly what we have designed and used LMSs to do—to manage the flow of students through traditional, semester-based courses more efficiently than ever before. The LMS has done exactly what we hired it to do: it has reinforced, facilitated, and perpetuated the traditional classroom model, the same model that Bloom found woefully less effective than one-on-one learning.

For decades, we’ve been told that technology is (or soon would be) capable of replicating the role of a private, individual tutor, of providing a “virtual Aristotle” for each individual learner. But after the billions of dollars we’ve spent on educational technology, we’re nowhere near such an achievement. In fact, we can’t even say that we’ve improved learning at all! (See Larry Cuban’s Oversold & Underused for an excellent, in-depth treatment of this subject). And our continued investment of billions of dollars in the LMS is unlikely to get us any closer to our learning improvement goals either. Because the LMS is primarily a traditional classroom support tool, it is ill-suited to bridge the 2-sigma gap between classroom instruction and personal tutoring.

We shouldn’t be terribly surprised or disappointed that LMSs—or any other technology for that matter—have not revolutionized learning. As Michael Wesch and his students have so sagely reminded us, technology alone cannot save us or help us solve our most daunting societal problems. Only we, as human beings working together, can do that. And while many still long for the emergence of the virtual Aristotle, I do not. For I believe that learning is a fundamentally human endeavor that requires personal interaction and communication, person to person. We can extend, expand, enhance, magnify, and amplify the reach and effectiveness of human interaction with technology and communication tools, but the underlying reality is that real people must converse with each other in the process of “becoming.”

Crowdsourcing the Tutor

If we are to close the 2-sigma gap, we must leave the LMS behind and the artificial walls it builds around arbitrary groups of learners who have enrolled in sections of courses at our institutions. In the post-LMS world, we need to worry less about “managing” learners and focus more on helping them connect with other like-minded learners both inside and outside of our institutions. We need to foster in them greater personal accountability, responsibility and autonomy in their pursuit of learning in the broader community of learners. We need to use the communication tools available to us today and the tools that will be invented tomorrow to enable anytime, anywhere, any-scale learning conversations between our students and other learners. We need to enable teachers and learners to discover and use the right tools and content (and combinations, remixes and mashups thereof) to facilitate the kinds of interaction, communication and collaboration they need in the learning process. By doing so, we can begin to create the kinds of interconnections between content and individual learners that might actually approximate the personal, individualized “tutor.” However, instead of that tutor appearing in the form of an individual human being or in the form of a virtual AI tutor, the tutor will be the crowd.

While LMS providers are making laudable efforts to incrementally make their tools more social, open, modular, and interoperable, they remain embedded in the classroom paradigm. The paradigm—not the technology—is the problem. We need to build, bootstrap, cobble together, implement, support, and leverage something that is much more open and loosely structured such that learners can connect with other learners (sometimes called teachers) and content as they engage in the authentic behaviors, activities and work of learning.

Building a better, more feature-rich LMS won’t close the 2-sigma gap. We need to utilize technology to better connect people, content, and learning communities to facilitate authentic, personal, individualized learning. What are we waiting for?

A “Triggering” Opportunity?

April 16th, 2009 · jonmott

In 1997, Peter Ewell summarized “what we know” about institutional change:

  1. Change requires a fundamental shift of perspective. 
  2. Change must be systemic.  
  3. Change requires people to relearn their own roles.
  4. Change requires conscious and consistent leadership. 
  5. Change requires systematic ways to measure progress and guide improvement. 
  6. Change requires a visible “triggering” opportunity.

Of late I’ve spent a good deal of time wondering about how to bring about items 1-5. My thinking the past few days, however, has returned to my boss’s maxim about crises being opportunities for significant improvement. For better or worse, we’re in the middle of the worst economic downturn since the Great Depression. We have a “triggering” opportunity the likes of which we may never see again in our lifetimes as educational technologists. How can we leverage our current situation to do things we might never have the opportunity to do again? A few suggestions, each paired with one of Ewell’s first five dimensions of institutional change:

  1. Current fiscal constraints and new accreditation requirements can be leveraged to force a fundamental shift in perspective. Our fundamental responsibility is to provide as much value as possible to our students with whatever resources we have. If our budgets are tight and we’re under-staffed, we have to be creative and figure out new ways to be even more effective than we have been in the past. Our perspective should change from a culture of entitlement to one of stewardship and accountability for student learning.
  2. Whatever our role in the academy, we can all identify and share effective practices being employed around the world to make learning more effective even in the face of resource constraints. Systemic change doesn’t have to be–and in most cases probably shouldn’t be–top-down. We can make systemic change by working together and sharing ideas with each other, both within and outside our institutions.
  3. As painful as it might be to work at an under-staffed institution, this can be a golden opportunity to rethink who does what and why in the learning process. Maybe we need an administrative assistant to support high-enrolling courses more than we need a full-time department secretary. That work might be more effectively passed on to students. And maybe we rethink how we use tools and technologies to build learning communities rather than to simply disseminate information. This list could go on. You get the idea.
  4. As implied in #2, leadership doesn’t always have to come from the top. We can all lead by example, by engaging others in thoughtful dialogue about our circumstances and challenges. But we should also take wise advantage of opportunities to engage in these discussions with academic leaders on our campuses. They are perhaps more open to these sorts of conversations than they ever have been or ever will be again. We need to find ways to help them solve their problems that also lead to the kinds of dramatic improvements in teaching and learning we’re all committed to.
  5. Finally, we have to be brutally self-honest, introspective and transparent about what we do, how we do it, why we do it (remember to begin with the end in mind!), and how we will measure success. If we propose a new approach or a new technology to address a teaching & learning challenge, we better be prepared to measure the impact of our innovation and be accountable for whether or not it worked. Some of what we try will be successful and some of what we try will not. We need to be explicit about this reality and its implications from beginning to middle to end.

We are in difficult times. It behooves us as would-be agents of change to take advantage of this once-in-a-lifetime “triggering” opportunity to do some things that are truly innovative, revolutionary and transformational.

I don’t know about you, but I have work to do . . .

An Open (Institutional) Learning Network

April 9th, 2009 · jonmott

I’ve been noodling on the architecture of an open learning network for some time now. I’m making a presentation to my boss today on the subject and I think I have something worth sharing. (Nothing like a high-profile presentation to force some clarity of thought.)

I wrote a post last year exploring the spider-starfish tension between Personal Learning Environments and institutionally run CMSs. This is a fundamental challenge that institutions of higher learning need to resolve. On the one hand, we should promote open, flexible, learner-centric activities and tools that support them. On the other hand, legal, ethical and business constraints prevent us from opening up student information systems, online assessment tools, and online gradebooks. These tools have to be secure and, at least from a data management and integration perspective, proprietary.

So what would an open learning network look like if facilitated and orchestrated by an institution? Is it possible to create a hybrid spider-starfish learning environment for faculty and students?

The diagram below is my effort to conceptualize an “open (institutional) learning network.”

Open Learning Network 2.0

There are components of an open learning network that can and should live in the cloud:

  • Personal publishing tools (blogs, personal websites, wikis)
  • Social networking apps
  • Open content
  • Student generated content

Some tools might straddle the boundary between the institution and the cloud, e.g. portfolios, collaboration tools and websites with course & learning activity content.

Other tools and data belong squarely within the university network:

  • Student Information Systems
  • Secure assessment tools (e.g., online quiz & test applications)
  • Institutional gradebook (for secure communication about scores, grades & feedback)
  • Licensed and/or proprietary institutional content

An additional piece I’ve added to the framework within the university network is a “student identity repository.” Virtually every institution has a database of students with contact information, class standing, major, grades, etc. To facilitate the relationships between students and teachers, students and students, and students and content, universities need to provide students the ability to input additional information about themselves into the institutional repository, such as:

  • URLs & RSS feeds for anything and everything the student wants to share with the learning community
  • Social networking usernames (probably on an opt-in basis)
  • Portfolio URLs (particularly to simplify program assessment activities)
  • Assignment & artifact links (provided and used most frequently via the gradebook interface)
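
A minimal sketch of what one record in such a repository might hold (the field names here are my own illustration, not an actual BYU schema):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class StudentIdentityRecord:
    """One student's entry: institutional data plus student-supplied links."""
    # Managed by the institution (from the student information system)
    student_id: str
    name: str
    major: str
    class_standing: str
    # Supplied by the student, mostly on an opt-in basis
    feed_urls: List[str] = field(default_factory=list)              # blogs, wikis, personal sites
    social_usernames: Dict[str, str] = field(default_factory=dict)  # e.g., {"twitter": "..."}
    portfolio_url: str = ""
    artifact_links: Dict[str, str] = field(default_factory=dict)    # assignment -> artifact URL
```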

Integrating these technologies assumes:

  • Web services compatibility to exchange data between systems and easily redisplay content as is or mashed-up via alternate interfaces
  • RSS everywhere to aggregate content in a variety of places
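
As a sketch of how the “RSS everywhere” assumption might play out, here is a minimal aggregator (using the third-party feedparser library) that pulls the latest items from whatever feeds students have chosen to share and folds them into a single course-wide activity stream. The URLs and structure are illustrative only.

```python
import feedparser  # third-party: pip install feedparser

def aggregate_course_stream(shared_feeds, items_per_feed=5):
    """Build one course-wide activity stream from student-shared RSS feeds.

    shared_feeds: dict mapping a student's name to a list of feed URLs
    (e.g., pulled from the identity repository sketched above).
    """
    stream = []
    for student, urls in shared_feeds.items():
        for url in urls:
            feed = feedparser.parse(url)  # fetch and parse the feed
            for entry in feed.entries[:items_per_feed]:
                stream.append({
                    "student": student,
                    "title": entry.get("title", ""),
                    "link": entry.get("link", ""),
                })
    return stream

# Example: two students sharing their blog feeds with the learning community.
stream = aggregate_course_stream({
    "Alice": ["https://example.edu/alice/blog/feed"],
    "Bob": ["https://example.org/bob/feed.xml"],
})
```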

As noted in previous posts, we’re in the process of building a stand-alone gradebook app that is consistent with this framework. We’re now deciding which tools come next and whether we build them or leverage cloud apps. After a related and thought-provoking conversation with Andy Gibbons today, I’m also contemplating the “learning conversation” layer of the OLN and how it should be architected, orchestrated and presented to teachers and learners . . .

While there’s still a lot of work to do, this feels like we’re getting closer to something real and doable. Thoughts?

Innovation with a Purpose

March 7th, 2009 · jonmott

George Siemens recently blogged about a model for evaluating & implementing technology. He dubs it the “IRIS” model, flowing from Innovation, to Research, to Implementation, to Systematization. His point is that we need to think about technology differently at each stage of the process:

When we encounter a new tool or a new concept, we are experiencing technology at the innovation level. We’re focused on “what is possible”, not what can be implemented. We’re more concerned about how a new idea/tool/process differs from existing practices. After we’ve had the joy of a shift in thinking and perspective about what is possible, we begin to research and implement. This is a cyclical process. Attention is paid to “how does it work” and “what is the real world impact”. At this level, our goal is to see how our new (innovative) views align with current reality. If a huge disconnect exists, reform mode kicks in and we attempt to alter the system. Most often, that’s a long process. I’m not focused on that option here. I’m making the assumption that many tools can be implemented within the existing system. Finally, once we’ve experimented with options and we have a sense of what works in our organization, we begin the process of systematizing the innovation

I think this is a great model that can guide our level of rigor and attention to detail at each phase of technology emergence. As I noted in a comment to his post, I think an additional dimension to the model ought to be considered at the innovation phase. Not only should we ask “What is possible?” but “Why would we want to do what this new technology makes possible?” Given the enormous amount of time, resources and political capital required to move through the next three phases so elegantly summarized in George’s model, I’m increasingly inclined to spend more time on this question when evaluating new technology.

While I’m sensitive to George’s concern that too much focus on the “why” question might throw a wet blanket on creativity and discovery during the innovation stage, I’m still inclined to ask the question as early as possible. Unless you have a budget set aside purely for research and development (something that seems increasingly unlikely in the current economic situation), it seems prudent to justify even the most innovative technology investigations in terms of the value they might add to teaching & learning. So I’m inclined to ask the “why” question early and often. How would the world be a better place with new technology x, y, or z? Maybe my penchant for asking this question is driven by where I sit at my institution (with responsibility for broad, campus-wide technology implementations). But in the end, I think academic technologists of all stripes have to strike the right balance between wide-open, blue-sky innovations and explorations and the more mundane work of aligning resources with priorities and demonstrating the value of our technology investments.

I guess I’m in favor of innovation with a purpose. Is that too restrictive?

Technology is Still the Bogeyman

February 13th, 2009 · jonmott

In a recent Science Daily article, Patricia Greenfield’s research on the impact of technology on learning was summarized under the banner “Is Technology Producing A Decline In Critical Thinking And Analysis?” The piece depicts Greenfield’s research as casting technology in a decidedly negative light when it comes to facilitating the development of critical thinking and analysis skills. The original research report was published in Science (2 January 2009, Vol. 323, No. 5910, pp. 69–71), which you may or may not be able to access (depending on rights provided by an educational institution or library with which you are affiliated).

There was an energetic discussion (provoked by Clay Burell’s blog post) about both the Science Daily article and the original research over at http://education.change.org. I won’t rehash in detail here whether or not the Science Daily write-up was a faithful interpretation of the underlying research. (For the record, I agree that Clay would have been on firmer ground had he read the original Science piece before writing his critique. But I’m also sympathetic to some of the comments about access to Science being limited.) In any case, it struck me that there are still a lot of folks out there who want to make technology the bogeyman. The Science Daily headline clearly implied that technology was the reason behind declining critical thinking skills. When laptops don’t work in the classroom, it must be that the technology wasn’t appropriate for the classroom. When wireless network initiatives result in distracted students during lectures, we blame the wireless technology. When kids don’t read for pleasure, we blame technology too. If they just weren’t so distracted by video games, cell phones, mp3 players, and social networking sites, maybe they’d read more and think deeper thoughts. Again, we blame the technology, and not the environment in which kids are educated and nurtured.

If you’re even an occasional reader of my ramblings here, you’re probably anticipating the soapbox I’m about to climb up on. Wait for it . . . Here it comes . . .

Technology can’t do anything by itself!

Neither can money! Or cars for that matter! These are all things that humans use for good or for ill. Technology doesn’t “produce” anything! It is the ways we use technology or the ways it is implemented that produce particular kinds of results. As I said in my response to Clay’s post, you get what you design. If people design boring, process-driven, mindless “learning” experiences for students (with or without technology!), they shouldn’t be surprised that students hate it (or quickly tune out when there’s something more interesting to do, like browsing the web).

This is all simply a design issue and virtually every technology at our disposal is a design tool. If we want critical thinking we should design learning activities that promote and assessments that gauge critical thinking, using the appropriate mix of technologies. Technology can’t do anything by itself, but it seems to be an increasingly convenient bogeyman for people who don’t want to do the hard work of improving learning design and reforming education so learning–and not teaching–is the center of it all.

Tool & Content Malleability

January 28th, 2009 · jonmott

I’ve recently finished an article with Mike Bush (a colleague here at BYU) in which we coin what we believe to be a new term in the standards debate–“content & tool malleability.” Our piece is modestly titled “The Transformation of Learning with Technology: Learner-Centricity, Content and Tool Malleability, and Network Effects” and will appear in the March-April edition of Educational Technology Magazine. A couple of months after it’s published, I’ll be able to post the article in its entirety here. For now, I want to provide a preview of our notion of malleability. We suggest that malleability has three key attributes (openness, modularity, and interoperability) and that teaching and learning tools and content must become more malleable if they are to become authentically reusable, remixable, redistributable, repurposable, etc.

We cite IBM’s ultimately successful implementation of the principles of modularity and interoperability, which enabled the PC-maker to call on outside vendors for parts for their machines, creating an essentially malleable computing platform:

Their rejection of proprietary technology in favor of openness created the opportunity for IBM to call on Microsoft to develop the operating system and for a host of other companies (including Microsoft!) to go on to create thousands upon thousands of software applications, guaranteeing the long-term success of IBM’s initial design. Furthermore, competing companies that chose a proprietary and closed approach for their hardware, software, or both (e.g., Texas Instruments, Amiga, Atari, Commodore, and Radio Shack) are nowhere to be found among twenty-first-century personal computers. Even Apple, with the initial version of their innovative Macintosh, came close to meeting disaster until they opened things up with their Macintosh II (Bush, 1996). In the end, the nature of IBM’s approach not only ensured success in their initial venture, but the continued application of the same principles over the years by IBM’s successors also makes it possible for today’s machines to run much of the same software that was created for the original IBM PC.

Among the principles of openness, modularity, and interoperability that brought success to the IBM-PC venture, the importance of modularity seems perhaps preeminent and has been documented in detail by scholars at the Harvard Business School (Baldwin & Clark, 2000). In their initial work, they analyzed how modularity evolved as a set of design principles during the period between 1944 and 1960. Then using Holland’s theory of complex adaptive systems as a theoretical foundation, they explain how the design principles they identified went on to radically transform the information technology industry from the 1960s through the end of the century. They show how modular design and design processes have fostered change in the industry as it moved from one consisting of a few dozen companies and dominated by IBM to one that involves over a thousand companies and in which IBM plays a significantly lesser role. For example, the “packaged software” sector in the information technology industry consisted of about seven firms in 1970 that were valued at just over $1 billion (as measured in constant 2002 dollars). Thirty-two years later that sector had grown to 408 companies with a market capitalization of $490 billion (Baldwin & Clark, 2006).

Unfortunately, the application of the principles that made such developments possible in the computer industry is rare to nonexistent in many areas of education today. The education technology landscape is best characterized by monolithic, enterprise technology silos with rigid, often impenetrable walls. Course management systems (CMSs), for example, are generally “all-or-nothing” propositions for institutions, teachers, and students. That is, even if you use an open source CMS like Moodle, you are (without significant customization) bound to use Moodle’s content publishing tool, Moodle’s quiz tool, Moodle’s gradebook, etc. Moreover, the CMS paradigm itself, tied as it is to semester calendars and time-bounded learning experiences (courses), severely limits learning continuity and persistence. Teachers and students are not free to choose the right / best / preferred tool for each teaching or learning activity they undertake, thus creating a technology paradigm that artificially limits possibilities and forecloses optimal teaching and learning choices.

The monolithic and rigid nature of today’s learning tools and content mirrors the way content has traditionally been made available to faculty and students—books and other resources (including online courses) have generally been all-or-nothing, take-them-or-leave them propositions. A similar business model was prevalent in pre-Internet days, resulting in CD-ROM databases that were more expensive than many potential consumers could afford. One analysis compared this marketing approach to a public water distribution system that would require selling the whole reservoir to each household rather than placing a meter at individual homes.

New approaches to content distribution, however, particularly the OpenCourseWare (OCW) and Open Educational Resource (OER) movements, promise to make a vast array of content open to instructors and students to reuse, revise, remix, and redistribute. The OCW Consortium, beginning with MIT in 2002, has now grown to include hundreds of institutions around the world that have chosen to place course materials online. The efforts of these institutions have spawned a related effort, dubbed Open Educational Resources (OER), to make learning materials and content (as opposed to complete courses) freely available as well (Breck, 2007). Around the world, millions of people, inside and outside of academia, are publishing content under Creative Commons licensing, making that content open for others to use in a variety of ways. We are rapidly approaching the tipping point at which a critical mass of participants in open content and open learning is sufficient to exponentially increase the value of each additional participant in the network (as described in the next section).

The stunning reality of the new standard of openness is that it is quite simple. The key is to create lots and lots of open content and provide open, easy access to it. While technical standards and specifications, such as the Shareable Content Object Reference Model (SCORM), are important when it comes to producing, indexing, discovering, sequencing, packaging, and tracking content, openness by itself is a paradigm-shifting approach in the teaching and learning world.

The fact that content is openly available and usable is just as important as any particular technical feature of that content. While openness stands by itself as a radical new innovation, we need to avoid the temptation to downplay the importance of standards and specifications, for they are essential to the realization of the vision of open, modular, and interoperable learning environments.

This reality is not without historical precedent. Printing became affordable and available in large part due to what we today call standards. Indeed, as one scholar declared, “This then—the standardization and rapid multiplication of texts—was what the Fifteenth Century invention of printing made possible” (Bühler, 1952). Bühler also pointed out that printing’s contributions went beyond the replication issue, stating that modern scholarship only became possible with the production of identical copies of texts. Although the value of mass duplication is not to be discounted, the fact that scholars could reference each other’s work represented enormous value. Given this standardization, they were thus able to criticize, comment upon, connect to, and build upon what had come before. In many ways, printing standards facilitated the first widespread appearance of mashups in human history. The existence of identical copies was but one characteristic that facilitated the eventual widespread availability of books. In addition, several other factors contributed to the production process itself, eventually increasing the opportunity for wider distribution.

…

Although SCORM is not perfect, it at least began to address the issue of establishing a framework within which learning content can be made to interoperate in a variety of settings. Just as SIF opens up the opportunity for reuse of information created and used by various operational elements of schools, SCORM still holds the promise to facilitate the sharing of learning content, not only across learning management systems but also across tools that facilitate the design and development of learning content. In addition, common authentication schemes (e.g., OpenID) built upon Web services interoperability will ultimately allow learners to seamlessly navigate multiple Web-based teaching and learning applications, opening up possibilities for personal learning environments in which multiple sources of content and experiences work together to help students learn in ways that are tailored to each individual.

With developments like SCORM 2.0 on the horizon, as well as increasingly powerful software, hardware, and networking tools, technological barriers are falling. The challenge now is to harness these new enabling technologies to create more open, modular, and interoperable learning content as well as production and learning tools that are each malleable with respect to their individual functionality. Together, these technologies will help further the transformation of education from a teaching-oriented enterprise to a learning-centered one.
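
To make the idea of tool malleability a bit more concrete, here is a toy sketch of my own (not from the article): if course code is written against a small, shared gradebook contract, an institution-hosted tool and a cloud-hosted tool become interchangeable modules rather than parts of a single monolithic silo. The class and method names are purely illustrative.

```python
from abc import ABC, abstractmethod

class Gradebook(ABC):
    """A minimal tool contract: any conforming gradebook can be swapped in."""

    @abstractmethod
    def record_score(self, student_id: str, item: str, score: float) -> None: ...

    @abstractmethod
    def get_scores(self, student_id: str) -> dict: ...

class InstitutionalGradebook(Gradebook):
    """Campus-hosted implementation, kept behind the secure university network."""

    def __init__(self):
        self._scores = {}

    def record_score(self, student_id, item, score):
        self._scores.setdefault(student_id, {})[item] = score

    def get_scores(self, student_id):
        return dict(self._scores.get(student_id, {}))

class CloudGradebookAdapter(Gradebook):
    """Adapter for a hypothetical cloud service exposed via a web-services API."""

    def __init__(self, client):
        self.client = client  # e.g., a REST client for the external service

    def record_score(self, student_id, item, score):
        self.client.post("scores", {"student": student_id, "item": item, "score": score})

    def get_scores(self, student_id):
        return self.client.get(f"scores/{student_id}")

def post_quiz_results(gradebook: Gradebook, item: str, results: dict) -> None:
    """Course code depends only on the contract, not on any particular vendor."""
    for student_id, score in results.items():
        gradebook.record_score(student_id, item, score)

# Either implementation can be plugged in without touching the course code.
post_quiz_results(InstitutionalGradebook(), "quiz-1", {"12345": 92.0, "67890": 88.5})
```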

As noted, the entire piece will be available soon. We hope this notion of malleability helps move the conversation forward. What are your thoughts? Are we on the right track?

References:

Baldwin, C. Y., & Clark, K. B. (2000). Design rules, volume 1: The power of modularity. Cambridge, MA: MIT Press.

Baldwin, C. Y., & Clark, K. B. (2006). Modularity in the design of complex engineering systems. In D. Braha, A. A. Minai, & Y. Bar-Yam (Eds.), Complex engineered systems: Science meets technology (pp. 175–205). New York: Springer.

Breck, J. (2007, Nov./Dec.). Introduction to special issue on opening educational resources. Educational Technology, 47(6), 3–5.

Bühler, C. F. (1952). Fifteenth century books and the twentieth century. New York: The Grolier Club.

Bush, M. D. (1996, Nov.). Fear & loathing in cyberspace: Of heroes and villains in the information age. Multimedia Monitor; http://arclite.byu.edu/digital/heroesa5.html.