Archive

Posts Tagged ‘LMS’

Jim Groom is Watching Me

February 12th, 2010 jonmott

Jim Groom, aka Rorschach, is watching me. He apparently took umbrage at my ELI presentation in which I–very much tongue-in-cheek–suggested that he and Michael Chasen could live together in harmony, perhaps even sitting down to sing Kumbaya.

Jim appears to be concerned that I’m advocating a “middle-of-the-road” approach that validates the LMS paradigm. Lest anyone else be confused, let me state that nothing could be further from the truth. If you listen to my entire presentation, I hope it’s clear that I’m not advocating the perpetuation of the single, vertical, integrated technology stack that is the LMS. Rather, the AND that I’m really advocating is the blending of the secure, university network for private, proprietary data (e.g., student records) and the open, read-write Web.

As David Wiley and I recently argued, the “open learning network” model is “revolutionary primarily in its refusal to be radical in either direction.” There is value in both the LMS and PLE paradigms. However, blending the best aspects of both does not mean keeping either or both in their current forms. It means leveraging the best of each and mashing them up into something completely new and different. By doing so we can create a learning network that is both private AND public, secure AND open, reliable AND flexible, integrated AND modular, and that is supportive of both teachers AND learners.

Loosely Coupled Gradebook Presentation @ TTIX 2009

The slides for my presentation at TTIX 2009 about BYU’s loosely coupled gradebook project.

And a link to the UStream capture of the presentation.

I’ve Seen the Future and the Future is Us (Using Google)

The past couple of weeks have been full of new technology announcements. Three in particular are notable because of the splash they received as “pre-releases” and how different one of them is from the other two. I’ll readily admit that in these observations I have a particular bias, or at least a very narrow focus–I’m looking at the potential of these new technologies to transform and dramatically improve learning. By my count, one of the releases has the potential to do so. The other two? Not so much.

First, WolframAlpha launched amid buzz that it was the “Google killer.” It promised to revolutionize search by beginning to deliver on the much-awaited “semantic web.” In their own words, Wolfram’s objective is “to make all systematic knowledge immediately computable and accessible to everyone.” This ambitious vision is intended to make better sense of the data accessible via the web to help us answer questions, solve problems, and make decisions more effectively. In the end, though, the tool is in reality “a variation on the Semantic Web vision . . . more like a giant closed database than a distributed Web of data.” It’s still early to be drawing final conclusions, but Google is not dead. Life goes on much the same as it did before. And, most importantly from my perspective, it does not appear that learning will be dramatically transformed, improved, or even impacted by the release of WolframAlpha. (BTW, I loved @courosa’s observation that “the Wolfram launch was like that group that claimed they were going to reveal evidence of UFOs a few yrs back.” After the hoopla died down, there wasn’t all that much “there” there.)

The second big product announcement, and the month’s next contestant for “Google Killer” status, was Microsoft’s Bing, Redmond’s latest attempt to reclaim its Web relevance. Microsoft is spinning Bing as a “decision engine,” also aimed at dramatically improving search. Beginning with the assertion that “nearly half of all searches don’t result in the answer that people are seeking,” the Bing team promises to deliver a tool that will yield “knowledge that leads to action” and help you “make smarter, faster decisions.” (By the way, is it still 2009? A product release website with text embedded in Silverlight that I can’t copy & paste? Really?) Again, it’s early–Bing isn’t even publicly available yet–but even if MS delivers on its lofty claims, this sort of technology doesn’t seem poised to transform learning.

Maybe I’m missing something, but better search doesn’t seem to be our biggest barrier to dramatically improved learning. And both WolframAlpha and Bing are coming at the problem of data overload with better search algorithms, better data processing tools, and intelligent data sorting / presentation tools. I think all of this is great. But neither of these approaches touches the fundamental, core activity of learning–making sense out of the world’s complexities in communities of learners. In my “Post-LMS Manifesto” earlier this month, I observed:

Technology alone cannot save us or help us solve our most daunting societal problems. Only we, as human beings working together, can do that. And while many still long for the emergence of the virtual Aristotle, I do not. For I believe that learning is a fundamentally human endeavor that requires personal interaction and communication, person to person. We can extend, expand, enhance, magnify, and amplify the reach and effectiveness of human interaction with technology and communication tools, but the underlying reality is that real people must converse with each other in the process of “becoming.”

Having seen WolframAlpha and Bing, I’m even more firm in this belief. Advanced, improved, more sophisticated search and data sorting technology is much needed and wanted. I’ll be the first in line to use the search engine that proves itself more effective than Google Search. But, as the MLIS students at UBC understand, “without the person, the information is just data.”

A People Problem, Not a Data Problem

Enter Google Wave. In striking contrast to the Wolfram & MS attempts to surpass Google’s search predominance, Google itself announced that it had reinvented e-mail. It’s no coincidence that the reigning king of search hasn’t been spending all of its time and resources on reinventing search (although I don’t doubt that the Google brain trust is spending at least a few cycles doing that). Google has instead focused substantial energy on improving the tools we use to collaborate and communicate around data and content.

In its Bing press release, MS notes that 4.5 new websites are created every second. Two years ago, Michael Wesch noted that the world was on pace to create 40 exabytes (40 billion gigabytes) of new data. And the rate of data creation is only accelerating. More recently, Andreas Weigend contends that “in 2009, more data will be generated by individuals than in the entire history of mankind through 2008.”

On the surface, this might seem less substantive than improved data search and analysis tools and, therefore, less relevant to the business of learning. But dealing with data overload and making sense out of it all is a fundamentally human problem. Again, we can extend, expand, enhance, magnify, and amplify the reach and effectiveness of our access to and analysis of data, but making sense out of it all requires individuals, groups and crowds to have conversations about the origins, interpretations and meanings of that data. That is the essence of being human. We can outsource our memories to Google, but we cannot (should not!) outsource our judgment, critical analysis, and interpretive capacities to any mechanical system.

The Future of Learning and Learning Technology

<melodrama>I’ve seen the future. And the future is us.</melodrama> As we use–and even more importantly appropriate, adapt, and repurpose–tools like Google Wave, we can leverage technology to preserve and enhance that which is most fundamentally human about ourselves. I appreciated Luke’s reminder that teachers and learners should “take ownership of online teaching and learning tools” and, accordingly, “not be shy about reminding our users of their responsibilities, and our users shouldn’t be shy about asking for help, clarification, or if something is possible.” This is precisely what many users of Twitter have done. My startling realization about my Twitter activity is that it has become an indispensable component of my daily learning routine. It’s become a social learning tool for me, giving me access to people and content in a way I never imagined.

Based on an hour-and-20-minute-long video, Google Wave appears poised to dramatically improve on the Twitter model. Accordingly, the possibilities for enhanced interactions between learners are encouraging. And the ripples of the Wave (sorry, couldn’t resist) have profound implications. With Wave, entire learning conversations are captured and shared with dynamic communities of learners. Lars Rasmussen (co-creator of Wave) noted: “We think of the entire conversation object as being a shared object hosted on a server somewhere” (starting at about 6:22 into the presentation). The ability for late-joining participants to “playback” the conversation and get caught up is particularly intriguing. Elsewhere:

  • Jim Groom asserts that Google Wave will make the LMS “all but irrelevant by re-imagining email and integrating just about every functionality you could possibly need to communicate and manage a series of course conversations through an application as familiar and intimate as email.”
  • David Wiley wonders if Wave might “completely transform the way we teach and learn.”
  • Tim O’Reilly observes that the emergence of Wave has created a “kind of DOS/Windows divide in the era of cloud applications. Suddenly, familiar applications look as old-fashioned as DOS applications looked as the GUI era took flight. Now that the web is the platform, it’s time to take another look at every application we use today.”

All of this continues to point to the demise of the LMS as we know it. However, I agree with Joshua Kim’s observation that the LMS’s “future needs to be different from its past.” As he notes, he’s anxious to use Wave for group projects, but he wants his course rosters pre-loaded and otherwise integrated with institutional systems. This is, as I have previously noted, the most likely evolutionary path for learning technology environments–a hybrid between open, flexible cloud-based tools like Wave and institutionally managed systems that provide student data integration and keep assessment data secure. And this is bound to look a lot more like an open learning network than a traditional course management system.

As we adopt and adapt tools like Twitter and Google Wave to our purposes as learning technologists, we have to change the way we think about facilitating, rather than managing, learning conversations. We can no longer be satisfied with creating easy-to-manage course websites that live inside moated castles. We have to open up the learning process and experience to leverage the vastness of the data available to us and the power of the crowd, all the while remembering that learning is fundamentally about individuals conversing with each other about the meaning and value of the data they encounter and create. Technologies like Google Wave are important, not in and of themselves, but precisely because they force us to remember this reality and realign our priorities and processes to match it.

I’ve seen the future of learning technology, and the future is us.

A Post-LMS Manifesto

In the wake of the announcement of Blackboard’s acquisition of ANGEL, the blogosphere has been buzzing about Learning Management Systems (LMSs) and their future (or lack thereof). The announcement came at an interesting time for me. A BYU colleague (Mike Bush) and I recently published a piece in Educational Technology Magazine with the unassuming title “The Transformation of Learning with Technology.” (If you read this article, you’ll recognize that much of my thinking in this post is influenced by my work with Mike.) I’ve also been working on a strategy document to guide our LMS and LMS-related decisions and resource allocations here at BYU.

These ongoing efforts and my thoughts over the last twenty-four hours about the Bb-ANGEL announcement have come together in the form of a “post-LMS manifesto” (if I dare use such a grandiose term for a blog post). In the press release about Blackboard’s acquisition of ANGEL, Michael Chasen asserted that the move would “accelerate the pace of innovation and interoperability in e-learning.” As a Blackboard client, I certainly hope that’s true. However, more product innovation and interoperability, while desirable, aren’t going to make Blackboard fundamentally different from what it is today—a “learning management system” or “LMS.” And that worries me because I continue to have serious concerns about the future of the LMS paradigm itself, a paradigm that I have critiqued extensively on this blog.

Learning and Human Improvement

Learning is fundamentally about human improvement. Students flock to college and university campuses because they want to become something they are not. That “something” they want to become ranges from the loftiest of intellectual ideals to the most practical and worldly goals of the marketplace. For those of us who work in academe, our duty and responsibility is to do right by those who invest their time, their energy, and their futures in us and our institutions. It is our job to help them become what they came to us to become—people who are demonstrably, qualitatively, and practically different from the individuals they were before.

Technology has been and always will be an integral part of what we do to help our students “become.” But helping someone improve, to become a better, more skilled, more knowledgeable, more confident person is not fundamentally a technology problem. It’s a people problem. Or rather, it’s a people opportunity. Philosophers and scholars have wrestled with the challenge and even the paradox of education and learning for centuries. In ancient Greece, Plato formulated what we have come to call “Meno’s paradox” in an attempt to get at the underlying difficulties associated with teaching someone a truth they do not already know. The solution in that age was to pair each student with an informed tutor—as Alexander the Great was paired with Aristotle—to guide the learner through the stages of progressive enlightenment and understanding.

More than two millennia later, United States President James Garfield underscored the staying power of this one-to-one approach: “Give me a log hut, with only a simple bench, Mark Hopkins [a well-known educator and lecturer of the day] on one end and I on the other, and you may have all the buildings, apparatus, and libraries without him.” I suppose President Garfield, were he alive today, would include LMSs and other educational technology on the list of things he would give up in favor of a skilled, private tutor.

The problem with one-to-one instruction is that it simply doesn’t scale. Historically, there simply haven’t been enough tutors to go around if our goal is to educate the masses, to help every learner “become.” Another century later, Benjamin Bloom formalized this dilemma, dubbing it the “2 Sigma Problem.” Through experimental investigation, Bloom found that “the average student under tutoring was about two standard deviations above the average” of students who studied in a traditional classroom setting with 30 other students (“The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring,” Educational Researcher, 13(6), 4-16). Notwithstanding this enormous gap, Bloom was optimistic that continued focus on mastery learning would allow us to eventually narrow the distance between individually-tutored and group-instructed students.
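Bloom's finding can be made concrete with a quick calculation. Assuming roughly normal score distributions, a student scoring two standard deviations above the classroom mean outperforms about 98% of classroom-instructed students. The sketch below uses Python's standard library to compute the exact percentile:

```python
from statistics import NormalDist

# Bloom's "2 sigma" result: the average tutored student scored about two
# standard deviations above the mean of the conventional classroom group.
# Under a normal distribution, the cumulative probability at z = 2.0 tells
# us what fraction of classroom students that average tutored student beats.
percentile = NormalDist().cdf(2.0)
print(f"A 2-sigma student outperforms about {percentile:.1%} of classroom peers")
```

This is just the standard normal CDF at z = 2.0 (about 0.977), which is where the familiar "98th percentile" shorthand for Bloom's result comes from.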

Moving Beyond the LMS

There is, at its very core, a problem with the LMS paradigm. The “M” in “LMS” stands for “management.” This is not insignificant. The word heavily implies that the provider of the LMS, the educational institution, is “managing” student learning. Since the dawn of public education and the praiseworthy societal undertaking to “educate the masses,” management has become an integral part of learning. And this is exactly what we have designed and used LMSs to do—to manage the flow of students through traditional, semester-based courses more efficiently than ever before. The LMS has done exactly what we hired it to do: it has reinforced, facilitated, and perpetuated the traditional classroom model, the same model that Bloom found woefully less effective than one-on-one learning.

For decades, we’ve been told that technology is (or soon would be) capable of replicating the role of a private, individual tutor, of providing a “virtual Aristotle” for each individual learner. But after the billions of dollars we’ve spent on educational technology, we’re nowhere near such an achievement. In fact, we can’t even say that we’ve improved learning at all! (See Larry Cuban’s Oversold & Underused for an excellent, in-depth treatment of this subject). And our continued investment of billions of dollars in the LMS is unlikely to get us any closer to our learning improvement goals either. Because the LMS is primarily a traditional classroom support tool, it is ill-suited to bridge the 2-sigma gap between classroom instruction and personal tutoring.

We shouldn’t be terribly surprised or disappointed that LMSs—or any other technology for that matter—have not revolutionized learning. As Michael Wesch and his students have so sagely reminded us, technology alone cannot save us or help us solve our most daunting societal problems. Only we, as human beings working together, can do that. And while many still long for the emergence of the virtual Aristotle, I do not. For I believe that learning is a fundamentally human endeavor that requires personal interaction and communication, person to person. We can extend, expand, enhance, magnify, and amplify the reach and effectiveness of human interaction with technology and communication tools, but the underlying reality is that real people must converse with each other in the process of “becoming.”

Crowdsourcing the Tutor

If we are to close the 2-sigma gap, we must leave behind the LMS and the artificial walls it builds around arbitrary groups of learners who have enrolled in sections of courses at our institutions. In the post-LMS world, we need to worry less about “managing” learners and focus more on helping them connect with other like-minded learners both inside and outside of our institutions. We need to foster in them greater personal accountability, responsibility and autonomy in their pursuit of learning in the broader community of learners. We need to use the communication tools available to us today and the tools that will be invented tomorrow to enable anytime, anywhere, any-scale learning conversations between our students and other learners. We need to enable teachers and learners to discover and use the right tools and content (and combinations, remixes and mashups thereof) to facilitate the kinds of interaction, communication and collaboration they need in the learning process. By doing so, we can begin to create the kinds of interconnections between content and individual learners that might actually approximate the personal, individualized “tutor.” However, instead of that tutor appearing in the form of an individual human being or in the form of a virtual AI tutor, the tutor will be the crowd.

While LMS providers are making laudable efforts to incrementally make their tools more social, open, modular, and interoperable, they remain embedded in the classroom paradigm. The paradigm—not the technology—is the problem. We need to build, bootstrap, cobble together, implement, support, and leverage something that is much more open and loosely structured such that learners can connect with other learners (sometimes called teachers) and content as they engage in the authentic behaviors, activities and work of learning.

Building a better, more feature-rich LMS won’t close the 2-sigma gap. We need to utilize technology to better connect people, content, and learning communities to facilitate authentic, personal, individualized learning. What are we waiting for?

An Open (Institutional) Learning Network

April 9th, 2009 jonmott

I’ve been noodling on the architecture of an open learning network for some time now. I’m making a presentation to my boss today on the subject and I think I have something worth sharing. (Nothing like a high-profile presentation to force some clarity of thought.)

I wrote a post last year exploring the spider-starfish tension between Personal Learning Environments and institutionally run CMSs. This is a fundamental challenge that institutions of higher learning need to resolve. On the one hand, we should promote open, flexible, learner-centric activities and tools that support them. On the other hand, legal, ethical and business constraints prevent us from opening up student information systems, online assessment tools, and online gradebooks. These tools have to be secure and, at least from a data management and integration perspective, proprietary.

So what would an open learning network look like if facilitated and orchestrated by an institution? Is it possible to create a hybrid spider-starfish learning environment for faculty and students?

The diagram below is my effort to conceptualize an “open (institutional) learning network.”

Open Learning Network 2.0

There are components of an open learning network that can and should live in the cloud:

  • Personal publishing tools (blogs, personal websites, wikis)
  • Social networking apps
  • Open content
  • Student generated content

Some tools might straddle the boundary between the institution and the cloud, e.g. portfolios, collaboration tools and websites with course & learning activity content.

Other tools and data belong squarely within the university network:

  • Student Information Systems
  • Secure assessment tools (e.g., online quiz & test applications)
  • Institutional gradebook (for secure communication about scores, grades & feedback)
  • Licensed and/or proprietary institutional content

An additional piece I’ve added to the framework within the university network is a “student identity repository.” Virtually every institution has a database of students with contact information, class standing, major, grades, etc. To facilitate the relationships between students and teachers, students and students, and students and content, universities need to provide students the ability to input additional information about themselves into the institutional repository, such as:

  • URLs & RSS feeds for anything and everything the student wants to share with the learning community
  • Social networking usernames (probably on an opt-in basis)
  • Portfolio URLs (particularly to simplify program assessment activities)
  • Assignment & artifact links (provided and used most frequently via the gradebook interface)
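To make the idea more concrete, here is a minimal sketch of what a record in such a student identity repository might look like. The field names are illustrative assumptions, not an actual institutional schema; the point is simply that institutionally managed data (identifier, standing, major) and student-supplied links (feeds, social usernames, portfolio) live side by side in one record:

```python
from dataclasses import dataclass, field

# Hypothetical "student identity repository" record combining core
# institutional data with the student-supplied links described above.
@dataclass
class StudentIdentity:
    net_id: str                     # institutional identifier
    class_standing: str             # e.g., "junior"
    major: str
    # Student-contributed, shareable data:
    feed_urls: list = field(default_factory=list)         # blogs, wikis, anything shared
    social_usernames: dict = field(default_factory=dict)  # opt-in, e.g., {"twitter": "jdoe"}
    portfolio_url: str = ""
    artifact_links: list = field(default_factory=list)    # assignment artifacts via gradebook

# A student adds a blog feed to share with the learning community:
student = StudentIdentity(net_id="jdoe", class_standing="junior", major="History")
student.feed_urls.append("https://jdoe.example.edu/blog/feed")
```

In practice the opt-in fields would be editable by the student while the institutional fields remain read-only, which is exactly the spider-starfish split described above.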

Integrating these technologies assumes:

  • Web services compatibility to exchange data between systems and easily redisplay content as is or mashed-up via alternate interfaces
  • RSS everywhere to aggregate content in a variety of places
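The "RSS everywhere" assumption can be sketched in a few lines: pull the `<item>` entries out of each student's feed and merge them into one course-wide stream. The feed below is an inline stand-in; in a real system each URL would come from the student identity repository and be fetched over HTTP, but the parsing and aggregation would look the same:

```python
import xml.etree.ElementTree as ET

# Stand-in for a student's RSS 2.0 feed (in practice, fetched from a URL
# stored in the student identity repository).
SAMPLE_FEED = """<rss version="2.0"><channel>
  <title>Student Blog</title>
  <item><title>Week 1 reflection</title><link>https://example.edu/p1</link></item>
  <item><title>Essay draft</title><link>https://example.edu/p2</link></item>
</channel></rss>"""

def items_from_feed(feed_xml):
    """Return (title, link) pairs for every item in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

# Aggregate all student feeds into one course-wide stream.
aggregated = []
for feed in [SAMPLE_FEED]:   # one entry per student feed
    aggregated.extend(items_from_feed(feed))
```

The web-services half of the integration story works the same way in reverse: content is exchanged as structured data so any interface (a course page, a portfolio, a mashup) can redisplay it.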

As noted in previous posts, we’re in the process of building a stand-alone gradebook app that is consistent with this framework. We’re now deciding which tools come next and whether we build them or leverage cloud apps. After a related and thought-provoking conversation with Andy Gibbons today, I’m also contemplating the “learning conversation” layer of the OLN and how it should be architected, orchestrated and presented to teachers and learners . . .

While there’s still a lot of work to do, this feels like we’re getting closer to something real and doable. Thoughts?

Getting from Here to There

July 11th, 2008 jonmott

Two interesting posts this week at e-Literate that dovetail nicely with my ruminations about “open learning networks.”

First, Nathan Garrett launches a critique of the “modern CMS” with a picture of a young man sitting watching a video monitor (looks to be circa 1965). Garrett asks: “Is this our modern course management system?” He bemoans the fact that CMSs are primarily about one-way information dissemination. Alternatively, he argues that we should encourage the use of social software, which promotes the ideals of student creation and ownership of content, peer learning, and public review of their work.

In another post, Glen Moriarty argues that today’s CMS/LMS falls short of its true potential because of a hesitancy to leverage the “Web 2.0 strengths of the Internet.” Moriarty is the CEO of Nixty where he intends to “create applications that intrinsically motivate people to learn and teach others.” Building on Google’s OpenSocial, OpenCourseWare and OpenID, he believes we can create an infrastructure which will “amplify learning for people and institutions around the globe.”

In my estimation, both critiques of the “modern” CMS and the proposals about where to go from here are right on the money. If we persist in simply automating what happens in the classroom (predominantly lecture and information dissemination), we’re not leveraging the power of the tools available to us. (As an aside, wouldn’t it be great if you could authenticate once into your institutional learning environment and be simultaneously logged in to Google, your blog, etc.? Or vice versa?)

But how do we convince others to change? That change is even necessary? How do we encourage administrators, faculty and students to make the kinds of changes, small and large, that will move us toward these ideals?

The challenge before us is a social and cultural one, not a technical one. As observers like Garrett and Moriarty rightly point out, we already have the technology before us to facilitate better learning. So why don’t we use it more and more effectively?

If you’ve read my previous posts (or even the title of my blog site), you’ve probably gathered that my philosophy of learning technology is more focused on learning than it is on technology. By this view, it’s actually backwards to start the conversation by talking about technology. In fact, with many of our colleagues we should avoid talking about technology (especially specific technologies) as much as possible, particularly at the outset. We should begin by talking about what we want students to be, to become, and to be able to do.

Do we want students to be more literate? More capable of expressing themselves cogently and persuasively, using a variety of media? We’d be hard-pressed to find anyone in higher education who would answer “No” to any of these questions.

Do we want students to feel more confident creating their own content, be that content text, graphics, animation, video, or whatever? Do we want them to learn the value of testing their ideas (their “content”) in the marketplace of ideas, seeking and responding to others’ thoughtful responses to what they’ve created? Again, the answer to these questions is an emphatic “Yes!”

I concur with Garrett and Moriarty that Web 2.0 technologies and social software can be used to significantly transform and improve learning. But not everyone sees (or even sees the need for) such a future. As technology thought leaders in the academic community, we bear the responsibility of bringing others along, helping them see the proverbial light. As the old saw goes, “You catch more flies with honey than with vinegar.” As enamored as we can sometimes become with technology, the real “honey” we must use to convince others is the passion we share with them for learning.