Archive

Posts Tagged ‘CMS’

Jim Groom is Watching Me

February 12th, 2010 jonmott

Jim Groom, aka Rorschach, is watching me. He apparently took umbrage at my ELI presentation in which I–very much tongue-in-cheek–suggested that he and Michael Chasen could live together in harmony, perhaps even sitting down to sing Kumbaya.

Jim appears to be concerned that I’m advocating a “middle-of-the-road” approach that validates the LMS paradigm. Lest anyone else be confused, let me state that nothing could be further from the truth. If you listen to my entire presentation, I hope it’s clear that I’m not advocating the perpetuation of the single, vertical, integrated technology stack that is the LMS. Rather, the AND that I’m really advocating is the blending of the secure, university network for private, proprietary data (e.g., student records) and the open, read-write Web.

As David Wiley and I recently argued, the “open learning network” model is “revolutionary primarily in its refusal to be radical in either direction.” There is value in both the LMS and PLE paradigms. However, blending the best aspects of both does not mean keeping either or both in their current forms. It means leveraging the best of each and mashing them up into something completely new and different. By doing so we can create a learning network that is both private AND public, secure AND open, reliable AND flexible, integrated AND modular, and that is supportive of both teachers AND learners.

The CMS and the PLN

January 19th, 2010 jonmott

It’s been a long time since I blogged. Between sending my son off on a mission to Brazil, celebrating my 20th anniversary with my sweetheart, working on some offline writing projects, taking some time off for the holidays, getting back into the swing of things with the New Year and the new semester, and launching our loosely-coupled gradebook at BYU . . . Well, let’s just say I’ve been a little busy.

During my blogging hiatus, I did manage to get a paper published with my friend and colleague David Wiley. In the paper, we catalogue what we believe are the fundamental weaknesses of the CMS.

Writing this paper and taking some time away from blogging has allowed me to think some things through. As the title of my blog constantly reminds me, technology is only as good as the change and improvement it brings to teaching and learning. I have become supremely utilitarian when it comes to teaching and learning tools, applications, and platforms. When it comes to the CMS and the Personal Learning Network (PLN), I readily admit that there are pluses and minuses to each. (For some thoughts on the distinctions between the PLN and the PLE, see the references below.)

I’m currently writing from ELI in Austin, Texas where I will make a presentation about open learning networks. As part of my preparation, I asked my PLN via Twitter (see below for a listing) for sources that delineate the strengths and weaknesses of the CMS and the PLN.

Here’s my meta-listing based on the research I did with David for our article, my own experience, and what I’ve gleaned from the resources shared by my online colleagues:

CMS Strengths

  • Simple, consistent, and structured
  • Integration with student information systems (SISs) so student rosters are automagically populated in courses (see the roster-sync sketch after this list)
  • Private and secure (i.e., FERPA compliant)
  • Tight tool integration (e.g., quiz scores populated in gradebooks)
  • Supports sophisticated content structuring (e.g., sequencing, branching, and adaptive release)
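To make the SIS integration point a bit more concrete, here’s a minimal sketch of the kind of nightly roster sync a CMS might run behind the scenes. The export file name and column names are hypothetical:

```python
import csv
from collections import defaultdict

# Hypothetical in-memory stand-in for the CMS enrollment store.
cms_enrollments = defaultdict(set)  # course_id -> set of student net IDs

def sync_roster(csv_path):
    """Mirror an SIS roster export (assumed columns: course_id, net_id) into the CMS."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            cms_enrollments[row["course_id"]].add(row["net_id"])

# e.g., run as a nightly batch job against the registrar's export:
# sync_roster("sis_export.csv")
```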

CMS Weaknesses

  • As it is widely implemented, the CMS is time-bound (i.e., courses go away at the end of the semester)
  • Teacher, rather than student, centric
  • Courses are walled off from each other and from the wider Web, thereby negating the potential of the network effect
  • Limited opportunities for students to “own” and manage their learning experiences within and across courses
  • Rigid, non-modular tools
  • Interoperability challenges and difficulties (significant progress is being made on this front, but the ability to easily move data in and out of the CMS, and to plug in alternative tools that replace or enhance native ones, has yet to materialize)

PLN Strengths

  • Almost limitless variety and functionality of tools
  • Customizable and adaptable
  • No artificial time boundaries–remains “on” before, during, and after matriculation
  • Open to interaction and connection with persons without regard to their official registration in programs or courses
  • Easily sharable with others both inside and outside of courses, programs, and institutions
  • Student-centric (i.e., each student selects and uses the tools that make sense for their particular needs and circumstances)
  • Compilable via simple technologies like RSS (see the sketch below)
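As a small illustration of that last bullet, here’s a sketch of a bare-bones PLN aggregator. It assumes the third-party feedparser library, and the feed URLs are placeholders for whatever a learner actually follows:

```python
import feedparser  # third-party: pip install feedparser

# Placeholder feed URLs standing in for the blogs, wikis, and social
# tools a learner might compile into a personal learning network.
feeds = [
    "https://example.edu/ed-tech-blog/feed",
    "https://example.org/classmate-portfolio/rss",
]

items = []
for url in feeds:
    parsed = feedparser.parse(url)
    source = parsed.feed.get("title", url)
    for entry in parsed.entries:
        items.append((entry.get("published", ""), source, entry.get("title", "")))

# Naive newest-first sort on the published string; one line per item.
for published, source, title in sorted(items, reverse=True):
    print(f"{published} | {source} | {title}")
```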

PLN Weaknesses

  • Complex and difficult to create for inexperienced students and faculty members
  • Potential security and data exposure problems–FERPA issues abound
  • Limited institutional control over data
  • Absent or unenforceable SLAs–no ability to predict or resolve Web application performance issues, outages, or even disappearance

This is far from a comprehensive list, but it begins to clarify the picture in my mind. If we persist in an either-or debate about the CMS versus the PLN, we will be falling victim to what Jim Collins calls the “tyranny of or.” When faced with difficult decisions, we often cast them–artificially–as dichotomies. We must do this *or* that. Collins argues that the alternative is to find ways to leverage the “genius of and,” to bring together the best of both alternatives and create a hybrid, best-of-both-worlds solution.

That is the vision of the open learning network–to bring together the best of the CMS and the best of the PLN to create a learning platform for higher education that meets the broad and diverse needs of faculty members and students engaged in the teaching and learning process. Doing so is what I get paid to do–to provide technologies that will help teachers and learners be more effective without having to worry about technological complexities and navigating the swirling waters of apparently contradictory paradigms.

Please comment with your suggestions for improving my listing of strengths and weaknesses and I’ll edit the lists (with attribution). If you have additional resources to add, please share those as well.

More fun to follow soon . . .

_______________

RESOURCES

ELI’s “7 Things You Should Know About … Personal Learning Environments”

Alec Couros: “What is a PLN? Or, PLE vs. PLN”

Steve Wheeler: “It’s Personal: Learning Spaces, Learning Webs”

David Hopkins: “Pedagogical Foundations for Personal Learning”

John Seely Brown: “Minds on Fire: Open Education, the Long Tail, and Learning 2.0”

Edublend: “Cloud Learning Environment – What is it?”

grazadio_elearning’s PLE Bookmarks on Delicious

Things I’ve written on the subject …

Bush & Mott: “The Transformation of Learning with Technology: Learner-Centricity, Content and Tool Malleability, and Network Effects”

Mott & Wiley: “Open for Learning: The CMS and the Open Learning Network”

_______________________________________________________

Deja Vu All Over Again – Blackboard Still Stuck in the Innovator’s Dilemma

July 24th, 2009 jonmott

It’s been a week (or so) since BbWorld. I’ve had the chance now to ruminate about what I saw and heard there. I wanted to let things rattle around in my head a bit before passing judgment on Blackboard’s message this year because it was increasingly clear to me that my assessment of BbWorld 2009 would be virtually the same as my assessment from 2008–Blackboard is improving at the margins, but is not addressing the fundamental weaknesses of the CMS (I use this term generically, lumping Blackboard together with Moodle, Sakai, D2L, etc.).

Improvement at the Margins

Given that BbWorld was in Washington, D.C. this year, it seems appropriate to use a political analogy here. In 1990, George C. Edwards III published At the Margins: Presidential Leadership of Congress. In this now classic study of the president’s ability to lead Congress and enact significant policy change, Edwards concluded that the presidency is so hemmed in by countervailing pressures and influences (in the form of 535 members of Congress, cabinet members, bureaucrats, interest groups, voters, the media, etc.) that presidents should not be expected, save under dramatic and rare circumstances, to effect significant change. Rather, Edwards concludes, presidents have historically been most effective when they have sought to influence change “at the margins,” moving policy incrementally in their preferred direction.

Blackboard, much like the President of the United States, is hemmed in by a large client base (variously represented by university administrators, IT staff, faculty, and students), competing visions within the company, a board of directors and stockholders, etc. It shouldn’t be surprising to anyone, then, that Blackboard’s innovations and improvements are “at the margins.” It’s exceedingly rare for a company with Blackboard’s inertia to make dramatic, revolutionary changes in the products they offer.

BbWorld Announcements & Observations

So what kinds of innovations and improvements did Blackboard announce at BbWorld 2009? I catalogued and reacted to what I heard via Twitter during the conference. For what it’s worth, I begin with a selection of my tweets, first from the keynote (I’ve opted to keep these in reverse-chronological order, as is the convention for a Twitter feed):

Then from the Listening Session:

Deja vu All Over Again

While I’m very encouraged by Blackboard’s announcement (reiteration) of its intent to fully support the Common Cartridge standard and to move toward opening up its database schema for administrators, I’m left wanting much, much more. All of the major announcements–the partnership with Echo360, the acquisition of Terribly Clever, the integration with Wimba (not really new news), the Kindle integration, the focus on closing tickets faster–had little to do with the core concerns about the CMS I have been blogging about for the last year. All of Blackboard’s announcements were about improvements at the margin.

As a checkpoint, I went back and re-read my post from one year ago, written just after BbWorld 2008. Forgive me for regurgitating large portions of it here, but it’s striking how similar my reactions to this year’s BbWorld are to those of last year:

As Clayton Christensen has famously observed, the producers of innovative products gradually lose their creative, innovative edge as they acquire and then seek to protect market share. When a company’s innovations result in significant profits, managers generally find themselves face to face with the innovator’s dilemma. To remain successful, Christensen argues that companies need to listen “responsively to their customers and [invest] aggressively in the technology, products, and manufacturing capabilities that [satisfy] their customers’ next-generation needs.” However, these very same behaviors can create blind spots for innovators. By simply providing incremental improvements to existing products, companies run the risk of missing major, paradigm-shifting innovations in their market spaces. Likewise, they’re in danger of focusing too much on their existing customer bases instead of new potential customers who currently don’t use their products (non-consumers). These twin dangers leave erstwhile market leaders susceptible to disruptive technologies, provided by firms who aren’t stuck in current paradigms or too narrowly focused on pre-defined customer segments.

Blackboard finds itself squarely in the midst of this classic problem. They have a large and fairly stable customer base. Incremental feature enhancements, improved customer service, and product stability are likely to keep most of their customers satisfied for the time being. But what of the disrupters in the marketplace? If one considers open source CMS alternatives like Sakai and Moodle to be the most disruptive players in the market, Blackboard’s strategies appear to be on the right track.

Perhaps not surprisingly (I suppose I predicted this), Blackboard did this year exactly what they did last year and exactly what Christensen cites as the pattern of incumbent market leaders: they announced new feature enhancements and indicated continued attention to and investment in customer service and product stability. As I wrote one year ago:

While I applaud these innovations as good steps in the right direction, there remain fundamental flaws with Blackboard’s (and virtually every other CMS provider’s) underlying infrastructure. For all of the new window dressing, Blackboard remains first and foremost a semester-based, content-delivery oriented, course management system. The software is not (at least noticeably) evolving to become a student-centered learning management system. And while the addition of wikis and blogs inside the Blackboard system is a welcome improvement, there is still little or no integration between student learning tools “inside the moat” and those outside of it, “in the cloud.”

It is for these reasons that I don’t count Sakai, Moodle, D2L or Angel [which Bb acquired since BbWorld 2008] amongst the biggest, long-term threats to Blackboard. Disruption will, I believe, come from another direction.

From whence will disruption come? More from my post last year:

In Christensen’s newest book, Disrupting Class, he and his co-authors argue that the real disruption in educational technology will come (and is already coming) via learner-centered technologies and networking tools. A rapidly growing number of people are creating their own personal learning environments with tools freely available to them, without the benefit of a CMS. As Christensen would say, they have hired different technologies to do the job of a CMS for them. But the technologies they’re hiring are more flexible, accessible, and learner-centered than today’s CMSs. This is not to say that CMSs are about to disappear. Students enrolled in institutions of higher learning will certainly continue to participate in CMS-delivered course sites, but since these do not generally persist over time, the really valuable learning technologies will increasingly be in the cloud.

Open learning networks have the potential to bring the world of the CMS (or better yet, “institutional learning networks”) and the world of PLEs together. The next big challenge ahead of us is to figure out ways to create autonomous, institution-independent “learner spaces” that provide home bases for learners that can bridge the two worlds. In these spaces, learners would ideally aggregate relationships, artifacts, and content from ALL of their learning activities, be they digital or analog, online or offline, synchronous or asynchronous, from one institution or many.

I heard virtually nothing at BbWorld this year which would suggest that Blackboard is actively engaged in adapting and evolving to address this challenge. Rather, they continue to innovate at the margins, maintaining their status quo, market-leading position.

What’s Broken?

@z_rose recently blogged that the problem with the “one-stop-shop” Virtual Learning Environment (VLE) (another frequently-used term for the CMS) is that it is aimed at both learning administration and learning facilitation. These are not, she astutely notes, the same things and VLEs end up doing both poorly.

Both administration and pedagogy are necessary in schools. They are also completely different in what infrastructure they require. This (in my opinion) has been the great failing of VLEs – they all try to squeeze the round pedagogy peg into the square administration hole.

It hasn’t worked very well. Trying to coax collaboration in what is effectively an administrative environment, without the porous walls that social media thrives on, hasn’t worked. The ‘walled garden’ of the VLE is just not as fertile as the juicy jungle outside, and not enough seeds blow in on the wind.

That’s why I’m always cautious of the ‘one-stop-shop’ approach in education – administration and pedagogy are very, very different shops. It’s like having a fishmongers and a haberdashers sharing the same store – no discernible upsides, but a LOT of downsides (stinky fabric springs to mind).

Blackboard and every other CMS / VLE have become exceedingly efficient course content and course administrivia management tools. If data from BYU’s Blackboard usage surveys can be taken as a reasonable guide, most faculty members use Blackboard for administrative, not teaching and learning, purposes, i.e., content dissemination, announcements, e-mail, and gradebooking (70% plus use Bb for these purposes). Dramatically smaller portions (less than 30%) use the teaching and learning tools provided by Blackboard (e.g., quizzes, discussion boards, groups, etc.). Increasingly, they’re going to the cloud to use tools that are far better and more flexible than those provided natively inside the CMS.

In 2004, George Siemens wrote, “The real issue is that LMS vendors are attempting to position their tools as the center-point for elearning – removing control from the system’s end-users: instructors and learners.” This is still all-too-true five years later. In most CMS implementations, it is exceedingly difficult (if not impossible) for teachers and especially individual learners to take control of the learning environment and shape it to their particular needs. For example, by default, students are generally not able to start their own discussion threads in CMS-delivered courses. Siemens elaborated on these end-user roadblocks, noting that LMSs roundly fail in three significant ways:

  • The rigidity and underlying design of the tool “drives/dictates the nature of interaction (instructors-learner, learner-learner, learner-content).”
  • The interface is too focused on “What do the designers/administrators want/need to do?” rather than on “What does the end user want/need to do?”
  • “Large, centralized, mono-culture tools limit options. Diversity in tools and choices are vital to learners and learning ecology.”

Notably, the absence of LMS-integrated synchronous conferencing and collaboration tools has been largely remedied within the various CMSs. But these other three substantial shortcomings have yet to be addressed. And I would add a critically important fourth weakness–today’s CMSs do not support continuous, cumulative learning throughout a student’s career at an institution, let alone throughout their life after they exit our institutions. As I have written previously, students are completely at the mercy of the institution when it comes to their “presence” and participation in a CMS. They are placed in arbitrarily organized sections of courses for 15-week periods and then “deleted,” as if they never existed in the system. As David Wiley has pondered, how many of us would use Facebook if Facebook deleted our friend connections and pictures every four months?

The fundamental dilemma with the CMS as we know it today is that it is largely a course-centric, lecture-model reinforcing technology with its center of gravity in institutional efficiency and convenience. As such, it is a technology that inclines instructors and students to “automate the past,” replicating previous practice using new, more efficient and more expensive tools instead of innovating around what really matters–authentic teaching, learning, and assessment behaviors.

Blackboard’s Opportunity?

Lest I come across as a Blackboard/CMS naysayer or doomsayer, I should note that, in their early days, Blackboard and other CMSs were the disruptive technology. They were the source of innovation and new thinking about how we organize to teach and learn. However, roughly a decade after the inception of the CMS, the academic community finds itself again ripe for disruption, not only of the technology we use to “manage courses” but in the very system itself. While many will continue to innovate at the margins, there are large crowds of non-consumers out there clamoring for something that meets their needs. At BYU, for example, 25% of our faculty members opt to use a blog, a wiki, or a custom-built course website instead of Blackboard or another CMS. These are the non-consumers whom, as Christensen reminds us, we need to figure out how to serve. The same goes for the “non-traditional” students who either aren’t wired to learn the way we’re organized to teach them (in course-sized chunks, bundled in units of time we call semesters) or who, for a variety of reasons, don’t have access to our institutions.

Blackboard still has the opportunity to facilitate discussion and innovation around these critical issues. The company took an important step in this direction by organizing the “Pipeline Matters” session the day before BbWorld, bringing together educational leaders from K12, community colleges, traditional “higher ed” institutions, and educational associations. As a fortunate participant in this conversation, I recognize and appreciate Blackboard’s unique position (with its roughly 3000 client institutions) in the educational space to bring together a broad and diverse set of educational players to address issues such as the one we did last week–how can we improve our efforts to keep students in school and to help them easily re-enter and succeed when they, despite our best efforts, leak out of the “pipeline”?

These sorts of questions are critically important not only to educators, but also to Blackboard’s immediate future and direction because they can compel the company to get outside its comfort zone and rethink how it does what it does and why it does it.

Blackboard can still play a leading role in education. But it needs to think more about end-users and about non-consumers, not just about the university administrators who purchase and implement their products. That’s an admittedly tall order for a publicly-traded corporation to take on. But, as Christensen argues, they have to figure out a way to do so if they’re to remain relevant. That’s precisely the innovator’s dilemma.

As I concluded last year, if Blackboard doesn’t innovate, someone else will.

And it won’t be long.

PLNs, Portfolios, and a Loosely-Coupled Gradebook

June 16th, 2009 jonmott

Note: In this post, I reference articles from the Winter 2009 edition of Peer Review, the theme of which was “Assessing Learning Outcomes.” I highly recommend the entire issue if you’re interested in student learning assessment and portfolios. I recently provided an overview of BYU’s loosely-coupled gradebook strategy at TTIX 2009. (We are currently building a standalone gradebook in partnership with the Orem, Utah-based startup Agilix.) As part of my presentation at TTIX, I also described our plans to leverage the same technology we’re building into the gradebook to implement a loosely-coupled portfolio assessment tool.

The driving purpose behind these efforts is to bridge the gap between our institutional network and the cloud, between the predominant “course management system” (CMS) paradigm and the emerging model of personal learning networks (PLNs), between student-centered and institution-centered portfolios, etc. I maintain that bridging this gap is a necessary condition for the significant transformation of learning via technology. Until learning tools and content become more malleable (i.e. open, modular, and interoperable) we will not realize the full potential of an interconnected, networked world in education.

Student Learning Portfolios & Institutional Assessment

Portfolios are increasingly at the nexus of student learning, institutional assessment, PLNs, CMSs, and assorted other aspects of the higher ed milieu. Student learning portfolios are essential in the movement toward more valid and authentic assessment in higher education. However, the focus on institutional and program assessment has, at least in some instances, diverted our attention from our primary objective of improving student learning. As Trent Batson observed in 2007, the initial effort to enhance student learning with portfolios has been “hijacked by the need for accountability” to boards of education and accrediting bodies.

This trend is worrisome. If the focus on portfolios shifts primarily to institutional and program assessment, we will have missed out on the essential value of portfolios. Portfolios have the potential to galvanize and enhance student learning. As Miller and Morgaine have observed: “E-portfolios provide a rich resource for both students and faculty to learn about achievement of important outcomes over time, make connections among disparate parts of the curriculum, gain insights leading to improvement, and develop identities as learners or as facilitators of learning.”

Given the potential benefits of portfolios, I believe that student learning portfolios should, first and foremost, belong to and be maintained by individual learners. Gary Brown supports this view, maintaining that portfolios should be student-operated, not institution-operated: “A real student-centered model would put the authority, or ownership, of [ePortfolios] in the hands of the students: They could share evidence of their learning for review with peers, and offer that evidence to instructors for grading and credentialing.” Such an approach increases student ownership of and responsibility for learning. It also affords portability and longevity since students are not dependent upon a particular institution (or set of institutions) to provide them with portfolio technology and storage space.

Clark & Eynon have argued that we have to get past this tension between student and institutional portfolios: “The e-portfolio movement must find ways to address [institutional assessment] needs without sacrificing its focus on student engagement, student ownership, and enriched student learning.”

Bridging the Gap

So exactly how can we bridge the gap between student-centered learning portfolios and institutional assessment needs? I propose that the loosely-coupled gradebook strategy we’re pursuing can be leveraged to provide a viable solution to this problem.

Here are the dimensions of a loosely-coupled portfolio assessment strategy:

  1. Institutions of higher learning should focus on what they do best and on what only they can do. Namely, they should admit and register students, manage course enrollments and degree program rosters, and maintain secure records and communications tools for faculty and students engaged in the learning process.
  2. We can then leverage the best online, third-party applications for student publishing, networking, and portfolio creation. Individual institutions (or even institutions working together) would be hard pressed to produce applications comparable in quality and stability to Google Docs, YouTube, Blogger, Acrobat Online, MS Office Live, Wikispaces, and WordPress.
  3. Teachers and learners should embrace the power of the network to enhance, extend and improve learning. Even if institutions could develop and deploy better tools than those freely available online, it would be a bad idea to do so. The fundamental value proposition of these apps is that, since they live in the cloud, they’re accessible anytime, anywhere, by anyone. The openness this affords promotes greater transparency and expanded opportunities for collaboration.
  4. Students should be encouraged to be effective, technologically literate, digital citizens who are proficient using a variety of online tools. As they participate in the learning process, they should regularly save and aggregate their work, packaging and repackaging it for various audiences. One of these audiences might be those responsible for degree program review and assessment.
  5. Once students have assembled their collections of learning artifacts, metacognitive commentary, and portfolios, they might then be required to simply “register” the URLs of their portfolios and artifacts with their institution. Those conducting program and institutional assessment would then use a lightweight “overlay” tool to review and assess submitted student work.
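As a thought experiment, that final “register your URLs” step could be as small as a single web service. Here’s a minimal sketch (Flask, in-memory storage); the routes and field names are hypothetical, not a description of any actual BYU system:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
registry = {}  # net_id -> list of registered portfolio/artifact URLs

@app.route("/portfolios/<net_id>", methods=["POST"])
def register(net_id):
    # A student "registers" the URL of work hosted wherever they choose.
    url = request.get_json()["url"]
    registry.setdefault(net_id, []).append(url)
    return jsonify({"registered": registry[net_id]}), 201

@app.route("/portfolios", methods=["GET"])
def list_all():
    # The lightweight assessment "overlay" tool would read from here.
    return jsonify(registry)

if __name__ == "__main__":
    app.run()
```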

The various aspects of this approach and the associated workflow might look something like my “Portfolio Assessment Diagram.”

The Power of a Loosely-Coupled Strategy

The “open learning network” (OLN) strategy I’ve written about from time to time is based on several value propositions. One is that institutions should do what they do best (manage student data, facilitate secure communication between teachers and learners) while leveraging third-party, cloud-based applications for such things as personal publishing and collaboration. The loosely-coupled gradebook strategy described in previous posts is a key component of this larger idea.

Another central value proposition of the OLN is the connectedness it facilitates between teachers and learners both within and without institutional boundaries. When learners not only consume online content, but also refine, improve, remix, mashup, and create new content themselves during the learning process, their learning is more authentic, meaningful, and enduring. And they build deeper, more profound connections between facts, concepts, and the other human beings they interact with. This notion of connectedness is a fundamental aspect of what we consider education and literacy today.

George Siemens blogged today that:

“Not only are we socially connected in our learning, but the concepts that form our understanding of a subject also reveal network attributes. Understanding is a certain constellation (pattern) of connections between concepts. . . . being a literate person is not so much about what you know, but about how you know things are connected.”

I concur. As we continue to pursue the OLN vision, it is essential that we facilitate openness in the learning process to promote greater interaction and connections with content, people, cultures, and places. It is with this end in mind that we ought to promote personal learning networks (PLNs), OLNs, and loosely-coupled gradebooks. We want our students to be literate, connected, and efficacious life-long learners who make their homes, their communities, their workplaces, and the world better places than they found them.

I’ve Seen the Future and the Future is Us (Using Google)

The past couple of weeks have been full of new technology announcements. Three in particular are notable because of the splash they received as “pre-releases” and how different one of them is from the other two. I’ll readily admit that in these observations I have a particular bias, or at least a very narrow focus–I’m looking at the potential of these new technologies to transform and dramatically improve learning. By my count, one of the releases has the potential to do so. The other two? Not so much.

First, WolframAlpha launched amid buzz that it was the “Google killer.” It promised to revolutionize search by beginning to deliver on the much-awaited “semantic web.” In their own words, Wolfram’s objective is “to make all systematic knowledge immediately computable and accessible to everyone.” This ambitious vision is intended to make better sense of the data accessible via the web to help us answer questions, solve problems, and make decisions more effectively. In the end, though, the tool is in reality “a variation on the Semantic Web vision . . . more like a giant closed database than a distributed Web of data.” It’s still early to be drawing final conclusions, but Google is not dead. Life goes on much the same as it did before. And, most importantly from my perspective, it does not appear that learning will be dramatically transformed, improved, or even impacted by the release of WolframAlpha. (BTW, I loved @courosa’s observation that “the Wolfram launch was like that group that claimed they were going to reveal evidence of UFOs a few yrs back.” After the hoopla died down, there wasn’t all that much “there” there.)

The second big product announcement, and the month’s next contestant for “Google Killer” status, was Microsoft’s Bing, Redmond’s latest attempt to reclaim its Web relevance. Microsoft is spinning Bing as a “decision engine,” also aimed at dramatically improving search. Beginning with the assertion that “nearly half of all searches don’t result in the answer that people are seeking,” the Bing team promises to deliver a tool that will yield “knowledge that leads to action” and help you “make smarter, faster decisions.” (By the way, is it still 2009? A product release website with text embedded in Silverlight that I can’t copy & paste? Really?) Again, it’s early–Bing isn’t even publicly available yet–but even if MS delivers on its lofty claims, this sort of technology doesn’t seem poised to transform learning.

Maybe I’m missing something, but better search doesn’t seem to be our biggest barrier to dramatically improved learning. And both WolframAlpha and Bing are coming at the problem of data overload with better search algorithms, better data processing tools, and intelligent data sorting / presentation tools. I think all of this is great. But neither of these approaches touches the fundamental, core activity of learning–making sense out of the world’s complexities in communities of learners. In my “Post-LMS Manifesto” earlier this month, I observed:

Technology alone cannot save us or help us solve our most daunting societal problems. Only we, as human beings working together, can do that. And while many still long for the emergence of the virtual Aristotle, I do not. For I believe that learning is a fundamentally human endeavor that requires personal interaction and communication, person to person. We can extend, expand, enhance, magnify, and amplify the reach and effectiveness of human interaction with technology and communication tools, but the underlying reality is that real people must converse with each other in the process of “becoming.”

Having seen WolframAlpha and Bing, I’m even more firm in this belief. Advanced, improved, more sophisticated search and data sorting technology is much needed and wanted. I’ll be the first in line to use the search engine that proves itself more effective than Google Search. But, as the MLIS students at UBC understand, “without the person, the information is just data.”

A People Problem, Not a Data Problem

Enter Google Wave. In striking contrast to the Wolfram & MS attempts to surpass Google’s search predominance, Google itself announced that it had reinvented e-mail. It’s no coincidence that the reigning king of search hasn’t been spending all of its time and resources on reinventing search (although I don’t doubt that the Google brain trust is spending at least a few cycles doing that). Google has instead focused substantial energy on improving the tools we use to collaborate and communicate around data and content.

In its Bing press release, MS notes that there are 4.5 new websites created every second. Two years ago, Michael Wesch noted that the world was on pace to create 40 exabytes (40 billion gigabytes) of new data. And the rate of data creation is only accelerating. More recently, Andreas Weigend contends that “in 2009, more data will be generated by individuals than in the entire history of mankind through 2008.”

On the surface, this might seem less substantive than improved data search and analysis tools and, therefore, less relevant to the business of learning. But dealing with data overload and making sense out of it all is a fundamentally human problem. Again, we can extend, expand, enhance, magnify, and amplify the reach and effectiveness of our access to and analysis of data, but making sense out of it all requires individuals, groups, and crowds to have conversations about the origins, interpretations, and meanings of that data. That is the essence of being human. We can outsource our memories to Google, but we cannot (should not!) outsource our judgment, critical analysis, and interpretive capacities to any mechanical system.

The Future of Learning and Learning Technology

<melodrama>I’ve seen the future. And the future is us.</melodrama> As we use–and even more importantly appropriate, adapt, and repurpose–tools like Google Wave, we can leverage technology to preserve and enhance that which is most fundamentally human about ourselves. I appreciated Luke’s reminder that teachers and learners should “take ownership of online teaching and learning tools” and, accordingly, “not be shy about reminding our users of their responsibilities, and our users shouldn’t be shy about asking for help, clarification, or if something is possible.” This is precisely what many users of Twitter have done. My startling realization about my Twitter activity is that it has become an indispensable component of my daily learning routine. It’s become a social learning tool for me, giving me access to people and content in a way I never imagined.

Based on an hour-and-20-minute video, Google Wave appears poised to dramatically improve on the Twitter model. Accordingly, the possibilities for enhanced interactions between learners are encouraging. And the ripples of the Wave (sorry, couldn’t resist) have profound implications. With Wave, entire learning conversations are captured and shared with dynamic communities of learners. Lars Rasmussen (co-creator of Wave) noted: “We think of the entire conversation object as being a shared object hosted on a server somewhere” (starting at about 6:22 into the presentation). The ability for late-joining participants to “playback” the conversation and get caught up is particularly intriguing. Elsewhere:

  • Jim Groom asserts that Google Wave will make the LMS “all but irrelevant by re-imagining email and integrating just about every functionality you could possibly need to communicate and manage a series of course conversations through an application as familiar and intimate as email.”
  • David Wiley wonders if Wave might “completely transform the way we teach and learn.”
  • Tim O’Reilly observes that the emergence of Wave has created a “kind of DOS/Windows divide in the era of cloud applications. Suddenly, familiar applications look as old-fashioned as DOS applications looked as the GUI era took flight. Now that the web is the platform, it’s time to take another look at every application we use today.”

All of this continues to point to the demise of the LMS as we know it. However, I agree with Joshua Kim’s observation that the LMS’s “future needs to be different from its past.” As he notes, he’s anxious to use Wave for group projects, but he wants his course rosters pre-loaded and otherwise integrated with institutional systems. This is, as I have previously noted, the most likely evolutionary path for learning technology environments–a hybrid between open, flexible cloud-based tools like Wave and institutionally managed systems that provide student data integration and keep assessment data secure. And this is bound to look a lot more like an open learning network than a traditional course management system.

As we adopt and adapt tools like Twitter and Google Wave to our purposes as learning technologists, we have to change the way we think about managing, or rather facilitating, learning conversations. We can no longer be satisfied with creating easy-to-manage course websites that live inside moated castles. We have to open up the learning process and experience to leverage the vastness of the data available to us and the power of the crowd, all the while remembering that learning is fundamentally about individuals conversing with each other about the meaning and value of the data they encounter and create. Technologies like Google Wave are important, not in and of themselves, but precisely because they force us to remember this reality and realign our priorities and processes to match it.

I’ve seen the future of learning technology, and the future is us.

A Post-LMS Manifesto

In the wake of the announcement of Blackboard’s acquisition of ANGEL, the blogosphere has been buzzing about Learning Management Systems (LMSs) and their future (or lack thereof). The timing of this announcement came at an interesting time for me. A BYU colleague and I (Mike Bush) recently published a piece in Educational Technology Magazine with the unassuming title “The Transformation of Learning with Technology.” (If you read this article, you’ll recognize that much of my thinking in this post is influenced by my work with Mike.) I’ve also been working on a strategy document to guide our LMS and LMS-related decisions and resource allocations here at BYU.

These ongoing efforts and my thoughts over the last twenty-four hours about the Bb-ANGEL announcement have come together in the form of a “post-LMS manifesto” (if I dare use such a grandiose term for a blog post). In the press release about Blackboard’s acquisition of ANGEL, Michael Chasen asserted that the move would “accelerate the pace of innovation and interoperability in e-learning.” As a Blackboard client, I certainly hope that’s true. However, more product innovation and interoperability, while desirable, aren’t going to make Blackboard fundamentally different than it is today—a “learning management system” or “LMS.” And that worries me because I continue to have serious concerns about the future of the LMS paradigm itself, a paradigm that I have critiqued extensively on this blog.

Learning and Human Improvement

Learning is fundamentally about human improvement. Students flock to college and university campuses because they want to become something they are not. That “something” they want to become ranges from the loftiest of intellectual ideals to the most practical and worldly goals of the marketplace. For those of us who work in academe, our duty and responsibility is to do right by those who invest their time, their energy, and their futures in us and our institutions. It is our job to help them become what they came to us to become—people who are demonstrably, qualitatively, and practically different than the individuals they were before.

Technology has been and always will be an integral part of what we do to help our students “become.” But helping someone improve, to become a better, more skilled, more knowledgeable, more confident person, is not fundamentally a technology problem. It’s a people problem. Or rather, it’s a people opportunity. Philosophers and scholars have wrestled with the challenge and even the paradox of education and learning for centuries. In ancient Greece, Plato formulated what we have come to call “Meno’s paradox” in an attempt to get at the underlying difficulties associated with teaching someone a truth they do not already know. The solution in that age was to pair each student with an informed tutor—as Alexander the Great was paired with Aristotle—to guide the learner through the stages of progressive enlightenment and understanding.

More than two millennia later, United States President James Garfield underscored the staying power of this one-to-one approach: “Give me a log hut, with only a simple bench, Mark Hopkins [a well-known educator and lecturer of the day] on one end and I on the other, and you may have all the buildings, apparatus, and libraries without him.” I suppose President Garfield, were he alive today, would include LMSs and other educational technology on the list of things he would give up in favor of a skilled, private tutor.

The problem with one-to-one instruction is that it simply doesn’t scale. Historically, there simply haven’t been enough tutors to go around if our goal is to educate the masses, to help every learner “become.” Another century later, Benjamin Bloom formalized this dilemma, dubbing it the “2 Sigma Problem.” Through experimental investigation, Bloom found that “the average student under tutoring was about two standard deviations above the average” of students who studied in a traditional classroom setting with 30 other students (“The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring,” Educational Researcher, 13(6), 4-16). Notwithstanding this enormous gap, Bloom was optimistic that continued focus on mastery learning would allow us to eventually narrow the distance between individually-tutored and group-instructed students.
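Restated in standard effect-size notation (my gloss, not Bloom’s own formula):

$$ d \;=\; \frac{\bar{X}_{\text{tutored}} - \bar{X}_{\text{classroom}}}{\sigma_{\text{classroom}}} \;\approx\; 2 $$

In other words, the average tutored student performed about two classroom standard deviations above the classroom mean, better than roughly 98% of conventionally taught students.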

Moving Beyond the LMS

There is, at its very core, a problem with the LMS paradigm. The “M” in “LMS” stands for “management.” This is not insignificant. The word heavily implies that the provider of the LMS, the educational institution, is “managing” student learning. Since the dawn of public education and the praiseworthy societal undertaking to “educate the masses,” management has become an integral part of learning. And this is exactly what we have designed and used LMSs to do—to manage the flow of students through traditional, semester-based courses more efficiently than ever before. The LMS has done exactly what we hired it to do: it has reinforced, facilitated, and perpetuated the traditional classroom model, the same model that Bloom found woefully less effective than one-on-one learning.

For decades, we’ve been told that technology is (or soon would be) capable of replicating the role of a private, individual tutor, of providing a “virtual Aristotle” for each individual learner. But after the billions of dollars we’ve spent on educational technology, we’re nowhere near such an achievement. In fact, we can’t even say that we’ve improved learning at all! (See Larry Cuban’s Oversold & Underused for an excellent, in-depth treatment of this subject). And our continued investment of billions of dollars in the LMS is unlikely to get us any closer to our learning improvement goals either. Because the LMS is primarily a traditional classroom support tool, it is ill-suited to bridge the 2-sigma gap between classroom instruction and personal tutoring.

We shouldn’t be terribly surprised or disappointed that LMSs—or any other technology for that matter—have not revolutionized learning. As Michael Wesch and his students have so sagely reminded us, technology alone cannot save us or help us solve our most daunting societal problems. Only we, as human beings working together, can do that. And while many still long for the emergence of the virtual Aristotle, I do not. For I believe that learning is a fundamentally human endeavor that requires personal interaction and communication, person to person. We can extend, expand, enhance, magnify, and amplify the reach and effectiveness of human interaction with technology and communication tools, but the underlying reality is that real people must converse with each other in the process of “becoming.”

Crowdsourcing the Tutor

If we are to close the 2-sigma gap, we must leave behind the LMS and the artificial walls it builds around arbitrary groups of learners who have enrolled in sections of courses at our institutions. In the post-LMS world, we need to worry less about “managing” learners and focus more on helping them connect with other like-minded learners both inside and outside of our institutions. We need to foster in them greater personal accountability, responsibility, and autonomy in their pursuit of learning in the broader community of learners. We need to use the communication tools available to us today, and the tools that will be invented tomorrow, to enable anytime, anywhere, any-scale learning conversations between our students and other learners. We need to enable teachers and learners to discover and use the right tools and content (and combinations, remixes, and mashups thereof) to facilitate the kinds of interaction, communication, and collaboration they need in the learning process. By doing so, we can begin to create the kinds of interconnections between content and individual learners that might actually approximate a personal, individualized “tutor.” However, instead of that tutor appearing in the form of an individual human being or a virtual AI tutor, the tutor will be the crowd.

While LMS providers are making laudable efforts to incrementally make their tools more social, open, modular, and interoperable, they remain embedded in the classroom paradigm. The paradigm—not the technology—is the problem. We need to build, bootstrap, cobble together, implement, support, and leverage something that is much more open and loosely structured such that learners can connect with other learners (sometimes called teachers) and content as they engage in the authentic behaviors, activities and work of learning.

Building a better, more feature-rich LMS won’t close the 2-sigma gap. We need to utilize technology to better connect people, content, and learning communities to facilitate authentic, personal, individualized learning. What are we waiting for?

An Open (Institutional) Learning Network

April 9th, 2009 jonmott

I’ve been noodling on the architecture of an open learning network for some time now. I’m making a presentation to my boss today on the subject and I think I have something worth sharing. (Nothing like a high-profile presentation to force some clarity of thought.)

I wrote a post last year exploring the spider-starfish tension between Personal Learning Environments and institutionally run CMSs. This is a fundamental challenge that institutions of higher learning need to resolve. On the one hand, we should promote open, flexible, learner-centric activities and tools that support them. On the other hand, legal, ethical and business constraints prevent us from opening up student information systems, online assessment tools, and online gradebooks. These tools have to be secure and, at least from a data management and integration perspective, proprietary.

So what would an open learning network look like if facilitated and orchestrated by an institution? Is it possible to create a hybrid spider-starfish learning environment for faculty and students?

The diagram below is my effort to conceptualize an “open (institutional) learning network.”

Open Learning Network 2.0

There are components of an open learning network that can and should live in the cloud:

  • Personal publishing tools (blogs, personal websites, wikis)
  • Social networking apps
  • Open content
  • Student generated content

Some tools might straddle the boundary between the institution and the cloud, e.g. portfolios, collaboration tools and websites with course & learning activity content.

Other tools and data belong squarely within the university network:

  • Student Information Systems
  • Secure assessment tools (e.g., online quiz & test applications)
  • Institutional gradebook (for secure communication about scores, grades & feedback)
  • Licensed and/or proprietary institutional content

An additional piece I’ve added to the framework within the university network is a “student identity repository.” Virtually every institution has a database of students with contact information, class standing, major, grades, etc. To facilitate the relationships between students and teachers, students and students, and students and content, universities need to provide students the ability to input additional information about themselves into the institutional repository, such as:

  • URLs & RSS feeds for anything and everything the student wants to share with the learning community
  • Social networking usernames (probably on an opt-in basis)
  • Portfolio URLs (particularly to simplify program assessment activities)
  • Assignment & artifact links (provided and used most frequently via the gradebook interface)

Integrating these technologies assumes:

  • Web services compatibility to exchange data between systems and easily redisplay content as is or mashed-up via alternate interfaces
  • RSS everywhere to aggregate content in a variety of places
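To make the identity repository a little more concrete, here’s a sketch of what one record might hold. The field names are illustrative only, not an actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class StudentIdentity:
    """One record in a hypothetical student identity repository."""
    net_id: str                                                 # institutionally issued
    shared_feeds: List[str] = field(default_factory=list)      # URLs & RSS feeds the student opts to share
    social_usernames: List[str] = field(default_factory=list)  # opt-in social networking handles
    portfolio_url: Optional[str] = None                        # simplifies program assessment
    artifact_links: List[str] = field(default_factory=list)    # surfaced via the gradebook interface

def community_feeds(students: List[StudentIdentity]) -> List[str]:
    """Everything a class has opted to share, ready for RSS aggregation."""
    return [url for s in students for url in s.shared_feeds]
```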

As noted in previous posts, we’re in the process of building a stand-alone gradebook app that is consistent with this framework. We’re now deciding which tools come next and whether we build them or leverage cloud apps. After a related and thought-provoking conversation with Andy Gibbons today, I’m also contemplating the “learning conversation” layer of the OLN and how it should be architected, orchestrated, and presented to teachers and learners . . .

While there’s still a lot of work to do, this feels like we’re getting closer to something real and doable. Thoughts?

Loosely Coupled Gradebook Specs

January 30th, 2009 jonmott

We’ve been making good progress on our loosely coupled gradebook project here at BYU. I’m posting a link to our working specifications document for review and comment.

The broad purposes of the project are to develop a web-based gradebook that will provide:

  • Integration with our course management system(s)
  • Integration with our high-stakes proctored testing environment
  • Multiple “on-ramps” (integration points with other tools, systems and web 2.0 apps) for faculty to input and retrieve student performance measures
  • Easy importing and exporting of data
  • Flexible grade calculation
  • Direct interface with our student information system for grade posting
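To illustrate what one of those “on-ramps” might look like from an outside tool’s perspective, here’s a hedged sketch of pushing a single score into the gradebook over a web-service call. The endpoint, payload fields, and auth scheme are invented for illustration; the real details live in the specifications document linked above:

```python
import requests  # third-party: pip install requests

GRADEBOOK_API = "https://learn.example.edu/gradebook/api/scores"  # placeholder URL

def post_score(course_id: str, net_id: str, assignment: str, score: float, token: str):
    """Push one score from an outside tool into the loosely coupled gradebook."""
    response = requests.post(
        GRADEBOOK_API,
        json={"course": course_id, "student": net_id,
              "assignment": assignment, "score": score},
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    response.raise_for_status()  # surface integration failures loudly
    return response.json()

# e.g., a proctored testing system reporting a result:
# post_score("HIST-200-001", "student17", "Midterm 1", 87.5, token="...")
```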

This is a key component of our larger open learning network strategy.

Feel free to review our specifications document and let us know what you think.

Bridging the Gap Between the Campus Enterprise and the Cloud

November 19th, 2008 jonmott

PlugJam (http://plugjam.com) appears to offer a crucial piece of the open learning network puzzle. While it makes intuitive sense to allow seamless integration between campus-based apps and Web 2.0 apps, it’s much easier to think about such integrations than it is to actually pull them off. According to the company’s website, “PlugJam is a solution for schools, colleges, and universities looking to bridge the gap between existing campus-based tools and Web 2.0 services, allowing students to use their favorite social networking environment or Web Service to access their campus-based resources.”

This illustration (also from PlugJam’s website) shows how the PlugJam open API facilitates interconnectivity between the campus enterprise and the cloud:

Among other things, PlugJam allows campuses to:

  • Create social and informal learning tools from your existing systems
  • Link your Web 2.0 photos, videos and bookmarks
  • Build dynamic e-Portfolios with campus and social network resources (e.g., Flickr, YouTube, Delicious)

PlugJam has already built “connectors” for Blackboard, Moodle, PeopleSoft Student, “identity management servers,” and “portal servers.”
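I don’t have PlugJam’s actual API in front of me, so the following is only a guess at the shape of the problem a connector solves: pull artifacts from campus systems and cloud services through one bridge, then merge them into a single portfolio view. Every URL and field name below is invented for illustration:

```python
import requests  # third-party: pip install requests

def fetch_json(url: str, token: str):
    r = requests.get(url, headers={"Authorization": f"Bearer {token}"}, timeout=10)
    r.raise_for_status()
    return r.json()

def build_portfolio(net_id: str, token: str):
    """Merge campus coursework and cloud media into one dated artifact list."""
    campus = fetch_json(f"https://bridge.example.edu/campus/{net_id}/coursework", token)
    cloud = fetch_json(f"https://bridge.example.edu/cloud/{net_id}/media", token)
    artifacts = campus["items"] + cloud["items"]
    return sorted(artifacts, key=lambda item: item.get("date", ""), reverse=True)
```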

I can’t wait to see what this tool can do!

(For another take on PlugJam, see “Bringing Student ‘Stuff’ to Campus Enterprise Systems.”)

OpenEd 2008

September 26th, 2008 jonmott

I attended OpenEd 2008 @ Utah State the last couple of days. Even though I missed the last day of the conference (today), it was one of the best, most inspiring, thought-provoking conferences I’ve attended in a long time.

Here are some highlights / observations:

  • Seeing Yale’s OCW demo and being reminded that sometimes quality is more important than quantity. They have seven (yes, 7) courses online that have been viewed by 500,000 people. Not too shabby.
  • David Wiley’s declaration: “If my students can Google it, I don’t need to teach it.” The new knowledge economy is much more about what you can do with information than it is about what you can memorize. (See my recent post RE ChaCha.)
  • An interesting observation by Yoshimi Fukuhara of Keio U that OCW sites are too focused on content and not enough on the learner experience.
  • The Tusk Project at Tufts U facilitates “personal knowledge management” for students.
  • Terry Bays of the OCWC suggests it’s critical for institutions to be clear about the goals they’re pursuing via OpenCourseWare, i.e., what benefit(s) will it bring to the institution? Without such clarity, OCW efforts will be difficult to sustain over time. Jacque du Plessis made a similar argument in his presentation on the OCW lifecycle.
  • The hike up Logan Canyon. A very nice, refreshing break in the middle of the normal conference grind.
  • Finally meeting Brian Lamb after bumping into each other on Twitter and blogs for several months.
  • Confirmation from several folks after my presentation that a standalone, CMS-independent gradebook is a critical missing link for the creation of more open, flexible learning networks.
  • The general mood / ideology / philosophy permeating the conference that learning and learners are much more important than institutional niceties, systems, vendors, etc., etc., etc.

Great conference! Thanks to the organizers, presenters and participants!