The state of standards at Online Educa 2004
Amongst many, many other things, Berlin's Online Educa 2004 conference had almost a full day's worth of events on the topic of educational interoperability standards. Since the morning session was mostly about content interoperability and the afternoon session about collaborative technology, the panel discussion at the end made for some pretty clear contrasts about where different people think interoperability standards stand.
Standardisation: Myth or Reality
The morning session was run by Erik Duval and Wayne Hodgins, both well known for their contributions to the IEEE Learning Object Metadata (LOM) standard, and much else besides. The session sought to give all comers a wide-ranging overview of the state of development of the main standards, from both a historical and a visionary perspective.
Wayne established the ground with some warnings: first about the danger of 'perfecting the irrelevant' and second about any illusions that e-learning really is 'any learning anywhere and to anyone' at the moment.
The first was illustrated with the analogy of a speech the management of a US typewriter factory gave to workers on the occasion of the closing of their last plant: the last product that came off the line there was probably the best typewriter ever made, but completely irrelevant to any of the company's customers, who had long since switched to PCs.
The second is a corollary of the easily overlooked fact that only about nine percent of the world's population has access to the web. No ubiquitous web access means no ubiquitous e-learning either.
For those who do have the luxury of being able to access any learning, anywhere and at any time, Wayne projected that there would be a shift to mass personalisation. That is, not just any learning to anyone, but my learning to me.
Erik elaborated a bit more on the methods that would make this possible: enabling the mass contribution of material. This would require, among other things, Erik's by now familiar 'death to all metadata forms' campaign. The argument here is that the manual marking up of metadata records doesn't scale: an exponentially growing number of learning objects overwhelms the ability of information specialists (i.e. librarians, mostly) to tag them all. Erik's solution to this particular problem lies in smarter ways of generating metadata records automatically and transparently, in the tools that content developers use.
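As an illustration of the idea (my own sketch, not a tool demonstrated at the conference), the following derives a few metadata fields from a document's own content rather than asking an author to fill in a form. The dotted field names are loose approximations of LOM elements, and the keyword heuristic is deliberately crude:

```python
import re
from collections import Counter

STOPWORDS = {"the", "and", "that", "this", "with", "from", "need"}

def auto_metadata(html: str) -> dict:
    """Derive LOM-style fields from the document itself, instead of
    asking the author to fill in a metadata form by hand."""
    title_match = re.search(r"<title>(.*?)</title>", html, re.S)
    title = title_match.group(1).strip() if title_match else "Untitled"
    # Crude keyword extraction: most frequent non-stopwords in the body.
    text = re.sub(r"<[^>]+>", " ", html).lower()
    words = [w for w in re.findall(r"[a-z]{4,}", text) if w not in STOPWORDS]
    keywords = [w for w, _ in Counter(words).most_common(3)]
    return {
        "general.title": title,        # approximates LOM general/title
        "general.keyword": keywords,   # approximates LOM general/keyword
        "technical.format": "text/html",
    }

record = auto_metadata(
    "<html><title>Metadata Basics</title>"
    "<body>Metadata records describe learning objects. "
    "Learning objects need metadata records.</body></html>"
)
```

A real implementation would of course mine richer sources (authoring-tool context, file system properties, usage data), but the principle is the same: the record is generated transparently, as a by-product of authoring.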
The key point of the workshop, however, also proved its most controversial. Erik and Wayne argued that the basis for any future developments is already in place in the form of the current crop of interoperability standards, and more specifically those referenced by the Sharable Content Object Reference Model (SCORM) 2004. That is: you can stop waiting for the standards, all that's needed now is more implementations of SCORM 2004.
Using the analogy of the state of web technology standardisation circa early nineties, Erik argued that the specs and standards in SCORM 2004 were like HTML, HTTP and URLs back then: the basic building blocks that would enable the massive implementation growth and diversification that followed.
Drawing the analogy further, it was argued that, like the web technologies back then, e-learning interoperability specs would soon become entirely invisible to your average author and end-user. New tools would make wrangling the basic elements (much less the raw XML) of specs like Content Packaging and the LOM a thing of the past.
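To give a flavour of the boilerplate such tools would hide, here is a sketch of the kind of skeletal imsmanifest.xml an authoring tool might generate behind the scenes. This is a deliberately simplified illustration using Python's standard library: the namespaces, schema references and metadata sections required by the actual IMS Content Packaging specification are omitted.

```python
import xml.etree.ElementTree as ET

def minimal_manifest(package_id: str, pages: dict) -> str:
    """Build a skeletal content-package manifest: one organization whose
    items point at one resource per content file. (Simplified sketch;
    the real spec adds namespaces, schema versions and metadata.)"""
    manifest = ET.Element("manifest", identifier=package_id)
    orgs = ET.SubElement(manifest, "organizations")
    org = ET.SubElement(orgs, "organization", identifier="ORG-1")
    resources = ET.SubElement(manifest, "resources")
    for i, (title, href) in enumerate(pages.items(), start=1):
        # Each item in the organization references a resource by id.
        item = ET.SubElement(org, "item", identifier=f"ITEM-{i}",
                             identifierref=f"RES-{i}")
        ET.SubElement(item, "title").text = title
        ET.SubElement(resources, "resource", identifier=f"RES-{i}",
                      type="webcontent", href=href)
    return ET.tostring(manifest, encoding="unicode")

xml = minimal_manifest("PKG-1", {"Introduction": "intro.html"})
```

Even at this toy scale, the point is visible: none of this structure is anything an author should have to type, which is precisely why the specs can disappear behind tools.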
Likewise, any new breakthroughs in functionality would come from third party services, much like the Googles, Amazons and Yahoos did for the web.
Though no-one seemed to object to the notion that the specs and standards referenced in SCORM 2004 solve the problem of making content interoperable, some people did wonder aloud whether that should be the end goal of educational interoperability specifications. That issue was taken up again in the panel session at the end of the day.
Large scale standardisation experiences
That e-learning standardisation can mean much more than technical content interoperability was amply demonstrated by the afternoon session led by Prof. Rob Koper of the Open University of the Netherlands.
Dr. Michelle Selinger, for example, outlined Cisco's well established and widely used Networking Academy Program, which is rather more about standardisation of learning objectives and content than content formats. Since the material is made available to partner institutions on Cisco's own servers, but not as actual content, content interoperability isn't a priority.
Bernard Zech of Cogito presented a technically interoperable solution, elive Learning Design, that does address the desire for collaborative learning, though not necessarily just e-learning. The elive LD suite is a commercial, IMS Learning Design (LD) compliant graphical editor for didactical scenarios that is currently in beta. Though the output of the editor is meant to be suitable for use in an LD compliant learning environment such as CopperCore, its primary purpose is to make the precise nature of a learning experience explicit.
That is, the elive tool is designed to let an educator or team of educators capture, model and plan all the factors that together make up a learning experience at any level between an individual activity and a complete course, regardless of whether that experience is to be delivered face to face, blended, or on-line.
To do that job, the elive team have made some interesting extensions to the current IMS LD specification. These are all about capturing the meaning and the mechanics of re-usable sub-assemblies in a learning design. An example would be common methods of Problem Based Learning. Other re-usable components include techniques such as brainstorming, and 'pedagogical patterns': abstract solutions to common design problems. Patterns are in widespread use in the software design world to capture and exchange ways of addressing common human processes and problems in software, and the technique has already been applied with some success in course design and lesson planning.
Patterns on a rather wider scale were dealt with at breakneck speed by Oleg Liber of CETIS: the story of nearly nine years of JISC work in educational technology interoperability, told in ten minutes flat. Running from the earliest identification of interoperability as an issue, via the early CETIS work with the then embryonic LOM and IMS Enterprise specifications, to the successes of the eXchange for Learning (X4L) programme, the repeated pattern seems to be that while the problem of content interoperability has proven tractable, the question of enterprise data integration remains hard.
Hence the ten minute thumbnail of the E-Learning Framework (ELF), in the context of the wider JISC e-learning programme, presented by yours truly. This collaborative effort between Industry Canada, Australia's Department of Education, Science and Training and Carnegie Mellon's Learning Systems Architecture Lab is, in essence, a tool to break the problem of functional integration of information on a network down into more do-able chunks. Also, by separating out the means of integration (the services themselves), from the systems that need integration, a wide variety of new and existing systems can play.
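The service-oriented idea behind the ELF can be sketched in miniature. This is a hypothetical illustration with made-up names; the ELF defines its services as web service definitions, not programming-language interfaces. The point it shows is the separation described above: systems integrate by agreeing on a service contract, so any new or existing system that implements the contract can play.

```python
from typing import Protocol

class EnrolmentService(Protocol):
    """One 'chunk' of the integration problem, defined as a contract
    rather than as a feature of any one system. (Illustrative name,
    not an actual ELF service definition.)"""
    def enrol(self, learner_id: str, course_id: str) -> bool: ...

class LegacyStudentRecords:
    """An existing system joins the framework simply by exposing
    its functionality through the agreed contract."""
    def __init__(self) -> None:
        self.enrolments: set[tuple[str, str]] = set()

    def enrol(self, learner_id: str, course_id: str) -> bool:
        self.enrolments.add((learner_id, course_id))
        return True

def register(service: EnrolmentService, learner: str, course: str) -> bool:
    # Callers depend only on the service contract,
    # not on whichever system happens to sit behind it.
    return service.enrol(learner, course)

svc = LegacyStudentRecords()
ok = register(svc, "learner-1", "course-1")
```

Swapping the legacy records system for a brand-new one requires no change on the calling side, which is the integration pay-off the framework is after.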
Standardisation of a rather different kind was outlined by Malte Dreyer of the University of Applied Sciences Lübeck, Germany. Having tried the common international content interoperability specs some years ago with its partners, the Lübeck team concluded that the degree of interoperability these specs offered wasn't good enough. So they attacked the problem in a different way: by setting up a major, rigorous workflow of didacticians, instructional designers, graphic designers and subject specialists, all working on a single set of systems. Every bit of content that gets created is controlled "literally down to the last pixel", according to Malte.
Since every last aspect of the content production and deployment process is known and tested, few nasty surprises seem to occur. Though such an approach can't come cheap, the expanding consortium of colleges and universities that participate in the system, now stretching right around the Baltic sea, makes it feasible.
The panel was where the collaborative approach and the self-paced, individual approach to e-learning came back, and were played out in full. With those closely associated with SCORM making up one side of the panel (Erik Duval, Dr. Dexter Fletcher of ADL), and those associated with IMS Learning Design the other (Oleg Liber, Dr. Colin Tattersall of the OUNL), some debate could be expected.
Sure enough, when Oleg argued that far too much attention is being paid to content, Erik objected strongly. Oleg's main contention was that if the packaging and transmission of knowledge were the be-all and end-all of pedagogy, we'd have universities in libraries rather than the other way round. Consequently, we shouldn't say that we've cracked the educational interoperability standards problem, because we've only addressed this tiny bit of it.
Erik countered by saying that content is the one problem that was and is the easiest to solve. It was where we had to start. Now that that problem is largely dealt with, we can move on to introduce successive waves of technologies that are much harder to do.
But that was about it: on the central theme of the panel — barriers to the adoption of interoperability standards — the panel showed a remarkable degree of consensus.
On the positive side, it was noted that the fact that people do run up against barriers in standards implementation indicates that the transfer from 'techies' such as the panel members to the 'teachers' is indeed taking place.
But that doesn't take those barriers away. Different communities of practice do have problems expressing that practice in standardised formats, in interoperable ways. Good tools that integrate the whole process of standardised resource editing, transparently, are still relatively rare.
From an educational standards and specs point of view, one way to address that issue is to be humble about what is specifically educational, and adopt technologies from other communities where possible. Another is to simplify things by converging similar or related specs, as Erik fully expects to happen in the metadata sphere within a few years.
Simplification can also be applied to the specs themselves, much like the rather complex and abstract SGML was reduced to HTML, as Rob pointed out. Erik saw that point, but contended that end-users should never even see HTML or its equivalents.
But that led to the question of demand, and Wayne's point at the beginning of the day about perfecting the irrelevant. After all, when HTML was new, you had to wrangle it by hand, in a text editor. Yet large numbers of people did it, because the technology — the web — was absolutely compelling...
More about the conference is available on the Online Educa 2004 website.