CETIS: the centre for educational technology interoperability standards


e-learning tech that is fit for purpose, innovative and sustainable

The overarching question to which interoperability standards are a partial answer is how to make e-learning tools that are fit for purpose, innovative and sustainable. Factors such as software development strategy, usability research, pedagogic theory and more all have a bearing on that, but an immediate factor lies in a simple question: for a new type of tool, do you agree an interoperability specification first and then build applications, or build applications first and then agree a spec later?

Chickens and eggs

Assuming that the user experience is paramount, the question seems a no-brainer: you have to figure out whether something works for the people it's intended for before you start to worry about the plumbing that makes it possible. If you agree the infrastructure first, you may well end up with a bunch of low-level tools only techies could love.

Fortunately, user interface design can be separated from plumbing, and again from interoperability standards about that plumbing, to a fair degree. For starters, though you could say there is such a thing as cognitive interoperability (just think how difficult a completely different word-processing program would be to learn), most people would leave that to free innovation. That is one of the things that interoperability specs set out to make possible: by agreeing how to get at the boring, well-understood plumbing, developers are free to spend all their resources on the best possible user experience.

So where standardised interoperability matters is at the plumbing end: it allows people and systems outside the small world of the one application to get at the goodies. But even there, the agreement that a standard or spec formalises sits at the edge of what an application does. As long as the data can be shipped out or ingested in a standardised fashion, the application's own data and methods can take whatever form they need to take.
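The separation can be sketched in a few lines of Python: the internal data model is free to take any shape the application needs, while a thin export function at the edge emits only the agreed fields. (The class, function and field names here are invented for illustration; a real spec would fix the exchange format precisely.)

```python
import json
from dataclasses import dataclass, field

@dataclass
class QuizResult:
    # Internal model: free to change shape without breaking interoperability.
    learner: str
    raw_marks: list = field(default_factory=list)
    _cache: dict = field(default_factory=dict)  # app-private detail, never exported

    def total(self) -> int:
        return sum(self.raw_marks)

def export_result(result: QuizResult) -> str:
    """The 'edge' of the application: ship out only the agreed fields.

    Everything else about QuizResult stays the application's own business.
    """
    return json.dumps({
        "learner": result.learner,
        "score": result.total(),
    })

print(export_result(QuizResult("alice", [3, 5, 2])))
# {"learner": "alice", "score": 10}
```

The application can refactor `raw_marks` or `_cache` at will; as long as `export_result` keeps producing the agreed shape, other systems are unaffected.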

In other words: in a well designed application, the user experience is at two removes from an interoperability specification implementation. The two can vary quite a bit without affecting each other.

Quite a bit, but not completely. If, for example, you need to move a discussion forum along with your static learning content from one VLE to another, you're out of luck at the moment. Several ways of rolling discussion fora into standard content packages are readily imaginable, but no agreed specification makes provision for the purpose. So there clearly still needs to be a functional link between what a user needs and what a spec supports, even if the way the feature is presented may vary. Given that link, the spec-first or tool-first question has some specific pertinence.

Assuming that the user interface is paramount, and assuming that no single tool will satisfy everyone, different groups of people could start a pre-spec application development stage. Only when there are several broadly similar applications in the same space would you try to get minimal agreement on the data model and behaviour, where that's needed. That's a pretty good way to develop specs.

It can also be a bit expensive, since you have to synchronise the plumbing of multiple systems that can be radically different in nature. If there were no IMS Question and Test Interoperability (QTI), for example, and Canvas, TOIA and other assessment tools had developed independently, trying to get them to interoperate using a common spec after the fact would be pretty painful. It's more realistic to expect to build the specification on lessons learned from the first generation of such tools, and then build the second, standardised generation of tools from scratch. Resources apart, that's the best method, since you know what is required functionally, and you know what's minimally required for interoperability; i.e. simple, small specs that hit the spot.

One other option is specs first, tools later. Though it sounds upside-down at first, it may not always be a bad idea, since it allows you to get consensus before major resources are invested. But it is critical that such a pre-development spec is treated for what it is: a best-guess strawman, an agreed target to shoot for, a shortcut that is guaranteed to be changed quite a few times once people start implementing the spec in production systems. Still, that's exactly what a specification (as opposed to a standard) is supposed to be.

The final option is to build implementations at the same time as the spec. It's not as good as doing a generation of tools that you basically throw away before doing the specs, because you won't get the breadth of experience. It's also not as good as agreeing a spec first, because the reference implementation may well skew the spec unduly, and you won't get a representative consensus as easily. Still, as a compromise, it's likely to expose any big mistakes before the spec is final, and if the reference implementation is open source, it provides a clear way to shake out issues, a concrete target that helps interoperability, and a means to speed up implementation.

That's why the tools and frameworks strand in the ELF works in an iterative way. The slight difference is that some services have specs against them, and others don't. The ones that don't can go through a couple of development cycles by different groups of people before their agreement is submitted for specification.

Open source and open standards

If an open source reference implementation can do all of these things, why not skip the standardisation part and just get all relevant stakeholders to chip into the development of one open source tool?

At an immediate level, that presupposes that everyone will be happy with an open source solution to a particular requirement. That is an important and continuing discussion in its own right, but let's for a moment accept that it might be true in the near future.

The thing is, even an inclusive, well-resourced, and very precisely targeted open source project is unlikely to be able to serve everyone's requirements equally well. Even an open source poster child like the Apache web server project has open source competitors, simply because some communities have very specific requirements. In the e-learning world, Moodle is the dominant open source VLE at the moment, but it is unlikely to ever be the only one, for the same reason.

What's more, within large software development communities of any description, you get a degree of specialisation. In the case of a VLE, one group worries about the group management bit, and another focuses on the assessment tool, for example. At some point, in order to stay out of each other's hair, these groups need to make some agreements about data and interfaces. In short, they'll agree an interoperability spec, even if that's not necessarily what it is called. Nor is it a matter of adjacent functional areas alone: the largest open source products are often available in many competing distributions (e.g. Linux), and all these versions need to interoperate with each other and with third parties.
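That kind of internal agreement can be as small as a shared interface that both groups code against. A minimal Python sketch, with invented names purely for illustration, of a group-management team and an assessment team staying out of each other's hair:

```python
from typing import Protocol

class GroupService(Protocol):
    """The agreement: the assessment code relies on nothing beyond this."""
    def members(self, group_id: str) -> list[str]: ...

class SimpleGroupService:
    """The group-management team's implementation; its internals can change freely."""
    def __init__(self) -> None:
        self._groups = {"maths-101": ["alice", "bob"]}

    def members(self, group_id: str) -> list[str]:
        return self._groups.get(group_id, [])

def assign_quiz(groups: GroupService, group_id: str, quiz: str) -> dict:
    """The assessment team's code, written against the agreed interface only."""
    return {learner: quiz for learner in groups.members(group_id)}

print(assign_quiz(SimpleGroupService(), "maths-101", "quiz-1"))
# {'alice': 'quiz-1', 'bob': 'quiz-1'}
```

Either side can be swapped out or rewritten; the `members` agreement is, in miniature, an interoperability spec, whether or not anyone calls it that.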

And all the other factors...

Clearly, both open source reference implementations and a good deal of user involvement in pre-spec prototyping can help make innovative e-learning technology sustainable and fit for purpose. But experience shows that neither of them is necessary or sufficient.

At a guess, other aspects that come into it include proper targeting of the functionality in the interoperability spec that supports a new technology. Too broad a functional area, and adoption becomes too much of a burden; more narrowly targeted functionality that leaves much to generic solutions is much easier to adopt. IMS QTI 1.x, for example, had its own rendering format. QTI 2.x could re-use relevant bits of XHTML, the latest, componentised version of the ubiquitous webpage language.

There's also timing. It's little use if you've got a nicely specced working prototype of a tool nobody wants. Similarly, you may have the right idea for a killer application, but the supporting technology just isn't mature enough yet.

Finally, there is sheer, blinding simplicity. Not just to make sure the spec is consistent and coherent, but mostly to make adoption as easy as possible. RSS and Atom are clear examples of what can be done there. Developers are what makes a technology work, after all.
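Part of what made RSS that easy to adopt is how little code a consumer needs. As a rough illustration, nothing beyond the Python standard library is required to pull the items out of a minimal RSS 2.0 feed (the feed content below is made up for the example):

```python
import xml.etree.ElementTree as ET

# A minimal hand-written RSS 2.0 document, inlined for the example.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>CETIS news</title>
    <item>
      <title>Spec first, or tool first?</title>
      <link>http://example.org/articles/1</link>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(FEED)
for item in root.iter("item"):
    # Each item is just a title and a link; that is most of the spec already.
    print(item.findtext("title"), "->", item.findtext("link"))
```

A spec small enough that a complete consumer fits in a dozen lines is one that developers will actually implement.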


A lot of the material for this feature came out of an email exchange with Howard Noble of Oxford University's Computing Services department, and some input from James Dalziel of the Macquarie E-learning Centre Of Excellence. Thanks!


This work is licensed under a Creative Commons License.

