Developers content to bash code at CETIS

In the interest of science, about twenty-five developers were happy to have their precious programs maltreated during the CETIS content package codebash. Through the good offices of Learning and Teaching Scotland's (LTScotland) Gerry Graham and CETIS' Lorna Campbell, IMS content packages were freely swapped between systems. Conclusion: getting wrapped-up learning objects from one system to another is getting a lot smoother, but we're not quite there yet.



Though most developers were happy to bash face to face in LTScotland's Glasgow office, the people from SURF joined in from the Netherlands, and the WebMCQ / COLIS Project from Sydney, Australia. With their and many other people's generously donated IMS Content Package compliant (or intentionally not so compliant) bundles, Virtual Learning Environments (VLEs) like Blackboard 6 and FDLearning's le were put through their paces. At the same time, editing tools like those from Canvas Studios and Digital Brain were turning out fresh packages for others to either chew on or make findable and retrievable in repositories like intraLibrary or the University of Huddersfield's HLSI system. One noticeable trend was the increasing number of people involved in deploying repositories of learning objects; a sure sign that the learning object economy behind the drive for interoperability standards is beginning to take off.

Another noticeable trend was the climbing percentage of packages that were successfully passed from one system to another. In spite of packages of dubious provenance, packages produced by tools that are not even in beta yet, and packages made with the sole intent of torturing the systems on the receiving end, the importing systems managed to handle about eighty to ninety percent of packages smoothly. Chains of systems up to five deep lobbed packages from one to the other.

New tools, new packages and new combinations of the two mean that there were also a few new problems. Few people had expected, for example, that there can be incompatibilities between versions of the humble Zip format, the most popular method of wrapping and compressing a content package into one convenient bundle. It also turned out that the 'compression' bit is quite necessary: some content packages can be tens or even hundreds of megabytes in size, which raises the question of whether there should be a limit on size.
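
For developers who want to try this at home, the sketch below shows roughly what the 'wrapping and compressing' amounts to in practice. It is a minimal Python illustration, not code from any tool at the bash, and the folder and file names are made up.

import zipfile
from pathlib import Path

def build_package(source_dir: str, target_zip: str) -> None:
    """Wrap a content package folder into one compressed zip bundle."""
    source = Path(source_dir)
    # ZIP_DEFLATED switches compression on; ZIP_STORED would leave every
    # file uncompressed and make a large package even more unwieldy.
    with zipfile.ZipFile(target_zip, "w", compression=zipfile.ZIP_DEFLATED) as bundle:
        for path in sorted(source.rglob("*")):
            if path.is_file():
                # Write paths relative to the package root, so the manifest
                # ends up at the top of the archive rather than buried under
                # an extra folder level.
                bundle.write(path, path.relative_to(source).as_posix())

# Hypothetical package folder; any directory with an imsmanifest.xml at its
# root would do.
build_package("my_package", "my_package.zip")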

Such a limit on size can only really be a convention, as the spec envisages that simple packages can be mixed and matched into very much larger packages. This ability is technically quite challenging, and not yet much called for. Nonetheless, most tools either support it already or are planning to support it soon, as the possibility of combining small, existing learning objects into larger ones is essential for their much-vaunted reusability.

Like the difficulty of handling complex organisations, most of the minor snags that were found are concentrated in the XML-based manifest of the package. This is the essential recipe for the whole learning object, and the instructions in it need to be very clear and precise. Most of the time they are just that, but things can go awry in unexpected ways: some tools presume that the order in which similar ingredients are added at a particular stage shouldn't matter, while others think it very important. Likewise, some authors don't include absolutely every ingredient and its labels when they're attached to something larger anyway. This is fine when cooking the package somewhere else for the first time, but these undeclared ingredients can get left out when passing it on after that.
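
To make the 'undeclared ingredients' point concrete, here is a rough Python sketch that compares what is actually inside a package with what its manifest declares. The namespace URI and file names are assumptions for illustration only; packages at the bash used more than one version of the spec.

import zipfile
import xml.etree.ElementTree as ET

# The IMS CP namespace varies between versions of the spec; this one is
# illustrative, not a statement about which version any given tool used.
IMSCP_NS = {"imscp": "http://www.imsglobal.org/xsd/imscp_v1p1"}

def undeclared_files(package_zip: str) -> set:
    """List files that sit in the zip but are never declared in the manifest.

    These are the 'undeclared ingredients' that a later export can silently drop.
    """
    with zipfile.ZipFile(package_zip) as bundle:
        shipped = {name for name in bundle.namelist() if not name.endswith("/")}
        manifest = ET.fromstring(bundle.read("imsmanifest.xml"))
    declared = {
        f.get("href")
        for f in manifest.iterfind(".//imscp:resource/imscp:file", IMSCP_NS)
        if f.get("href")
    }
    declared.add("imsmanifest.xml")
    return shipped - declared

# Hypothetical usage on the package built above.
print(undeclared_files("my_package.zip"))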



The codebashers were pretty unanimous in their ideas for best practice, however. Hints include making sure that the namespace identifiers (stating which systems of weights and measures you use, if you like) are correct and match what the rest of the manifest says. The manifest should always be at the top of a package; that sounds easy, but when zipping up a set of folders, another folder level is easily added. Use proper file names and file paths in the package: no spaces, backslashes, smart quotes or dashes, or other systems will quite literally lose their way. Last but not least, most felt it is pretty vital to make sure that the manifest is valid before exporting a content package. The more precise, correct and unambiguous the recipe, the less likely it is to go wrong further down the line. Ultimately, as Charles Duncan of Intrallect put it, "remember that what looks like a minor interoperability issue to us may still look like a major issue to users".
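
Those hints lend themselves to an automated pre-flight check before a package leaves the building. The Python sketch below covers three of them (manifest at the top, tidy file paths, well-formed manifest); the character list and the checks themselves are an illustrative guess rather than an agreed rule from the bash, and full schema validation would still need a proper validator and the right schema files.

import re
import zipfile
import xml.etree.ElementTree as ET

# Spaces, backslashes, smart quotes and dashes tripped importers up at the
# bash; the exact character set here is an illustrative guess.
SUSPECT_PATH = re.compile(r'[ \\\u201c\u201d\u2018\u2019\u2013\u2014]')

def preflight(package_zip: str) -> list:
    """Run a few of the codebash best-practice checks before shipping a package."""
    problems = []
    with zipfile.ZipFile(package_zip) as bundle:
        names = bundle.namelist()
        # 1. The manifest belongs at the top of the package, not in a subfolder.
        if "imsmanifest.xml" not in names:
            problems.append("imsmanifest.xml missing from the package root")
        # 2. Keep file names and paths plain, so other systems can find them.
        problems += ["suspect path: " + n for n in names if SUSPECT_PATH.search(n)]
        # 3. The manifest should at the very least be well-formed XML.
        if "imsmanifest.xml" in names:
            try:
                ET.fromstring(bundle.read("imsmanifest.xml"))
            except ET.ParseError as err:
                problems.append("manifest is not well-formed: %s" % err)
    return problems

# Hypothetical usage: an empty list means the package passed these checks.
print(preflight("my_package.zip"))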

The developers were also unanimous in reckoning that get-togethers like the codebash are the best way to smooth such "minor" major issues out. Not that bashing code is an entirely altruistic activity: Bolton Institute's Phil Beauvoir remarked that you could learn about as much from the two days in Glasgow as from reading specifications for weeks. In that sense, Giunti labs' GianLuca Rolandelli also liked the fact that the codebash provided an opportunity to test not just one specification but many at the same time. While the focus was on IMS CP, related standards and reference models like IEEE LOM and SCORM also got a look-in. Combine that with the need to establish community practice signalled by Eddie Clarke of Staffordshire University, and it's clear that the bashing needs to be repeated until the interoperability is smooth.
