The one standard, LOM and the semantic web.
In a lengthy and characteristically thought provoking presentation, Stephen Downes challenges both the need and the demand for just one Learning Object Metadata (LOM) standard. That done, the very existence of such beasts as learning objects is called into question. We examine the argument.
Stephen's main point is, in his own words, that "Objects are best described using multiple vocabularies. There is no way to determine which vocabulary will be relevant to either an author or a user of a given objects. Trying to stipulate a canonical vocabulary a priori needlessly reduces the effectiveness of a system of communication." The 'objects' are very deliberately distinguished from 'learning objects', for the simple reason that what may be a learning object to you is a news article, archive content or a use case for somebody else. An object's meaning, in other words, depends on its context of use. And that is the crux.
The questions

But what about the central questions in Stephen's title: "One Standard For All: Why We Don’t Want It, Why We Don’t Need It"? And while we're at it, if not the one standard, then what?
To begin with the latter, the alternative is the W3C's RDF, the central technology in web pioneer Tim Berners-Lee's vision of the semantic web. Like IEEE LOM (or IMS' implementation of LOM: IMS Metadata), the Resource Description Framework is a technology to, well, describe resources; provide information about information, or, if you like, make metadata. Unlike LOM, RDF is not a controlled vocabulary, which means that rather than provide a known, finite collection of terms with which to describe learning objects, RDF is a structured method with which you can say almost anything about almost anything using whatever terms you please. Not just that, it is also specifically designed to be independent of the resource it describes. Anyone can make sophisticated statements about a resource, whether they own it or not.
Put very simply: LOM is words, RDF is grammar. LOM intentionally constrains, RDF is very flexible indeed.
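The words-versus-grammar distinction can be sketched in a few lines of Python: RDF's grammar reduces every statement to a subject-predicate-object triple, into which LOM's words, or anyone else's, can be slotted. The identifiers and term names below are invented for illustration, not taken from any real binding.

```python
# A minimal sketch of the RDF idea using plain Python tuples.
# Every statement is a (subject, predicate, object) triple; the
# predicate names below are hypothetical LOM-style, Dublin Core-style
# and newsroom-style terms, all describing the same resource.
triples = [
    ("urn:example:object-42", "lom:educational.typicalAgeRange", "11-14"),
    ("urn:example:object-42", "dc:title", "Introduction to Photosynthesis"),
    # Anyone may add statements in their own vocabulary, whether or not
    # they own the resource:
    ("urn:example:object-42", "news:section", "science"),
]

def describe(subject, triples):
    """Collect every statement made about a subject, whatever the vocabulary."""
    return {pred: obj for subj, pred, obj in triples if subj == subject}
```

Asking `describe("urn:example:object-42", triples)` gathers all three statements, regardless of which community's vocabulary supplied each predicate.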
A trivial point

Given the differences between LOM and RDF, you could simply end Stephen Downes' argument right there: they're not the same thing, they do different things, one isn't just the alternative to the other. In fact, IEEE's LTSC is now busy binding the LOM words to the RDF grammar.
To put the argument in a slightly more sophisticated way: using LOM does not in any way preclude using RDF. Even before the IEEE LTSC is finished, you could happily ignore all the IMS Metadata in a learning object repository, and instead rely on RDF descriptions from providers you trust to find the very same objects in the same repository.
But, as indicated by his plea for multiple vocabularies, Stephen Downes knows this. His point is that there shouldn't be just the one vocabulary.
A more substantial point

Do we neither want nor need one standard at all, though? Downes argues that this is best illustrated by the analogy of a road versus a railtrack. A railtrack is constrained to the point that a slightly different wheel size can wreak havoc. A track is only suited to the use it was originally conceived for, and nothing else. A road, by contrast, is essentially a flat surface that accommodates anything you like: cars, motorbikes, lorries, drag races.
While RDF and LOM could be comparable to railtracks versus roads in conceptual isolation, in practice the RDF train will run very nicely on the road, among all the other traffic. Or, right now, the fact that there's already a railtrack makes it far easier to build a road. The heavy lifting has already been done.
Leaving the analogy, the point is that a framework like RDF needs some common semantics in order to work. It has great facilities for augmenting existing semantics, or tying in equivalences from different vocabularies. But it still needs some common ground, and the LOM should be able to fulfill that function just fine. It is widely used and understood, and, crucially, conceived for the needs of a particular use community.
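As a rough illustration of that tying-in of equivalences, here is a hypothetical Python sketch of how an equivalence table lets statements made in different vocabularies collapse onto common ground. The term names are invented; real RDF would express such mappings with constructs like owl:sameAs or owl:equivalentProperty.

```python
# A hypothetical equivalence table mapping terms from two community
# vocabularies onto a shared term. All names are invented for illustration.
equivalences = {
    "lom:general.title": "dc:title",
    "news:headline": "dc:title",
}

def canonical(term):
    """Follow the equivalence table to a shared term, if one is known."""
    return equivalences.get(term, term)

# Two communities describe the same object in their own words...
statements = [("lom:general.title", "Cell Biology 101"),
              ("news:headline", "Cell Biology 101")]
# ...but once canonicalised, both collapse onto the same property.
merged = {(canonical(term), value) for term, value in statements}
```

The shared term playing the role of `dc:title` here is exactly the kind of common ground the LOM could supply.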
The heart of the matter

Downes' argument goes a bit further than just pointing out that the one standard is unlikely to be satisfactory or sufficient. He argues that sticking to the standard (IMS Metadata) will actually harm communication because its constraints are unlikely to support the actual uses people will put it to. The same terms will be put to different uses in different contexts, thereby defeating the purpose, he argues.
This is best explained by using his analogy with natural language, arguably the most sophisticated way to describe things yet devised. Natural language, compared to computer code, is vague, ambiguous and inextricably bound to use. That's why it works so well, according to Stephen. Specifically, "If standards were required for description and communication, then language would not be able to make description and communication possible at all." Except, of course, we do have standard human languages like metropolitan French or Standard English precisely to facilitate description and communication. Even in those cases where there is no codified language standard, there will almost certainly be some kind of stable variety known to everyone in the appropriate community.
Though the claim is not uncontroversial, the success of natural language is likely linked to its degree of ambiguity and the fact that it is so context bound. But you still need stable, widely known conventions for it to work. Consequently, Stephen's contention that "words do not have fixed meanings. You and I can use the same words to refer to different sets of objects. You and I can use different words to refer to the same set of objects. This is true even if we speak the same language." is only true if the different usages are known to us both. I am, for example, completely free to use the word "red" to refer to what is conventionally referred to as 'blue' in English. If, as is likely, you are not aware of my little language innovation (and you're not colour blind!), you literally won't know what I'm talking about. Furthermore, a system of communication without any such conventions at all might very well result in the proverbial Tower of Babel.
But that's all natural language. What about the actual business of describing properties of objects in a machine-readable way? Downes argues that "There is no (and could never be) one standard ‘canonical vocabulary’ for describing properties. The same property (e.g., colour) may be described in different ways with differing precision in different vocabularies (e.g., hue, tint, shade, wavelength, 24 bit pixel colour). The choice of vocabulary depends on the context of use." Leaving aside for the moment that LOM is intended for one context of use, the fact that there can never be a definitive vocabulary to end all vocabularies doesn't mean that the more modest undertaking of agreeing a small vocabulary in one context of use is futile.
Furthermore, once different communities have standardised their practice, it becomes possible to make full or near enough translations between terms and therefore uses or use communities. For example, to continue the colour example, a standard interface widget on my machine can tell me that its idea of red is also referred to as maraschino, ff0000, 255.0.0, C0 M100 Y100 B0 etc. depending on what I'm familiar with, or to what use I put the system colour picker. This sort of functionality should be one of the main uses of RDF. It is designed to be able to find and handle such equivalences. But both my system colour picker and my dreamed-of RDF learning object finder presuppose that at least one community sat down and standardised the way they described colour properties or learning object elements and properties. Once one community has done that, it becomes that much easier for other communities of practice to come up with their own vocabularies to refer to the same objects.
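The colour-picker behaviour described above amounts to a small translation table. A minimal Python sketch, with an invented registry of equivalent notations, might look like this:

```python
# A sketch of the colour-picker translation: one community's
# standardised term ("red") mapped to equivalent notations used by
# other communities. The registry itself is illustrative, not real.
colour_names = {
    "red": {
        "apple_name": "maraschino",   # vendor palette name
        "hex": "ff0000",              # web notation
        "rgb": (255, 0, 0),           # 24-bit pixel colour
        "cmyk": (0, 100, 100, 0),     # print notation
    },
}

def translate(term, notation):
    """Look up an equivalent notation for a standardised colour term."""
    return colour_names[term][notation]
```

The point, as in the prose above, is that the table only exists because one community standardised the term "red" in the first place; every other notation hangs off that agreement.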
Conclusions

The main point of Stephen's title is absolutely right. Trying to describe every possible learning object in just 60-odd elements, or finding them with the same, is not the definitive method of doing metadata. The flexible, decentralised, democratic and continuously growing accumulation of descriptions made possible by RDF represents a much more appealing way.
Whether we neither need nor want a standard vocabulary is a slightly different matter. Some vocabulary standardisation is simply necessary to get any metadata technology off the ground. Though, as Downes argues, it might be better to set standards after a period of use, you have to get the use done first. Also, once one schema has carved up the field, establishing and linking in others just becomes that much easier to do.
Ironically, the one thing that Stephen Downes cares most about in his presentation, the community, has not been emphasised enough. A community is practically defined by its common practice. Without some conventions, some shared understanding, there is no common practice.
References

Stephen Downes' OLDaily has the post about his presentation, and some links to an HTML version of the presentation (recent versions of Microsoft Internet Explorer only), and the original PowerPoint file. We host an unedited PDF print of the PowerPoint presentation.
For more on RDF, the W3C's RDF primer is lengthy, but well worth it.
More information on the semantic web is available on the W3C's semantic web activity page.
The newly launched CETIS Metadata SIG subsite has a wealth of information about all educational metadata issues, including IEEE LOM.