One can be many, et al
Posted on April 25 2003 by Stephen Lahanas
in response to The one standard, LOM and the semantic web.
Had to respond to this; I missed Stephen's presentation when it came out, but we've tracked these topics at length on leaders over the past two years.
"In a lengthy and characteristically thought provoking presentation, Stephen Downes challenges both the need and the demand for just one Learning Object Metadata (LOM) standard. That done, the very existence of such beasts as learning objects is called into question. We examine the argument."
Keep in mind that one standard can include tiers of complexity, something we've advocated here in regards to IMS metadata; without flexibility like this you will run headlong into a myriad of integration issues, thereby further retarding progress...
"Stephen's main point is, in his own words, that "Objects are best described using multiple vocabularies. There is no way to determine which vocabulary will be relevant to either an author or a user of a given object. Trying to stipulate a canonical vocabulary a priori needlessly reduces the effectiveness of a system of communication." The 'objects' are very deliberately distinguished from 'learning objects', for the simple reason that what may be a learning object to you, is a news article, archive content or a use case for somebody else. An object's meaning, in other words, depends on its context of use. And that is the crux. "
A multi-object approach or a flexible object is the only way to effectively address what is a context-oriented rather than an action-oriented solution set. Learning objects are not and should not be like OO code objects: OO code objects are abstractions of processes, whereas learning objects are presentations of contextual information (without necessarily needing to know how that resource would be utilized in any variety of systems).
"To begin with the latter, the alternative is the W3C's RDF, the central technology in web pioneer Tim Berners-Lee's vision of the semantic web. Like IEEE LOM (or IMS' implementation of LOM: IMS Metadata), the Resource Description Framework is a technology to, well, describe resources; provide information about information, or, if you like, make metadata. Unlike LOM, RDF is not a controlled vocabulary, which means that rather than provide a known, finite collection of terms with which to describe learning objects, RDF is a structured method with which you can say almost anything about almost anything using whatever terms you please. Not just that, it is also specifically designed to be independent of the resource it describes. Anyone can make sophisticated statements about a resource, whether they own it or not.
Put very simply: LOM is words, RDF is grammar. LOM intentionally constrains, RDF is very flexible indeed. "
RDF is a framework for developing the constraints and not the constraints themselves (not unlike XML). LOM has placed a certain set of constraints directed mainly by instructional design paradigms, but the danger is that LOM boxes in e-learning at the system level. There is a middle ground between the two - that ground could consist of multiple LOMs which may or may not adhere to RDF as a facilitating technology.
More importantly though, we're really talking about multiple grammars - the standards, if not the objects themselves, need to support multiple methodologies and processes. This is where it gets complicated, as we need to anticipate interoperability issues while placing the least amount of constraint on the objects themselves. So you could imagine perhaps using RDF to create 30 distinct "grammars", each one containing multiple unique object types, and having all of it adhere to data transfer and transformation in a consistent way. Only then do we begin to open up vast opportunities for object sharing on a system level with minimal integration impacts.
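A minimal sketch of what "multiple grammars over one framework" looks like, using plain subject-predicate-object triples (the model RDF is built on). Every name here - the vocabulary prefixes, terms and the resource URI - is made up for illustration; none of it comes from any real schema:

```python
# Illustrative sketch: one resource described with several distinct
# vocabularies, all sharing a single triple model (subject, predicate,
# object). All vocabulary prefixes and terms below are hypothetical.

resource = "http://example.org/objects/photosynthesis-demo"

triples = {
    # An instructional-design vocabulary sees a learning object...
    (resource, "edu:interactivityLevel", "high"),
    (resource, "edu:typicalAgeRange", "14-16"),
    # ...a library vocabulary sees archive content...
    (resource, "lib:subjectHeading", "Botany"),
    # ...a software vocabulary sees a reusable asset.
    (resource, "dev:mimeType", "text/html"),
}

def describe(subject, vocabulary_prefix):
    """Return only the statements made in one vocabulary (one 'grammar')."""
    return {(p, o) for (s, p, o) in triples
            if s == subject and p.startswith(vocabulary_prefix + ":")}

# Each community queries the same pool of statements through its own lens.
print(describe(resource, "edu"))
print(describe(resource, "lib"))
```

The point of the sketch: nothing in the shared model dictates which vocabulary is "the" description - each community's grammar coexists over the same transfer format.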
"...the point is that a framework like RDF needs some common semantics in order to work. It has great facilities for augmenting existing semantics, or tying in equivalences from different vocabularies. But it still needs some common ground, and the LOM should be able to fulfill that function just fine. It is widely used and understood, and, crucially, conceived for the needs of a particular use community..."
Not common semantics at all, rather a common framework for translating / sharing multiple semantics (which we do in our brains all of the time). LOM is already inadequate in the conceptual phase; it will only become less useful as more of it is deployed.
"Downes' argument goes a bit further than just pointing out that the one standard is unlikely to be satisfactory or sufficient. He argues that sticking to the standard (IMS Metadata) will actually harm communication because its constraints are unlikely to support the actual uses people will put it to. The same terms will be put to different uses in different contexts, thereby defeating the purpose, he argues. "
This is not future tense - it has already happened in regards to both IMS / LOM and SCORM. The standards have failed thus far.
"Though not uncontroversial, the success of natural language is likely to be linked to its degree of ambiguity and the fact that it is so context bound. But you still need stable, widely known conventions for it to work. Consequently, Stephen's contention that "words do not have fixed meanings. You and I can use the same words to refer to different sets of objects. You and I can use different words to refer to the same set of objects. This is true even if we speak the same language." is only true if the different usages are known to us both. I am, for example, completely free to use the word "red" to refer to what is conventionally referred to as 'blue' in English. If, as is likely, you are not aware of my little language innovation (and you're not colour blind!), you literally won't know what I'm talking about. Furthermore, a system of communication without any such conventions at all might very well result in the proverbial tower of babel."
The examination of natural language in this article quote entirely bypasses the grammatical superstructure which subtle context differences ride upon. Also, the determination of conventions is not usually a universal event; study any dialect and you'll find radical differences in much of the vocabulary, while the core vocabulary that ties the dialects together tends to be rooted in basic grammatical transformations. Natural language is infinitely more complex than its computational counterpart. Its success is that this complexity is abstracted from most of us.
"Leaving aside for the moment that LOM is intended for one context of use, the fact that there can never be a definitive vocabulary to end all vocabularies doesn't mean that the more modest undertaking of agreeing a small vocabulary in one context of use is futile."
I don't think any of us who have brought up these types of issues consider the quest futile; rather, a more sophisticated approach is required. The real question is whether the current efforts truly add value or are hindering us.
"Furthermore, once different communities have standardised their practice, it becomes possible to make full or near enough translations between terms and therefore uses or use communities. For example, to continue the colour example, a standard interface widget on my machine can tell me that its idea of red, is also referred to as maraschino, ff0000, 255.0.0, C0 M100 Y100 B0 etc. depending on what I'm familiar with, or to what use I put the system colour picker. This sort of functionality should be one of the main uses of RDF. "
Again we've confused function here - the example above doesn't serve well in the real world; we are artificially constraining a conceptual marker. Learning requires expansive rather than restrictive data typing. We've also perhaps added a bit of bias that wasn't appropriate.
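The colour-picker translation in the quote can be sketched as an equivalence table between vocabularies. The mappings are the ones named in the quoted example (maraschino, ff0000, 255.0.0, C0 M100 Y100 B0); the table layout and function names are purely illustrative:

```python
# Sketch of cross-vocabulary term translation, per the colour example:
# one conceptual marker ("red") known under different names in different
# vocabularies. Layout and names here are hypothetical.

equivalences = {
    "red": {
        "apple": "maraschino",      # system colour name from the quote
        "hex":   "ff0000",
        "rgb":   (255, 0, 0),
        "cmyk":  (0, 100, 100, 0),  # C0 M100 Y100 B0
    },
}

def translate(term, target_vocabulary):
    """Translate a conceptual marker into a target vocabulary's term,
    or return None when no equivalence has been recorded."""
    entry = equivalences.get(term)
    if entry is None or target_vocabulary not in entry:
        return None  # translation is only ever partial
    return entry[target_vocabulary]

print(translate("red", "hex"))   # a recorded equivalence
print(translate("red", "hsl"))   # an unrecorded one: None
```

Which also illustrates the objection above: the table only covers equivalences somebody bothered to record, so any vocabulary outside it simply fails to translate - restrictive rather than expansive data typing.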
"The flexible, decentralised, democratic and continuously growing accumulation of descriptions made possible by RDF represents a much more appealing way."
An open-ended toolset facilitator perhaps, but not a path.
"Whether we neither need nor want a standard vocabulary is a slightly different matter. Some vocabulary standardisation is simply necessary to get any metadata technology off the ground. "
Disagree - metadata can be made to be much more flexible. The fact that we as an industry have been working three-plus years to get IMS right only shows that the effort has been counterproductive and tends to prevent us from getting off the ground at all.
"Ironically, the one thing that Stephen Downes cares most about in his presentation -the community- has not been emphasised enough. A community is practically defined by its common practice. Without some conventions, some shared understanding, there is no common practice."
The learning community includes every other practice community in the world, now what :)
There cannot be one approach, there can be one framework which supports all approaches, and that extends beyond the potential of the technology itself (RDF, XML etc.).