The next big thing? Three architectural frameworks for learning technologies

Introduction

The panel on Architectural Frameworks was a key event at the IMS symposium held in Ottawa in August 2001. Representatives from IMS, MIT and Carnegie Mellon University put forward ways that learning systems of the future could be designed.

This new emphasis on architecture may herald a paradigm shift: away from merely providing compatible data files, and towards designing frameworks that allow fully interoperable systems to be developed.

Taking a look at the current state of interoperability standards in learning, you could be forgiven for thinking that there was no software in learning technology; the majority of current standards are primarily about the interchange of data. So we have data models and appropriate file formats for metadata, for content packages, for assessments, and so on, but no common notion of what to do with all this data.

But creating learning solutions is about a lot more than just how to format data. There are also the methods that parts of the system use to talk to one another, so that the assessment system knows how to tell the student record system that student x took part in the exam and got an 'A'. This is the 'glue' that holds learning systems together.

At the moment, these kinds of functions are provided within Learning Management Systems (LMS). Systems such as Blackboard and WebCT provide a set of mutually-compatible services within a proprietary framework. It is up to the LMS to tightly integrate security, content, rights management, and assessment to provide a total solution for the educational institution. But will this always be the case?

In this feature we look at three open architectural frameworks proposed by speakers at the IMS symposium.

Mark Norton: Creating a service-based model

Mark Norton is the Director of Specification Development at learning technology standards body IMS. Although IMS is not in the business of specifying system architectures, Norton believes that we need to discuss architecture to resolve some of the issues in current standards.

Norton sees the need for an architecture supporting a range of educational approaches, including distance learning, student-driven independent learning, collaborative learning, and research-oriented learning: "The framework should address these different approaches: but one architecture will not meet all these needs. A framework should enable you to build specific architectures that meet specific needs."

The answer, according to Norton, is to build architectures based around components, each of which provides a set of common services. So, for example, you could buy a Digital Rights Manager component that provides a licensing service that your Content Repository component talks to before giving out course materials to your Learning Client component.

In this scheme, the 'client' - which could either be a permanent part of a network or a detachable device like a laptop - interfaces with the services it needs to do its work. These services could be learning object repository services (such as searching, cataloguing and retrieval), registration services, learner tracking services, student profile management services and so on. Each of these services would be provided by a common 'Service broker' that managed requests from different types of client, so that authoring tools, for example, would have access to a different set of services than content delivery tools.

Although Norton envisages that components will be assembled to meet specific learning needs, he identifies some core elements that will be present in any learning architecture. These include the communication between the client and the system, content delivery mechanisms, user and group management, access control, tracking and auditing, repository management, and a component to handle the installation of components and the publishing of services so that other components can make use of them.
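
To make this more concrete, here is a minimal sketch in TypeScript (the names ServiceBroker, LicensingService and so on are invented for illustration and are not taken from any IMS specification) of how a component might publish a service and how another component might discover and use it:

    // Hypothetical sketch of the broker idea: components publish services
    // when they are installed, and clients discover them by name.
    interface Service {
      name: string;
    }

    interface LicensingService extends Service {
      mayDeliver(userId: string, contentId: string): boolean;
    }

    class ServiceBroker {
      private services = new Map<string, Service>();

      // A component publishes its services on installation.
      publish(service: Service): void {
        this.services.set(service.name, service);
      }

      // Clients (authoring tools, delivery tools) look services up by name.
      lookup<T extends Service>(name: string): T | undefined {
        return this.services.get(name) as T | undefined;
      }
    }

    // A Digital Rights Manager component publishes its licensing service...
    const broker = new ServiceBroker();
    const rightsManager: LicensingService = {
      name: "licensing",
      mayDeliver: (userId: string, contentId: string) =>
        userId.startsWith("student-") && contentId.length > 0,
    };
    broker.publish(rightsManager);

    // ...and a Content Repository component consults it before handing out material.
    const licensing = broker.lookup<LicensingService>("licensing");
    if (licensing && licensing.mayDeliver("student-42", "course-101")) {
      console.log("deliver course-101 to student-42");
    }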

Norton doesn't pin himself down to a single way these components can talk to one another, but instead sees three possible 'levels' of integration:

  • Integration at the software level, with components talking directly to one another using APIs (the interfaces programmers use to develop applications)
  • Message-based integration, with components exchanging messages in plain text using a protocol such as SOAP (Simple Object Access Protocol).
  • Data-based integration, using XML files or records in relational databases to share information.

According to Norton, systems should be flexible enough to allow integration at these different levels, making use of the performance benefits of the tighter integration methods as the situation requires.
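
To illustrate the difference between these levels, here is a rough sketch of the same operation, telling the student record system about an exam result, expressed at each level. All of the names, message shapes and identifiers below are invented for this example:

    // 1. Software-level integration: one component calls another directly
    //    through an API.
    interface StudentRecordService {
      recordResult(studentId: string, examId: string, grade: string): void;
    }

    function reportViaApi(records: StudentRecordService): void {
      records.recordResult("student-42", "exam-7", "A");
    }

    // 2. Message-based integration: the same request serialised as a
    //    SOAP-style XML message and sent to whichever component is listening.
    const soapMessage = `
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <recordResult>
          <studentId>student-42</studentId>
          <examId>exam-7</examId>
          <grade>A</grade>
        </recordResult>
      </soap:Body>
    </soap:Envelope>`;

    // 3. Data-based integration: no call at all; an XML file or database
    //    record is left in shared storage for the record system to pick up later.
    const sharedRecord = { studentId: "student-42", examId: "exam-7", grade: "A" };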

There are a number of issues that affect the development of this type of framework. "There are challenges that are unique to this area," Norton noted. "Learning is about conceptual matters, the transmission of knowledge between people. This doesn't resolve to concrete things, unlike other e-activities such as e-commerce. How do we transmit knowledge to a person?"

Aside from the fundamental issues of learning, there are specific technological challenges that need to be addressed, such as how components communicate and 'discover' services, and how actions and content are sequenced and aggregated.

Dan Rehak: Layers of services

Dan Rehak is Professor of Technical Learning Systems at Carnegie Mellon University, and one of a team working on research and development projects for ADL, creators of the SCORM (Sharable Content Object Reference Model) standard developed for military training systems.

Rehak is currently heading the CLEO (Customised Learning Experience Online) project funded by ADL. Rehak was keen to point out, however, that the views he was presenting did not represent ADL's position.

Just as Norton was keen to point out that there is no one-size-fits-all in education, Rehak stressed the diversity in requirements:

"We want different kinds of learning; we want different kinds of content; we want different kinds of clients; we want to satisfy diverse learning requirements, and we want to be able to fit learning to their needs and their context."

Rehak, like Norton, sees a component model as the solution to this diversity of requirements. "We want to create a variety of different systems and different infrastructure to provide different learning experiences." These learning experiences bring together the learner's context and environment, user profile, content, and learning approach in a managed environment.

ADL, like IMS, is today focussed heavily on describing data ("there is no architecture in ADL"), and has lots of missing pieces, such as how to sequence content or model tutoring:

"Moving data is not the only thing we need to do. We know what the data is. We have very little knowledge about how this data should be processed [and] how to create a learning experience from this data."

According to Rehak, this is not a situation that will be resolved by incremental improvements: "We need a fundamental architectural basis and approach".

This architectural basis, according to Rehak, takes the form of a service stack, with fundamental infrastructure services supporting learning services, which in turn support the topmost layer in the stack: the user experience provided by 'user agents'.

The infrastructure layer takes in standard networking services like HTTP and TCP/IP. "We should assume this exists for use and not touch it", explains Rehak. As in Norton's model, the way the user gets at content is by negotiating with services for querying, metadata, rights management, storage and resolution. These services then work with content repositories to return material to the user.
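
A minimal code sketch of how those layers might fit together (the interface names and the repository URL here are invented for illustration and are not taken from ADL, SCORM or CLEO): a user agent at the top negotiates with query and rights services, which in turn resolve material through the untouched infrastructure layer.

    // Infrastructure layer: standard networking that is assumed to exist
    // and is simply used, not redefined.
    async function httpGet(url: string): Promise<string> {
      const response = await fetch(url);
      return response.text();
    }

    // Learning-services layer: querying and rights management sit between
    // the user agent and the content repositories.
    interface QueryService {
      findContent(keywords: string): Promise<string[]>; // returns content identifiers
    }

    interface RightsService {
      mayAccess(userId: string, contentId: string): Promise<boolean>;
    }

    // User-agent layer: the user experience negotiates with services rather
    // than talking to repositories or the network directly.
    async function fetchLearningMaterial(
      userId: string,
      keywords: string,
      query: QueryService,
      rights: RightsService
    ): Promise<string[]> {
      const contentIds = await query.findContent(keywords);
      const materials: string[] = [];
      for (const id of contentIds) {
        if (await rights.mayAccess(userId, id)) {
          // Resolution ultimately bottoms out in the infrastructure layer.
          materials.push(await httpGet(`https://repository.example/content/${id}`));
        }
      }
      return materials;
    }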

Scott Thorne: MIT's OKI framework

Scott Thorne is the project technical leader for the Open Knowledge Initiative (OKI) at MIT. The OKI is an ambitious program to build a set of open-source, reusable learning technology components.

Unlike the previous frameworks, MIT are building interfaces at the program level for components that communicate across the enterprise: "I'm looking at this problem from the bottom up, looking at how infrastructure fits with the service layer," according to Thorne.

Whereas Norton saw a role for layering of integration, from data level to software level, MIT are focussing their efforts on defining a set of common interfaces using APIs.

APIs (Application Programming Interfaces) provide a stable set of services for other programs to 'hook' into. This, according to Thorne, helps with the re-use of software code, and allows integration of components in 'real-time', which can be an advantage over data-centric approaches.

Although the way that an interface is implemented (that is, how the code actually performs its task) may change, the interface developers use to work with the component stays the same, so that software can evolve and change without breaking interoperability established with older versions.
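
As a sketch of the general idea (this is not an actual OKI interface, just an invented example of the pattern), the callers below are written against a stable AuthenticationService interface, and the implementation behind it can be replaced without those callers changing:

    // Invented example of the stable-interface idea, not an OKI API.
    interface AuthenticationService {
      authenticate(username: string, password: string): boolean;
    }

    // One possible implementation: a simple in-memory password table.
    class InMemoryAuthentication implements AuthenticationService {
      constructor(private passwords: Map<string, string>) {}

      authenticate(username: string, password: string): boolean {
        return this.passwords.get(username) === password;
      }
    }

    // A later implementation could delegate to a directory server or single
    // sign-on system; code written against AuthenticationService is unaffected.
    function login(auth: AuthenticationService, user: string, pass: string): void {
      console.log(auth.authenticate(user, pass) ? "welcome" : "access denied");
    }

    login(new InMemoryAuthentication(new Map([["alice", "secret"]])), "alice", "secret");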

APIs are somewhat analogous to the controls of a car. Although what goes on behind the dashboard has changed considerably over the years, the steering wheel does exactly the same job now that it always did. In this way cars have remained 'compatible' with drivers across the intervening decades.

MIT are looking at the development of three layers of components and services: at the bottom are the common services, such as messaging and authentication, corresponding to Rehak's Infrastructure Layer. Above that are a set of OKI Services, including publishing, subject management, collaboration, and assessment. Finally, MIT envisage an optional set of common user interface objects (windows, frames, layouts, menus and so on) that applications could take advantage of.

For Thorne, the key problem is defining how the boundaries between the layers operate. If done correctly, this would provide a stable reference interface that developers can use to build new services and applications within the OKI framework.

Not everyone, however, is happy with the API approach. According to Bill Dwight, head of Java development at Oracle, "Some of the proposals are noble goals for stable APIs. But coming up with a service API for authentication is effectively a new standard."

Dwight agreed with Norton's concept of three levels of interoperability: "with [data-driven interoperability] you can go a long way with the least risk". According to Dwight, loosely-coupled services using messaging technologies such as SOAP are at the next level of risk, while API-based integration provides the tightest cohesion, but is the highest-risk strategy.

A common approach?

Despite differences in presentation, and some issues of interoperation, there is a surprising amount of similarity in the three approaches. According to Rehak, "We're speaking from the same basic needs and working to the same basic goals".

For some delegates, these commonalities in the models hinted at common issues affecting them. Phillip Dodds, the chief architect at ADL, confessed to being "kept awake at night" by issues of learning content:

"How does content affect these models? Content is very fine-grained, constructed on the fly, or as discrete complete objects. The nature of content is a huge architectural issue, but the implications arenít visible in the models. What is tracked and how? This is of significant concern."

Norton responded to Dodds' criticism: "Should we be thinking about content or learning? Content is simply a service that gets delivered. We shouldn't have to worry about it if we think about it in a different way."

Another issue is one of fundamental network architecture. For example, does the architecture have to be based around a setup of clients and servers, or could you use a peer-to-peer network of equals, like Napster? Each of the panellists had considered this, and according to Dan Rehak, "there's nothing in the service model that precludes peer-to-peer networks, and peer-to-peer is under active consideration by each of us".

However these models evolve, the fact that architectures are under discussion marks a new level of progress in interoperability. Each of the architectures presented took open standards as a given; in fact, a service-based model is impossible without them. And a components approach could revitalise the market to produce specialised tools for services like digital rights management, querying, and access management. As moderator Cliff Groen said at the closing of the panel, "We're trying to create a market for learning."

Some useful resources:

MIT's Open Knowledge Initiative
Advanced Distributed Learning Network (ADLNet)
IMS Global Learning Consortium
