Alternative Architectural Concept 2 - Federated Integration
This is the second in a series of articles discussing possible variations to the Learning Technology Systems Architecture (LTSA) model delineated by the IEEE Learning Technology Standards Committee (LTSC) working group. In a previous article I covered one of the core structural issues: content object taxonomy. With the taxonomy I implied some deeper philosophical predispositions which I feel are vital to the success of any potential learning architecture. They include the following assumptions:
- Learning (or e-learning) is a process (which can be thought of as a superset with a nearly infinite number of subsets contained within)
- Data / information without "personal transformation" is not and cannot be knowledge. Personal knowledge therefore represents the ultimate end-product of any learning process
- Learning process variation should not be restricted; any successful architecture must empower those processes, not narrow them.
- Management of knowledge is in fact a learning process.
- Management of data without human (or AI?) learning of the data is a mechanical function and adds little value. A library is an example of how knowledge management should work – the library is a physical repository of filtered / contextualized data ready for personal processing. The process the library serves, though, is learning.
- Tightly coupled integration does not work well and can cost 10x or more as much as a loosely coupled paradigm.
- The Internet provides us the infrastructure for a learning architecture or framework, but does not necessarily represent the model for it. Learning architectures are perhaps an evolution of the Internet model to its next logical state. E-learning represents the best framework approach for advancing the Internet model.
Federated or Distributed?
The last point referred to the Internet model as one that may be improved on rather than followed. In previous discussions on Leaders we’ve tried to define that model a bit, and the term 'distributed' has been used to try to characterize it. I think, though, that "distributed" doesn’t quite capture the true spirit of the Internet model – I’d think of it as federated rather than distributed. The word ‘distributed’ implies that one system will be spread out to different locations. ‘Federated’ supports the sense that we’re dealing with a host of systems (which in truth are not really components if they can stand alone). These systems form a "cooperative" community which can be morphed into many manifestations simply by including or excluding end-user configuration / access to them.
"Federated" then implies a higher level of abstraction and autonomy in the various systems which could be aggregated into a learning solution architecture. This in fact is an excellent evolutionary metaphor for the Internet. Having worked with enterprise integration, component development and web development, I can state with confidence that understanding the difference between these concepts is vital to the success of any effort in this regard. Proper logical conceptualization at this stage of what we think a "global" learning architecture should be, will make all the difference as to whether or not we can pull it off anytime soon.
Systems or Components?
Sometimes this may sound like a debate in semantics, but it’s not.
A component is generally considered a piece of a system, whereas a system contains components and can operate in a stand-alone mode. A component, however, has no guarantee of operating in a stand-alone fashion and is usually dependent on something else to add value. Any time we’re dealing with a global architecture (which is very different from what the LTSA had envisioned, but close to the types of distributed repositories that Stephen Downes has been describing), we must consider that component connectivity across firewalls and distances is simply not efficient.
In manufacturing or electronics a component is generally considered to be a semi-autonomous or completely dependent sub-assembly of a closed system. Now let's make an analogy with biological systems that will help further clarify the matter. Within a body there are many components (organs) that are required in order for the system (body) to function - those components are dependent, sometimes redundant and highly specialized, but not autonomous, in that they are directed centrally (neuro-motor control). The components, then, are also sub-assemblies and cannot stand alone, yet they are distributed geographically within the closed body system. Each closed body system, however, is autonomous and participates in a much larger ecosystem of other autonomous systems which together comprise various logical frameworks (such as businesses, governments, etc.). This is the difference between federated and distributed - autonomous systems within a system of systems versus dependent sub-assemblies of a single system.
What we see in any viable global architecture is the need to segregate application functionality and provide efficient data flows between the segregated pieces. The segregated application functionality then necessarily becomes a stand-alone application or service – more or less tantamount to a system. So we have a global network comprised of independent networks, and a global learning architecture comprised of independent yet federated systems.
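To make the federated idea concrete, here is a minimal sketch, with hypothetical names, of two stand-alone learning systems that cooperate only by exchanging plain serialized data at the boundary. Neither depends on the other's internals; either could be removed, replaced, or used entirely on its own.

```python
import json

class ContentRepository:
    """Stand-alone system: stores and serves learning-object metadata."""
    def __init__(self):
        self._objects = {}

    def publish(self, object_id, metadata):
        self._objects[object_id] = metadata

    def find(self, topic):
        # Return the IDs of all objects tagged with the given topic.
        return [oid for oid, meta in self._objects.items()
                if meta.get("topic") == topic]

class AssessmentService:
    """Stand-alone system: scores answers; knows nothing of the repository."""
    def score(self, answers, key):
        return sum(1 for a, k in zip(answers, key) if a == k) / len(key)

# Federation: a thin aggregator passes plain data between the systems.
repo = ContentRepository()
repo.publish("lo-101", {"topic": "algebra", "title": "Linear equations"})

assessor = AssessmentService()
result = {"learner": "kim",
          "score": assessor.score(["a", "b"], ["a", "c"]),
          "suggested": repo.find("algebra")}
print(json.dumps(result))
```

The point of the sketch is that the "integration" is nothing more than a dictionary handed across a boundary - the autonomy of each system is preserved.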
By nature, then, autonomous systems cannot and should not be tightly integrated, in that unnecessary dependencies will arise that increase complexity and reduce efficiency. The problem is that most people in the information technology arena have been focused on delivering tight integrations for the past decade or more. There is a term for this - Enterprise Application Integration (EAI). An enterprise is in itself a mini system-of-systems scenario, but one limited to a single organizational domain (oftentimes there are many networks plus web access as well, so it isn't a single or closed network scenario). The fact that there is a single domain to integrate leads most to follow what may have worked reasonably well for client-server technology - that is, a consolidation and tight coupling of the application layer into interdependent components.
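The dependency cost described above can be sketched in a few lines, using hypothetical names. In the tightly coupled version one system reaches into another's internals, so the two must be versioned and deployed together; in the loosely coupled version the sender only emits a plain message, and any consumer (or none) may act on it.

```python
# Tightly coupled: the LMS reaches into the gradebook's internal storage.
# Any change to Gradebook's representation breaks TightLMS.
class Gradebook:
    def __init__(self):
        self.rows = []  # internal representation, exposed to callers

class TightLMS:
    def __init__(self, gradebook):
        self.gradebook = gradebook
    def record(self, learner, grade):
        self.gradebook.rows.append((learner, grade))  # reaches inside

# Loosely coupled: the LMS emits a self-describing message through any
# callable; consumers are free to change independently.
class LooseLMS:
    def __init__(self, send):
        self.send = send  # any callable that accepts a dict
    def record(self, learner, grade):
        self.send({"event": "grade", "learner": learner, "grade": grade})

outbox = []
lms = LooseLMS(outbox.append)
lms.record("kim", 0.9)
print(outbox)
```

The loose version costs one level of indirection up front but removes the shared-internals dependency that makes tight integrations so expensive to maintain.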
In the 1990s, the main technology used to facilitate this type of integration was CORBA, although many other less rigid (more proprietary) approaches were attempted. To describe all of this outside the technological climate would be a bit misleading; there were very good reasons for wanting to split applications / systems up in the '90s. The first move to client / server technology went from mainframes to mid-tier servers with Pentium 200MHz clock speeds and 100 MB hard drives. The hardware of the early and mid-'90s simply did not support deployment of powerful applications - they had to be spread out (a great motivation for early AI efforts that led to MPP, or Massively Parallel Processing, the technology behind Teradata data warehousing, for example).
What we found out by the end of the '90s, though, was that the complexity of your average IT department that built interdependent systems across a weak mid-tier was consuming half or more of yearly budgets just to maintain; many IT departments are still stuck there. The methodologies and philosophies of that era are still with us, in spirit anyway - one gets a real illusion of control when engaging in a tight integration effort, and in fact that illusion becomes a necessity once embarked on that path, else the solution simply won't work. (But then you've spent the majority of your dollars not on the requirement but on what was perceived to be a simple integration - unfortunately, that's how they're always perceived.)
Moving to expand on an Internet or federated model requires a paradigm shift in IT thinking. The entire success of e-learning as a global solution depends on this shift.
The key to any shift is the creation of a global understanding of the potential marketplace. Stephen Downes refers to something similar, "the Learning Object Economy," which describes part of it well but doesn't extend to the whole spectrum. On Leaders this spring we took a stab at redefining the marketplace in a manner that would allow vendors and potential users alike to appreciate the potential that is now being overlooked in an LMS-centric, myopic approach to e-learning. The first step is to recognize the logical expansion of the term e-learning to encompass the cross-section of opportunity where education and technology are already co-located. This cross-section expands the current definition of the market by approximately ten-fold.
Another crucial step is for current e-learning luminaries to stop being so judgmental about what constitutes 'good' or 'quality' e-learning, as this has artificially elevated market costs and planted unnecessary doubts and reservations in the minds of the very people who are most interested in adopting the technology. E-learning, like the Internet, is about extension, expansion, access and new opportunity - naturally many feel threatened by its potential, but it is an unwarranted fear that will quickly disappear as adoption grows.
We won't dwell too long on marketplace issues just now, but it is important to keep in mind that everything we do with the technology / solutions will be guided by the assumptions we're working under. The technical paradigm shift needed is to push vendors towards more specialized development that will inter-operate at a federated level or serve a standalone purpose. For example, content development, assessment activities, simulations, labs, courses, video / audio, content objects and games could all be accessed singly and add value, or be aggregated by rule and add value.
One of the greatest follies in e-learning that I've witnessed (and, I must admit, believed in at one time) is the idea of the intelligent system deciding things for the learner - most often manifested through something called 'assessment-based prescription.' The idea is that, based upon rules built into a system, after an assessment a learner will be directed automatically to the exact information they need (there are of course many variations on this theme). Here's the problem:
- Building it adds 10x the complexity to the project
- We make the rather arrogant assumption that whoever decided the rules for all students in such scenarios will adequately support those students' needs. I can guarantee you here today that it will not; the number of variations and permutations necessary to suit every student's view into a particular subject cannot be adequately predicted, and moreover doesn't have to be.
- So here's the revelation: letting the student have control actually reduces complexity, increases effectiveness and will certainly improve adoption. And this, by the way, is the main interface process model for the Internet, so we have some real proof of the potential.
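The contrast in the points above can be sketched with illustrative, hypothetical names: the prescriptive approach must pre-author a rule for every topic-and-score-band permutation (and still misses cases), while learner control reduces to a simple lookup over whatever resources are available.

```python
# A small federated catalog of available resources (illustrative data).
CATALOG = {
    "fractions": ["intro video", "practice set", "simulation"],
    "decimals": ["tutorial", "quiz"],
}

# Assessment-based prescription: someone must author a rule for every
# (topic, score band) combination - the table grows multiplicatively
# and any unanticipated case falls through to nothing.
PRESCRIPTION_RULES = {
    ("fractions", "low"): ["intro video"],
    ("fractions", "high"): ["simulation"],
    # ...one entry needed per topic x band x learner context...
}

def prescribe(topic, score):
    band = "low" if score < 0.7 else "high"
    return PRESCRIPTION_RULES.get((topic, band), [])

# Learner control: the learner browses the catalog directly;
# there is no rule table to author, maintain, or integrate.
def browse(topic):
    return CATALOG.get(topic, [])

print(prescribe("fractions", 0.4))  # only what a rule author anticipated
print(prescribe("decimals", 0.4))   # no rule authored, nothing returned
print(browse("decimals"))           # everything available; learner decides
```

The unauthored case ("decimals") is where prescription silently fails - and exactly where a learner browsing for herself loses nothing.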
Learner control has been a battle cry among educators for many years - now we can actualize it in the most powerful manner imaginable. What we are doing, in a sense, is relieving integrated systems of a lot of process duties, thereby allowing them to be loosely coupled, and shifting that burden to where it always should have been: the learner, who defines his or her own processes. Now, this doesn't prevent us from providing help in those journeys, but removing the need for massively centralized control removes the integration burden that might otherwise occur - one that is strongly implied in the LTSA architecture even at the enterprise level, and would be more complex if extended globally.
Many thanks to Stephen Lahanas for allowing the feature to be republished here.
The whole Alternative Architecture series can be found on the E-Learning Leaders Group (requires a Yahoo username). The first article deals mainly with object classification. The third article outlines a Federated E-learning Architecture. More articles will follow.
The IEEE-LTSC Architecture and Reference Model Working Group has made the working draft 9 of the LTSA (947 Kb, pdf) available on their website.