Scott's Workblog


Making standards and specifications: Technical approaches

When I first started, the specifications I worked on were based principally around lists or tables of elements - as many as people could think of - together with an XML DTD. Since then I've seen the introduction of UML, Use Cases, WSDL, REST, RDF and a whole host of other things into the specification process. Some of these work, some don't. Here's my personal view based on my experiences to date.



UML


I like UML, but I've seen it overused. In small doses, UML can bring clarity and simplicity to what can otherwise be an impenetrable wall of SHALL, MUST and MAYs. In large doses, it can bulk out a simple spec into a huge impenetrable tome full of arcane diagrams. I think a UML class diagram is a great way to summarise a data model. If you need more than one page for it, the spec is probably too complex. If you need more than one diagram, the spec may need breaking up into multiple smaller modules.


UML sequence diagrams can be handy when there is a very important choreography that needs to be implemented, particularly for things like security specifications where you need to understand how multiple parties interact (e.g. OAuth). However, they aren't always very readable, even for developers, so if there is a need for a sequence diagram then there is also a need for a step-by-step walkthrough. For example, Eran's simple OAuth workflow with pictures is much easier to follow than a UML sequence diagram. Without it I probably wouldn't have bothered trying to understand the detailed choreography.


Overall I'd recommend using UML as an aid to explanation, and as a way of warning yourself when things are becoming too complex. During the specification process, UML is also a good way to check mutual understanding of what the spec is and its current status, but it should be used sparingly in the actual specification documentation.



Use Cases and Requirements


Specifications really do need requirements, and there are several ways to capture them. IMS uses Use Cases in a fairly traditional format. W3C uses use cases for brainstorming, and then captures Requirements from them as brief but normative statements (see, for example, the Widgets 1.0 Requirements document). In CEN I've worked on specs using high-level "business cases", which are similar to use cases but structured slightly differently to capture things like non-functional requirements and the business context (see, for example, CWA 15903).

In general I don't think it matters too much how these things are documented. But it does matter how requirements are managed.

One particular problem is defining the specification scope. It is very easy to stretch the scope to fit an edge case, particularly in a small community with a few vociferous members, as someone can latch onto such a case and easily distort the whole process. It is also really difficult sometimes to distinguish between requirements that have a direct implementation need (that is, it's part of an existing system or will be implemented as soon as the spec is in draft) and those that are speculative, with no identifiable implementation strategy. It's not necessarily a bad thing to design specifications so that they are flexible and can meet future needs - I think that is an excellent design goal (q.v.) - but it is quite another thing to invent speculative requirements and use cases to justify it.

Overall I think we're getting better at requirements and scoping, but some specifications are still far too broad.

Another problem is a requirement that defines its own solution, which then hampers the process of coming up with a specification design that suits a range of implementations.


Design Goals


Something I like about the way the webapps group has worked in W3C is setting out some general design goals independently of specific requirements. I think these are a good checklist to use when evaluating the effectiveness of a specification as a whole, rather than whether it implements a particular requirement. I've also introduced this approach in other specification work, such as XCRI and HEAR, and I think it's one I'd recommend more widely.


RDF and Semantic Stuff


I'm a bit ambivalent about Semantic Web technologies, but I do think a way of modelling semantics is very useful and worth applying to specifications. Most specifications involve concepts that are implemented in information models, and the way RDF properties and classes are defined provides a good model for doing this in a way that builds on and references concepts in other specifications - for example, explicitly relating properties in a specification to elements in the Dublin Core Element Set (aka ISO 15836). Also, if an information model is expressed using the semantic web constructs of "classes", "properties", "domains" and so on, it is then very clear how to relate it to a UML class diagram summarising the specification, which makes cross-checking easier.

Another really good idea that came from RDF is the idea of assigning a URI to each property and class. This makes it very simple to reuse individual properties defined in other standards as you can identify them unambiguously.
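As a concrete sketch of what that looks like in practice - using Python and rdflib, with a made-up vocabulary namespace and property names purely for illustration - a new specification might declare its own classes and properties with URIs and explicitly tie a property back to Dublin Core:

```python
# Minimal sketch (not from any real specification): declaring a class and a
# property with their own URIs, and relating the property to dcterms:title
# so the reuse of the Dublin Core concept is unambiguous.
from rdflib import Graph, Namespace
from rdflib.namespace import DCTERMS, RDF, RDFS

EX = Namespace("http://example.org/learning-opportunity#")  # hypothetical vocabulary

g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("ex", EX)

# The specification's core concept, identified by a URI.
g.add((EX.LearningOpportunity, RDF.type, RDFS.Class))

# A property of that concept, declared as a specialisation of dcterms:title
# rather than an unrelated, home-grown "title".
g.add((EX.opportunityTitle, RDF.type, RDF.Property))
g.add((EX.opportunityTitle, RDFS.subPropertyOf, DCTERMS.title))
g.add((EX.opportunityTitle, RDFS.domain, EX.LearningOpportunity))

print(g.serialize(format="turtle"))
```

Because every term has a URI, another specification can point at EX.opportunityTitle (or at dcterms:title itself) without any ambiguity about which "title" is meant.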

On the negative side, there is a lot of academic complexity and obscure terminology in these technologies, and this really should be avoided in specifications where possible.


Singapore Framework


A technique that emerged from the metadata and semantic web world is to make a distinction between "vocabulary" specifications and implementation profiles. This is subtly different from the approach taken to create application profiles (e.g. of the LOM): vocabulary specifications define only concepts, whereas profiles define relationships and constraints.

The Singapore Framework sets out a methodology for constructing "domain models" and "description set profiles" based on Dublin Core, but which applies equally well to any specification based on reusing core vocabularies.

Use of this framework is being explored in specifications such as CEN's European Learner Mobility (EuroLMAI) standards and the UK HEAR specification. For example, CEN ELM defines a core vocabulary of classes and properties used in achievement data, a generic description set profile for "European learner mobility documents", and then a specific profile for the Europass Diploma Supplement.

As another example, CWA 15903 defines the concepts of learning opportunities and their properties. It doesn't place any constraints on how many instances of a property a model can have, or say very much about their syntax. Other specifications can then take the concepts and define the constraints and bindings - XCRI, for example.

However, I don't think the specific language and techniques for defining a Description Set Profile are of as much value as the distinction itself (however realised), so I'd suggest we learn from and be inspired by the framework rather than adopt it. For example, in EuroLMAI the Description Set Profile is actually realised using constraint clauses (e.g. "each instance of ClassX MUST have exactly one PropertyY").
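To illustrate what I mean by realising a profile as constraint clauses, here's a rough sketch in Python using rdflib; the vocabulary namespace, class and property names, and the cardinalities are all invented for illustration, but the shape of the check is the point:

```python
# Rough sketch: checking constraint clauses like "each instance of ClassX MUST
# have exactly one PropertyY" against instance data. The constraints below are
# made up; a real profile would enumerate its own classes and cardinalities.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/elm#")  # hypothetical vocabulary namespace

# (class, property, minimum occurrences, maximum occurrences) - the "profile"
CONSTRAINTS = [
    (EX.DiplomaSupplement, EX.holder, 1, 1),           # exactly one holder
    (EX.DiplomaSupplement, EX.achievement, 1, None),   # at least one achievement
]

def check_profile(g: Graph) -> list[str]:
    """Return human-readable violations of the profile found in the graph."""
    violations = []
    for cls, prop, lo, hi in CONSTRAINTS:
        for instance in g.subjects(RDF.type, cls):
            count = len(list(g.objects(instance, prop)))
            if count < lo or (hi is not None and count > hi):
                violations.append(
                    f"{instance} has {count} value(s) of {prop}, "
                    f"expected between {lo} and {hi if hi is not None else 'unbounded'}"
                )
    return violations
```

Whether the clauses live in prose, a schema language or a checker like this matters much less than keeping them out of the vocabulary specification itself.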

A side effect of separating concepts from implementation profiles is that you have a specification where you just focus on definitions. I think this can be really important; for example in recent IMS specifications for web services the information on what a field is for is tiny compared with the big UML interface diagrams and interface definition stuff, and in some cases has been pretty vague and even incorrect. This isn't to malign the authors (I was one of them!) - it is just that the format makes it harder to focus on providing good explanations of the meaning of properties and to provide good guidance on their use.

I think this approach may be useful to make better reuse of concepts shared across the domain, and for making it clearer when a specification actually needs a binding and technical conformance, and when it doesn't.


Conformance Testing


Testing is something we've struggled with as a community, and there has been some confusion over conformance testing, badging and certification and so on.


Overall I think it's important to be able to test implementations of a specification. In W3C, there is a requirement for having tested implementations of specifications before they can be approved, and Marcos Caceres from Opera has produced a very interesting methodology for developing these tests. Having worked on an implementation myself, I found the tests developed using this method easy to work with. Also, having a nice visible performance gauge for my work was a good motivation for improving the implementation.

I think this does point up something important about conformance testing - I think it has to be open, free, and transparent. There is a temptation to politicize conformance, or to make it into a revenue stream. I think this misses the point - conformance is also about making better specifications, and you don't want that to be distorted by a "pay to test" environment or have aspects of testing that are based on a nod-and-a-wink from some staffer. If necessary it may be a case of having neutral, free conformance testing alongside paid certification and marketing, but with a good clean separation of the two.

Another thing about testing - it's useful to make the tests available early on, during the evolution of the specification. Often the tests themselves show up specification problems, and help identify scope issues. For example, if the specification mandates an untestable behaviour, maybe it should be optional; if it's unclear what the fallback is when something is missing, maybe that thing has to be mandatory or have a specified fallback behaviour that can be tested. Again I'd point to Marcos and Dominique's work here on test generation at W3C, as well as to the work of Ingo Dahn.
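To give a flavour of the kind of small, early test I have in mind, here is a sketch in Python - the parse_config() function and the <name> element are made up for illustration, not taken from any real W3C test suite - showing how a specified fallback ("a missing <name> yields the empty string") is precisely what makes the behaviour testable:

```python
# Sketch of a tiny conformance-style test against a hypothetical parser.
# If the spec left the "missing element" case undefined, the second test
# could not be written at all - which is exactly the kind of gap early
# testing tends to expose.
import unittest
import xml.etree.ElementTree as ET

def parse_config(xml_text: str) -> dict:
    """Toy parser: the (hypothetical) spec says a missing <name> falls back to ''."""
    root = ET.fromstring(xml_text)
    name = root.findtext("name")
    return {"name": name if name is not None else ""}

class NameElementTests(unittest.TestCase):
    def test_name_present(self):
        self.assertEqual(
            parse_config("<widget><name>Clock</name></widget>")["name"], "Clock")

    def test_name_missing_falls_back_to_empty_string(self):
        self.assertEqual(parse_config("<widget/>")["name"], "")

if __name__ == "__main__":
    unittest.main()
```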



Open Source (Reference) Implementations


Again, this is something the community has struggled with over the years. Overall I think there is considerable value in having running code for new specifications, particularly things like basic libraries for a range of platforms. In some cases this is uncontroversial, but there have been problems in terms of ownership conflicts and sustainability. In general I think it's important to have viable open source implementations, independent of the specification body itself, but not necessarily considered "reference" implementations. I think much of the trouble comes from the SSO (standards-setting organisation) endorsing particular implementations rather than relying on an open conformance process (see above) to allow users and implementers to draw their own conclusions.


There is also the issue of OSS projects having access to specifications under development, and of OSS contributors contributing to specifications. In some cases this isn't really a problem (e.g. the IETF); in others it is handled by having an MOU (e.g. between the W3C and the ASF). However, I think that given the value OSS brings to standards, if the process of specification development doesn't allow ANY open source project to engage (not just cherry-picking the most popular) then the development process needs rethinking.


Note that this only really applies to specifications that are aimed at direct implementation; "vocabulary"-style standards and domain models aren't implemented in this fashion. I guess a rule of thumb is, if there are conformance tests, then there should be OSS implementations.


If in doubt, throw it out


One final thing, not really a technology but certainly a technique, is to be really ruthless about what makes the final cut. That doesn't just apply to the appendices and guidance stuff kicking around in some specification documents, but also the core models and functionality. If the key implementations that are testing a spec can't find a use for a field or never use a method or interface, consider cutting it out completely. Keep the draft around as it might turn out useful in a revision. If there is a whole section of functionality that is only used by a few implementations, split it out into a separately published mini-spec to keep the core as small and easy to understand and implement correctly as possible. This can continue right up to the end of the process - for example in the W3C Widgets specs we've removed API methods and properties at each stage of the spec, often simply as a result of asking "is anyone using this?"

In the early days I think we were keen to capture as wide a set of requirements as we could and provided redundancy in the specification to avoid too many non-conforming extensions. I think one consequence was an explosion of application profiles, and just as many interoperability issues as if we'd kept the specifications lean and mean to begin with.


Also, large, complex specifications need many, many more tests to check conformance. In my recent W3C work I think it's on average about 20 tests per XML element. So if a spec has 100 elements, that's about 2,000 conformance tests to pass if you're testing to the same level of detail. (W3C Widgets has 10 elements; some of its sub-specifications, like Widget Updates, have just a couple.)


Summing up


So what should we do in future? Or at least, until something better comes along?



For standardising concepts:



For specifications aimed at implementation:

