Postel’s Law vs the Whiteboard Marker Rule

[28 June 2010]

Many people have been influenced by what is often called Postel’s Law, usually quoted as saying “Be conservative in what you send and liberal in what you accept”.

In a room with a whiteboard, however, the only workable rule is: when you find a marker that no longer writes, do not be liberal in what you accept, and do not put it back where you got it. Throw it away. Otherwise, every whiteboard you run into will soon have twenty-odd markers, some of which actually work, if you can find them, but most of them duds.

Cleaning up as you go is a good principle in cooking, and in managing the set of whiteboard markers. It’s also a good idea in data management; it’s a shame so many people misread Postel’s Law to mean the opposite.

Finding accessible color schemes for data graphics: one problem, four solutions

[11 June 2010]

Yesterday I spent a little time writing a stylesheet to make it easier to browse through some potentially complicated and voluminous data. I ran into a problem, and I found four solutions and learned several lessons, which I record here for those who face similar problems.

[“What stylesheet was that?” hissed my evil twin Enrique. I don’t know why he always whispers when asking these questions, but he does. “It doesn’t matter for the point at issue, but since you’re wondering, it was a stylesheet for the catalog files used to organize the XSD test suite. If you’re curious about it, you can see the stylesheet in use for a small sample catalog file.” “This is supposed to be easy to browse?” “Well, click View Source and compare it to the underlying XML.” “OK, I guess I see your point. But where’s the rest of the test suite? Don’t tell me the XML Schema working group has a test suite with … lessee, eighteen test cases?” “Hush. No, of course not. But the test suite as a whole is not set up for interactive browsing on the web. Yet.”]

[“And what was the problem?” “Stop interrupting and maybe you’ll find out.”]

To make it a little easier to see which test cases involve valid documents and which involve invalid documents (not to mention documents with [validity] = notKnown and test cases with implementation-defined results), I supplied a little pale green, pink, blue, or gray background for the different possible expected results. [Image: extract from the catalog display showing lines with colored backgrounds]

So far, so good, but this morning Liam Quin mentioned casually that I might want to consider changing the colors, since the red and green backgrounds would be indistinguishable for some readers. He was right, of course, and the original color scheme was really just a sort of quick and dirty proof of concept, not intended for final use.

“Boy, do you sound defensive. Thought you could get away with it, huh?” asked Enrique. “Oh, hush. Like you never take shortcuts with things.”

So I tested the color scheme using the incredibly useful image analysis tool at VisCheck.com: you upload an image, and can perform simulations of deuteranopy (red/green color deficit), protanopy (a different form of red/green deficit), and tritanopy (a blue/yellow color deficit). And Liam was absolutely right: the backgrounds for valid and invalid test cases were virtually indistinguishable in the output of the first simulation. [Image: a simulation of deuteranopy (red/green color blindness) on the image given earlier]

So I set about to fix it, and I learned some things.

First solution. First, I taught myself something about how not to go about this task. Using the helpful Color Sphere widget I downloaded some time ago from colorjack.com, I found a set of three colors that were pretty well distinct in all the different filters it provides: a green, a red, and a blue. They were too intense to use as background colors, so I spent some time with a color picker finding RGB values that matched those hues but had less saturation (select the RGB sliders pane, convert from hex to decimal because the widget doesn’t understand hex; change to the HSB pane, slide the saturation level down a bit, go back to the RGB pane, convert the decimal numbers back to hex, write them down) and checking the resulting color scheme with VisCheck (update the stylesheet, refresh the page in the browser, take a screen snapshot of part of the page showing all the colors, go to VisCheck, upload, perform deuteranopy simulation, whoops, that won’t work, go back and change one of the colors, come back, upload, perform deuteranopy simulation, upload, perform protanopy simulation, upload, perform tritanopy simulation). [Images: screen shot fragments showing two versions of the color scheme]
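
That hex-to-HSB round trip is exactly the sort of mechanical conversion a few lines of code can do for you. Here is a minimal sketch in Python, using the standard-library colorsys module (colorsys calls the B of HSB “value”; the hex color and the 0.6 factor below are placeholders, not the values I actually used):

    import colorsys

    def desaturate(hex_color, factor=0.6):
        """Return hex_color with its saturation multiplied by factor (0..1)."""
        r, g, b = (int(hex_color.lstrip('#')[i:i + 2], 16) / 255.0 for i in (0, 2, 4))
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        r, g, b = colorsys.hsv_to_rgb(h, s * factor, v)
        return '#%02x%02x%02x' % tuple(round(c * 255) for c in (r, g, b))

    # e.g. desaturate('#00cc00', 0.5) yields a paler green ('#66cc66') with the same hue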

I’m very grateful for these tools, but sometimes I do wish it were more convenient to run all three filters on the input.
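
For a quick first pass, the three simulations can also be approximated locally in a script. The sketch below is emphatically not VisCheck’s algorithm: it just multiplies each color by one of the 3×3 RGB matrices commonly circulated as rough approximations of the three deficits (the matrix values and the candidate colors are my own placeholders, not anything from VisCheck), and reports a crude distance for each pair of backgrounds so you can see at a glance which pairs are in danger of collapsing together:

    # Rough RGB approximations of three color-vision deficits; the values are
    # commonly circulated approximations, not VisCheck's model.
    FILTERS = {
        'deuteranopy': ((0.625, 0.375, 0.0),
                        (0.700, 0.300, 0.0),
                        (0.0,   0.300, 0.700)),
        'protanopy':   ((0.567, 0.433, 0.0),
                        (0.558, 0.442, 0.0),
                        (0.0,   0.242, 0.758)),
        'tritanopy':   ((0.950, 0.050, 0.0),
                        (0.0,   0.433, 0.567),
                        (0.0,   0.475, 0.525)),
    }

    def simulate(rgb, matrix):
        """Apply a 3x3 matrix to an (r, g, b) triple with components in 0..255."""
        return tuple(sum(m * c for m, c in zip(row, rgb)) for row in matrix)

    def distance(a, b):
        """Crude Euclidean distance in RGB space (not a perceptual measure)."""
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    # Placeholder backgrounds (pale green, pink, blue, gray), not the real ones.
    candidates = {'valid': (204, 255, 204), 'invalid': (255, 204, 204),
                  'notKnown': (204, 204, 255), 'impl-defined': (221, 221, 221)}

    for name, matrix in FILTERS.items():
        simulated = {label: simulate(rgb, matrix) for label, rgb in candidates.items()}
        labels = sorted(simulated)
        for i, a in enumerate(labels):
            for b in labels[i + 1:]:
                print('%-12s %-13s vs %-13s %6.1f'
                      % (name, a, b, distance(simulated[a], simulated[b])))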

After a couple of passes and errors and false starts (which means: after an hour and a half or so), I had a color scheme that worked for all the vision types I was checking. [Image: yet another version of the color scheme]

The only problem was that I then decided the green was too virulent and intense.

No problem, I thought: these three hues are good for all three types of color deficit, I’ll just lower the saturation to make them less intense, and I’m good to go. Wrong: on one of the simulations, there was no longer any distinction between the pink and the gray. And the green still bothered me. [Image: another version of the color scheme]

So I learned an important lesson: In trying to make colors for data graphics accessible to all types of vision, it’s not just hue that counts: hue and saturation interact with the different deficits.

Second solution. I went back to the colorjack.com Color Sphere widget on my system, and noticed that I could control not just the hue of the primary color but its saturation as well: as you move toward the center of the color circle, saturation goes down. And as you move things around with the deuteranopy filter set, you can see for yourself that the same three hues can be distinct at one saturation and not at another. So I got another color scheme that worked. [Image: a new color scheme]

Second lesson: if you know in advance that you want the colors to be unsaturated and unobtrusive, use the ability of the color picker / color scheme generator you’re working with to get close to the saturation and brightness you need. The color picker is better at these kinds of conversions than you are.

The only problem was that as I continued working with the data I grew to like this new color scheme less and less. The green wasn’t virulent any more, but it also wasn’t green; more a greenish yellow.

So I gave up on the associations green = go = valid outcome, red = stop = invalid outcome. All that matters is that the colors be distinct; they don’t need to be particularly mnemonic (the distinction is, after all, also carried by the text). So back to the color picker, once or twice. It gave me schemes that worked in the sense of being visually distinct for normal vision, deuteranopy, etc., but … well, I didn’t much like any of them visually. [Image: a new color scheme]

Third solution. I remembered that some time ago, I had found a nice monochromatic color scheme for the XSD type hierarchy diagram, which I had concluded was in fact more attractive than the polychrome color scheme we had started with. So I copied that, and that’s the scheme in use in the stylesheet now. [Image: much-reduced view of the XSD type hierarchy, with a color scheme in various shades of blue]

When I did that work on the type hierarchy diagram, a friend (not Enrique, I promise) asked me “So, can you write down an accessible color scheme I can use whenever I need one, so I don’t have to go through all this hassle of testing things and experimenting and changing things?” I told him no, I didn’t think so: too much depends on the information you’re trying to convey.

But I think one useful general rule can be formulated. If you are looking for a coherent color scheme for data graphics, and the only required function of the colors is that they be visually distinct from each other both for normal vision and for the various color deficits, then one very simple approach is to go to a color scheme generator like the one at colorschemedesigner.com, pick a hue, and generate a monochromatic color scheme (hmm, not all color scheme generators have this as an option). You should definitely use the color-deficit simulations on the result, just to make sure, but virtually all the variation among the colors of a monochromatic scheme is variation in saturation and brightness, not in hue, so the chances are much greater that the variation will be perceptible to all types of vision.

So: third lesson. Try a monochromatic color scheme.
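
A minimal sketch of what such a monochromatic scheme generator might do, in Python with the standard-library colorsys module (the hue, saturation, and lightness steps are arbitrary placeholders, not the values in the stylesheet):

    import colorsys

    def monochrome_scheme(hue, n=4):
        """Generate n background colors sharing one hue, varying only lightness."""
        colors = []
        for i in range(n):
            lightness = 0.92 - i * 0.12      # pale to darker
            saturation = 0.45
            r, g, b = colorsys.hls_to_rgb(hue, lightness, saturation)
            colors.append('#%02x%02x%02x' % tuple(round(c * 255) for c in (r, g, b)))
        return colors

    # e.g. monochrome_scheme(0.58) gives four related shades of blue, pale to darker;
    # run the result through a color-deficit simulation anyway, just to be sure.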

Fourth solution. Another fairly quick and easy solution is to use the Daltonize tool at VisCheck.com, which takes the visual information in an image (in the case of coloring for data graphics, this means the information conveyed by distinctions of color) and makes it more readily perceptible to color-blind viewers by increasing the red/green contrast and by converting information conveyed by red/green distinctions into variations in brightness and along the blue/yellow axis. If you are working with something like false-color photography, where there are many variations in color and tone, things like color-scheme generators are not going to help you; Daltonize is a very cool tool for those applications.
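
I don’t know what Daltonize actually does internally, but the general idea as described can be sketched crudely: simulate the deficit, compute what was lost, and push that information back into channels the viewer can still distinguish. A toy per-pixel version along those lines (this is a common open-source approach to daltonization, not VisCheck’s code, and the matrices are rough approximations I am assuming):

    # Toy daltonization for deuteranopy: simulate the deficit, compute the
    # information that was lost, and push it back into the green and blue
    # channels (which also shifts brightness). Matrices are rough,
    # commonly circulated approximations, not anything published by VisCheck.
    DEUTERANOPY  = ((0.625, 0.375, 0.0),
                    (0.700, 0.300, 0.0),
                    (0.0,   0.300, 0.700))
    REDISTRIBUTE = ((0.0, 0.0, 0.0),
                    (0.7, 1.0, 0.0),
                    (0.7, 0.0, 1.0))

    def apply(matrix, rgb):
        return tuple(sum(m * c for m, c in zip(row, rgb)) for row in matrix)

    def daltonize(rgb):
        """Adjust an (r, g, b) triple (0..255) so red/green information is redistributed."""
        simulated = apply(DEUTERANOPY, rgb)
        error = tuple(o - s for o, s in zip(rgb, simulated))    # what the viewer loses
        shifted = apply(REDISTRIBUTE, error)                    # move it to green/blue
        return tuple(max(0, min(255, round(o + s))) for o, s in zip(rgb, shifted))

    # Purely illustrative: daltonize((230, 60, 60)) returns an adjusted (r, g, b) triple.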

Disclaimer: I know enough accessibility specialists, and enough people with good graphical skills, to know that I am neither one nor the other. I lay no claim to particular expertise in accessibility issues, only to a firm belief that they are important. I think in fact that they are too important to be the concern only of accessibility specialists, just as design is too important to be left only to designers: every maker of Web pages needs to think about the visual communication of information, and about making that information accessible to all the members of the audience. When you do, you’ll appreciate the contributions and the expertise of specialists all the more. (They will remind you, for example, that color-blindness is just one barrier to accessibility, among many. It just happens to be one that lends itself to pretty, colorful illustrations.)

Only when they are accessible to everyone who may need them will your web pages, and The Web, achieve their full potential.

Boomerangs, bad pennies, encores

[10 June 2010]

As of 1 June, W3C is paying for a quarter of my time, to work with the W3C XML Schema working group. Given the current state of XSD 1.1 (the working group is mostly waiting for implementations to be completed before progressing it to Proposed Recommendation), my time will mostly be devoted to work on the XSD 1.1 test suite and whatever else can be done to smooth the path of the implementors.

In the working group, the air has (predictably) been full of the expected boomerang references, Thomas Wolfe quotations, and Terminator jokes, with the occasional mention of encore performances. I leave the imagination of the scene as an exercise for the reader.

“Encore performances?” sneered my evil twin Enrique, when he read this over my shoulder. “I’ll tell you about encore performances!” Long ago, he said, he attended a recital where the pianist’s performance of a very difficult piece was greeted by thunderous applause. The astonished pianist had not prepared an encore, so as an encore he simply played the final piece over again. And the second time, said Enrique, the audience was still wildly enthusiastic. There were further cries of “Encore, encore!”, mixed with others of “Again! Again!” And so he played the piece yet again.

After the third rendition the audience was still calling for more, but Enrique swears he heard the man behind him shouting, above the din, “Again! Again! And you’re gonna keep playing it, until you get it right!”

[Thank you, Enrique, for that vote of confidence. I agree, at least, that it’s worth while to try to get a spec right.]

Three quarters of my time will continue to be spent on consulting and contract work, with a focus on using standards and descriptive markup to help make digital information more widely useful and give it longer life. Just as it’s best to combine theory and practice, if possible, so also it’s helpful to combine standards work with work on practical applications. For example: I just finished a project together with an archival collection at a major U.S. university; they are encoding their finding aids in XML using the Encoded Archival Description, and they are publishing the finding aids on the Web, in XML, with XSLT stylesheets to display them nicely for humans. Because they don’t have a separate EAD-to-HTML translation step, their workflow is simpler: they update a collection record in their archival management system, save the collection description as XML, copy it to their Web server, and it’s published. Their finding-aids site is very cool.

In other words, they are using the Web and XML in just the ways their originators hoped they could be used: for sharing semantically rich information. It’s a great pleasure to work on exploiting the open standards W3C has produced, and I look forward to helping produce more of them.

Aquamacs, XEmacs, and psgml

[26 April 2010]

The other day I thought perhaps it was time to try Aquamacs again, a nice, actively maintained port of FSF Emacs to Mac OS X. I’ve been using a copy of XEmacs I compiled myself years ago, with Andrew Choi’s Carbon XEmacs patches, but recently it has accumulated some problems I don’t have the patience to diagnose.

Got Lennart Staflin’s psgml package (a major mode for SGML and XML documents) installed and compiled; this is a prerequisite for any Emacs being habitable for me. (Why is package management such a dirty word in FSF Emacs, by the way?) And discovered that psgml takes roughly 50% longer in some tests (9 seconds vs. 14 seconds) to parse one of my larger documents in Aquamacs than in XEmacs. In other tests, it was 9 seconds vs. 90 seconds (or so — I kept getting bored and losing count between sixty and eighty seconds).

I may not be leaving XEmacs after all (undiagnosed problems or no undiagnosed problems).

[Postscript, 7 December 2012. I did eventually move to Aquamacs, and have been very happy with it. The performance issues reported here have not recurred on the Intel CPU I’m currently using.]

Counting down to Balisage paper submission deadline

[6 April 2010; addenda 7 April 2010]

After discovering earlier this year that the definition of the XPath 1.0 data model falls short of the goal of guaranteeing the desired properties to all instances of the data model, I’ve been spending some time experimenting with alternative definitions, trying to see what must be specified a priori and what properties can be left to follow from others.

It’s no particular surprise that the data model can be defined in a variety of different ways. I’ve worked out three with a certain degree of precision. Here is one, which is not the usual way of defining things. For simplicity, it ignores attributes and namespace nodes; it’s easy enough to add them in once the foundations are a bit firmer.

Assume a non-empty finite set S and two binary relations R and Q on S, with the following properties [some constraints included in the first version of this list later proved to be redundant; see below]:

  1. R is functional, acyclic, and injective (i.e. for any x and y, R(x) = R(y) implies x = y).
  2. There is exactly one member of S which is not in the domain of R (i.e. there is exactly one element e for which R(e) has no value), and exactly one which is not in the range of R (i.e. exactly one element e such that for no element f do we have e = R(f)).
  3. Q is transitive and acyclic.
  4. The transitive reduction of Q is functional and injective.
  5. It will be observed that R essentially places the elements of S in a sequence without duplicates. For all elements e, f, g, h of S, if Q includes the pairs (e, f) and (g, h) and if g falls between e and f in the sequence defined by R (or, more formally, if the transitive closure of R contains the pairs (e, f), (e, g), and (g, f)), then h also falls between e and f in that sequence.
  6. The transitive closure of the inverse of R (i.e. (R⁻¹)*) contains Q as a subset.
  7. The single element of S which is not in the domain of R is also neither in the domain nor the range of Q.

It turns out that if we have any relations R and Q with these properties defined on some non-empty finite set S, then we have an instance of the XPath 1.0 data model. The nodes in the model instance, the axes defining their interrelations, and so on can all be defined in terms of S, R, and Q.
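
For concreteness, here is a small Python sketch (mine, not anything from the XSD or XPath specs) that takes a finite S as a set and R and Q as sets of pairs and checks them against the constraints exactly as listed above, redundant or not:

    def transitive_closure(rel):
        """Transitive closure of a binary relation given as a set of pairs."""
        clo = set(rel)
        while True:
            new = {(a, d) for (a, b) in clo for (c, d) in clo if b == c} - clo
            if not new:
                return clo
            clo |= new

    def transitive_reduction(rel):
        """Transitive reduction of a finite acyclic relation."""
        clo = transitive_closure(rel)
        nodes = {x for pair in rel for x in pair}
        return {(a, b) for (a, b) in rel
                if not any((a, m) in clo and (m, b) in clo for m in nodes - {a, b})}

    def functional(rel):
        return len({a for (a, b) in rel}) == len(rel)

    def injective(rel):
        return len({b for (a, b) in rel}) == len(rel)

    def acyclic(rel):
        return all(a != b for (a, b) in transitive_closure(rel))

    def check(S, R, Q):
        """Return seven booleans, one per numbered constraint above."""
        Rstar = transitive_closure(R)
        Rinv_star = transitive_closure({(b, a) for (a, b) in R})
        no_successor = S - {a for (a, b) in R}   # member(s) not in the domain of R
        not_in_range = S - {b for (a, b) in R}
        Qred = transitive_reduction(Q)
        nodes_of_Q = {x for pair in Q for x in pair}

        def between(e, f, x):
            # x falls between e and f in the sequence defined by R
            return (e, x) in Rstar and (x, f) in Rstar

        return [
            functional(R) and acyclic(R) and injective(R),            # 1
            len(no_successor) == 1 and len(not_in_range) == 1,        # 2
            transitive_closure(Q) == set(Q) and acyclic(Q),           # 3
            functional(Qred) and injective(Qred),                     # 4
            all(not between(e, f, g) or between(e, f, h)              # 5
                for (e, f) in Q for (g, h) in Q),
            set(Q) <= Rinv_star,                                      # 6
            all(x not in nodes_of_Q for x in no_successor),           # 7
        ]

    # A tiny instance that satisfies all seven constraints:
    # check({1, 2, 3}, {(1, 2), (2, 3)}, {(2, 1)}) == [True] * 7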

[It also turns out that several of the constraints included in the list above are redundant. The fact that relation R is functional and injective, for example, follows from the others; actually it follows from a subset of them. Dropping the redundant constraints is one way of reducing the number of a priori constraints: they all follow from the others. None of the remaining items follows from the others; if any of them is deleted, the constraints no longer suffice to ensure the properties required by XPath.]

For the moment, I’ll leave the details as an exercise for the reader. (I also realize, as I’m about to click “Publish”, that I have not actually checked to see whether the set of constraints given above is minimal. I started with a short list and added constraints until S, R, and Q sufficed to determine a unique data model instance, but I have not checked to see whether any of the later additions rendered any of the earlier additions unnecessary. So points for any reader who identifies redundant constraints in the list given above.)
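
And a brute-force way of hunting for redundancies, reusing the check function from the sketch above: enumerate every pair of relations R and Q over a tiny S and look for an instance that satisfies all the constraints except the one under suspicion. Finding one proves the constraint is not redundant; finding none over small sets is only evidence, not proof, that it is.

    from itertools import chain, combinations

    def relations(S):
        """Every binary relation over S, as a tuple of pairs."""
        universe = [(a, b) for a in S for b in S]
        return chain.from_iterable(
            combinations(universe, k) for k in range(len(universe) + 1))

    def possibly_redundant(S, i):
        """True if no (R, Q) over S satisfies every constraint except constraint i."""
        for R in relations(S):
            for Q in relations(S):
                results = check(S, set(R), set(Q))   # check() from the earlier sketch
                if results[i]:
                    continue
                if all(ok for j, ok in enumerate(results) if j != i):
                    return False   # counterexample: constraint i is not redundant
        return True

    # Exhaustive and slow, but workable for a very small S:
    # [possibly_redundant({1, 2, 3}, i) for i in range(7)]
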
[9 April 2010]

Just a week to go before paper submissions for Balisage 2010 are due. Time for the procrastinators, the delayers, the mañana-sayers to buckle down and get their papers written and submitted.

Speaking of which, I have some work to do now. K thx bye.