There must be fifty ways …

[7 April 2010]

The old Paul Simon song “There must be fifty ways to leave your lover” keeps running through my head. I can see close to fifty ways to define the XPath 1.0 data model in terms of (a) a set of nodes and (b) two relations defined on that set, which are taken as primitive; all other relations (i.e. all the other axes of XPath) are defined in terms of those two primitive relations.

Strictly speaking, I make it forty-eight ways. First, pick any single relation from any of the following three groups:

  1. parent, child, ancestor, descendant
  2. prevsib, nextsib, preceding-sibling, following-sibling
  3. prevnode, nextnode, document-order preceding (>>), document-order following (<<)

That’s twelve possibilities.

Second, pick any single relation from either of the other two groups; that makes eight possible choices, times twelve first choices, for ninety-six ordered pairs of relations. But the order doesn’t matter, so we have forty-eight distinct pairs.

In recent days, taking some pairs not quite at random, defining the constraints they must satisfy in order to be a suitable basis for defining an XPath 1.0 tree, and defining all the other relations in terms of the chosen primitives, I have learned a couple of mildly interesting things.

  • It’s more convenient to take parent as a primitive than child.
  • It’s more convenient to take one of the single-step relations (parent, child, nextsib, prevsib, nextnode, prevnode) than one of their transitive closures (ancestor, descendant, etc.).

The nextsib relation, for example, needs to be acyclic, functional, injective, and not transitive. If it satisfies those constraints, then its transitive closure following-sibling will automatically be suitable. But if you start with following-sibling and specify (as you will need to) that it is transitive and acyclic, that does not suffice to guarantee that its transitive reduction nextsib is functional and injective. You can of course simply say that a following-sibling relation is suitable if and only if (a) it’s transitive, (b) it’s acyclic, and (c) its transitive reduction is functional and injective, but now you’re forcing the reader to work with two relations, not just one: both following-sibling and its transitive reduction. It would be interesting either to find a way to constrain the closure directly to ensure the necessary properties in the reduction, or else to find a proof that there is no way to constrain a closure to ensure that its reduction is functional and injective without explicitly referring to the reduction.
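Concretely, the asymmetry is easy to exhibit in Alloy (a sketch of my own; fsib is my name for a candidate following-sibling relation declared independently of nextsib):

sig Node {
  nextsib : lone Node,  // functional by declaration: at most one next sibling
  fsib    : set Node    // a candidate following-sibling relation
}

fact {
  no iden & ^nextsib             // nextsib is acyclic
  all x : Node | lone nextsib.x  // ... and injective
  fsib.fsib in fsib              // fsib is transitive
  no iden & fsib                 // ... and (being transitive) acyclic
}

// the closure of a suitable nextsib is automatically suitable
assert ClosureSuitable {
  let c = ^nextsib | c.c in c and no iden & c
}
check ClosureSuitable for 6

// but transitivity and acyclicity alone do not make the transitive
// reduction of fsib (fsib - fsib.fsib, for a finite transitive
// acyclic relation) functional or injective
pred reductionNotFunctional {
  some x : Node | not lone x.(fsib - fsib.fsib)
}
run reductionNotFunctional for 4

The check finds no counterexample, while the run promptly produces an instance in which one node has two immediate fsib-successors.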

  • From any relation in any group, the other relations in that group are (relatively) easy to derive in terms of inversion, transitive closure, or transitive reduction. Defining a relation in the third group typically proves more interesting. And while it’s more convenient to choose the primitive relations from among the reductions, it turns out that at least in some cases it’s easiest to define the third group in terms of one of the closures. For example, given the parent and next-sibling relations, it proves easier to define document-order-following in terms of the primitives than to define next-node.
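Here is what that looks like in Alloy (a sketch with my invented field names; the usual tree well-formedness constraints are omitted): with parent and nextsib as primitives, document-order-following is a one-liner.

sig Node {
  parent  : lone Node,  // at most one parent
  nextsib : lone Node   // at most one next sibling
}

// everything after x in document order: the descendants of x, plus the
// descendants-or-self of the following siblings of x's ancestors-or-self
fun docOrderFollowing [x : Node] : set Node {
  x.^(~parent) + x.*parent.^nextsib.*(~parent)
}

// XPath's following axis is the same expression minus the descendants
fun following [x : Node] : set Node {
  x.*parent.^nextsib.*(~parent)
}

Next-node, by contrast, wants a case analysis (the first child if there is one, otherwise the next sibling, otherwise the next sibling of the nearest ancestor that has one), and is easier to define once docOrderFollowing is in hand.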

It occurs to me to wonder whether there are ways to define XPath 1.0 trees that don’t reduce to or include one of these forty-eight.

One way to define the XPath data model

[6 April 2010; addenda and copy editing 7-8 April 2010]

After discovering earlier this year that the definition of the XPath 1.0 data model falls short of the goal of guaranteeing the desired properties to all instances of the data model, I’ve been spending some time experimenting with alternative definitions, trying to see what must be specified a priori and what properties can be left to follow from others.

It’s no particular surprise that the data model can be defined in a variety of different ways. I’ve worked out three with a certain degree of precision. Here is one, which is not the usual way of defining things. For simplicity, it ignores attributes and namespace nodes; it’s easy enough to add them in once the foundations are a bit firmer.

Assume a non-empty finite set S and two binary relations R and Q on S, with the following properties [some of the constraints in this list were included in the first version of the list but later proved to be redundant; see below]:

  1. R is functional, acyclic, and injective (i.e. for any x and y, R(x) = R(y) implies x = y).
  2. There is exactly one member of S which is not in the domain of R (i.e. exactly one element e for which R(e) has no value), and exactly one which is not in the range of R (i.e. exactly one element e such that for no element f do we have e = R(f)).
  3. Q is transitive and acyclic.
  4. The transitive reduction of Q is functional and injective.
  5. It will be observed that R essentially places the elements of S in a sequence without duplicates. For all elements e, f, g, h of S, if Q includes the pairs (e, f) and (g, h) and if g falls between e and f in the sequence defined by R (or, more formally, if the transitive closure of R contains the pairs (e, f), (e, g), and (g, f)), then h also falls between e and f in that sequence.
  6. The transitive closure of the inverse of R (i.e. R⁻¹*) contains Q as a subset.
  7. The single element of S which is not in the domain of R is also neither in the domain nor the range of Q.
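For concreteness, here is one possible transcription of the list into Alloy. This is my sketch, not part of the original definition; it includes the constraints later found redundant, and the comment numbers match the items above.

sig S { R, Q : set S }  // the node set, carrying the two primitive relations

fact Constraints {
  all x : S | lone x.R        // 1. R is functional ...
  no iden & ^R                //    ... acyclic ...
  all x : S | lone R.x        //    ... and injective
  one x : S | no x.R          // 2. exactly one node outside the domain of R,
  one x : S | no R.x          //    and exactly one outside its range
  Q.Q in Q                    // 3. Q is transitive ...
  no iden & ^Q                //    ... and acyclic
  // 4. the transitive reduction of Q (Q - Q.Q, for a finite
  //    transitive acyclic relation) is functional and injective
  let red = Q - Q.Q | all x : S | lone x.red and lone red.x
  // 5. Q-pairs nest within the sequence that R imposes
  all e, f, g, h : S |
    (e->f + g->h in Q and e->f + e->g + g->f in ^R)
      implies e->h + h->f in ^R
  Q in ^(~R)                  // 6. Q is a subset of the closure of R's inverse
  // 7. the node outside the domain of R is outside both the
  //    domain and the range of Q
  all x : S | no x.R implies no (x.Q + Q.x)
}
run {} for exactly 6 S

Asking the Analyzer for instances then produces small model instances (or rather their R and Q encodings) to inspect.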

It turns out that if we have relations R and Q with these properties, defined on some set S, then we have an instance of the XPath 1.0 data model. The nodes in the model instance, the axes defining their interrelations, and so on can all be defined in terms of S, R, and Q.

For the moment, I’ll leave the details as an exercise for the reader. (I also realize, as I’m about to click “Publish”, that I have not actually checked to see whether the set of constraints given above is minimal. I started with a short list and added constraints until S, R, and Q sufficed to determine a unique data model instance, but I have not checked to see whether any of the later additions rendered any of the earlier additions unnecessary. So points for any reader who identifies redundant constraints in the list given above.)

[When I did check for minimality, it turned out that several of the constraints included in the list above are redundant. The fact that relation R is functional and injective, for example, follows from the others shown. Actually it follows from a subset of them. Dropping the redundant constraints is one way of reducing the number of a priori constraints: they all follow from the others. None of the remaining items follows from the others; if any of them is deleted, the constraints no longer suffice to ensure the properties required by XPath.]
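One mechanical way to run such a check, using the Alloy transcription above: delete a candidate constraint from the fact and restate it as an assertion, letting the Analyzer search for counterexamples. Within a bounded scope, a passing check is evidence of redundancy, not proof.

// with the functionality and injectivity of R deleted from the fact
// Constraints, ask whether they follow from the remaining constraints
assert RFunctionalInjective {
  all x : S | lone x.R and lone R.x
}
check RFunctionalInjective for 8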

Alloy as logical desk calculator

[26 March 2010]

Long ago I used a wonderful file-oriented database system called Watfile, which was designed as a sort of desk-calculator for data. It was designed for personal use, not industrial-strength data management, and its designers successfully resisted the temptation to add more features and more power at the cost of a more complex user interface. Watfile was to a full enterprise-class database as a desk calculator of the 1960s was to … oh, perhaps to Fortran. For suitable problems, the ease of setup far outweighed any considerations of power or completeness.

The experience of using Watfile for data manipulation tasks established in my mind the class of ‘desk-calculator-like’ packages for various kinds of problem.

Today I experimented with Alloy as a sort of logical desk calculator, and I’m happy to report that it passed the test with flying colors.

For reasons I won’t go into here, I’ve wondered a bit recently what it might look like to apply the technique of distinctive-feature analysis (originally developed for phonological descriptions of sound systems of language) to writing systems. When I sat down a few months ago with pencil and paper to see if I could devise a smallish set of typographic features which could (say) distinguish the twenty-six letters of the alphabet as I was taught it in first grade, I rapidly found that working solely with pencil and paper made me impatient: it was too tedious to look at the set of features already identified and see which letters could not yet be distinguished (because they had the same value for all the features in question).

When I came back to this problem this afternoon, I thought for a few minutes about what questions I’d like to be able to ask the machine. Given a specified set of graphemes (as a first exercise, I chose the lower-case alphabet) and a specified set of binary features (does the letter have an ascender? a descender? a vertical stroke? a full or partial circle or bowl? Is the stroke to the left of the bowl? …), with information about which graphemes have each feature, I want to be able to ask, first, whether a particular set of features suffices to distinguish each individual grapheme, or whether there are two or more graphemes which have the same value for all features in the set. And of course, if there are such sets of indistinguishable graphemes, what are they?

It occurred to me to solve the problem in Prolog; it would take just a relatively simple set of Prolog predicates to do what I wanted. But as I was preparing to launch X Windows, so that I could launch Prolog, I realized that I already had the Alloy Analyzer running. And so I wrote the predicates I wanted in Alloy instead of Prolog, to see whether it would work.

The upshot is: yes, it worked, and it was probably a bit easier to do than it would have been in Prolog. When I was thinking about how to set up the problem in Prolog, I found myself wondering about the best data representation to choose, and so on, almost as much as about the structure of the problem. I won’t say that Alloy posed no analogous problems — I did have to think for a moment or two about the best way to describe graphemes and distinctive features. But the high level of abstraction afforded by Alloy made the decision feel less binding, and made me feel a bit more comfortable experimenting. (It sounds strange to put it this way: after all, Prolog’s high level of abstraction is one of its great design strengths. But Prolog is also designed to be an efficient and effective programming language, which means that some details are exposed which have only procedural significance, and sometimes you find yourself thinking about them, even in situations where questions of execution efficiency don’t arise.)

In very short order, I found it possible to define a suitably abstract representation of graphemes and features, specify some useful functions and predicates for asking the questions described above, and specify a small set of features (ten) which have a certain degree of typographic plausibility and which suffice to distinguish the graphemes in question. (Ten binary features for twenty-six graphemes may seem high, given that the theoretical minimum is only five, and that ten bits suffice to distinguish a thousand objects, not just twenty-six. But writing, like natural language, has some redundancy. Feature sets used to analyse natural language sound systems are also often very inefficient.) The visualization tools did not prove very helpful, but the Evaluator feature of the Alloy Analyzer was a great help.
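The actual definitions are on the Black Mesa Technologies site (see below); the following stripped-down sketch, with hypothetical feature and grapheme names, shows the general shape of the thing:

abstract sig Feature {}
abstract sig Grapheme { has : set Feature }  // the features a grapheme exhibits

// does the feature set fs give every grapheme a distinct profile?
pred distinguishes [fs : set Feature] {
  all disj g1, g2 : Grapheme | g1.has & fs != g2.has & fs
}

// the pairs of graphemes that fs cannot tell apart
fun confusable [fs : set Feature] : Grapheme -> Grapheme {
  { g1, g2 : Grapheme | g1 != g2 and g1.has & fs = g2.has & fs }
}

// a toy instance: three features, four letters
one sig Ascender, Descender, Bowl extends Feature {}
one sig b, o, p, q extends Grapheme {}
fact {
  b.has = Ascender + Bowl
  o.has = Bowl
  p.has = Descender + Bowl
  q.has = Descender + Bowl  // p and q collide on these three features
}
run { not distinguishes[Feature] } for 4

In the Evaluator, confusable[Feature] then yields p->q + q->p: exactly the ‘which letters are still indistinguishable’ question I wanted to ask.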

If I pursue this work any further, I probably will put it into Prolog, where the interactive interface for expression evaluation is perhaps a bit more convenient than in Alloy. But it’s nice to know that Alloy can be used for this kind of problem, too.

Interested readers can find both the generic definitions and the specific graphemes and features for lower-case Latin letters (as used in Anglophone countries) on the Black Mesa Technologies web site.

The axes of XPath

[25 March 2010; error noticed by Dimitre Novatchev fixed 29 March 2010]

Steve DeRose and I have been discussing the XPath [1.0] data model recently (among other things), and in the course of the discussion an interesting observation has emerged.

It’s obvious that some of the axes in XPath expressions are inverses of each other, and also that some are transitive closures of others (or, going the other way, that some are transitive reductions of others). What surprised me a little was that (if for the moment you leave out of account the self and the XYZ-or-self axes, the attribute axis, and the namespace axis [and also preceding and following]) all of the XPath axes fit naturally into a pattern that can be represented by three squares. (Will table markup work here? I wonder.) The first square represents the up/down axes:

Up/down
  parent      ancestor
  child       descendant

The next square covers sibling relations. Unlike the single steps along the up and down axes, for which parent and child serve as short-hand, the single-step sibling relations have no syntactic sugar in XPath: one must write preceding-sibling::*[1] and following-sibling::*[1]. So I’ve invented the names “nextsib” and “prevsib” (marked with a star here to signal that they are invented):

Sideways
  *prevsib    preceding-sibling
  *nextsib    following-sibling

The third square describes overall document order; again, I’ve invented names for the single-step relations [note that the names used here for the transitive relations are given by XPath 2.0; XPath 1.0 doesn’t provide notation for them]:

Overall document order
  *prevnode    >>
  *nextnode    <<

[In the first version of this post, the right-hand columns were labeled preceding and following, but Dimitre Novatchev reminded me gently that these axes do not in fact correspond to document order: preceding excludes ancestors and following excludes descendants. That’s a plausible exclusion, since no one in their right mind would say that chapter one of Moby Dick precedes the first paragraph of Moby Dick. Contains, yes; precedes, no. In fact, I remember getting into an argument with Phil Wadler about this, early on in the days of the XML Query working group, not realizing (a) that the document ordering he was describing was actually prescribed by XPath 1.0, or (b) that saying that ancestors precede their descendants in document order didn’t mean that the ancestors would have to be present on the preceding axis. Thank you, Dimitre! And sorry, Phil!]

In each table row, the relation on the right is the positive transitive closure of the one on the left, and the one on the left is the transitive reduction of the one on the right.

In each table column, the relations in the top and bottom rows are inverses of each other.
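In Alloy’s relational notation (~ for inverse, ^ for positive transitive closure), the pattern is easy to exhibit; a sketch, again using my invented names for the single-step relations:

sig Node { parent : lone Node, nextsib : lone Node, nextnode : lone Node }

// each row of a square pairs a reduction with its closure (^);
// each column flip is an inverse (~)
fun ancestor   : Node -> Node { ^parent }     // closure of parent
fun child      : Node -> Node { ~parent }     // inverse of parent
fun descendant : Node -> Node { ^(~parent) }  // closure of child

fun followingSibling : Node -> Node { ^nextsib }     // closure of *nextsib
fun precedingSibling : Node -> Node { ^(~nextsib) }  // closure of *prevsib

fun docOrderFollowing : Node -> Node { ^nextnode }     // the closure in the *nextnode row (<<)
fun docOrderPreceding : Node -> Node { ^(~nextnode) }  // the closure in the *prevnode row (>>)

Each square, in other words, is one primitive field plus the three derived expressions ~f, ^f, and ^(~f).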

The tables make it easy to see that it suffices to take a single pair of relations on nodes as primitive (e.g. child [or better first-child] and nextsib, or parent and prevsib); everything else in the tree can be defined in terms of the two primitive relations. (It’s not completely clear to me yet whether any two relations will do, as long as they are from different tables. Taking nextnode and parent seems to work, as does the pair nextnode and child, but nextnode and first-child seems to pose problems — why can child be replaced by first-child in some situations but not others? Hmm.)

There seem to be implications for the formalization of the data model (which is how we got here in the first place), but maybe also for teaching new users how to think about or learn XPath.

A small gotcha in Alloy’s closure operator

[24 March 2010]

Consider the following Alloy model:

sig Node {}
one sig gl { r, s : Node -> Node }{ s = *r }
run {}

It has no instances, and the Alloy Analyzer comments laconically “Predicate may be inconsistent.” But the similar model below does have instances. Why?

sig Node {}
one sig gl { r, s : univ -> univ }{ s = *r }
run {}

I ran into this phenomenon today, when I was trying to do some work relating to the definition of document order in the XPath data model. It’s convenient to have both a relation like the successor relation of arithmetic on the natural numbers, which gives you the next item in document order, and a relation like the less-than-or-equals relation of arithmetic, which is the reflexive transitive closure of the successor relation. And if you want to have both, you will want to specify, as is done above, that the one is the reflexive transitive closure of the other. And when you do, it’s a bit alarming to be told (and usually very quickly) that nothing in your model has any instances at all.

It took me a couple hours to reduce the problem to the terms shown above (there were several complications which looked more likely to be the cause of the trouble, and which took time to eliminate), and then a little more time to realize that declaring the relations in question as Node -> Node or as univ -> univ made a difference.

I invite the interested reader, if a user of Alloy, to consider the two models and explain why one has instances and the other doesn’t. No peeking at what follows until you have a theory.

Found it? If you found it without spending a couple hours on it, my hat’s off to you.

The * operator produces the reflexive transitive closure of a binary relation r by essentially taking the union of the positive transitive closure of r and the identity relation iden. That is, for all relations r, *r is defined as ^r + iden.

The problem is that iden covers everything in the model, including the gl (globals) object and the automatically included integers. And the upshot is that s cannot satisfy the constraint s = *r while also satisfying its declaration (Node -> Node).

In his book Software Abstractions, Daniel Jackson remarks on the oddity of * including ‘irrelevant’ tuples in the relation, but (as he points out) it almost never matters, because in many contexts the irrelevant tuples drop out during evaluation of the containing expression. The consequence is that it’s possible to work with Alloy (as I have obviously been doing) with a mental model in which * is somehow smart enough to add only those tuples which are ‘relevant’ to the relation whose closure is being taken. That mental model proves to be an over-complexification.

One reason I like Alloy a lot is that it allows the user to operate at a fairly high level of abstraction, if you can just find the right level. Working in Alloy presents fewer of the small issues of syntax and data typing and so on that can bedevil attempts to explore problems even in high-level programming languages, and so you mostly get to your results a lot faster. But I guess no formal system is ever completely free of cases where the user stares blankly at the screen for some indefinite period of time, trying to figure out why the system is producing such a counter-intuitive and obviously wrong result.

In the words of the old MVS error message: Probable user error. Solution: correct and re-submit.
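For the record, one way to make the correction (my fix, not the only possible one) is to build the reflexive closure by hand, restricting iden to Node before taking the union:

sig Node {}
one sig gl { r, s : Node -> Node }{ s = ^r + (iden & Node -> Node) }
run {}

Equivalently, s = *r & Node -> Node throws the irrelevant tuples away after the fact; either way the model has instances again.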