Treating our information with the care it deserves

[21-22 July 2008]

I don’t make a habit of recording here all of the interesting, useful, or amusing things I read. But I am quite taken with Steve Pepper’s account of the situation in which many large organizations find themselves. In a blog post devoted to a different topic (the history of Norway’s vote on OOXML), he describes (his understanding of) one organization’s point of view and motivations:

They are a big MS Office user, they participated in TC45 (the Ecma committee responsible for OOXML) and they clearly feel that OOXML is important to them.

I can understand why. An enormous amount of their intellectual capital is tied up in proprietary formats, in particular Excel, that have been owned and controlled by a vendor for the last 20 or so years. StatoilHydro has literally had no way of getting at its own information, short of paying license fees to Microsoft. Recently the company has started to realize the enormity of the mistake it has made in not treating its information with the care and respect it deserves.

As he points out, they are of course not alone in having made this mistake, particularly if one includes other proprietary formats beyond Office, and other vendors than Microsoft.

Several points occur to me:

  • It’s easy for me to feel superior and to lack interest in the problems of converting legacy data: I stopped using proprietary formats about twenty years ago, not very long after I had acquired a personal computer and gained the opportunity to start using them in the first place, and so with very few exceptions pretty much every piece of information I have created over my career is still readable. (A prospective collaboration did collapse once when at the end of a full-day meeting, as we were deciding who would draft what piece of the grant proposal, I asked what DTD we would be using, and my soon-to-be-former prospective collaborators said they had planned to be working in a proprietary word processor.) But feeling superior is not really a useful analysis of the situation.

Enrique: Are you saying that there are no proprietary data formats on mainframes? Whom are you trying to kid?

Me: No, but all my mainframe usage was on university mainframes; we don’t seem to have been able to afford any seriously proprietary software, at least any that was interesting to me. I was mostly doing document preparation, and later on database work. And for a while I maintained the terminal translation tables.

Enrique: The what?

Me: Never mind. There used to be things called terminals, and … Sorry I brought it up.

Enrique: And your databases didn’t use proprietary formats?

Me: Internally, sure. But they could all dump the data in a reusable text file format. I think I translated the Spires dump of my bibliographic data to XML once. Or maybe that was just something that went on the Someday pile.

Enrique: You’re right. Feelings of superiority are not really an adequate analysis of a complex situation. Even if the feelings were justified, which in this case, Bucko, does not seem to be the case.

  • The right solution for these organizations is, perhaps, to move away from such closed systems once and for all, and use semantically richer markup. Certainly that’s where my immediate sympathies lie. It’s not impossible: lots of organizations use surprisingly rich markup for data they care about.
  • But how are they to get there, starting from where they are now? Even if the long-term benefits are substantial (which is close to self-evident for me, but is likely to sound very unproven to any serious organizational IT person), you have to get through the short term in order to reach the long term. So the ideal migration path starts paying off very quickly, even before you’ve gone very far. (Paoli’s Law: if people put five cents of effort in, they want to see a nickel in return, and quickly.) Can there be such a migration path? Or is going cold turkey the only way to go?
  • The desire to get as much benefit for as little work as possible seems to make everyone with a legacy-data problem easy prey for snake-oil salesmen. I don’t see any prospect of this changing, though, ever.

Enrique: Nah. Snake oil, now there’s a growth stock.

Six-month retrospective and evaluation

[16 July 2008]

This klog started about six months ago, as an experiment. In an early post, I wrote:

So I’m going to start a six-month experiment in keeping a work log. Think of it, dear reader, as my lab notebook. (I was going to do it starting a year ago, but, well, I didn’t. So I’m going to start now.)

My original plan was to make it accessible only to the W3C Team, so that I could talk about things that probably shouldn’t be discussed in public or in member space. Norm Walsh has blown a hole in that idea by pointing to this log [Hi, Norm!]. So public it is. (Ideally, I’d have a blog in which each item could be marked with an ACL, like resources in W3C date space: Team-only, Member-only, World-readable. Maybe later.)

Next year about June, if I remember, I will evaluate the experiment and decide whether it’s been useful for me or not.

So, as one of my teachers used to say at the beginning of a group evaluation of some student work: what works, what doesn’t work?

Things that don’t work as well as I would like:

  • As might have been predicted, the fact that Messages in a Bottle is public, not private, has encouraged me to be circumspect in ways that fight with the lab-notebook goal. I don’t want to be carelessly rude about colleagues or others in public, the way one can be in private conversations and to their faces. Across a dinner table, one can greet a claim made by a colleague with a straightforward cry of “But that’s bullcrap!” without impeding a useful discussion. (This depends in part on the conversational style cultivated by individuals and groups, of course. But as some readers of this post will know, this is not a speculation but a report.) It doesn’t feel quite right, however, to say in public of something proposed by someone acting in good faith that it’s just bullcrap. You have to spend some time thinking of another way to put it. Enrique comes in handy here, since he will say anything. It has not been proven, however, that Enrique will never piss anyone off.
  • For the same reason, I have not yet found a good way of recording issues and concerns I don’t have good answers for. In a lab notebook, or a private conversation, one can talk more forthrightly about things that are going wrong, or things that have gone wrong, and how to right them. But in public, members of a Working Group, and editors of a specification, do better to accept a sort of cabinet responsibility for the work product. You do the best you can to lead the group to what you believe is the right decision, and then you accept the decision and defend it in public. I have not yet found a way to combine the acceptance of that joint responsibility, and the concomitant need to avoid bad-mouthing decisions one is responsible for defending, on the one hand, with forthright analysis of errors on the other. Sometimes careful phrasing can do the job, but any need for care in phrasing constitutes a tax on the writing of posts about tricky subject matter.
  • So try as I might to keep pushing these posts toward being a work log, the genre keeps pushing back and trying to make them into something like a first-person newspaper column. That’s a fine and worthy thing, and I can’t say I don’t enjoy that genre, but it’s not quite what I was aiming for when I started. As a result, one cannot read back through the archives and get the kind of record one wants in a lab notebook, and I’m not sure Messages in a Bottle is working optimally as a means for me to communicate with myself, or with those I work with most closely.

And on the other side, some things do seem to work.

  • At one level of abstraction, the primary goal of this worklog is to improve communication between me and those I work with. There is some evidence, both in the comments here and in other channels, that some of those I work with do read these postings and find them useful, or at least diverting. I have never bothered to try to check the server logs for hit or visitor counts — my guess, based on my Spam Karma 2 reports, is that humans are strongly outnumbered by spambots among my readers, and I’d just as soon not have that demonstrated in quantitative detail — but it’s clear that more people read these posts at least sporadically than I would ever dream of pestering by sending them email meditations on these topics. If they read these posts and derive any insight from the reading, then this klog would appear to have improved communication at least somewhat.
  • It’s probably not actually a bad thing that I think of this as a public space. It makes me a bit more likely to try to write coherently, to supply relevant context, and to do the other things that help ensure that a communication can be read with understanding by readers distant in time, space, sentiment, or context from the author. If I occasionally indulge in a private joke or two, I hope you will bear with me.
  • It’s easier for me to find records of points of view and analyses that have gone into posts here than to find records kept only in files on my hard disk or on paper shoved into the shelves behind me.
  • So far, no one has complained even about the really boring technical discussions about regular grammars, even though it’s clear some of my readers would rather be reading about Enrique.

In sum, I think the experiment can be adjudged modestly successful, and I will continue it for another six months.

Enrique on what RDF gets us

[14 July 2008]

As reported in my previous post, I’ve been thinking about RDF a bit lately. So I’ve decided to dust off some meditations on the subject that originated several years ago.

I was feeding the dogs one evening when Enrique dropped by and complained bitterly about the shortcomings of various colleagues’ attempts to persuade people (including me) of the value of RDF: the overstatement, the misrepresentations of other technologies (both XML and relational databases), the overselling of RDF’s virtues. “True, they would make anyone with any marketing sense tear their hair out,” I said. “But it’s not rational to infer that there are no arguments for RDF, just because its advocates make such a poor show of arguing for it. If you want to understand what RDF does, without overstatement and without mischaracterization of other technologies, why don’t you try constructing a dispassionate account of what RDF does and doesn’t get us?”

Enrique’s response was something like what follows.

It should be noted that Enrique focuses here on RDF itself, not RDF + OWL. OWL was still very new at the time, and Enrique was reacting to years of rhetoric about how RDF, by itself, was semantically richer than XML. I have also corrected a slip or two in Enrique’s original effort; he couldn’t remember the term phatic, for example.

I wonder (Enrique said) if RDF can be summarized in three points:

  • It proposes a way to think about information: there are things, they have properties.
  • It proposes that we use a single universe of names for all individuals: URIs.
  • It provides a single model of property attribution, namely the binary predicate, and thus gives us three well known roles (subject, verb, object, or relation-name, first-argument, second-argument) for participants in relations.
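
(For readers who prefer code to prose, here is a minimal sketch of these three points, in Python using the rdflib library. The URIs, the property names, and the choice of library are mine, purely for illustration; do not blame Enrique for them.)

    # A minimal sketch of the three points, in Python with the rdflib library.
    # All URIs and property names here are invented for illustration.
    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.org/")   # URIs: one universe of names
    g = Graph()

    # There are things (EX.Oulu); they have properties (EX.country, EX.name).
    # Each statement is a binary predicate: a subject / verb / object triple.
    g.add((EX.Oulu, EX.country, EX.Finland))
    g.add((EX.Oulu, EX.name, Literal("Oulu")))

    for subject, verb, obj in g:
        print(subject, verb, obj)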

These may be worth some commentary.

It proposes a way to think about information: there are things, they have properties.

There’s no proof that all information, or all knowledge, or all propositions, can be thought of as being about things with properties. In fact, there are many very bright philosophers who deny it outright. But those who deny it don’t provide anything of similar convenience for machine processing.

Formal logic as usually taught today similarly tells us how to talk about things with properties. It’s quite plausible that there are things we can’t express conveniently or at all in formal logic — just look at the mess formal logicians are in trying to justify the truth table for material implication — but just as formal logic can be useful even if there are things it cannot do, so also for any way of talking about things and properties.

Things and properties, as usually considered, don’t capture very well the expressive, conative, metalingual, or phatic aspects of language, as Jakobson calls them (let alone the poetic), just the representational. Again, like logic.

It proposes that we use a single universe of names for all individuals: URIs.

URIs are interesting in part because they are simultaneously a unified set of names and a distributed system. Using them, we can eliminate ambiguity (if URIs are correctly used), though not synonymy.

Contrast naming disciplines in SQL, DTDs, programming languages, first-order predicate calculus, or natural language, with uncoordinated naming.

Contrast also naming disciplines involving central authority (‘use-mine-or-nothing’). If I remember correctly, there are central authorities who control Linnaean nomenclature, and names for specific geological formations, and the names of compounds given in official pharmacopeias.

It provides a single model of property attribution, namely the binary predicate, and thus gives us three well known roles (subject, verb, object, or relation-name, first-argument, second-argument) for participants in relations.

This simplicity, together with the lack of ambiguity in URIs when properly used, means that merger of arbitrary sets of triples is safe and easy. When predicates of arbitrary arity are allowed, merger can be more complex, or less effective, because when two sets of normalized relations are merged in a straightforward way, the result is not necessarily normalized. When things are resolved to triples, they are always in normal form. So the primary reasons for sub-optimal results after merging sets of triples are failures to merge owing to undetected synonymy, entailment relations other than synonymy (variation in specificity), variation in methods of currying n-ary predicates, and orbis-tertius variation.

“Orbis-tertius variation? What on earth are you talking about?” “Nothing on earth! It’s my short-hand way of talking about the radically underdetermined nature of our ad hoc ontologies. It’s a reference to Borges’s story Tlön, Uqbar, Orbis Tertius, my favorite treatise on ontology. Think of it as …” “… kind of an hommage. Right,” I said.

Ontological variation, by contrast, which shows up in variable-arity systems as difference of opinion about just what domains should be regarded as involved in an n-ary relation, does not cause problems for triples.

After he left, I realized I have no idea what Enrique meant by this.
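
Enrique’s claim that merging triples is safe and easy, and his point about undetected synonymy, can at least be made concrete. A minimal sketch (mine, not his), again in Python with rdflib and invented URIs: the merge itself is a plain set union and cannot go wrong, but if two sources coin different URIs for the same city, the merged graph simply keeps both, and nothing detects the synonymy.

    # Merging arbitrary sets of triples is a plain set union; since every
    # statement is already a triple, the result is always in 'normal form'.
    # URIs (and the population figure) are invented for illustration.
    from rdflib import Graph, Literal, Namespace

    A = Namespace("http://a.example/")
    B = Namespace("http://b.example/")

    g1 = Graph()
    g1.add((A.Oulu, A.country, A.Finland))

    g2 = Graph()
    g2.add((B.OuluFinland, B.population, Literal(139000)))

    merged = g1 + g2    # set union of the two graphs: safe and easy
    print(len(merged))  # 2 triples -- but note the undetected synonymy:
                        # A.Oulu and B.OuluFinland name the same city, and
                        # nothing in the merged graph records that fact.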

Like the element/attribute distinction and child/parent, sibling/next-sib relations in SGML, this is a very thin standardization layer; it means almost nothing (which is why there can be so many sources of semantic variation). And again like the element/attribute model of SGML, that little turns out to be quite a lot, merely because that thin layer of standardization provides hooks that allow software to provide meaningful and useful operations defined in terms of those three roles. These operations can be performed without the software having the least idea of the meaning of the data (which is one reason it is so bizarre that Semantic Web enthusiasts insist so fervently on the implausible claim that the semantics of RDF data are overt in ways the semantics of other formats are not — I suspect the problem is that those particular enthusiasts think the distinction between circles and arrows counts as ‘semantics’.)

The semantic advantage of both SGML and RDF over some of their more obvious alternatives is this: precisely because they don’t define a prescribed semantics, the user can model whatever the user is interested in, using the primitive objects and relations built into the system to model whatever they wish to take as the primitive objects and relations of the system they are interested in representing. When this works well, operations on the primitive objects and relations can be used to model operations in the application domain, and the user has the feeling of being able to work ‘directly’ with the concepts of the application domain, with reduced need to pay attention to details of the representation. The ‘universality’ achieved by such a semantically thin layer of primitive notions is exactly parallel to the universality of s-expressions and relations, and it is not surprising that the advantages we feel to accrue from XML and/or RDF are very similar to the advantages claimed for s-expressions by Lisp enthusiasts and for the relational model by Codd, Date, and the relational warriors of the 1970s and 1980s.

Because the primitive notions of RDF (things and properties) are explicitly tied to ideas of modeling, they feel (at least to believers) more nearly ‘semantic’ than the notions of other systems (e.g. XML or s-expressions). The thinness of the triple layer can be an advantage, not only in simplifying the universe of possible primitive operations, but also in reducing threshold anxiety. (More elaborate modeling systems invariably require something like a leap of faith; RDF’s tenets are thin enough and bland enough to make its required leap of faith somewhat smaller and less frightening.) And thin as it is, the subject/verb/object model does allow an infrastructure that knows nothing of the semantics of the information to do a lot of useful things, just as the semantics of the relational model allow relational database systems that understand nothing of application semantics to do a lot of useful things.
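
As an illustration of what “a lot of useful things” might mean in practice, here is a hedged sketch (mine again; rdflib again, invented URIs again) of generic operations defined purely in terms of the three roles, performed by code that has no idea what the vocabulary means.

    # Operations defined purely in terms of the three roles: the code below
    # knows nothing of what EX.capitalOf 'means', yet can index and query
    # by role. URIs are invented for illustration.
    from rdflib import Graph, Namespace

    EX = Namespace("http://example.org/")
    g = Graph()
    g.add((EX.Helsinki, EX.capitalOf, EX.Finland))
    g.add((EX.Oslo, EX.capitalOf, EX.Norway))

    # Every subject standing in some relation to EX.Finland -- a purely
    # structural query, answerable without understanding the vocabulary:
    for s in g.subjects(object=EX.Finland):
        print(s)

    # Pattern matching over any combination of the three roles:
    for s, v, o in g.triples((None, EX.capitalOf, None)):
        print(s, "->", o)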

Some things it does not do (although prominent exponents of the Semantic Web sometimes speak as if it did):

  • provide ‘self-describing data’ (if such a thing exists at all)
  • ensure that ‘the semantics’ of data are always explicit or always understood
  • guarantee that data from different sources are usefully mergeable
  • tell us how to understand, model, formalize our data
  • tell us how to validate our data
  • tell us how to express complex relations clearly (this problem is not only not addressed by RDF; RDF does as much as any notation or model can to render it insoluble)
    This may have been true when Enrique first wrote this, but the situation has perhaps changed with the publication of the Note Defining N-ary Relations on the Semantic Web, published in 2006, which recommends that tuples be reified with a gay abandon that might cause even avowed Platonists to pause and wonder whether all of those things really should be treated as individuals by our logic. Determined nominalists may be horrified by the recommendation, but it’s no longer true to say that the RDF community doesn’t say how to handle the problem. (The Note’s pattern is sketched just after this list.)
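
A rough sketch of the Note’s pattern, in the same spirit as the earlier examples: Python with rdflib, invented URIs, my own illustration rather than anything taken from the Note itself. A ternary relation like sells(vendor, item, price) is curried into an individual representing the relation instance, linked to each argument by an ordinary binary property.

    # Roughly the reification pattern of the 2006 Note (my sketch, not an
    # example from the Note; URIs and property names are invented): the
    # relation instance becomes an individual in its own right -- hence
    # the Platonist's pause.
    from rdflib import BNode, Graph, Literal, Namespace

    EX = Namespace("http://example.org/")
    g = Graph()

    # sells(acme, widget, 19.95) becomes:
    sale = BNode()                   # the relation instance, now a 'thing'
    g.add((sale, EX.vendor, EX.acme))
    g.add((sale, EX.item, EX.widget))
    g.add((sale, EX.price, Literal(19.95)))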

Some fans of RDF will perhaps feel that Enrique has shortchanged RDF here, but I have to say that Enrique’s arguments have gone a long way toward making me think RDF could be useful, even if I am still not as committed as I suspect some of my friends in the W3C’s Semantic Web Activity would like.

If this message in a bottle is ever read by anyone, I will be interested to hear back from you on whether you find Enrique’s analysis persuasive.

Eleemosynary RDF

[14 July 2008]

I spent last week in meetings with (among others) a number of enthusiastic proponents of RDF. The meetings were for the most part quite useful and constructive: we spent a lot of time trying to come to grips with the fact that W3C is investing a lot of time and effort in what look like parallel and competing stacks for RDF and XML, and trying to find our way to a simple story about how the two relate. And as one colleague said: no one was trying to “win”, everyone was just trying to understand and solve the problem.

My evil twin Enrique elbowed me in the ribs when he heard this, and suggested that this charitable generalization had some exceptions. You have to make some allowances for Enrique: re-encountering the rhetoric of RDF advocates at close range had put him in a bad mood; he was exasperated by what he regards as the tone-deaf style of some arguments for using RDF. A full catalogue would take a long time (and would only lead to bad feeling), but during a lull in the meeting, Enrique whispered to me “Listen! If you listen carefully to the rhetoric, what you hear is that none of these RDF people believe in their hearts that using RDF is useful for the person or institution who actually creates and maintains the data! It’s all about making things easy for other people, about you eating the vegetables so I can eat dessert, about taking one for the team. I bet the records of Nurmengard were kept in RDF!” “Hush!” I said. “People will hear you.” But I have to admit, he had a point.

You should use RDF, the argument frequently goes, because if you do, then we can reuse your data much more conveniently.

When one of the meetings was considering a possible list of speaking points, I suggested (to keep Enrique quiet) that the point about using RDF so that other people could reuse the data more easily might perhaps be recast to suggest that using RDF could help the creators of the data better achieve their own goals. Sometimes the primary goal of data collection is to make the data available for others to use. But often, in the real world, those who collect data do so primarily for their own purposes, and asking them to incur a cost in order that others may benefit seems to require a higher level of altruism than commercial, educational, or governmental institutions always exhibit.

No, a colleague replied emphatically, the point of the semantic web is that you incur costs now so that others can benefit later.

After several days, I’m still uncertain whether he was indulging in sarcasm, irony, or persiflage by parodying my paraphrase of the draft speaking point, or whether he was stone cold serious. At the break, Enrique went out and painted “For the greater good” over the entrance to the building where the meeting was being held, and wrote “Welcome to Nurmengard” on the whiteboard, but thankfully someone erased it before the meeting resumed.

I was reminded of something I once heard Jean Paoli say about persuading people to try a new technology. Using an unfamiliar technology requires an investment of effort, and the user you are trying to persuade needs to see that investment paid back very quickly. If someone puts five cents of effort in, Jean said, they want to see a nickel paid back in return, and preferably right away.

Note: Enrique reminds me that not everyone who reads English can keep the colloquial terms for American coins straight. A “nickel” is a coin worth five cents. (So Jean Paoli was saying people want to break even right away on their effort, not that they want to show a profit.) Oh, and Nurmengard is the prison built by the dark wizard Grindelwald to house his opponents; it had “For the greater good” carved over its entrance. (Enrique says that that is overkill: there isn’t any adult left who hasn’t read the Harry Potter books, so that gloss is unnecessary.)

One of the reasons I found Tom Passin’s talk about his use of RDF persuasive and interesting, last August in Montreal, was that he suggested plausibly that in the situation he described, using RDF might have short-term benefits, not just pie in the sky by and by. I think I’m as interested in long-term benefits as the next person. But a technology seems likely to achieve better uptake if using it brings some benefit to those who use it, independent of the network effect. Why do so many proponents of RDF behave as though they can’t actually think of any benefit of RDF, except the network effect?

Digital Humanities 2008

After two days of drizzle and gray skies, the sun came out on Saturday to make the last day of Digital Humanities 2008 memorable and ensure that the participants all remember Finland and Oulu as beautiful and not (only) as gray and wet and chilly. Watching the sun set over the water, a few minutes before midnight, by gliding very slowly sideways beneath the horizon, gave me great satisfaction.

The Digital Humanities conference is the successor to the annual joint conference of the Association for Computers and the Humanities (ACH) and the Association for Literary and Linguistic Computing (ALLC), now organized by the umbrella organization they have founded, which in a bit of nomenclature worthy of Garrison Keillor is called the Association of Digital Humanities Organizations.

There were a lot of good papers this year, and I don’t have time to go through them all here, since I’m supposed to be getting ready to catch the airport bus. So I hope to do a sort of fragmented trip report in the form of followup posts on a number of projects and topics that caught my eye. A full-text XML search engine I had never heard of before (TauRo, from the Scuola Normale Superiore in Pisa), bibliographic software from Brown, and a whole long series of digital editions and databases are what jump to my mind now, in my haste. The attendance was better than I had expected, and confirmed what some have long suspected: King’s College London has become the 800-pound gorilla of humanities computing. Ten percent of the attendees had King’s affiliations, there was an endless series of reports on intelligently conceived and deftly executed projects from King’s, and King’s delegates seemed to play a disproportionately large role in the posing of incisive questions and in the interesting parts of discussions. There were plenty of good projects done elsewhere, too, but what Harold Short and his colleagues have done at King’s is really remarkable — someone interested in how institutions are built up to eminence (whether as a study in organizational management or because they want to build up some organization) should really do a study of how they have gone about it.

As local organizer, Lisa-Lena Opas-Hänninen has done an amazing job, and Espen Ore’s program committee deserves credit for a memorable program. Next year’s organizers at the University of Maryland in College Park have a tough act to follow.