shameless…

In a shameless attempt to keep your attention (and get some people to comment) during this period of limited blogging, I pose you the following scruple…

You stumble across some photographs, online, of a colleague. The photographs are of them naked. It is not clear from the context if the person in the photographs knows they have been posted online or not. What do you do?

I should add that this has not happened to me. It was the subject of a conversation between some folks I was overhearing. Judging by the look on one of their faces, it had happened to him…

Paget, MVC, RMR and why words matter

I wrote a little while back about Pages, Screens and MVC. The motivation for the post was to help me explain why my thoughts around software and the web had changed over time. It also tied in nicely with Ian’s first post on Paget. The second round of Paget makes substantial changes and improves on the original design in many ways.

Following that, Ian points us at Paul James's post, Introducing the RMR Web Architecture. Ian says The Web is RMR not MVC, and in the comments on that post we see some discussion about RMR being simply MVC using other names. We had a similar discussion over email internally.

Naming things is important to how we think about them, as Paul James says:

Alan, partly this is just a question of naming, but then a difference in naming can lead to a difference in thinking.

His second statement goes on to say what I said in Pages, Screens, MVC and not getting it.

As Ian points out above, it is more about binding actions to resources (models) rather than to controllers and of removing (or limiting) the need for routing. You say “The Controller is there to bind the system to HTTP”, but I feel that there should be no need for any binding as long as we work with HTTP to begin with rather than forcing our ways upon it.

Working with HTTP rather than forcing our ways upon it is very much the same thing as building something that is of the web rather than simply on the web.

The key reason I agree RMR is a better model is that in MVC the things we address with URIs are either views or controllers. As Andrew Davy comments:

I think the danger of MVC is that unless you explicitly use it as Alan does you default into an RPC design. (ending up with “URIs” like /customer/1/delete .. shudder!)

And this is the crux of it for me – URIs are nouns, not verbs, and they address resources and representations of resources.

By thinking about the problem in terms of RMR rather than MVC we naturally change the way we structure the code: how we provide different representations, and how we map particular HTTP methods to the code that handles them. RMR provides a way of talking about the problem that is of the web rather than of Smalltalk. RMR provides a language and a way of thinking that doesn't obscure the mechanics of the web.
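
To make the difference concrete, here is a rough sketch of RMR-style dispatch in PHP. All the names are mine for illustration (this isn't Paget's API), but it shows the HTTP method being bound to the resource the URI identifies, with no routing layer in between.

<?php
// Rough sketch only: the class and its methods are invented for illustration,
// not taken from Paget. The URI names a resource and the HTTP method picks
// the behaviour, so there is no /controller/action routing and no
// /customer/1/delete style verb-in-the-URI.
class CustomerResource
{
    private $uri;

    public function __construct($uri)
    {
        $this->uri = $uri;
    }

    public function get()
    {
        // Return a representation of the resource; content negotiation
        // would choose between HTML, RDF/XML, Turtle and so on.
        return "<html><body>Customer at {$this->uri}</body></html>";
    }

    public function delete()
    {
        // DELETE acts on the resource itself.
        return "Deleted {$this->uri}";
    }
}

// Bind the request directly to the resource the URI identifies.
$resource = new CustomerResource($_SERVER['REQUEST_URI']);
$method   = strtolower($_SERVER['REQUEST_METHOD']);

if (method_exists($resource, $method)) {
    echo $resource->$method();
} else {
    header('HTTP/1.1 405 Method Not Allowed');
}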

That seems like more than just a change of words to me.

History, Context and Interpretation

I was talking about the US election with a friend recently. He’s a historian by degree, though not by trade. We were discussing the different ways in which you could choose to understand speeches by Obama, McCain, Biden and Palin and how much we really knew about their background.

The reason for the conversation was lost by the end; we started out trying to decide what McCain had meant by something he'd said, but I can't for the life of me remember what it was.

The discussion, though, was about having to understand the background of the candidates in order to be able to interpret what they were saying and what they intended to do. This is an interesting thing to think about when speaking as well: the reverse of what Humpty Dumpty said in Chapter 6 of Through the Looking Glass.

When I use a word,’ Humpty Dumpty said, in rather a scornful tone, `it means just what I choose it to mean — neither more nor less.

The reverse of this being that words mean whatever the listener decides they mean. Everything is based on context and experience. In philosophy, the distinction between knowledge that is true regardless of experience and knowledge that is true based on experience is the distinction between a priori and a posteriori, and what Humpty and Alice are discussing is basically the notion that there is no a priori definition of language – that words mean one thing to the speaker and another to the receiver.

It strikes me that the same is true for solutions in computing. Beauty is most definitely in the eye of the beholder. I have said, and believed, for a long time now that particular technologies, techniques or approaches are not "better" in an absolute sense than others unless discussed in the context of how they apply to a particular problem. Even when looking at the application of, say, a language to a particular problem it's not that one is better than another; merely that they have different tradeoffs. The common discussions about statically typed languages versus dynamically typed languages are a great example of this. Chris Smith wrote an excellent piece on what to know before debating type systems back in 2007.

I had a great conversation about some of this stuff with Daniel at work on Friday. We argued, er, chatted for so long that he was late leaving and got the dreaded "where are you?" call on his mobile. Sorry Daniel.

We started off talking about Google Web Toolkit, GWT.

With Google Web Toolkit (GWT), you write your AJAX front-end in the Java programming language which GWT then cross-compiles into optimized JavaScript that automatically works across all major browsers. During development, you can iterate quickly in the same "edit – refresh – view" cycle you're accustomed to with JavaScript, with the added benefit of being able to debug and step through your Java code line by line. When you're ready to deploy, GWT compiles your Java source code into optimized, standalone JavaScript files. Easily build one widget for an existing web page or an entire application using Google Web Toolkit.

This is obviously worthy of consideration as the code comes from Google – so it must be good. But good for what? It strikes me as odd that you would want to develop an application in Java to be "compiled" into JavaScript. The approach to development promoted by the GWT folks is that you develop in Java running in the JVM, so you can debug your code, then compile down to JavaScript for deployment. This separation between the development and deployment execution environments obviously has to be handled carefully to keep everything working identically. This puts me off as it seems like unnecessary risk and complexity when writing an AJAX app in JavaScript has never seemed tough. So I wanted to look at who's using GWT to try and understand what problem they're using it to solve.

The post, from the start of this year, lists 8 interesting applications. Looking through them, one of the first and most obvious things about them all is that they're software delivered into the browser. By that I mean that they're windows-style GUI applications that happen to use the browser as a convenient distribution mechanism. GWT makes a lot of sense in this context as it supports that conceptual model. If you want to write something that's more native to the web, with Cool URIs, a RESTful interface and that works as part of a larger whole, then it may make less sense.

So the number one problem it solves is to abstract away the web so that delivering more traditional software interfaces into the browser is made easier. That seems like a sensible thing to do. What else is it trying to do? An insight into its philosophy can be gleaned, perhaps, from a post on the Google blog from August last year entitled Google Web Toolkit: Towards a better web.

Instead of spending time becoming JavaScript gurus and fighting browser quirks, developers using GWT spend time productively coding and debugging in the robust Java programming language, using their existing Java tools and expertise.

So, there are several things all wrapped up in that sentence: the implication that learning JavaScript is time consuming, that browsers have lots of quirks, that Java supports productive coding and that existing Java tools and expertise are strong. All of those things will be true in some contexts and not in others. Given the lack-lustre take-up of GWT within Google, perhaps they're not true within Google itself.

On the other hand, listening to a couple of the GWT team present on it at Google Developer Day 2008, we get a different impression.

If you're looking to deliver a fairly traditional GUI within the browser and you're happy and productive working in Java, then GWT looks like a good tool. Maybe it works well for more webby apps as well, but that's not what they're showcasing. But GWT is just one thing, and we make decisions about what technologies, techniques and approaches to adopt every single day. Like anything else, it's all about context.

Schroedinger's WorldCat

Karen Calhoun and Roy Tennant of OCLC have recorded a podcast with Richard Wallis as part of the Talking with Talis series (disclosure: I work for Talis). The podcast discusses the recently published changes to OCLC's record usage policy. I wrote about the legal aspects of OCLC's change from guideline to policy before and why OCLC's policy changes matter. It's great that they've come on a podcast to talk about this stuff.

I do think it’s a shame though that this podcast didn’t form November’s Library 2.0 gang. There are several regulars on the gang who would have some great angles to pick up on in this discussion. I guess it just didn’t work out right in everyone’s diaries.

Broadly the content of the podcast covers the background to the change, the legal situation, how the policy may affect things like SaaS offerings, competitors to WorldCat, OCLC’s value-add, the non-profit status, OCLC’s role as “switch” for libraries on the web and finally some closing comments. This is an hour well-filled with insights into why the policy says what it says and why it says it how it does.

I’m going to start with Karen’s and Roy’s closing comments as they seem to be the most useful starting point to understanding the answers that precede them.

Roy – @54:09 : Yeah, well I just want to make it clear that really we are trying to make it easier for our member institutions to use their data in interesting new ways. To become more effective, more efficient. I think we're backing that up with real services, we're exposing their data to them in useful ways that can be processed by software. So I think this is a good direction for us and I think the new policy is a part of that new direction.

Karen – @54:48 : Well I guess I would just like to re-iterate that we have tried to make the updated policy as open as it can possibly be. To make it possible to foster these innovative new uses for WorldCat data to make it underpin a whole process of exposing library collections in lots of places on the web, the basis for our being able to partner with many organisations, both commercial and non-commercial, to encourage that process of exposing library collections and helping libraries to stay strong. So we’ve had to balance that against some economic realities of where our funding comes from and the need to protect our ability to survive as an organisation. So it’s not perfect, it’s really far from perfect. It represents this kind of uncomfortable balancing act and our hope is that this updated policy will be merely a first step in being able to facilitate more partnerships and more sharing of data and further loosening our data sharing policies as the years go by, so I guess that’s how I’d like to close.

Roy is doing his best here. I’ve met him and talked about stuff. I like Roy and he’s smart. I suspect, from reading between the lines, that he thinks the only way to change OCLC is from the inside. The mere fact that Karen and Roy recorded a podcast on this stuff is a huge leap forward from the OCLC of a couple of years ago. But on this policy I feel he is misguided. There is a constraining factor to working with services like OCLC’s grid that means only member libraries can innovate and only in ways that happen to be facilitated by the grid services. They’re a piece of the puzzle, but only one piece. Making the entire database available for anyone to innovate on top of is another piece – and probably the most important piece if libraries are to be allowed to really innovate.

I agree with Ed Corrado's wrapping up in his post Talis Podcast about OCLC WorldCat Record Use Policy with Karen Calhoun and Roy Tennant.

I believe Roy and other OCLC employees when they say that they want to make it possible for libraries to "use their data in interesting new ways to become more effective, more efficient." Roy and the other people I know who work for OCLC really do care about libraries. I just don't see how the policy does this. While the current people speaking on behalf of OCLC may want to approve as many WorldCat record use requests as possible, they may not always be the ones making the decisions. This is why I want as much of these rights enumerated in the policy, instead of hiding behind a request form that "OCLC reserves the right to accept or reject any proposed Use or Transfer of WorldCat Records which, in OCLC's reasonably exercised discretion, does not conform to the Policy for Use and Transfer of WorldCat Records."

Karen must be praised for her incredible candor in her closing remarks. The policy, she says, is far from perfect and has to be so in order to protect OCLC’s business position. You see, OCLC face the classic innovator’s dilemma. To truly innovate they must cannibalize their own revenue stream. Normally when faced with the innovator’s dilemma an established company faces the prospect of someone else innovating faster than they can. This is what OCLC fears and is trying to prevent. Keeping the data locked away gives them time to innovate by preventing anyone else from damaging their revenue stream before they’re ready. The question you have to ask is how long do libraries have left to innovate their way out of decline and is it long enough for the OCLC tanker to turn itself around?

Karen herself gives us an answer in the podcast. She refers back to OCLC's 2005 Perceptions of Libraries Report in which they say that 84% of information seekers start with a search engine. Libraries are in danger of being marginalised in the web environment, she says. The context of the answer is a discussion about the need for OCLC to act as a giant "internet switch" on the web, directing searches from the likes of Google Books to a local library.

In answer to the same question, why do libraries need a switch, Roy says:

Roy – @43:47 : I can’t imagine that search engines want a world where they go and crawl everyone’s library catalog and they end up with 5 million, you know, versions of Harry Potter. That just makes no sense whatsoever. I think from the perspective of both end users and search engines really what they are going to want is the kind of situation that we’ve been able to provide which is you know there is one place where you can go to for an item and then you get shunted down to the local library that has that item again very quickly and painlessly. I think back to the days when Gopher was around and the Veronica search engine and when people exposed their library catalogs that way it was horrifying. You would do a search in the system and you’d find a library in Australia had a book but you couldn’t do anything with that information. so I don’t think that’s the world we want to see necessarily. I wouldn’t want to see it.

Let’s just look at that opening sentence again.

Roy – @43:47 : I can’t imagine that search engines want a world where they go and crawl everyone’s library catalog…

Really? I would think that’s exactly what the search engines want. The web is a level playing field where anyone, anywhere can get the number one spot on any search engine. Not through being a big player with the budget to buy the top slot, but by being the most relevant result. Reconciling different references to the same concept is a core strength for the search engines. And that’s without even considering the disambiguation and clarification potential of web-based semantics. The switch that OCLC describe is an adequate way of addressing the problem that libraries have right now – a few dominant search engines, opacs that do not play nicely for search engines and a lot of the data in a central place at OCLC.

How does the OCLC model scale though? What about all the libraries that can’t be part of the OCLC game? OCLC wants to be the single, complete source for this data, but the barriers to entry (mostly cost) are too high for this to be possible. The barrier to publishing data on the web is very low, that’s one of the many great things about it. And seriously, Roy, are you really comparing the capabilities of Veronica with what Google, Yahoo and MSN do today? Have you seen SearchMonkey?

A few moments later, in response to a question about location information, Roy goes on to say:

Roy – @45:12 : Oh boy, I’d sure like to see them try. I mean, again, I don’t think they’re even interested in that problem. Again, I don’t think they could do an effective job at it and I don’t think they would want to. You know the Google’s of the world are making deals with Amazon, you know, we’re not necessarily the folks that they really want to do business with. The fact that we’re big enough we can sit down and talk to them on behalf of our members I think is an important point. For us to think that individual libraries would have enough leverage to get that kind of attention I think is obviously ridiculous.

I'm not sure what Roy was getting at with this, but the search engines sure do seem interested both in little sites, like this blog, and in location data, and while the guys at OCLC are smart I'd put a whole heap of cash on Google being able to do location-based search ranking a whole lot more effectively than they can. Not sure? Google has mapped all the hairdressers on the web, and will show me hairdressers local to Bournville. There are no doubt many reasons search engines aren't doing this kind of thing and more for libraries – the quality of the data presented by the opac is one reason. The restrictive agreements data providers like OCLC put on the libraries are another. Both of these issues can be fixed. A monopoly player to centralize and restrict access to all the data is not a necessary component for libraries to be a valuable part of the web.

Following my earlier post on OCLC's Intellectual Property claims, I was looking forward to hearing what OCLC had to say on this. I know that Richard had many questions about this sent in following his request for questions. This was Karen's response…

Karen – @17:14 : Well, I know from reading the guidelines, which is pretty much the extent of what I know, that the whole issue of the copyrighting of the database goes back to 1982. I’m really not familiar with that history and I don’t know a whole lot about Copyright Law, so I really don’t feel knowledgeable enough to talk about all the details around the copyrighting of the database. I do know that the copyright is on the compilation as a whole and that’s about the extent of what I know. I also don’t have a legal background so I just don’t feel like I’m qualified to answer that. I have been forwarding all of the questions and commentary about that to our legal department and they are working on those issues, I’m not sure what will come of that but they are working on the commentary and the questions that they got.

Richard then asks specifically about the 1982 Copyright date. That precedes the Feist Publications v. Rural Telephone Service decision that both Jonathan Rochkind and I keep pointing out.

Karen – @18:56 : I don't know a whole lot about it Richard. I can tell you, that having been the head of the database quality unit for so many years, OCLC makes a tremendous investment in WorldCat. It isn't just a pile of records that we've gotten from the members. And I don't mean to denigrate the value of those records in any way. As they come to OCLC and come into the database, over the years we have invested a very large effort in maintaining the quality of that database and even improving it. When I was in charge of the database quality group for example we wrote an algorithm, probably the world's best algorithms at that time, to automatically detect duplicate records and to merge them. It was an artificial intelligence approach at that time, very very state of the art. We also created a number of algorithmic methods for managing the forms of heading, doing automated authority control in WorldCat, and we corrected millions of headings. Since my return I've become familiar with all the things that have come out of the office of research and been moved into production in WorldCat that FRBRise the records in the database, that have created WorldCat Identities based on what we learned from doing that automated authority control back in the early 90s. So it's really not the same database that we get from members, it's really much improved, we continue to do a huge amount of work to make the database as valuable as it is. So we have a stake, not just the members have a stake in WorldCat, OCLC is a big stakeholder and a curator of the WorldCat database.

I am saddened that Karen, as Vice President of WorldCat and Metadata Services for OCLC, should be so ill-prepared to answer questions on the intellectual property aspects of WorldCat. What is clear, though, is that Karen is true to her word about her level of understanding. Copyright is a temporary monopoly, an exclusivity, granted to the creator of something original and expressive. Legislatures all over the world developed Copyright as a means of encouraging creative expression by protecting the creator's ability to make a living from it for a period of time. Feist Publications v. Rural Telephone Service is a crucial case as it specifically addresses the compilation right that Karen refers to. Not only that, but it stated specifically that compilation copyright exists not to reward the effort involved in collecting information, but to promote the progress of science and useful arts.

That is, the court does not want organizations to be able to monopolize data. They want people to be able to innovate freely.

What Karen describes is a vast amount of knowledge of the data and the domain and how to do fantastic stuff with it. Like I’ve said many, many times, OCLC has lots of smart people and they have an important part to play. I believe that part will earn them money, but there is no basis other than contract law under which they can prevent the propagation of the WorldCat data. That’s why they’re attempting to change the contract libraries operate under to include the previously voluntary guidelines.

But OCLC's business model is what needs to change, not its contract with libraries. It's Schroedinger's WorldCat: it is both alive and dead at the same time, and as long as they can keep the lid of the box shut nobody knows for sure which it is. The library world doesn't need a cat in a box, it needs a free cat.

There is so much more to talk about in this podcast. You have to listen to it. Also, some must-read posts:

Annoyed Librarian: How I Learned to Stop Worrying and Love OCLC

To use a prison metaphor, it’s clear that librarians dropped the soap decades ago.

Karen Coyle’s Metalogue (the comments)

Jonathan Rochkind: more OCLC

The most important negative part of the policy, which it doesn’t sound like they discussed much in the interview (?) is that any use is prohibited which “substantially replicates the function, purpose, and/or size of WorldCat.” That means that clearly OCLC would deny permission for uses they believe to be such, but also that OCLC is asserting that with or without such an agreement, such use is prohibited, by libraries or by anyone else.

Ed Corrado: Talis Podcast about OCLC WorldCat Record Use Policy with Karen Calhoun and Roy Tennant

One of the key things that Karen and Roy repeated a few times during the podcast (and OCLC people have mentioned previously in other venues) is that the goal with this policy is to drive traffic to libraries museums and archives. They also have repeated that they hope it will make it easier for libraries, museums, and archives to use their data. It is not that I am not hearing them on this second point, but I still do not see how this “tiger’s role (territorial and instinctive)” approach accomplishes this.

Stefano Mazzocchi: Rule #1 for Surviving Paradigm Shifts: Don’t S**t Where You Eat

You could think of OCLC like a Wikipedia for library cards, but there is one huge difference: there is no freedom to fork. Basically, by using OCLC’s data you agree to protect their existence.

More OCLC Policy…

There have been quite a few great posts and ideas circulating on how to respond to OCLC‘s change in Record Usage Policy. The key thing is to act now – you really don’t have long to stop this if you want to. OCLC wish the new policy to come into effect in Spring 2009.

This change is important as it moves the policy from being guidelines, which member libraries are politely asked to adhere to, to being part of your contract with OCLC, which can be enforced. That's a major restriction on the members and far from being more open.

So, to the ideas… Let's start with Aaron Swartz's post. Aaron is one of the folks responsible for OpenLibrary, so has a significant stake in this. His post, entitled Stealing Your Library: The OCLC Powergrab, is a great explanation of why you should care about this. He finishes by asking that we sign up to a petition to Stop the OCLC powergrab!

Next up we have various suggestions circulating about libraries providing their own licensing statements. For example, in More on OCLC’s policies Jonathan Rochkind suggests putting it in the 996, just like the OCLC 996 Policy Link.

Whether submitting the cataloging to OCLC, or anywhere else. Add your own 996 explaining that the data is released under Open Data Commons.

I had a little think about this and would suggest specifically using the ODC PDDL link as follows.

996 ‡ a ODCPDDL ‡ i This record is released under the Open Data Commons Public Domain Dedication and License. You are free to do with it as you please. ‡ u http://www.opendatacommons.org/odc-public-domain-dedication-and-licence/

Just like the OCLC policy link. This doesn’t go as far as Aaron asks, with his suggestion

Second, you put your own license on the records you contribute to OCLC, insisting that the entire catalog they appear in must be available under open terms.

The problem here is that there really isn’t any Intellectual Property in a MARC Record. It may take effort, skill and diligence to create a good quality record. Creative or original, in terms of Copyright, it is not.

The issue of OCLC claiming these rights in catalog data has even made it onto Slashdot, where they're covering how this Non-Profit Org Claims Rights In Library Catalog Data. Unfortunately the comments are the usual Slashdot collection of ill-informed, semi-literate ramblings based on nothing more than a cursory glance at the post. Someone even appears to confuse OCLC and LoC in their response. Ho hum.

Also worth mentioning is that Ryan Eby is keeping track of news and happenings with the OCLC Record Usage Policy on the Code4Lib Wiki.

Richard recorded a new Talking with Talis podcast yesterday. This will be posted on Talis's Panlibus blog. I'll be covering that as soon as I've had a chance to listen to it properly.

If you’re on any mailing lists where this is being discussed, or spot other blog posts I should read then let me know in the comments.

An Editable Web, Semantics, JavaScript and RDFa

With the recent publication of RDFa as a W3C recommendation I’ve been thinking about what that makes possible. For a while I’ve also been thinking about the problem of editing web content within the browser. Putting the two ideas together seems to make a lot of sense to me, but the idea is not new.

I was delighted to find a paper from Sebastian Dietzold, Sebastian Hellmann and Martin Peklo in this year's SFSW conference entitled Using JavaScript RDFa Widgets for Model/View Separation inside Read/Write Websites. This paper describes a generic approach and JavaScript library for editing RDFa within an HTML page and keeping track of the changes for submission back to the server for persistence.

This is the model I've been playing with, but while Dietzold et al have been aiming for a generic library I'm happy to write something specific, with a view to generalising later if it looks appropriate.

Taking the example of a simple playlist, we have an underlying model described in RDF and also an HTML representation intended for humans to see.

Challenge: The editing interface needs to work in the HTML view, as that is where the users live, but needs to manage the underlying graph.

Task: Change the order of tracks within the playlist (represented as an RDF Sequence)

Task: Change the tags applied to a track (represented using the simplest approach in the Redwood Tagging Ontology)

Reordering the tracks should be a relatively simple case of implementing a basic JavaScript drag-n-drop; we should be able to extend that later to cover adding new tracks too. For the sake of brevity I'll not be worrying about browser compatibility right now, just making the idea work in Firefox.

So first up is marking up the HTML with RDFa. We add in URIs for the playlist and the individual items within it. To save the changes we need to keep track of them, so first we need to extract the graph as it is at the start:

var initialGraph;

// Capture the graph embedded in the page's RDFa before any edits are made,
// so we have something to compare against when it's time to save.
function init() {
  initialGraph = $.extractGraph();
}

Then, to position items between others, we'll tag some drop targets into the markup where any of the draggable items can be dropped.

Dragging an item then simply becomes a case of making its position follow the mouse (but not directly beneath it, so we can see where we're dropping), and dropping becomes a case of removing the item from its original location in the DOM and popping it in where the drop target is.
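
A minimal sketch of that, using jQuery. The .track and .drop-target class names are my assumptions for this example, and a real version would also need to handle dropping outside a target.

var dragging = null;

$('.track').mousedown(function (e) {
  dragging = $(this);
  e.preventDefault();
});

$(document).mousemove(function (e) {
  if (!dragging) return;
  // Keep the item offset from the pointer so we can see where we're dropping.
  dragging.css({ position: 'absolute', left: e.pageX + 15, top: e.pageY + 15 });
});

$('.drop-target').mouseup(function () {
  if (!dragging) return;
  // Clear the temporary positioning, then move the item out of its original
  // place in the DOM and pop it in where this drop target sits.
  dragging.css({ position: '', left: '', top: '' }).insertBefore(this);
  dragging = null;
});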

Re-ordering leaves the RDF Sequence properties out of order, but as the DOM is ordered we can simply walk the DOM to reset the sequence values, either after each move or at the point we extract the new model for saving.
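
A sketch of that walk, assuming each track's li carries its sequence predicate in a rel attribute (the #playlist id and that markup pattern are my assumptions here):

function renumberSequence() {
  // DOM order is the order we want, so the item at position i gets rdf:_(i+1).
  $('#playlist li[rel^="rdf:_"]').each(function (i) {
    $(this).attr('rel', 'rdf:_' + (i + 1));
  });
}

Calling that after each drop, or just before extracting the graph again to save, keeps the RDFa and the DOM telling the same story.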

Essentially, by embedding the model into the page in RDFa and being careful to keep the RDFa consistent as we edit the DOM, we can do in-page WYSIWYG graph editing 🙂

 

OpenLibrary

This post is also published on the n2 blog.

I thought it was about time I got around to taking a better look at what might be possible with the OpenLibrary data.

My plan is to try and convert it into meaningful RDF and see what we can find out about things along the way. The project is an own-time project mostly, so progress isn’t likely to be very rapid. Let’s see how it goes. I’ll diary here as stuff gets done.

To save me typing loads of stuff out here, today’s source code is tagged and in the n2 subversion as day 1 of OpenLibrary.

Day one, 3rd October 2008, I downloaded the authors data from OpenLibrary and unzipped it. I’m also downloading the editions data from OpenLibrary, but that’s bigger (1.8Gb) so I’m playing with the author data while that comes down the tubes.

The data has been exported by OpenLibrary as JSON, so is pretty easy to work with. I’m going to write some PHP scripts on the command line to mess with it and it looks great for doing that.

Each line of the JSON in the authors file represents a single author, although some authors will have more than one entry. Taking a look at Iain Banks (aka Iain M Banks) we have the following entries:

{"name": "Banks, Iain", "personal_name": "Banks, Iain", "key": "\/a\/OL32312A", "birth_date": "1954", "type": {"key": "\/type\/type"}, "id": 81616}
{"name": "Banks, Iain.", "type": {"key": "\/type\/type"}, "id": 3011389, "key": "\/a\/OL954586A", "personal_name": "Banks, Iain."}
{"type": {"key": "\/type\/type"}, "id": 9897124, "key": "\/a\/OL2623466A", "name": "Iain Banks"}
{"type": {"key": "\/type\/type"}, "id": 9975649, "key": "\/a\/OL2645303A", "name": "Iain Banks "}
{"type": {"key": "\/type\/type"}, "id": 10565263, "key": "\/a\/OL2774908A", "name": "IAIN M. BANKS"}
{"type": {"key": "\/type\/type"}, "id": 10626661, "key": "\/a\/OL2787336A", "name": "Iain M. Banks"}
{"type": {"key": "\/type\/type"}, "id": 12035518, "key": "\/a\/OL3127859A", "name": "Iain M Banks"}
{"type": {"key": "\/type\/type"}, "id": 12078804, "key": "\/a\/OL3137983A", "name": "Iain M Banks "}
{"type": {"key": "\/type\/type"}, "id": 12177832, "key": "\/a\/OL3160648A", "name": "IAIN M.BANKS"}

In total the file contains 4,174,245 entries. The first job is to get a more manageable set of data to work with, so I wrote a short script to extract 1 line in every 10 from a file. The resulting sample author data file contains 417,424 entries. This is more manageable for quick testing of what I'm doing.
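
The sampling script really is short; a sketch of it, reading from STDIN and writing every tenth line to STDOUT, looks something like this (the filenames in the usage comment are just examples):

<?php
// Usage (example filenames): php sample.php < authors.json > authors-sample.json
$count = 0;
while (($line = fgets(STDIN)) !== false) {
    if ($count % 10 == 0) {
        fwrite(STDOUT, $line);   // keep 1 line in every 10
    }
    $count++;
}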

So now we can start writing some code to produce some RDF. Given the size of these files, I need to stream the data in and out again in chunks. The easiest format I find for that is turtle, which has the added benefit of being human readable. YMMV. Previously I've streamed stuff out using n-triples. That has some great benefits too, like being able to generate different parts of the graph, for the same subject, in different parts of the file and then bring them together using a simple command line sort. It's also a great format for chunking the resulting data into reasonable size files, as breaking on whole lines doesn't break the graph, whereas with rdf/xml and turtle it does.

So, I may end up dropping back to n-triples, but for now I’m going to use turtle.

I also like working on the command line and love the unix pipes model, so I'll be writing the CLI (command line) tools to read from STDIN and write to STDOUT, so I can mess with the data using grep, sed, awk, sort, uniq and so on.

First things first, let's find out what's really in the authors data. Reading the JSON line by line and converting each line into an associative array is simple in PHP, so let's do that, keep track of all the keys we find in the arrays, recurse into the nested arrays to look at them, then dump the result out (a sketch of that script follows the list of keys below). The arrays contain this set of keys:

alternate_names
alternate_names
alternate_names\1
alternate_names\2
alternate_names\3
bio
birth_date
comment
date
death_date
entity_type
fuller_name
id
key
location
name
numeration
personal_name
photograph
title
type
type\key
website
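
For what it's worth, a sketch of the key-survey script described above might look something like this (joining nested keys with a backslash is my guess at how the paths in the list were built):

<?php
// Read line-delimited author JSON from STDIN, collect every key we see
// (recursing into nested arrays and joining paths with "\"), then print
// the distinct, sorted set.
function collectKeys($array, $prefix, &$keys) {
    foreach ($array as $key => $value) {
        $path = ($prefix === '') ? $key : $prefix . '\\' . $key;
        $keys[$path] = true;
        if (is_array($value)) {
            collectKeys($value, $path, $keys);
        }
    }
}

$keys = array();
while (($line = fgets(STDIN)) !== false) {
    $author = json_decode($line, true);
    if (is_array($author)) {
        collectKeys($author, '', $keys);
    }
}

ksort($keys);
foreach (array_keys($keys) as $key) {
    echo $key, "\n";
}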

So, they have names, birth dates, death dates, alternate names and a few other bits and pieces. And they have a 'key' which turns out to be the resource part of the OpenLibrary URL. That means we can link back into OpenLibrary nice and easy. Going back to our previous Iain Banks examples, we want to create something like this for each one:

@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix bio: <http://vocab.org/bio/0.1/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<http://example.com/a/OL32312A>
    foaf:Name "Banks, Iain";
    foaf:primaryTopicOf <http://openlibrary.org/a/OL32312A>;
    bio:event <http://example.com/a/OL32312A#birth>;
    a foaf:Person .

<http://example.com/a/OL32312A#birth>
    bio:date "1954";
    a bio:Birth .

This gives us a foaf:Person for the author and tracks his birth date using a bio:Birth event. While tracking the birth as a separate entity may seem odd, it gives us the opportunity to say things about the birth itself. We'll model death dates the same way, for the same reason. I've written some basic code to generate foaf from the OpenLibrary authors; a sketch of the core of it is below.
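
This is only a sketch of what that code might look like, not the real thing: it writes Turtle shaped like the example above, the http://example.com base is the same placeholder, and the @prefix lines are assumed to be written once at the top of the output.

<?php
// Convert one decoded author entry into Turtle like the example above.
// addslashes() is a crude way to escape quotes; good enough for a sketch.
function authorToTurtle($author) {
    $uri = 'http://example.com' . $author['key'];            // e.g. /a/OL32312A
    $openLibraryUrl = 'http://openlibrary.org' . $author['key'];

    $turtle  = "<$uri>\n";
    $turtle .= "    foaf:Name \"" . addslashes($author['name']) . "\";\n";
    $turtle .= "    foaf:primaryTopicOf <$openLibraryUrl>;\n";
    if (isset($author['birth_date'])) {
        $turtle .= "    bio:event <$uri#birth>;\n";
    }
    $turtle .= "    a foaf:Person .\n";

    if (isset($author['birth_date'])) {
        $turtle .= "<$uri#birth>\n";
        $turtle .= "    bio:date \"" . addslashes($author['birth_date']) . "\";\n";
        $turtle .= "    a bio:Birth .\n";
    }
    return $turtle;
}

while (($line = fgets(STDIN)) !== false) {
    $author = json_decode($line, true);
    if (is_array($author) && isset($author['key'], $author['name'])) {
        echo authorToTurtle($author);
    }
}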

Linking back to the OpenLibrary URL has been done here using foaf:primaryTopicOf. I didn't use owl:sameAs because the URL at OpenLibrary is that of a web page, whereas the URI here (http://example.com/a/OL32312A) represents a person. Clearly a person is not the same as a web page that contains information about them.

The only thing worrying me is that the URIs we're using are constructed from OpenLibrary's keys. This makes matching them up with other data sources hard. Matching with other data sources requires a natural key, but there's not enough data in these author entries to create one. The best I can do is to create a natural key that will enable people to discover the group of authors that share a name.

 @prefix mine: . sl:name_of ; a sl:Name . 

These URIs will enable me to find authors that share the same name easily, either because they do share the same name or because they're duplicates. The natural key is simply the author's name with any casing, whitespace or punctuation stripped out; a sketch of that normalisation is below. That might need to evolve as I start looking at the names in more detail later.
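
A sketch of that normalisation, assuming that lower-casing and stripping anything that isn't a letter or a digit is good enough for a first pass (accented characters would need more thought):

<?php
// "Banks, Iain" and "Banks, Iain." both become "banksiain".
function naturalKey($name) {
    return preg_replace('/[^a-z0-9]/', '', strtolower($name));
}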

The next step is to look in more detail at the dates in here. We have some simple cases of trailing whitespace or trailing punctuation, but also some more interesting cases of approximate dates or possible ranges – these occur mostly for historical authors. The complete list of distinct dates within the authors file is in svn. If you know anything about dates, feel free to throw me some free advice on what to do with them…