Linked Data, Big Data and Your Data

Week five of my new challenge and I figured I really should get around to scribbling down some thoughts. I talked in my last post about RDF and graphs being useful inside the enterprise; and here I am, inside an enterprise.

Callcredit is a data business: a mixture of credit reference agency (CRA) and consumer data specialist. As the youngest of the UK’s CRAs, just 12 years old, it has built an enviable position and is one of the few businesses growing strongly even in the current climate. I’ve worked with CRAs from the outside, during my time at the Internet bank Egg. From the inside there’s a lot to learn and some interesting opportunities.

Being a CRA, we hold a lot of data about the UK population – you. Some of this comes from the electoral roll, much of it from the banks. Banks share their data with the three CRAs in order to help prevent fraud and lower the risk of lending. We know quite a lot about you.

Actually, if you want to see what we know, check out your free credit report from Noddle – part of the group.

Given the kind of data we hold, you’d hope that we’re pretty strict about security and access. I was pleased to see that everyone is. Even the data that is classed as public record is well looked after; there’s a very healthy respect for privacy and security here.

The flip side to that is wanting those who should have access to be able to do their job the best way possible; and that’s where big data tools come in.

As in my previous post, variety in the data is a key component here. Data comes from lots of different places and folks here are already expert at correcting it, matching it and making it consistent. Volume also plays a part. The current RDBMS systems here have in excess of 100 tables, tracking not only data about you but also provenance data, so we know where information came from, and audit data, so we know who’s been using it.

Over the past few weeks I’ve been working with the team here to design and start building a new product using a mix of Hadoop and Big Data® for the data tiers and ASP.net for the web UI, using Rob Vesse’s dotNetRDF. The product is commercially sensitive so I can’t tell you much about that yet, but I’ll be blogging some stuff about the technology and approaches we’re using as I can.

Fun 🙂

In summary, I don't like writing more code than I have to…

* This post first appeared on the Talis Consulting blog.

I opened my mailbox the other morning to a question from David Norris at the BBC. They’ve been doing a lot of Linked Data work and we’ve been helping them on projects for a good while now.

The question surrounds an ongoing debate within their development community and is a very fine question indeed:

We are looking at our architecture for the Olympics. Currently, we have:

1. a data layer comprised of our Triple Store and Content store.
2. a service layer exposing a set of APIs returning RDF.
3. a presentation layer (PHP) to render the RDF into the HTML.

All fairly conventional – but we have two schools of thought:

Do the presentation developers take the RDF and walk the graph (say
using something like easyRDF) and pull out the properties they need.

Or:

Do we add a domain model in PHP on top of the easyRDF objects such that
developers are abstracted away from the RDF and can work with higher-level
domain objects instead, like athlete, race etc.

One group is adamant that we should only work with the RDF, because that
*is* the domain model and it’s a performance hit (especially in PHP) and
is just not the “Semantic Web way” to add another domain model.

Others advocate that adding a domain model is a standard OO approach and
is the ‘M’ in ‘MVC’: the fact that the data is RDF is irrelevant.

My opinion is that it comes down to the RDF data, and therefore the
ontology: if the RDF coming through to the presentation layer is large
and generic, it may benefit from having a model on top to provide more
high-level relevant domain objects. But if the RDF is already fairly
specific, like an athlete, then walking through the RDF that describes
that athlete is probably easy enough and wouldn’t require another model
on top of it. So I think it depends on the ontology being modelled close
enough to what the presentation layer needs.

What do you think? I’d be really interested in your view.

Having received it, I figured a public answer would be really useful for people to consider and chime in on in the comments; David kindly agreed.

First up, the architecture in use here is nicely conventional; simple and effective. The triple store is storing metadata and the XML content store is storing documents. We would tend to put everything into the triple store by either re-modelling the XML in RDF or using XML Literals, but this group need very fast document querying using XPath and the like, so keeping their existing XML content store is a very sensible move. Keep PHP, or replace it with a web scripting language of your choice, and you have a typical setup for building webapps based on RDF.

The question is entirely about what code, and how much code, to write in that presentation layer and why; the data storage and data access layers are sorted, giving back RDF. Having built a number of substantial applications on top of RDF stores, I do have some experience in this space and I’ve taken both of the approaches discussed above – converting incoming RDF to objects and simply working with the RDF.

Let’s get one thing out of the way – RDF, when modelled well, is domain-modelled data. With SQL databases there are a number of compromises required to fit within tables that create friction between a domain model and the underlying SQL schema (think many-to-many). Attempting to hide this is the life’s work of frameworks like Hibernate and much of Rails. If we model RDF as we would a SQL schema then we’ll have the same problems, but the IAs and developers in this group know how to model RDF well, so that shouldn’t be a problem.

With RDF being domain-modelled data, and a graph, it can be far simpler to map incoming RDF to objects in your code than it is with SQL databases. That makes the approach seem attractive. There are, however, some differences too. By looking at the differences we can get a feel for the problem.

Cardinality & Type

When consuming RDF we generally don’t want to make any assumptions about cardinality – how many of some property there will be. With properties in our data we can cope with this by making every member variable an array, or by keeping only the first value we find if we only ever expect one. Neither is ideal but both approaches work to map the RDF into object properties.
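
To make that concrete, here is a minimal sketch of the two options. The get_values() helper and the code around it are assumptions standing in for whatever your graph library provides, not a specific API.

// Hypothetical helper: returns an array of values for a property of a resource.
$names = $graph->get_values($athleteUri, 'http://xmlns.com/foaf/0.1/name');

// Option 1: keep every value, whatever the cardinality turns out to be.
$athlete['names'] = $names;

// Option 2: only ever expect one value, so keep the first one found.
$athlete['name'] = empty($names) ? null : $names[0];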

When we come to look at types, classes of things, we have a harder problem, though. It’s common, and useful, in RDF to make type statements about resources and very often a resource will have several types. Types are not a special case in RDF, just as with other properties there can be many of them. This presents a problem in mapping to an OOP object model where an object is of one type (with supertypes, admittedly). You can specify multiple types in many OOP languages, often through the use of interfaces, but you do this at the class level and it is consistent across all instances. In RDF we make type statements at the instance level, so a resource can be of many types. Mapping this, and maintaining a mapping in your OOP code will either a) be really hard or b) constrain what you can say in the data. Option b is not ideal as it can prevent others from doing stuff in the data and making more use of it.
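
To make the typing mismatch concrete, here is a hedged sketch; the get_types() helper and the URIs are illustrative assumptions, but the shape of the problem is real – one resource, several types, and no single class to map it onto.

// A single resource can carry several rdf:type statements in the data.
// get_types() is a hypothetical helper returning the type URIs for a resource.
$types = $graph->get_types('http://example.com/people/1234');

if (in_array('http://xmlns.com/foaf/0.1/Person', $types)) {
  // render the person-ish parts of the description
}
if (in_array('http://example.com/schema/Athlete', $types)) {
  // and, for the same resource, the athlete-specific parts as well
}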

Part of this mismatch on type systems comes from the OOP approach of combining data and behaviour into objects together. Over time this has been constrained and adapted in a number of ways (no multiple inheritance, introduction of interfaces) in order to make a codebase more manageable and hopefully prevent coders getting themselves too tied up in knots. RDF carries no behaviour, it’s a description of something, so the same constraints aren’t necessary. This is the main issue you face mapping RDF to an OOP object model.

Programming Style

What we have ended up with, in libraries like Moriarty, are tools that help us work with the graph quickly and easily. SimpleGraph has functions like get_subjects_of_type($t) which returns a simple array of all the resource URIs of that type. You can then use those in get_subject_subgraph($s) to extract part of the graph to hand off to something else, say a render function.

Moriarty’s SimpleGraph has quite a number of these kinds of functions for making sense of the graph without ever having to work with the underlying nested arrays directly. This pairs up very nicely with functions to do whatever it is you want to do.

$events = $graph->get_subjects_of_type(Ontologies::Sport . 'Event');
foreach ($events as $event) {
  render_sporting_event($event);
}

Of course, functions in PHP and other scripting languages are global, and that’s really not nice, so we often want to scope those and that’s where objects tend to come back into play.

Say we’re rendering information about a sporting event the pseudocode might look something like this:

$events = $graph->get_subjects_of_type(Ontologies::Sport . 'Event');
foreach ($events as $event) {
  SportingEvent::render($event);
}

This approach differs from an MVC approach because the graph isn’t routinely and completely converted into domain model objects, as that approach is very constraining. What it does is combine graph handling using SimpleGraph with objects for code scoping; by late-binding the graph parts to the objects used to present them, the graph is not constrained by the OOP approach.
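
As a sketch of what one of those scoping classes might look like – the property URI and the get_first_literal() helper are illustrative assumptions, and unlike the pseudocode above the graph is passed in explicitly:

class SportingEvent {
  // Render one event given its URI and the graph (or subgraph) that describes it.
  public static function render($eventUri, $graph) {
    $name = $graph->get_first_literal($eventUri, 'http://example.com/schema/name');
    echo '<h2>' . htmlspecialchars($name ? $name : $eventUri) . '</h2>';
  }
}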

If you’re using a more templated approach, so you don’t want a render() function, then simple objects that give access to the values for display are a good option; they can make the code more readable than using graph-centric functions throughout, and they give you components that can be easily unit-tested.
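
A rough sketch of that kind of display object – the class name, property URIs and get_first_literal() helper are all assumptions for illustration:

class AthleteView {
  private $graph;
  private $uri;

  public function __construct($graph, $uri) {
    $this->graph = $graph;
    $this->uri = $uri;
  }

  // Thin accessors for templates; each one just reads straight from the graph.
  public function name() {
    return $this->graph->get_first_literal($this->uri, 'http://xmlns.com/foaf/0.1/name');
  }

  public function nationality() {
    return $this->graph->get_first_literal($this->uri, 'http://example.com/schema/nationality');
  }
}

Because each accessor is only a thin read over the graph, a class like this can be unit-tested against a small fixture graph and the templates never need to see a property URI.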

Conclusion

Going back to the question, I would work mostly with the graph using simple tools that make accessing the data within it easier and I would group functionality for rendering different kinds of things into classes to provide scope. That’s not MVC, not really, but it’s close enough to it that you get the benefits of OOP and MVC without the overhead of keeping OOP and RDF models totally aligned.

Official Google Research Blog: Large-scale graph computing at Google

from Official Google Research Blog: Large-scale graph computing at Google.

If you squint the right way, you will notice that graphs are everywhere. For example, social networks, popularized by Web 2.0, are graphs that describe relationships among people. Transportation routes create a graph of physical connections among geographical locations. Paths of disease outbreaks form a graph, as do games among soccer teams, computer network topologies, and citations among scientific papers. Perhaps the most pervasive graph is the web itself, where documents are vertices and links are edges. Mining the web has become an important branch of information technology, and at least one major Internet company has been founded upon this graph.

Just like Map/Reduce, Logic programming or OO, having more ways of thinking about a problem is a good thing 🙂

One Div Zero: A Brief, Incomplete, and Mostly Wrong History of Programming Languages

1987 – Larry Wall falls asleep and hits Larry Wall’s forehead on the keyboard. Upon waking Larry Wall decides that the string of characters on Larry Wall’s monitor isn’t random but an example program in a programming language that God wants His prophet, Larry Wall, to design. Perl is born.

from One Div Zero: A Brief, Incomplete, and Mostly Wrong History of Programming Languages.

Domain Specific Editing Interface using RDFa and jQuery

I wrote back in January about Resource Lists, Semantic Web, RDFa and Editing Stuff. This was based on work we’d done in Talis Aspire.

Several people suggested this should be written up as a fuller paper, so Nad, Jeni and I wrote it up as a paper for the SFSW 2009 workshop. It’s been accepted and will be published there, but unfortunately due to work priorities that have come up we won’t be able to attend.

A draft of the paper is here: A Pattern for Domain Specific Editing Interfaces Using Embedded RDFa and HTML Manipulation Tools.

The camera ready copy will be published in the conference proceedings. Feedback welcomed.

Coghead closes for business

With the announcement that Coghead, a really very smart app development platform, is closing its doors, it’s worth thinking about how you can protect yourself from the inevitable disappearance of a service.

Of course, there are all the obvious business due diligence activities like ensuring that the company has sufficient funds, understanding how your subscription covers the cost (or doesn’t) of what you’re using and so on, but all these can do is make you feel more comfortable – they can’t provide real protection. To be protected you need four key things – if you have these four you can, if necessary, move to hosting it yourself.

  1. URLs within your own domain.

    Both you and your customers will bookmark parts of the app, email links, embed links in documents, build Excel spreadsheets that download the data, and so on. You need to control the DNS for the host that is running your tenancy in the SaaS service. Without this you have no way to redirect your customers if you need to run the software somewhere else.

    This is, really, the most important thing. You can re-create the data and the content, you can even re-write the application if you have to, but if you lose all the links then you will simply disappear.

  2. Regular exports of your data.

    You may not get much notice of changes in a SaaS service. When you find they are having outages, going bust or simply disappearing is not the time to work out how to get your data back out. Automate a regular export of your data so you know you can’t lose too much. Coghead allowed for that and are giving people time to get their data out.

  3. Regular exports of your application.

    Having invested a lot in working out the right processes, rules and flows to make best use of your app, you want to be able to export that too. This needs to be exportable in a form that can be re-imported somewhere else. Coghead hasn’t allowed for this, meaning that Coghead customers will have to re-write their apps based on a human reading of the Coghead definitions. Which brings me on to my next point…

  4. The code.

    You want to be able to take the exact same code that was running SaaS and install it on your own servers, install the exported code and data and update your DNS. Without the code you simply can’t do that. Making the code open-source may be a problem as others could establish equivalent services very quickly, but the software industry has had ways to deal with this problem through escrow and licensing for several decades. The code in escrow would be my absolute minimum.

SaaS and PaaS (Platform as a Service) providers promote a business model based on economies of scale, lower cost of ownership, improved availability, support and community. These things are all true even if they meet the four needs above – but the priorities for these needs are with the customer, not with the provider. That’s because meeting these four needs makes the development of a SaaS product harder and it also makes it harder for any individual customer to get setup. We certainly don’t meet all four with our SaaS and PaaS offerings at work yet, but I am confident that we’ll get there – and we’re not closing our doors any time soon 😉

Ruby Mock Web Server

I spent the afternoon today working with Sarndeep, our very smart automated test guy. He’s been working on extending what we can do with rspec to cover testing of some more interesting things.

Last week he and Elliot put together a great set of tests using MailTrap to confirm that we’re sending the right mails to the right addresses under the right conditions. Nice tests to have for a web app that generates email in a few cases.

This afternoon we were working on a mock web server. We use a lot of RESTful services in what we’re doing and being able to test our app for its handling of error conditions is important. We’ve had a static web server set up for a while, with particular requests and responses configured in it, but we’ve not really liked it because the responses are all separate from the tests and the server is another Apache vhost that has to be set up when you first check out the app.

So, we’d decided a while ago that we wanted to put in a little Ruby-based web server that we could control from within the rspec tests, and that’s what we built a first cut of this afternoon.

require File.expand_path(File.dirname(__FILE__) + "/../Helper")
require 'rubygems'
require 'rack'
require 'thin'

# A tiny Rack app that plays back canned responses for expected requests,
# so rspec tests can exercise how the app handles remote HTTP services.
class MockServer
  def initialize()
    @expectations = []
  end

  # env is a partial Rack environment hash; only the keys it contains are
  # compared against the incoming request. response is a Rack response triple.
  def register(env, response)
    @expectations << [env, response]
  end

  def clear()
    @expectations = []
  end

  def call(env)
    @expectations.each_with_index do |expectation, index|
      expectation_env, response = expectation
      # An expectation matches when every key it specifies is equal to the
      # corresponding entry in the incoming request's environment.
      matched = expectation_env.all? { |env_key, value| env[env_key] == value }
      if matched
        # Each expectation is consumed once it has matched.
        @expectations.delete_at(index)
        return response
      end
    end
    # Nothing matched: return a valid Rack response rather than nil.
    [ 404, { 'Content-Type' => 'text/plain' }, [ 'No matching expectation' ] ]
  end
end

mockServer = MockServer.new()
mockServer.register( { 'REQUEST_METHOD' => 'GET' }, [ 200, { 'Content-Type' => 'text/plain', 'Content-Length' => '11' }, [ 'Hello World' ]])
mockServer.register( { 'REQUEST_METHOD' => 'GET' }, [ 200, { 'Content-Type' => 'text/plain', 'Content-Length' => '11' }, [ 'Hello Again' ]])
Rack::Handler::Thin.run(mockServer, :Port => 4000)

The MockServer implements the Rack interface so it can run within the Thin web server from inside the rspec tests. Expectations are registered with the MockServer; the first parameter is simply a hashtable in the same format as the Rack environment. You only specify the entries that you care about – any that you don’t specify are not compared with the request. Expectations don’t have to occur in order (except where the environment you give is ambiguous, in which case they match first in, first matched).

As a first venture into writing more in Ruby than an rspec test I have to say I found it pretty sweet – there was only one issue with getting at array indices that tripped me up, but Ross helped me out with that and it was pretty quickly sorted.

Plans for this include putting in a verify() and making it thread-safe so that multiple requests can be handled in parallel. Any other suggestions (including improvements on my non-idiomatic code) are very gratefully received.

dev8D | Lightning talk: Agile Development

This is a great post on agile development coming out from the JISC Dev8d days.

Example from the floor, Matthew: what worked well in a commercial company I was working for where we practiced extreme coding and used agile principles was: no code ownership (bound by strict rules), test-based development, rules about simplicity, never refactoring until you have to, stand up meetings, whiteboard designs, iterations so could find out when you’d messed something up almost immediately, everything had to have unit tests, there has to be a lot of trust in the system (you have to know that someone is not going to break your code)

Graham: building trust is central.

via dev8D | Lightning talk: Agile Development.

The quote above from Matthew and Graham mirrors exactly my experience – when we do those things well, and are disciplined about it, and trust each other, things work out well. When we do less of those things then things turn out less well.

Graham is Graham Klyne who I’ve met a few times at various meets like Vocamp 2008 in Oxford. He and his team are doing clever things with pictures of flies and semweb technologies.

Nounification of Verbs

For a long time I’ve felt uncomfortable every time I’ve written a class with a name like ‘FooManager’, ‘BarWatcher’ or ‘BazHelper’. This has always smelt bad and opening any codebase that is structured this way has always made me feel ever so slightly uneasy. My thoughts as to why are still slightly fuzzy, but here’s what I have so far…

Firstly, some background: my perspective on object-oriented programming is deliberately naive. I don’t like to create interfaces for everything and I don’t use lots of factories. This comes, I guess, from my earliest education in C++, through one of the best books ever written on the subject: Simple C++ by Geoffrey Cogswell. While you stop laughing at the idea that you can learn something as complex as object-oriented programming from a thin paperback featuring a robot dog and the term POOP (Profound Object Orientated Programming), think about the very essence of what it is we’re trying to do.

OOP is about modelling objects. Objects are things that are, and to name things that are we use nouns. Then we give the objects responsibilities: things that they can do, behaviour. So we use what my primary school English so beautifully called ‘doing words’, or verbs if you prefer.

Now, not long ago I wrote a ByteArrayHelper class in Java. I’m not ashamed of it. The code is good, efficient, readable code that does many of the common things I needed to do with a byte[]. However, help is a verb. My class’s responsibility is to help byte[] by doing things that byte[] doesn’t do. I’ve made the class name into a noun by nounifying the verb.

By de-nounifying it I can see where the responsibilities should really sit – with byte[]. My ByteArrayHelper does nothing for itself. All of its methods do something with a byte[]. The methods are things like SubArray(offset, length) and insertAt(offset, bytes). These are methods that I wanted on byte[].

Now, what I really wanted was to be able to add these methods to byte[], making them available wherever a byte[] was being handled, but as Java is statically typed I couldn’t do that (even if byte[] were a class, which it isn’t). In Smalltalk, JavaScript or Ruby I likely could have just added the methods I wanted. The next best thing would have been to declare a sub-class of byte[] and put the methods on that, then the initial construction of my byte[] instance could create my own, more capable, object, but still pass it around everywhere as a byte[]. But byte[] isn’t a class in Java, byte isn’t even a class, it’s a primitive – sort of an object, but much less powerful.

Following the search for a noun-based approach, I could have created my own ByteArray that may or may not have delegated to a byte[] internally. This could not have been passed around as a byte[] though, so would have required substantial refactoring of the classes already there. So, I wrote a ByteArrayHelper instead. Having written the ByteArrayHelper, though, it was obvious that none of the methods required any instance variables; they all took and returned byte arrays – so I made them all static. So, my nounified verb had actually led me to write nothing more than a function library.

Whether or not I made the right decision is left as an exercise for the reader.

Taking another example, this time from a friend’s code. Looking through it we noticed that one of the classes was a FileLoaderManager – a class who’s responsibility is to manage FileLoaders. A nounified verb looking after another nounified verb. I hasten to add that this is not bad code – the code in question does some awesome processing of relationships looking for similarities, like Amazon’s ‘people who bought this also bought’ but more generic.

When we looked into the FileLoaderManager and took away some of the responsibilities that fitted better with other classes, we were left with just the need to list all the files in a given path that matched a particular pattern. Knowing what files are at a given path sounds like the responsibility of a Directory to me. Now, this being very lean C++, we didn’t bother looking for one of the readily available Directory classes; the code we already had could be refactored quickly. Having written the Directory class it became obvious that it would be useful elsewhere, whereas the FileLoaderManager could only be used for the one specific case it originally fulfilled. The nounified verb had led to the code being far more specific than it needed to be.

Two classes I came across in a PHP codebase recently were called FilePutter and FileGetter. These two classes wrap the file_put_contents and file_get_contents functions in PHP; wrapping these functions as classes allows them to be mocked, and therefore users of them can be unit tested. Wouldn’t a single class called, simply, File be easier to follow? The nounified verb approach had led to a peculiar structure in the code that made it less obvious for a reader to follow.
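
For illustration, a minimal sketch of that single class – this is my sketch of the idea, not the code from that codebase:

class File {
  private $path;

  public function __construct($path) {
    $this->path = $path;
  }

  // Thin wrappers over the built-ins, so callers can be handed a mock File in tests.
  public function get() {
    return file_get_contents($this->path);
  }

  public function put($contents) {
    return file_put_contents($this->path, $contents);
  }
}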

So far, then, my conclusion is that nounified verbs are likely to be a sign that I’m not using OO techniques for specialisation of behaviour, that my code is more specific than it needs to be, or that I’m writing in a way that is less easy to read than it could be.

Resource Lists, Semantic Web, RDFa and Editing Stuff

Some of the work I’ve been doing over the past few months has been on a resource lists product that helps lecturers and students make best use of the educational material for their courses.

One of the problems we hoped to address really well was the editing of lists. Historically products that do this have been deemed cumbersome and difficult by academic staff who will often produce lists as simple documents in Word or the like.

We wanted to make an editing interface that really worked for the academic community so they could keep the lists as accurate and current as they wanted.

Chris Clarke, our Programme Manager, and Fiona Grieg, one of our pilot customers, describe the work in a W3C case study. Ivan Herman then picks up on one of the ways we decided to implement editing using RDFa within the HTML DOM. In the case study Chris describes it like this:

The interface to build or edit lists uses a WYSIWYG metaphor implemented in Javascript operating over RDFa markup, allowing the user to drag and drop resources and edit data quickly, without the need to round trip back to the server on completion of each operation. The user’s actions of moving, adding, grouping or editing resources directly manipulate the RDFa model within the page. When the user has finished editing, they hit a save button which serialises the RDFa model in the page into an RDF/XML model which is submitted back to the server. The server then performs a delta on the incoming model with that in the persistent store. Any changes identified are applied to the store, and the next view of the list will reflect the user’s updates.

This approach has several advantages. First, as Andrew says

One thing I hadn’t used until recently was RDFa. We’ve used it on one of the main admin pages in our new product and it’s made what was initially quite a complex problem much simpler to implement.

The problem that’s made simpler is this – WYSIWYG editing of the page was best done using DOM manipulation techniques, and most easily using existing libraries such as prototype. But what was being edited isn’t really the visual document, it is the underlying RDF model. Trying to keep a version of the model in a JS array or something in synch with the changes happening in the DOM seemed to be a difficult (and potentially bug-ridden) option.

By using RDFa we can distribute the model through the DOM and have the model updated by virtue of having updated the DOM itself. Andrew describes this process nicely:

Currently using Jeni Tennison’s RDFQuery library to parse an RDF model out of an XHTML+RDFa page we can mix this with our own code and end up with something that allows complex WYSIWYG editing on a reading list. We use RDFQuery to parse an initial model out of the page with JavaScript and then the user can start modifying the page in a WYSIWYG style. They can drag new sections onto the list, drag items from their library of bookmarked resources onto the list and re-order sections and items on the list. All this is done in the browser with just a few AJAX calls behind the scenes to pull in data for newly added items where required. At the end of the process, when the Save button is pressed, we can submit the ‘before’ and ‘after’ models to our back-end logic which builds a Changeset from before and after models and persists this to a data store on the Talis Platform.

Building a Changeset from the two RDF models makes quite a complex problem relatively straightforward. The complexity now just being in the WYSIWYG interface and the dynamic updating of the RDFa in the page as new items are added or re-arranged.

As Andrew describes, the editing starts by extracting a copy of the model. This allows the browser to maintain before and after models. This is useful as when the before and after get posted to the server the before can be used to spot if there have been editing conflicts with someone else doing a concurrent edit – this is an improvement to how Chris described it in the case study.
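
The delta itself is conceptually simple. Here is a hedged sketch of the idea – this is not the actual Changeset-building code, and the models are represented naively as arrays of N-Triples strings:

// Triples present before but not after are removals; the reverse are additions.
$removals  = array_diff($beforeTriples, $afterTriples);
$additions = array_diff($afterTriples, $beforeTriples);

// Those two sets are what the Changeset carries. Comparing the 'before' model
// with what is currently in the store is also what lets the server spot that
// someone else has edited the list concurrently.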

There are some gotchas in this approach though. Firstly, some of the nodes have two-way links:

<http://example.com/lists/foo> <http://purl.org/vocab/resourcelist/schema#contains> <http://example.com/items/bar> .
<http://example.com/items/bar> <http://purl.org/vocab/resourcelist/schema#list> <http://example.com/lists/foo> .

So that the relationship from the list to the item gets removed when the item is deleted from the DOM, we use the @rev attribute. This allows us to put the relationship from the list to the item with the item, rather than with the list.

The second issue is that we use rdf:Seq to maintain the ordering of the lists, so when the order changes in the DOM we have to do a quick traversal of the DOM changing the sequence predicates (_1, _2 etc) to match the new visual order.
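
The renumbering itself is trivial. Sketched here in PHP purely to show the shape of it – the real code walks the DOM in JavaScript, and the function below is hypothetical:

// Rebuild the rdf:Seq membership triples for a list, given its items
// in their new visual order.
function renumber_sequence($listUri, $orderedItemUris) {
  $triples = array();
  foreach ($orderedItemUris as $i => $itemUri) {
    $predicate = 'http://www.w3.org/1999/02/22-rdf-syntax-ns#_' . ($i + 1);
    $triples[] = "<$listUri> <$predicate> <$itemUri> .";
  }
  return $triples;
}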

Neither of these were difficult problems to solve 🙂

My thanks go out to Jeni Tennison, who helped me get the initial prototype of this approach working while we were at Swig back in November.