Agile Coach Code of Conduct – What To Fix

A while back I started on an Agile Coach Code of Conduct. I noticed that after coaching for a while I started to forget basic principles that should be part of every coaching engagement.

So I put this list together to help me (and others) remember what coaching is all about.

Like everything in agile, it’s an ideal, not something you can ever perfectly do.

  1. I will always remember that my teams are full of intelligent professionals acting in the best way that they can for project completion
  2. I will not direct or follow, but lead the team by example, which means walking a fine line between participation and observation
  3. I will always remember that I am a guest on each team, and will strive to behave respectfully to my hosts as much as I can

Read the full list of Daniel Markham’s 12 principles at Agile Coach Code of Conduct – What To Fix.

Sharing and re-use of catalogue records: what are the legal implications? : Information Environment Team

The records in a university library catalogue typically have many different origins: created by the library, obtained from a national library or a book supplier etc. So, who ‘owns’ them? And what are the legal implications of making them available to others when this involves copying, transferring them into different formats, etc.?

The JISC has just commissioned a study to explore some of these issues as they apply to UK university libraries and to provide practical guidance to library managers who may be interested in making their catalogue records available in new ways. Outcomes are expected by the end of 2009.

from Sharing and re-use of catalogue records: what are the legal implications? : Information Environment Team.

What will make eBooks as readily available as MP3s?

Printing Press by Thomas Hawk, Licensed cc-nc

I was talking to a colleague recently about ebooks and the lack of access to course text books electronically. I asked why he thought that was, and he suggested that we were waiting for digital rights management to be sorted out – he meant that in his view we were waiting for DRM technology to be strong enough to protect publishers’ intellectual property rights.

This struck me as interesting, as that certainly wasn’t the case with music where DRM has been struggling (and failing) to catch up for some time. Then, last week, came the news that Amazon had recalled a book it had previously sold, at the publisher’s behest, deleting it from everyone’s Kindle and refunding them. James Grimmelmann reminds us he warned of Amazon’s terms and suggests we need new laws around digital property rights.

We’d also been discussing this at work, in the context of how digital music has disrupted things and attempting to predict how and what ebooks will disrupt and when. Then, Roy Tennant pops up saying Print is SO Not Dead. All of these got me thinking more about it.

The major trigger for digital music was the MP3 player, some cheap, some cool, both hardware and software. People bought MP3 players instead of CD, MiniDisc and cassette players because they were smaller, could hold more music and had better battery life. Initially you put music onto them by ripping the CDs you already owned – that is, there was a cheap, easy way to get digital music onto them from your existing media. We’ll leave the legality of ripping CDs to others.

It was this ability to get music into digital file form that led to online music sharing, and subsequently to the publishing of non-DRM MP3 files from major record labels. The ease with which music could be made available in digital form for anyone to use is what changed the recording industry’s business and gave consumers what they wanted – cheap, DRM-free music from their favourite artists.

DRM didn’t work for music for many reasons, not least the ease with which people could get hold of DRM-free copies. Another contributing factor was the profusion of cheap MP3 players: these meant people didn’t just want DRM-free music, they needed it, because their cheap players wouldn’t play DRM-protected files. Those cheap players didn’t implement DRM because of the increased hardware cost of supporting it, as well as the licensing cost of many of the schemes. Remember, we’re not talking about a £200 iPod here, we’re talking about a £5 USB stick with a headphone socket and 4 buttons.

The per-unit manufacturing cost of an ebook reader is much higher: they sell in smaller volumes, have more parts (including a good screen) and use newer technologies rather than off-the-shelf components. The cost that DRM would impose is therefore a much smaller proportion of the total unit cost than it was for MP3 players.

A good few ebook readers have come out over the past year or so, including the first- and second-generation Kindles, the BeBook and, recently, the Samsung Papyrus. All very nice and very capable. Plastic Logic are on the brink of launching a nice new, very lightweight plastic reader.

But there’s still something missing – the books.

That’s not to say nothing’s progressing – a great number of books are available and I’ve not heard anyone complain that they couldn’t get anything – but it’s still a tiny drop in the ocean: Barnes & Noble has launched a store with 700,000 books, compared to Kindle’s “Over 300,000 eBooks, Newspapers, Magazines, and Blogs”. To put that in context, the Library of Congress alone has 141,847,810 items in its catalogue.

And I have a stack of books on my desk, real ones with paper pages. And no way to easily get these onto my laptop.

This is due to several asymmetries. In music, the music is recorded and a player has always been required to reproduce the sound, whether analogue or digital. Books have seen many advances on the production side, but not on the consumer side – books have never needed a player.

The second asymmetry is between the display and input sides of computers. Bill Buxton talks about this a lot when explaining why computers are still on the periphery of life, rather than integrated through it. Essentially it comes down to the fact that the display on my laptop can’t also see – there is no easy way to get a physical book into the computer.

So where does that leave the take-up of ebooks? The publishers seem to be in the same position the record industry was in some time ago, but without the driver to change. With music, consumers were able to say “if you won’t do this, we’ll do it ourselves” but with books that isn’t as easy. There aren’t students out there copying text books to give to their fellow students.

So, without an obvious source of DRM-free ebooks – ones that people really want to read – and with DRM a much smaller proportion of the manufacturing cost, it seems unlikely that we’re going to see cheap, non-DRM ebook readers taken up by lots of people.

So, in the absence of consumer-led digitisation of everyone’s existing collections, and assuming Google’s book scans don’t become freely available, what reason do publishers have to really support open and flexible digital publishing? None that I can see.

So this is where DRM may actually come in useful – in providing the mechanism that allows publishers to release those precious digital copies into the marketplace.

Vanish: Enhancing the Privacy of the Web with Self-Destructing Data

Computing and communicating through the Web makes it virtually impossible to leave the past behind. College Facebook posts or pictures can resurface during a job interview; a lost or stolen laptop can expose personal photos or messages; or a legal investigation can subpoena the entire contents of a home or work computer, uncovering incriminating or just embarrassing details from the past.

Vanish is a research system designed to give users control over the lifetime of personal data stored on the web or in the cloud. Specifically, all copies of Vanish encrypted data — even archived or cached copies — will become permanently unreadable at a specific time, without any action on the part of the user or any third party or centralized service.

from Vanish: Enhancing the Privacy of the Web with Self-Destructing Data.

Excel RDF

Introduction

When worlds collide, sometimes things happen that can be useful. This is not one of those useful things, but a collision of two worlds nonetheless…

ExcelRDF is a proposed serialisation for RDF using the Microsoft Excel Spreadsheet format. This work was inspired by the discussions in the semantic web community about Linked Data and whether or not it mandates the use of RDF. This document is not trying to prove a point, insult anyone or come down on either side of the argument. I just noticed that it hadn’t been done and it didn’t seem too difficult. Of course, that it hadn’t been done should have been enough of a warning to me that it is not, in any sense, desirable.

Conventions used in this document

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119.

Overview

When a server receives an HTTP or HTTPS request for a resource that is described in RDF and the client indicates that it is willing to accept content of type application/vnd.ms-excel, the server MAY respond with a Microsoft Excel spreadsheet meeting the following conventions (a rough sketch of a writer follows the list below).

  • The spreadsheet MUST contain one or more sheets that meet the following conventions.
  • A sheet SHOULD contain zero or more rows of RDF data.
  • Column A of each non-empty row of a sheet SHOULD contain a URI indicating the Subject of a statement.
  • Column B of each non-empty row of a sheet SHOULD contain a URI indicating the Property of a statement.
  • Column C MAY contain either a URI or a literal value as the Object of a statement.
  • If Column C contains a literal value then Column D MAY contain a language identifier in accordance with IETF BCP 47.
  • If Column C contains a literal value then Column E MAY contain a type specifier indicating the type of the literal value.
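
As a rough, non-normative sketch of what a writer for these conventions might look like (assuming the rdflib and openpyxl Python libraries are available; note that openpyxl produces the newer .xlsx format rather than the older .xls that application/vnd.ms-excel strictly denotes, and the sample triple is illustrative):

    # Rough sketch of an ExcelRDF writer (assumes rdflib and openpyxl are
    # installed; openpyxl writes .xlsx rather than the older .xls format).
    from openpyxl import Workbook
    from rdflib import Graph, Literal

    turtle = """
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    <http://dbpedia.org/resource/Annette_Island_Airport>
        rdfs:label "Annette Island Airport"@en .
    """

    g = Graph()
    g.parse(data=turtle, format="turtle")

    wb = Workbook()
    ws = wb.active

    for s, p, o in g:
        # Columns A-C: Subject URI, Property URI, Object (URI or literal).
        row = [str(s), str(p), str(o), None, None]
        if isinstance(o, Literal):
            # Column D: BCP 47 language tag; Column E: datatype URI, if any.
            row[3] = o.language
            row[4] = str(o.datatype) if o.datatype else None
        ws.append(row)

    wb.save("excelrdf-example.xlsx")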

Example

The attached example Microsoft Excel spreadsheet contains RDF from the dbpedia project describing Annette Island Airport using the conventions described above.

ExcelRDF Example File

Use Cases

ExcelRDF may be useful where it is desirable to produce charts showing characteristics of a dataset, such as the relative distribution of types within it, or perhaps analysing the count of particular properties. I can think of no obvious way to assess graph characteristics such as linkiness, but you could do things like word counts in literals, or work out how much of the literal data is in French.
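
As a rough sketch of that property-counting idea, done outside Excel for illustration (assuming openpyxl and the hypothetical file name from the writer sketch above):

    # Rough sketch: count occurrences of each property (Column B) in an
    # ExcelRDF file. Assumes openpyxl; the file name is illustrative.
    from collections import Counter

    from openpyxl import load_workbook

    wb = load_workbook("excelrdf-example.xlsx")
    counts = Counter()

    for sheet in wb.worksheets:
        for row in sheet.iter_rows(values_only=True):
            if row and len(row) > 1 and row[1]:  # Column B: property URI
                counts[row[1]] += 1

    for prop, n in counts.most_common():
        print(n, prop)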

ExcelRDF may be useful where a specific contract, policy or agreement means that the data must be delivered as an Excel spreadsheet while the underlying data is more useful in RDF.

ExcelRDF may be useful if you wish to be deliberately obtuse.

What else? « Web of Data

A great explanation from Dan Brickley:

The non-RDF bits of the data Web are – roughly – going to be the leaves on the tree. The bit that links it all together will be, as you say, the typed links, loose structuring and so on that come with RDF. This is also roughly analogous to the HTML Web: you find JPEGs, WAVs, flash files and so on linked in from the HTML Web, but the thing that hangs it all together isn’t flash or audio files, it’s the linky extensible format: HTML. For data, we’ll see more RDF than HTML (or RDFa bridging the two). But we needn’t panic if people put non-RDF data up online…. it’s still better than nothing. And as the LOD scene has shown, it can often easily be processed and republished by others. People worry too much! 🙂

from What else? « Web of Data.

Paul Miller is right… and so is Ian Davis

Paul Miller, a good friend and ex-colleague, has been having a tough time arguing that perhaps Linked Data doesn’t need RDF. Don’t misunderstand that, he thinks RDF is a Good Thing and Best Practice for Linked Data. But he thinks a dogmatic stance is unhelpful.

The problem, I contend, comes when well-meaning and knowledgeable advocates of both Linked Data and RDF conflate the two and infer, imply or assert that ‘Linked Data’ can only be Linked Data if expressed in RDF.

This dogmatism makes me deeply uncomfortable, and I find myself unable to agree with the underlying premise.

In the twitter stream that Paul links to there is some comment reminding people that RDF can take many forms, not just RDF/XML.

kidehen: @andypowe11 re. #rdf, it’s the data model for #linkeddata based #metadata. Remember #rdf != RDF/XML, no escaping RDF model re. #linkeddata.

Ian Davis (my boss) took a strong stance saying that if things weren’t RDF then they weren’t linked data. Perhaps the very thing Paul sees as a dogmatic stance. Ironic as Ian is far from dogmatic. But Ian is defending the term Linked Data, not saying that’s the only way to publish data on the web…

TallTed: @iand “I think LD better for many cases, but there are times i’d rather hv a spreadsheet.” What? Can a spreadsheet not hold #LinkedData?

Well, it seems to me both Paul and Ian are right to a strong degree and are essentially arguing over only one thing – the meaning of the term Linked Data.

Paul quotes Tim Berners-Lee’s design note on Linked Data:

1. Use URIs as names for things

2. Use HTTP URIs so that people can look up those names

3. When someone looks up a URI, provide useful information, using the standards (RDF, SPARQL)

4. Include links to other URIs. so that they can discover more things.

The emphasis is Paul’s. I would emphasise a different point:

4. Include links to other URIs. so that they can discover more things.

And in point four lies the reason that Ian is saying a spreadsheet isn’t Linked Data, even if it’s on the web and even if it’s linked to. The only standard for describing how one resource relates to others using URIs is RDF. Sure, you can put URIs into a spreadsheet, but there is no standard interpretation of what the sheets, rows and columns mean. Sure, you can put URIs into a CSV file, but again, there is no standard interpretation of what the fields mean.
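
As a minimal illustration of that difference (assuming the rdflib Python library; the URIs and the owl:sameAs link are purely illustrative): an RDF predicate carries a standard, machine-readable meaning, whereas a CSV row holding the same two URIs says nothing about how they relate.

    # Minimal illustration (assumes rdflib; URIs are illustrative).
    import csv
    import io

    from rdflib import Graph

    turtle = """
    @prefix owl: <http://www.w3.org/2002/07/owl#> .
    <http://example.org/thing/a> owl:sameAs <http://example.org/thing/b> .
    """

    g = Graph()
    g.parse(data=turtle, format="turtle")
    for s, p, o in g:
        # The predicate URI (owl:sameAs) is the standard, machine-readable
        # statement of how the two resources relate.
        print(s, p, o)

    # The same two URIs in a CSV row are just adjacent strings; nothing in
    # the CSV format says what the relationship between them is.
    for row in csv.reader(io.StringIO("http://example.org/thing/a,http://example.org/thing/b")):
        print(row)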

The end result of that is data published on the web that can be linked to but not from.

At this early stage, though, Paul argues that what we really want is to get more and more data published and open. We all agree on that, I know. Ian does for sure; he runs Data Incubator for exactly that reason – well, that and helping show those publishing spreadsheets and CSV why they should move to RDF and Linked Data.

In the comments on Paul’s post Justin (another senior manager at Talis) says:

Yes the same mistake was made with the rise of the web.

Once you had URIs and HTTP you already had plain text which is a perfectly good way to encode content. By adopting the STANDARD convention of HTML, all sort of existing text based formats with their various mark ups were locked out. That locked out a lot of content that already existed and required anyone who wanted to play to convert existing content into a html format.

Of course it did have the small side effect that to consume web content you only needed a browser that understood one convention i.e. html.

The same is true of RDF. XML is the equivalent of ascii in this regard.

And that’s the point. XML is the equivalent of ASCII, as is a spreadsheet or a CSV file, not because they’re simple, but because they have no mechanism for embedding the relationships and links necessary to link out from your data. Yes, they can contain URIs and clients can decide to make those into links, but there is no way to describe the meaning.

I agree with both sides of this argument – if it isn’t RDF then it isn’t Linked Data, but I wouldn’t keep pushing that point if someone was willing to publish data yet unable or unwilling to publish RDF (in any of its many forms).

Communities and Collaboration » Why are Government and Local Councils still using IE6?

Steve Dale pushes the question of why local and central government are still using IE6.

The latest information on IE6 market share is just over 12%. I’m betting that a good proportion of this 12% is public sector workers who continue to be poorly served by their IT departments and CIOs who don’t see the browser as being an important component in improving user productivity.

from Communities and Collaboration » Why are Government and Local Councils still using IE6?.

We’re hiring…

Fancy a job building great web apps? Interested in being an early part of publishing large amounts of data on the semantic web? Want to help build fantastically useful search interfaces to be used by millions of people? We’re hiring.

We’re looking for a Web Application Technical Lead who knows how to build great web interfaces and wants to get into the next wave of the web, Linked Data and the semantic web.

The role is to lead the development of Talis Prism, a flagship product for us and for our customers. Those customers are the biggest public and academic libraries in the UK, so Prism gets used by millions of people all over the country every day.

The job spec (pdf) gives you more detail, but one of the things we ask is that you take a pop at answering any two from the following three questions.

  1. Ensuring web applications work effectively across different browsers is hard. Explain how you would go about ensuring a web application functions correctly with Yahoo’s list of A-grade browsers, covering both development and testing approaches.
  2. URIs play a very significant role in the way a site appears on the web; WordPress blogs, for example, have a variety of URI schemes they can use. HttpRange-14 adds further implications for the use of #-based URI schemes. Outline a URI scheme for a car dealership website and explain the trade-offs made.
  3. If you were asked to write a book based on your technical expertise, what would the title be and what chapters would it contain?

Now, because I’m really friendly (and because it’s my blog), I’ll give you some pointers on what we might be looking for.

With question 1, you’ve got to recognise that Prism is a SaaS product with a frequent release cycle, currently releasing to the live service once a month. That means any answer that talks about specs, manual test plans or requirement documents isn’t going to get you very far. Think about what you’d need to do if we wanted to do continuous deployment – from check-in to release in less than 30 minutes, say.

On question 2 we’ll be looking for your understanding of how HTTP URIs work and how different choices interact with browser caching, proxy servers and server-side code. If you don’t know what HttpRange-14 is then read the draft TAG finding on dereferencing HTTP URIs. Take a look at How to Publish Linked Data on the Web.

Question 3, if it cropped up in a book on interviews and job applications, would be answered as “an ideal opportunity to re-present the information on your CV”. That’s because most people who interview haven’t really read your CV, so you have to say things several times. We will have read your CV, we’ll have gone through it with a fine tooth comb actually, checking all the dates and cross-referencing the technologies listed. We’ll have checked out all the sites and companies you list – even if you don’t give us links to them. We like to know who we’re interviewing, so we’ll have googled you and looked you up on Facebook, LinkedIn, Twitter and anywhere else we think you might hang out. Please don’t feel stressed about that, we’re not going to be upset if there’s a photo of you drunk at a party or if you once tweeted the F word. So, there’s no need for the book you’d write to be a game of buzzword bingo, we’re just curious about what excites and motivates you.

All in all though, we’re looking for great people to come and help us do great stuff. Get in touch!

New (old) Telly

So, after my success fixing a Philips plasma TV, my dad is giving me his Sony Bravia LCD KDL32V. It’s not working following lightning storms and has been written off by the insurers following a £350 repair quote, but we all know that £350 buys you a handful of capacitors and maybe a triode or three from a pro repair shop.

Hence I pick it up tomorrow to take a look at 🙂

Update 10/07/2009: picked it up yesterday and spent the evening looking at it. Some folks on #electronics (mr_boo, SpeedEvil and kludge) helped me work out that the broken bits were very likely the expensive silicon on the primary side of the power supply – difficult to replace – and that the main control chip, a Sony CXD9841P, would cost me about £20.

So, having worked out it’s not a nice easy case of swapping out a couple of transistors or caps, I decided to hunt down a whole board. The obvious approach was to find the same model on eBay that had suffered an encounter with a Wii controller; there were a couple about, but nothing really cheap or really close. The next step was to try and find a new power board.

The original quote, from BSS, had £180 for the board plus a whole load of labour, bringing the total to over £300! Next I tried Audio Technical Services, who refused to quote me a price because Sony would remove their Authorised status (and stop sending them business) if they sold an internal component to an end customer. Fantastic. Good business, Sony, thanks.

So, a quick google for the exact part number brought up SJS Television Services, who have the board in stock. A cheeky call to the owner, Stuart, and he helped my plight to keep costs down by agreeing to supply the board, inc. VAT and p&p, for £75. Really nice guy; shame he isn’t local, or I might have been able to point more people to him.

Now I just have to be patient for the part to arrive 🙂