There's no I in Team

Back in June on The Berkun Blog, Scott talked about Asshole-Driven Development and other great techniques for the dysfunctional office. He states clearly that his list is cynical, and that there is probably a happy list as well…

Well, I figured I’d have a go at a happier list…

First up, let’s have:

Motivated and Empowered Individual method (ME,I)

This is how I’d describe the way Joel Spolsky has set up the guys at Fog Creek. Essentially the team breaks the solution down into parts and gives a part to a person. Each person is free to develop in their own way, within some bounds set by the team, and becomes the owner of an area of functionality. Without the distractions of other people working on the same code areas, the owner can become very productive within the bounds of the code they own. Joel describes the people he hires as “Smart and Gets Things Done”; he wrote a book of the same name.

Smart Friends Development Model (SFDM)

I spotted this one at XTech in Paris earlier this year. There I met three smart friends who, in their spare time, had developed Quakr. Friendship in a development team provides a real boost to the way the team communicates and negotiates decisions and issues. In the case of Quakr they were friends first and decided to build Quakr second, but I’ve seen teams formed by other companies where effort has been put into building great friendships.

Very-Clever and Nice People (VCNP)

Martin Fowler of Thoughtworks is open about trying to hire only the very best people. The main barrier to growing Thoughtworks is finding and hiring that talent. Once hired, they move people around, making sure they get to know all the other very clever people they’ve hired. Being clever isn’t enough though; they’re also looking for soft skills: they hire nice people. The end result is that they can form teams who can work at a very high level and have a lot of fun sharing ideas and helping each other. This is essentially what Microsoft did in the early days too, and how they came to have the Program Manager role. Comments over at Scott’s piece talk about responsibility without authority in a very negative way, but if you have very clever and nice people this can clearly work, and Thoughtworks show this with their teams.

Smart and Nice Entrepreneurs (SANE)

Back at Talis we also hire smart people. We also try very hard to make sure they’re nice too. We think we’re all pretty nice really. But there’s also a key self-motivational quality we look for: the ability to understand and be interested in how the software will make someone’s life better, as well as in how clean the code is under the bonnet. We think that combination is what’s helping us develop some really great stuff and have fun doing it.

It saddens me to read posts like Scott’s and the subsequent comments. I’ve had bad experiences with employers and managers who seem to have different motivations and values to mine, and I know from friends around the industry how prevalent the problems Scott and his commenters talk about are. Surely the best thing to do is to find somewhere worth working and move, or as Martin Fowler apparently said, “If you can’t change your organization, change your organization!”

I had hoped to get to more than the four happier methodologies above. Perhaps that’s a sign that the cynics are right.


Names Names Names

Over at OCLC, Thom and his team are doing work to match names across several international Name Authorities. This comes after the recent announcement about allowing non-Latin characters into the LC/NACO Name Authority File.

This is great work and ties up somewhat with threads I’ve been thinking about following discussions on other lists.

Firstly, Nicole brings together thoughts from Tim Spalding with blog comments on the RDA drafts. This challenges the notion that controlled subject vocabularies serve end-users particularly well. It’s covered in David Weinberger’s Everything is Miscellaneous, of course, and it’s one of the key things that changes when an index no longer requires a huge room full of drawers of cards to keep it in.

Secondly, there’s a thread about linking to digitized books (login required) going on over on NGC4Lib. In that thread folks are discussing the cataloguing of books digitized by Google Book Search and others.

Jan Szczepanski describes how she is cataloguing GBS books:

You can collect in two ways, Tim’s way or my way, or a combination of both.

My way, or what I could call the quality way means that You carefully looks at every title. Who likes “white noise”? I use the same criteria I use for paper books.

Maurice York is interested in that approach:

I’m curious about this trash-or-treasure line of thinking as a reasoned basis for the manual effort of selection of digitized texts. You are quite right that libraries specialize in selection and have been doing it for thousands of years (more in generalities than realities, since I don’t believe any library with a currently functioning collection has been around for more than a few hundred). But it seems to me that this is the very reason Google saw libraries as such an attractive proposition for digitization–they have been building high-quality collections of print materials and (presumably) sorting much of the dross according to sustained plans over long periods of time. When you say that the vast majority of texts in Google are “bad quality, bad relevance”, that seems more a dig at American libraries and how we collect than at Google, since Google’s collection is no more and no less than what librarians have created. Let me expand that a bit….it’s something of a criticism of the libraries of Spain, Germany, the Netherlands, Japan, England, and France as well, all of whom are digitizing books with Google.

These snippets of bits and pieces are all starting to fit together. I’m not quite sure what picture the jigsaw makes yet, but it’s going to be interesting.

It seems to me that subject classification has to be opened up to everyone – and simplified. It doesn’t need to be a hierarchy anymore, and it doesn’t need to be controlled. Names, on the other hand, need some really clever work: deciding which names are the same and which are different requires a huge amount of intelligence and knowledge. Authority files have historically helped with this, but we need to make them work much harder.
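To make that concrete, here’s a deliberately naive Java sketch – everything in it is invented for illustration, and no real authority file works this way. Simple normalisation can spot that two forms of a name match, but it can never tell you whether two different names belong to the same person:

    import java.text.Normalizer;
    import java.util.Arrays;

    // Naive sketch, invented for illustration - not how authority files work.
    class NameMatcher {

        // Strip accents, lower-case, drop punctuation and sort the name parts,
        // so that "Smith, John" and "John Smith" compare equal.
        static String normalise(String name) {
            String folded = Normalizer.normalize(name, Normalizer.Form.NFD)
                    .replaceAll("\\p{M}", "")   // remove combining accent marks
                    .toLowerCase()
                    .replaceAll("[,.]", " ");   // treat punctuation as spaces
            String[] parts = folded.trim().split("\\s+");
            Arrays.sort(parts);
            return String.join(" ", parts);
        }

        public static void main(String[] args) {
            // Same person in two forms: normalisation catches this...
            System.out.println(normalise("Müller, Hans").equals(normalise("Hans Muller"))); // true
            // ...but whether these two are the same person needs real knowledge:
            System.out.println(normalise("J. Smith").equals(normalise("John Smith")));      // false
        }
    }

That second case is exactly where the intelligence and knowledge come in, and where authority files earn their keep.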

I should write some more on how I think this stuff fits together – mental note to self: must write a paper on MARC and RDF just as soon as we have Talis Insight out of the way.


Multi-lingual Authority

Over at hangingtogether.org Karen notes that the CPSO at Library of Congress have announced that:

The major authority record exchange partners (British Library, Library of Congress, National Library of Medicine, and OCLC, Inc., in consultation with Library and Archives Canada) have agreed to a basic outline that will allow for the addition of references with non-Latin characters to name authority records that make up the LC/NACO Authority File.

This is a great step: the LC/NACO authority files form a rich web of data that can be used to improve many things, including search, as Karen mentions.

The only question is… why has it taken so long to accept non-Latin characters in library land?


Free Speech

Well, several free speeches actually. At least they are if you register and come to Birmingham… Insight – A Library Conference for All. Insight is a free two-day conference for anyone interested and involved in the future of libraries. Being UK-based, we’ve set it close to our home in Birmingham.

The programme has some great names, headed up with a keynote by Euan Semple. Others include:

Doing great stuff like this is why so many of us are at Talis. This is going to be a great conference.


Re-Use is hard

Bloody hard.

Well, at least it seems to be. So why?

We’re better off than most at work. We hire really smart people, we’ve got good tools and we keep our working practices under review so we can get better all the time. We’ve solved many of the problems that seem to plague software companies (and have a whole new set all of our own ;-)

So why, with a whole bunch of really smart people working close together on similar projects using the same technologies, do we not have near-perfect re-use?

By re-use I mean proper re-use – of packaged libraries of code – not simply copying and pasting code someone else wrote already. Copying and pasting is easy: you don’t have to think about dependencies, or clashes, or what they might change about it in future. You can alter it in subtle ways that make it ‘neater’ and ‘more understandable’. I put those in quotes because they’re subjective measures; what’s more understandable for you may not be for me.

And therein lies the nub of the problem. If we’re writing code that others can take and re-use, we have to write it in a way that matches the problems they’re trying to solve as well as the one we’re solving. And that’s the hard bit.

The reason it’s so hard is that we have to put all the modeling we have in mind for the current problem to one side and start modeling from scratch in a different, more abstract and generic way.

Thinking about it from an OO perspective, books such as Domain-Driven Design by Eric Evans and Object Thinking by David West lead us down the path of using nouns from the domain to name our classes, but when we do that we have to decide which domain we’re currently in. If you want to build re-usable UI controls then your domain is the UI, so nouns like ‘TextBox’ make sense while nouns like ‘Author’ do not – if you want the components to be reusable that is.
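Here’s a minimal Java sketch of that distinction – all the names are invented for illustration, and the two classes are shown together for brevity. The reusable control speaks only the UI domain’s vocabulary; nouns like ‘Author’ stay in the application code that uses it:

    // UI domain: a reusable control. It knows about text, and nothing about
    // what that text means to any one application.
    class TextBox {
        private String text = "";

        String getText() { return text; }
        void setText(String text) { this.text = text; }
    }

    // Application domain: 'Author' belongs here, in the code that uses the
    // control. Baking 'Author' into the control itself would tie it to this
    // one application and kill any chance of re-use.
    class AuthorEditor {
        private final TextBox nameBox = new TextBox();

        void showAuthor(String authorName) { nameBox.setText(authorName); }
        String editedAuthorName() { return nameBox.getText(); }
    }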

In essence that’s where the hard bit lies: in identifying the different domains and how they should be cut so that they can then be recombined in different ways.

So how to get better at that? The obvious thing to do is go look for places where people already solved the problem – that turns out to be class libraries like MFC, the .Net libraries and the swathes of small, open-source libraries that do just one thing well.

When we look at these we find they’re broken into modules, components, namespaces or some other unit of packaging. Within each module the vocabulary is consistent, complete and, to a great extent, independent. This means that a library for parsing RSS may depend on one for parsing XML, but is unlikely to require other packages. This sounds obvious when written down, or as you read a book on it, but when writing the code itself it’s hard.
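As a sketch of that dependency shape – again with invented names, not a real library – here’s an RSS module whose vocabulary is feeds and items, leaning on an XML module and nothing else:

    // Notionally the 'xml' package: a complete, self-contained vocabulary.
    interface XmlDocument {
        String textOf(String elementName);
    }

    interface XmlParser {
        XmlDocument parse(String rawXml);
    }

    // Notionally the 'rss' package: its vocabulary is feeds and items. It
    // depends on 'xml' and on nothing else, and it doesn't leak XML details
    // to its own callers.
    final class RssItem {
        final String title;
        RssItem(String title) { this.title = title; }
    }

    final class RssParser {
        private final XmlParser xml;   // the single, injected dependency

        RssParser(XmlParser xml) { this.xml = xml; }

        RssItem firstItem(String rawFeed) {
            XmlDocument doc = xml.parse(rawFeed);
            return new RssItem(doc.textOf("title"));
        }
    }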

So, knowing it’s hard, the next most important thing is to support the evolution of the components; otherwise we have to get them right first time, which seems unlikely. This requires forgiveness and respect. The thing you want to re-use won’t have been written as you would have written it yourself, and at that realization you have a choice of two paths.

The first and most common route would be to get a bit ranty, copy & paste the code into your own world (where you have absolute control, so don’t need to be considerate) and start hacking it into the shape you want for the problem you’re solving. The second is to take a deep breath, put on your reading glasses and try to make your head think the way the original author was thinking – if you can achieve that then you can go on to improve the code, extend it sympathetically and contribute that work back. The second route is harder, much harder, than the first.

But even knowing that, it still looks hard to me.


Cause and Effect

For those reading outside of England…

We’ve just had the first run on a bank in 150 years – Northern Rock has been forced to borrow billions of pounds from the Bank of England. It’s been really interesting to watch.

But what’s really interesting is the discussion of blame. Here are the reasons why Northern Rock went down, according to their senior management:

  • The Bank of England refused an earlier rescue loan
  • Another major bank backed out of buying/bailing them out
  • The wholesale money markets closed
  • The BBC announced that they needed an emergency loan before they had an official comment ready

What’s notable about these is that they are not the reason Northern Rock went down. That reason is plain and simple: they had an imbalance between how much money they had in deposits and how much they had tied up in non-liquid assets. This happened because there was an imbalance in priorities throughout Northern Rock’s culture that led to them being great at lending money out and not as focussed on bringing it in.

The reason I find this interesting is that it ties in with other thoughts I’ve been having about root causes.

The things that the Northern Rock managers are bringing up in their defense are all things that didn’t help, and maybe if some or all of them had been different there wouldn’t have been a run on the bank – maybe.

We’ve been talking at work about why nobody seems to really effectively achieve code re-use, and several reasons come up again and again: deadlines; too hard; component not good enough to re-use; and so on.

But, as with Northern Rock, I think the main reason nobody really effectively achieves code re-use is because there’s an imbalance in priorities throughout our industry that leads us to be great at producing new functionality and not as focussed on sharing capabilities.


Open Data Licensing

Back at the end of September we finally got to the point of releasing the first draft of the Open Data Commons License. This is work I’ve been involved in since Ian’s first draft of the TCL about a year and a half ago.

It’s great to see this license come to fruition, having argued about the need for this more than once.

It’s interesting to see the conversation happening around LibraryThing’s Common Knowledge and the Open Library project. Both of these are collections of factual data. I’ve been speaking to people involved in both, and both have a clear desire to protect the data and ensure that it’s available to the community into the future.

Licensing is critical to that – as I said in Banff (listen) at the start of the year.

Back then we were concerned with navigating the difference in protection afforded to databases in the EU and the US. In essence, databases have protection in the EU, but have no protection in the US. The reason we were looking at that was because the natural thinking goes something like this:

Creative Commons extends Copyright to allow you to easily position yourself on the spectrum from ‘All Rights Reserved’ to ‘Public Domain’.

Therefore Open Data Commons must need to extend a Database Right to allow you to position your data on the same spectrum.

Well, the Open Data Commons license gets around that by being couched in contract law. This seems like a great way to license data for open use and prevent it being locked away in future.

With all that’s been going on then, it’s no surprise that I missed the Model Train Software case that could have a big impact on how Open-Source software licenses are drafted. A San Francisco judge ruled that the Artistic License was a contract – meaning that breach of the license did not necessarily mean infringing the copyright. That changes the legal redress and potential penalties available for breaching a license.

Interesting.


Lost + Found

A while back, when I switched machines, I lost my feedreader – and all the blogs I was following with it. Ho hum, a great opportunity to find out which ones I missed and to find new ones :->

Just discovered I miss Tinfoil + Raccoon. Paul and I caught up with Rochelle in the OCLC bloggers salon at ALA this summer. Much beer was consumed.

Anyhow, I found her again a few weeks ago and then realised today that she is certifiable. I mean, a typewriter? Unless you’re expecting imminent doom from some kind of electro-magnetic pulse, or you’re a museum curator, what on earth would you want one for? Even for $5 – you could have bought yourself a nice sandwich for that!


Back on OS X

I’ve been head down for a while on work things, doing a whole load of data munging as well as the usual dev work. But my Mac went pop a couple of weeks ago and Apple decided the best thing was to replace it rather than fix it; fine by me. It seemed like a good opportunity to look at what I have installed and list what’s on my machine and why:

iWork 08

Makes work life so much easier than with Office. Keynote and Pages are a joy to work with on the odd occasion where I have to write something other than code.

Firefox

I know lots of mac users insist on using Safari and I agree with them that Safari’s a great browser, but the extensions for Firefox are too useful, and we have one or two internally that help a lot. Firefox has to be the default. Extensions that go on straight away are: Web Developer; Firebug; Duplicate Tab; Download Statusbar; Greasemonkey; del.icio.us Bookmarks; and Resizeable Textarea.

Transmission

Very simple torrent client that seems to behave itself nicely.

Sun Java 5

Got to have the real deal installed and running. The standard one shipping with OS X seems fine too.

Eclipse PDT

Much of what I do is a mix of Java and PHP right now. A departure from a few years ago. Eclipse PDT works really nicely. I’d rather be using Coda for the markup, but can’t justify it right now.

Subversion

The slickest source repository software I’ve ever worked with. Simple, fast and elegant.

Colloquy

We use IRC a lot to keep in touch and ask quick questions; this is a great client, with customizable alerts and the ability to put in a sequence of auto-commands for when you connect to a server.

Adium

The best multi-network IM client I’ve ever used.

Skype

Of course. Phone home.

Twitterific

I said a while ago that I wasn’t going to twitter any more. I was too hasty. When I moved over to the Mac someone mailed me Twitterific, and it makes Twitter useful.

Skitch

Skitch is great – grab bits of screenshots, annotate them, and drop them into emails or docs, or post them to Skitch’s online service. A simple idea executed really, really well.

Password Gorilla

I’ve been using Password Safe for years, but moving to Linux and Mac I needed something else. Password Gorilla is compatible with Password Safe, so I can just move my password files from machine to machine easily and securely.

Mac The Ripper

Rips DVD images onto your hard disk, allowing them to be played by DVD Player while the disc stays at home. The other advantage is that the hard drive uses loads less power than the DVD drive, so you can watch at least a whole movie on a flight – on one battery.

EasyWMA

This great little tool takes a whole load of WMA files and converts them to MP3 and registers them with iTunes. A painless way to migrate from WMP.

VMWare Fusion

I run XP very occasionally and Ubuntu quite often, for testing under different OSs. Very handy. I sometimes develop under Ubuntu too; as Fusion can take snapshots, I can play around easily without wrecking my machine.

Cisco VPN Client and Shimo

Connects to work via a Cisco VPN – nice and easy, fast and reliable from pretty much anywhere. Shimo sits in the menu bar, allowing quick connections without having to open the Cisco client up.

Macports

An open-source project to make Linux open-source projects available on OS X, equivalent to the apt-get or yum package managers. The folks behind this do a great job of keeping the builds up-to-date and providing repositories. There’s Fink as well, and I’ve tried both. I found MacPorts better, but if I’m wrong please tell me!

Vienna

When I moved over to Mac I very nearly bought NetNewsWire for blog reading. Then I found Vienna, an open-source blog reader that is really good. One of the key things is the way it opens articles into tabs, keeping the feed handy when you’ve finished.

Ecto

Not free, but worth the £11 it cost me. This is a great little offline blog editor. Hopefully it might help me get a little more written here.

Kismac

Some people may have policy issues with this tool – it’s a wireless network discovery tool that also allows you to crack WEP and WPA keys. I’ve used it to secure my own network, but I also use it to find open hotspots when I’m out and about. It’s been moving about a bit, so if the link’s broken then let me know. It was hosted on a site run by its creator, Michael Rossberg, but since a change to German law outlawed tools like this he has handed it on.

Desktop Manager

I don’t understand why OS X doesn’t have multiple desktops built in, but as of 10.4.10 it doesn’t. This is the nicest of the desktop managers I found. It also works with Smackbook if you’re so inclined.

Stuffit Expander

This used to be distributed as part of OS X, apparently, but Smith Micro insist on you getting it from them now. Part of the process is giving them your email address, which they then spam until you tell them to stop. A useful piece of software, as stuff still comes in .sit form, but an annoying model employed by Smith Micro. They should read Cluetrain.

Creatures and Creatures 2

Finally – a little fun. The Creatures icons from Fast Icon are lovely and adorn my most useful folders.
