I’ve written before about something that would really set a rocket under the opening up of data: the vigorous pursuit of the useful stuff.

When we’ve been given access to transport data, wonderful things have happened. When we get real-time feeds, useful services follow hot on their heels. Let’s make those infrastructural building blocks of services available for free, unfettered use: the maps, the postcodes, the electoral roll, your personal health records.

(Ok, I didn’t mean the latter two. Or did I? It gets complicated. Still writing that post…)

Here’s a vision:

Roll forward to a time when the first priority of any service owner within the public sector is not “how shall I display the accounting information about the costs of this service” (or indeed “how shall I obfuscate the accounting information..?”).

No. Instead, it is: WHERE is the service? WHEN is the service? WHAT is the service? HOW DO I USE the service? (And maybe even: WHAT DO PEOPLE THINK about the service?)

Those basic, factual jigsaw pieces that allow any service to be found, understood, described and interacted with. From a map of where things can be found, to always-up-to-date information about their condition, and a nice set of APIs with which others can build ways in.

The genius of this type of thinking being that many of the operational headaches of current service delivery simply fall away. They are no longer a concern for the service owner. “Our content management system can’t show the information quite like that.” “We haven’t got the staff to go building a mapping interface.” “We’re not quite sure how we’d slot all that into our website’s information architecture.”

Pouf. No more. Gone. The primary concern becomes: is the data that describes this service accurate (or accurate enough–with some canny thinking about how it might then be written to and corrected), and available (using a broad definition of availability which considers things like interoperability standards).

Well, Paul. Nice. But what a load of flowery language, you theoretical arm-waver. Can’t you give a more practical example?

Well, reader. Yes I can.


That’s right. Public conveniences. A universal need. A universal presence. But where are they? When are they open? And what about their special features? Disabled access? Disabled parking? Baby-changing?

There’s actually a bit more to think about (once you start to think hard) than just location and description. But not a whole lot more. The wonderful Gail Knight has been banging this drum for a while, and has made some good progress, especially on things like the specification for data you’d need to have to make a useful loo finder service.
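To make the idea of “a specification for the data” concrete, here’s a minimal sketch of what one record in a loo-finder dataset might look like. The field names and the validation rule are my own illustration, not the actual published specification:

```python
from dataclasses import dataclass

# A hypothetical record for one public convenience. The fields below are
# illustrative guesses at what a loo-finder would need -- location, opening
# times, and the "special features" -- not the real specification.
@dataclass
class LooRecord:
    name: str
    lat: float
    lon: float
    opening_hours: str = ""     # e.g. "Mo-Sa 08:00-18:00"
    accessible: bool = False    # step-free / disabled access
    disabled_parking: bool = False
    baby_change: bool = False

    def is_valid(self) -> bool:
        """Basic sanity checks before a record enters a shared dataset."""
        in_range = -90 <= self.lat <= 90 and -180 <= self.lon <= 180
        return in_range and bool(self.name)

record = LooRecord("Market Square toilets", 52.2053, 0.1218,
                   "Mo-Sa 08:00-18:00", accessible=True, baby_change=True)
print(record.is_valid())  # True: coordinates in range, name present
```

Even a skeleton like this surfaces the governance questions below: who fills these fields in, and who fixes them when they go stale?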

Why’s this really interesting? Really, really interesting? Because having got a good idea of the usefulness of the data [tick] and a description of what good data looks like [tick] we then find all the other little gems that stand between A Great Idea, and a Service That Ordinary People Can Easily Use.

Who collects the data? Where does it get put? Who updates it? Who’s responsible if it’s wrong? How do people know they can trust it? Can people make money from it? (I could go on…)

Bear in mind that any additional burden of work on a local authority (who have some duties around the provision of public loos) probably isn’t going to fly too high in the current climate of cuts. Bear in mind also that anyone else who does a whole load of work like this is probably going to want something in return. Bear in mind also that “having a sensible standard” and “having a standard that everyone agrees is sensible” are two different things. Oh, and I need hardly add that much of this data will not currently be held in nice, accessible, extractable formats. If, indeed, it exists at all.

Two characters usually step forward at this point.

The first is the Big Stick Wielder (“well, they should just make councils publish this stuff. Send them a strong letter from the PM saying that this is now mandatory. That’s the standard. Get on with it. It’s only dumping a file from a database to somewhere on the Internet, innit?”) BSW may get a bit vague after this about precisely where on the Internet, and may, after a bit of mumbling, start talking about a national database, or “a portal”, or how Atos could probably knock one up for under a million… (and it’s usually at this point that some clever flipchart jockey will say “Why just loos? Let’s make a generic, EVERYTHING-finder! Let’s stretch out that scope until we’ve got something really unwieldy and massive on our hands”.) We know how this song goes, don’t we?

The second is the Cuddly Crowd-Sourcer (“forget all that heavy top-down stuff, man. We have the tools. We have some data to start from. Let’s crack on and start building! Use a wiki. Get people involved. Make it all open and free.”) CCS’s turn to go a bit vague happens when pushed on things like: will this project ever move beyond a proof-of-concept? how do we get critical mass? does it need any marketing? can people charge for apps that reuse the data and add value to it? how do we choose the right tools?

Both have some good points, of course. And some shakier ones. That’s why this is a debate. If it were clear-cut, we’d have sorted it by now, and all be looking at apps that find useful stuff for us. And it isn’t just a matter of WDTJ (Why don’t they just..?).

My suggestion? CCS is nearer the mark. Create a data collection tool which can take in and build on what already exists. Use Open Street Map as the destination for gathered data. Do get on with it.

Matthew Somerville’s excellent work to get an accurate data set of postbox locations and the Blue Plaque finder are obvious examples to draw inspiration from. Once in OSM, data can be got out again should the need arise. There will be a few wrinkles around the edges as app developers seek to make a return on what they build using the data. There may well be a case for publicly-funded development on top of the open data. But get the data there first. Make it a priority.
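To show that “data can be got out again” is more than hand-waving: OSM already tags toilets as `amenity=toilets`, and its (real) Overpass API will return them for any bounding box. The sketch below builds such a query and parses a response; the bounding box and the sample payload are illustrative stand-ins for a live call to an Overpass endpoint:

```python
# Sketch: pulling toilet locations back out of OpenStreetMap via the
# Overpass API. A live call would POST the query string to an endpoint
# such as overpass-api.de/api/interpreter; here we parse a toy response.

def build_overpass_query(south, west, north, east):
    """Overpass QL for all nodes tagged amenity=toilets in a bounding box."""
    bbox = f"{south},{west},{north},{east}"
    return f'[out:json];node["amenity"="toilets"]({bbox});out;'

def extract_toilets(response):
    """Flatten an Overpass JSON response into (lat, lon, tags) tuples."""
    return [(el["lat"], el["lon"], el.get("tags", {}))
            for el in response.get("elements", [])
            if el["type"] == "node"]

query = build_overpass_query(51.50, -0.15, 51.52, -0.10)

# A toy response in the shape Overpass returns, standing in for a live call.
sample = {"elements": [
    {"type": "node", "id": 1, "lat": 51.51, "lon": -0.12,
     "tags": {"amenity": "toilets", "wheelchair": "yes"}},
]}
print(extract_toilets(sample))
```

The point being: once the data lives in OSM, any app developer gets this route in for free, rather than each service re-collecting the same loos from scratch.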

Because if, after years of trying to make real-world, practical, open, useful services based on data, we continue as we are, with a pitiful selection of half-baked novelties and demonstrators of “what useful might look like, at some point in the indeterminate future”, then we’re badly letting ourselves down.

Basically, what I’m saying is: if we can’t get this right for something as well-defined and basic as loos, a lot of what we dream of in our hack-days and on our blogs about the potential of data will just go down the pan.



OK, so it seems it already exists. Or at least a London version of it anyway. Don’t you love it when that happens? Would be good to see how it progresses, and what its business model looks like. I like the way that data descriptions have been used e.g. “Pseudo-public” for that class of loos which aren’t formally public conveniences, but can easily be accessed and used – e.g. those in libraries, and cooperative shops. The crowd-update function looks good too.

In a way, this also shows up another headache that arises when spontaneous services start to appear: there is only one set of loos in the real-world. But each representation of them in an app or online service must go through the same process of ensuring accuracy and extent of coverage. Distributed information is always tricky to manage. Should we hope that several competing services make it into production, with the market determining which succeeds? Will that be the one with the best data? Or is there scope for an underpinning data service that feeds them all? (But then we court the central, mega-project problems again…)

Answers on a postcard, please.

The most expensive basic office PC in the world

The Parliamentary Public Administration Select Committee (PASC) has just published its report on the state of government IT.

It’s not a pretty story. It’s a long, messy, complicated one.

But in such stories we, the simple readers, look for things we can identify with. Costs that might mean something to us in our version of the real world, where we don’t try to process 10,000 claims a week, or track case histories numbered in millions a year.

Costs like the cost of a PC. A computer. The box on your desk.

And there’s a surprising figure quoted in that BBC report. Can it really be that a single office computer can cost £3,500? Read that again. £3,500.

No. Of course not. And it almost certainly doesn’t.

Charges made for desktop computing in the public sector are invariably composed of an element for the hardware, plus a rather greater element to cover installation, support… in fact quite a bit more. IT managers (disclosure: I used to be one in the public sector) can play quite a few tunes on this figure; using it to cover centralised development work, packages of software and all manner of other “hidden” costs.
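To make that composition tangible, here’s a purely illustrative breakdown showing how a headline “cost per PC” can end up being mostly services rather than hardware. None of these figures come from any actual government contract; they exist only to show the arithmetic:

```python
# Illustrative only: how a per-seat charge can dwarf the hardware inside it.
# Every number here is invented for the sake of the example.
hardware = 400  # a commodity desktop
per_seat_services = {
    "installation and imaging": 150,
    "helpdesk and support": 350,
    "software licences": 250,
    "centralised development recharge": 600,
    "security, patching and refresh provision": 300,
}

total = hardware + sum(per_seat_services.values())
print(f"hardware: £{hardware}, total per-seat charge: £{total}")
print(f"hardware share of the charge: {hardware / total:.0%}")
```

Which is exactly why the detail matters: without the breakdown, nobody can say whether the services element is prudent bundling or padding.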

But from a government with an avowed commitment to be the most transparent and accountable in the world we see a reluctance to disclose any of the detail behind that £3,500 figure. (In fact, according to this piece, it’s not “up to £3,500”–it’s higher still!)

Why should the cost of what is essentially a commodity component of a hardware/services package not be openly disclosed? I can only think that it must be because it is not a very nice answer. I can think of no other reason.

I mean, it couldn’t be because at no point during the procurement process did anyone think to check, or pin down, unsexy old commodity costs like that, surely?

You could argue that this figure looks absurd (it does). And that any reporting of it should sensibly clarify that it doesn’t tell the full story. That was certainly my first reaction.

But on reflection–given this intentional avoidance of transparency, even under Freedom of Information requests, let alone the spontaneous publishing of the detail that we were promised–I suspect the Cabinet Office, to name but one department, rather deserves to look a bit absurd.

Figures please.

Agile, waterfall and muppets

There’s been some very good debate of late about how to do it all better, with a heavy emphasis on the role that Agile methods might play. “It” in this case being not just government technology, but extending to policy development, communications and more.

There seems to be a massed rebellion against the substandard, the lame, the failed and the apparent deathgrip that a few large suppliers have on taxpayer billions. This is a very good thing, of course.

I found a couple of recent pieces very insightful (in different ways). Alistair Maughan kicked off a lot of debate with a provocative piece arguing that Agile was doomed to fail in a public service setting. This from one of the architects of arguably some of the biggest, most expensive and (inarguably) rigid ICT contracts imaginable. Adam McGreggor from Rewired State came back with evidence of Agile success, and a rebuttal of the good lawyer’s key arguments.

These arguments really seem to me to be based around an overarching sentiment that Agile weakens the ability to hold contractors (and their clients) to account for the delivery of fit-for-purpose products. Whether as a result of mismatched expectation, fuzzy requirements, incompetence or downright fraud.

Our defence against these–particularly the latter two–has traditionally been the much-lambasted public procurement process. As Anthony Zacharzewski put it with characteristic tact and style, we should have a care about throwing out the defences that these processes are designed to provide, for all the tales of woe that can also be laid at their door.

Stepping back a little from all this, I am left wondering what points are really being argued here. Is this about a methodology, or is this about ensuring that the right people are getting through the door? Amazing things can happen when the usual barriers are thrown down and the real, trusted experts are brought in. (I hope to see real-life evidence of some of this next week.)

After all, with amazingly insightful and competent people involved, respecting each other, listening to reason, taking risks where necessary, being flexible–and above any suggestion of conflicted interest or perverse incentive–even the most traditional requirement/specification/build approach is going to perform pretty well. And just imagine the horror of a perversely-motivated behemoth muscling in (let’s call them Anders*nAgile, say) with their Chicago-schooled ScrumBizAnalysts(TM) charging a couple of grand a day to dance in a slightly different style to the same old tunes.

If this is about trust–and I think a lot of it might be–then we need to be careful not to confuse “method” arguments with those that are more about “gatekeeping”. A great sage in the public sector IT world once said that the only HR rule you ever needed was “No muppets”. He had a point.