In one of his final speeches ahead of the general election campaign, Gordon Brown announced plans to offer Directgov’s content via an API ‘by the end of May’. And whilst other announcements in the same speech, such as the Institute of Web Science, have since faded or disappeared, the commitment to a Directgov API didn’t.
Bang on schedule, the API has been launched – and it looks quite marvellous. You’ll need to go here to register – but all they ask for is an email address. Once you’ve received confirmation and a password, you’re away.
Pretty much all Directgov’s content is available, and in various formats. So you can request (for example) articles by section of the website, or by ‘keyword’ (tag); or articles which have been added or edited since a given date, optionally restricted to a given section. You can pull down contact information for central government organisations and local councils. Data is made available, dependent on the query, in XML, JSON, Atom or vCard. (There’s also a browsable XHTML version, from which I’ve taken the screengrab above.)
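If the API negotiates formats by HTTP Accept header (an assumption on my part – the media types below are standard, but I haven't checked exactly how Directgov's negotiation works), a minimal sketch of picking the right header might look like this:

```python
# Map each format the API offers onto its standard media type.
# NOTE: this mapping is an assumption for illustration; only the
# media types themselves are standard.
ACCEPT_TYPES = {
    "xml": "application/xml",
    "json": "application/json",
    "atom": "application/atom+xml",
    "vcard": "text/vcard",
}

def accept_header(fmt):
    """Return the Accept header dict for a requested format."""
    try:
        return {"Accept": ACCEPT_TYPES[fmt]}
    except KeyError:
        raise ValueError(f"unsupported format: {fmt}")
```

You'd then pass the returned dict as the headers of your HTTP request, whatever client library you happen to use.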
This stuff isn’t child’s play; but to those who know what they’re doing – and despite a few successful experiments this morning, I don’t really count myself among them – the potential here is huge. Reckon you can do a better job of presenting Directgov’s content, in terms of search or navigation? Or maybe you’d prefer a design that wasn’t quite so orange? – go ahead. Want to turn it into a big commentable document, letting citizens improve the content themselves? – well, now you can.
There’s quite an interesting back-story to it all: I had a small matchmaking role in joining up the ideas people in Downing Street with the delivery people at Directgov. And whilst I’m told Directgov did have it in mind for some time this year, the Brown speech on 22 March rather forced the pace. Six weeks (so I’m told) from start to finish isn’t half bad. And whilst I’ve certainly had the odd dig at Directgov in the past, I’m happy to say a hearty ‘well done’ on this one.
It’s a potential game-changer in terms of how the content is presented to the public; but it may also have implications for those producing it. A quick look at the nearly 15,000 ‘keywords’ reveals, perhaps inevitably, a rather chaotic picture: bizarre and inconsistent choices, typos, over-granularity, and so on. My guess is, it’s not been used for front-end presentation before, so it hasn’t had much editorial attention. However, now the data is out there, it has to be taken seriously.
Responses
Any idea how I can extract the information in XML? All I’m seeing is the XHTML version.
Thx
Hi Colin,
If you’re talking about articles, either send the following header:
Accept: application/atom+xml
… or append “.atom” to the article URL, e.g. /id/article/DG_123.atom
You can get Atom representations from any URL that supports them: /articles/since, /search (provided you’re only searching for articles), and /id/article/{article-id}. In the latter case you’ll get a feed with a single entry.
Hope this helps,
Russell
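Russell’s two options could be sketched like so – note the base URL here is a placeholder I’ve made up for illustration, and only the /id/article/DG_123 path and the application/atom+xml media type come from the reply above:

```python
from urllib.request import Request

# Placeholder base URL for illustration; not the documented endpoint.
BASE = "http://api.example.gov.uk"

def atom_request(path, use_suffix=False):
    """Build a request for the Atom representation of a resource.

    Two equivalent options, per the reply above: append ".atom" to
    the URL, or send an "Accept: application/atom+xml" header.
    """
    if use_suffix:
        return Request(BASE + path + ".atom")
    return Request(BASE + path, headers={"Accept": "application/atom+xml"})
```

Either request should come back as an Atom feed; for a single article you’d get a feed containing one entry.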
Be great to see a follow-up piece in six months with usage statistics… if this just gives the info already available on the website in a different representation, will anyone really use it? It’d be easy enough to grab and convert the HTML if you really wanted to, and it’s fairly well laid out as it is.
Could see this model being good for (say) train timetables, traffic info, weather, health or many of the other sites that directgov sends you to. Not so sure about directgov itself tho’.