Saul's gov.uk plugin now on GitHub; anyone know Ruby?

I blogged earlier today about Saul Cozens and his ‘v0.1 alpha’ WordPress plugin for embedding gov.uk content via WordPress shortcode.
The great news is, Saul has uploaded it to a public repo at GitHub, meaning it’s now:

  • dead easy for you to download, and keep up to date
  • possible for you to fix, enhance and generally improve it

Saul has very foolishly kindly given me commit privileges on it, and I’ve done a bit of work on it this evening – a bit of error handling / prevention, adding basic parsing of gov.uk’s multi-page ‘guide’ content (including any videos!), and general housekeeping.
In other words, it’s now less likely to simply fail on your page. It’s likely to fail in more complicated ways instead. 🙂
There’s one substantial catch – and this is an appeal for help.
The platform’s content is marked up, it turns out, using an extension of the Markdown language, which they’re calling govspeak.
It adds a number of extra formatting options, to create things like information and warning ‘callout’ boxes. And whilst there are PHP-based libraries for Markdown, which we can bolt on easily, there’s nothing instantly WordPress-friendly for this new govspeak.
Yet. If you know a bit of Ruby, if you’ve got a bit of spare time, and if you want to help expand the reach of gov.uk’s content to charities, community groups, local government, etc etc… now’s your chance.
If you fancied one of those £73,000pa developer jobs, I bet it would look great on your application. 😉
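PS: for anyone tempted, here’s a flavour of the sort of thing required – a minimal, illustrative PHP sketch of the callout handling. I should stress the ^…^ and %…% markers are my reading of the govspeak source, and the class names are placeholders, not anything the plugin actually uses.

<?php
// Illustrative sketch only: convert two govspeak callout markers
// into HTML before handing the rest to a standard Markdown parser.
// Marker syntax is my reading of the govspeak source; the class
// names below are placeholders.
function govspeak_callouts( $text ) {
    // ^ ... ^ = information callout (assumed syntax)
    $text = preg_replace( '/\^(.+?)\^/s',
        '<div class="govspeak-info"><p>$1</p></div>', $text );
    // % ... % = warning callout (assumed syntax); NB a naive pattern
    // like this will trip over innocent percent signs in real content.
    $text = preg_replace( '/%(.+?)%/s',
        '<div class="govspeak-warning"><p>$1</p></div>', $text );
    return $text;
}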

New plugin embeds gov.uk forms within WordPress


Saul Cozens has done a wonderful thing. He’s written a WordPress plugin which allows you to integrate content from the new gov.uk site within WordPress pages. You add a WordPress shortcode, of the form:
[govuk url="https://www.gov.uk/vat-rates"]
It pulls in the corresponding JSON data – which is really just a case of adding .json on the end of the URL – and plonks it into your WordPress page. So far, so not tremendously complicated.
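To give a flavour of what’s happening under the bonnet – and this is my own simplified sketch, not Saul’s actual code – the guts of such a shortcode handler might look roughly like this (the ‘details’/‘body’ path is an assumption about gov.uk’s JSON structure; check a real .json response):

<?php
// Simplified sketch of a [govuk url="..."] shortcode handler.
// Not Saul's actual code: no caching, minimal error handling.
function govuk_embed_shortcode( $atts ) {
    $atts = shortcode_atts( array( 'url' => '' ), $atts );
    // gov.uk exposes each page as JSON: just append .json to the URL.
    $response = wp_remote_get( $atts['url'] . '.json' );
    if ( is_wp_error( $response ) ) {
        return '<p>Sorry, that gov.uk page could not be fetched.</p>';
    }
    $data = json_decode( wp_remote_retrieve_body( $response ), true );
    // The 'details' => 'body' path is an assumption about the JSON layout.
    return isset( $data['details']['body'] ) ? $data['details']['body'] : '';
}
add_shortcode( 'govuk', 'govuk_embed_shortcode' );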
Here’s the good bit. No, sorry, the fantastic bit. Not only does it plonk the text in, it can also plonk forms into place. And keeps them active as forms. Yes – actual, working forms.
My screenshot above is taken from my test server (no offence, Saul, but I’m not putting a v0.1 alpha plugin on my company site!) – but it shows me successfully embedding the Student Finance Calculator ‘quick answer’ form within my current blog theme, and sending data back and forth. Sure, the CSS needs a little bit of work… but Saul’s concept is proven.
Game on.

Directgov unveils syndication API


In one of his final speeches ahead of the general election campaign, Gordon Brown announced plans to offer Directgov’s content via an API ‘by the end of May’. And whilst other announcements in the same speech, such as the Institute of Web Science, have since faded or disappeared, the commitment to a Directgov API didn’t.
Bang on schedule, the API has been launched – and it looks quite marvellous. You’ll need to go here to register – but all they ask for is an email address. Once you’ve received confirmation and a password, you’re away.
Pretty much all Directgov’s content is available, and in various formats. So you can request (for example) articles by section of the website, or by ‘keyword’ (tag); or articles which have been added or edited since a given date, optionally restricted to a given section. You can pull down contact information for central government organisations and local councils. Data is made available, dependent on the query, in XML, JSON, Atom or vCard. (There’s also a browsable XHTML version, from which I’ve taken the screengrab above.)
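By way of illustration – and the endpoint, parameter names and JSON fields below are all placeholders of mine, to be replaced from the API documentation – pulling a list of articles into a PHP page is roughly this simple:

<?php
// Sketch: fetch Directgov articles changed since a given date.
// The base URL, parameters and field names are hypothetical -
// substitute the real ones from the API documentation.
$base  = 'https://api.example.direct.gov.uk'; // placeholder, not real
$query = http_build_query( array(
    'section'       => 'motoring',   // illustrative section
    'changed-since' => '2010-05-01',
    'format'        => 'json',
) );
$articles = json_decode( file_get_contents( "$base/articles?$query" ), true );
foreach ( $articles as $article ) {
    echo $article['title'] . "\n";   // field name assumed
}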
This stuff isn’t child’s play; but to those who know what they’re doing – and despite a few successful experiments this morning, I don’t really count myself among them – the potential here is huge. Reckon you can do a better job of presenting Directgov’s content, in terms of search or navigation? Or maybe you’d prefer a design that wasn’t quite so orange? – go ahead. Want to turn it into a big commentable document, letting the citizens improve the content themselves? – well, now you can.
There’s quite an interesting back-story to it all: I had a small matchmaking role in joining up the ideas people in Downing Street with the delivery people at Directgov. And whilst I’m told Directgov did have it in mind for some time this year, the Brown speech on 22 March rather forced the pace. Six weeks (so I’m told) from start to finish isn’t half bad. And whilst I’ve certainly had the odd dig at Directgov in the past, I’m happy to say a hearty ‘well done’ on this one.
It’s a potential game-changer in terms of how the content is presented to the public; but it may also have implications for those producing it. A quick look at the nearly 15,000 ‘keywords’ reveals, perhaps inevitably, a rather chaotic picture: bizarre and inconsistent choices, typos, over-granularity, and so on. My guess is, it’s not been used for front-end presentation before, so it hasn’t had much editorial attention. However, now the data is out there, it has to be taken seriously.

Cameron pledges to free our data

David Cameron has taken the Conservatives’ promises on availability of public data a few steps further, in principle at least, in a speech at Imperial College on taking ‘broken politics’ into the ‘post-bureaucratic age’.
‘In Britain today, there are over 100,000 public bodies producing a huge amount of information,’ he said; ‘Most of this information is kept locked up by the state. And what is published is mostly released in formats that mean the information can’t be searched or used with other applications… This stands in the way of accountability.’ Now I’m still not convinced that there’s that much deliberate, conscious locking-up of data; but certainly, the formats in which that data is eventually made available often have the same end result.
OK, so we’re broadly agreed on the problem… what’s the solution, Dave?

We’re going to set this data free. In the first year of the next Conservative Government, we will find the most useful information in twenty different areas ranging from information about the NHS to information about schools and road traffic and publish it so people can use it. This information will be published proactively and regularly – and in a standardised format so that it can be ‘mashed up’ and interacted with.
What’s more, because there is no complete list that can tell us exactly what data the government collects, we will create a new ‘right to data’ so that further datasets can be requested by the public. By harnessing the wisdom of the crowd, we can find out what information individuals think will be important in holding the state to account. And to avoid bureaucrats blocking these requests, we will introduce a rule that any request will be successful unless it can be proved that it would lead to overwhelming costs or demonstrable personal privacy or national security concerns.
If we are serious about helping people exert more power over the state, we need to give them the information to do it. And as part of that process, we will review the role of the Information Commissioner to make sure that it is designed to maximise political accountability in our country.

Now don’t get me wrong here, it’s great to have Cameron’s explicit sign-up to the principle of data freedom, standardised formats, the presumed right of availability, and a 12-month timeframe. But it’s not really anything that the other major parties aren’t already talking about – and in the case of the current government, bringing in the Big Guns to actually do something about. OPSI’s data unlocking service, for example, is nearly a year old, and effectively answers the ‘wisdom of the crowd’ idea. Now it hasn’t been a huge hit… but the principle is already established.
And then there’s his unfortunate choice of public sector jobs as an example of what they might do:

Today, many central government and quango job adverts are placed in a select few newspapers. Some national, some regional. Some daily, some weekly. But all of them in a variety of different publications – meaning it’s almost impossible to find out how many vacancies there are across the public sector, what kind of salaries are being offered, how these vary from public sector body to public sector body and whether functions are being duplicated. Remember this is your money being put forward to give someone a job – and you have little way of finding out why, what for and for how much. Now imagine if they were all published online and in a standardised way. Not only could you find out about vacancies for yourself, you could cross-reference what jobs are on offer and make sure your money is being put to proper use.

Er, isn’t Mr C aware of the recently-upgraded Civil Service Jobs website – with its API, allowing individuals and commercial companies to access the data in a standardised format (XML plus a bit of RDF), and republish it freely? The Tories have talked about online job ads since December 2006; maybe it’s time they updated their spiel.
So what does today’s pledge boil down to? On one level it’s just headline-grabbing, bandwagon-jumping, government-bashing, policy-reannouncing rhetoric. But that’s not necessarily a bad thing. If all the work is going on already, but it isn’t well enough known, or isn’t proving as effective as it could/should be, maybe we should be welcoming any headlines the subject manages to grab. And if Cameron’s Conservatives do take power at the next election, and truly believe in what was said today, it would be the easy fulfilment of a campaign promise to yank these initiatives out of their quiet beta periods and into the limelight.

Civil Service jobs API: five years in the making

Five years ago – to the very minute, as it happens! – I was working on a proposal to put to someone at the Cabinet Office. I was still working at ONS, and was trying to think of a clever way to handle our job adverts. We were obliged to post details of all vacancies into the (very recently departed) Civil Service Recruitment Gateway website. So I thought, what if that site could feed our vacancies back to us?
I approached the Cabinet Office with a proposal to not only help them spec up the work, but to pay for it. I’ve still got the PowerPoint slides I produced for the ‘pitch’.
Five years ago, this was truly visionary stuff – in effect, an open API on all government jobs, way beyond anything that had been done before. And even though I’d documented the whole thing, even though I was putting up the money myself for them to do it, to build a function for everyone to use freely… it never happened. An all too familiar story. So it’s especially amusing to see Steph’s news, exactly five years on, of Civil Service jobs, your way.

Given the enduring popularity of job search online, this is an exciting development for a major government data set. It should provide something which third party developers can use to derive valuable commercial services to their customers, as well as helping to ensure Government broadens the reach of its recruitment at lower cost, facilitating the creation of innovative new services based on public data. With luck, it’s the business case for APIs to government data that we’ve been looking for.

Now admittedly, my proposal was a modest affair based on a straight-down-the-line RSS feed. There were few specific references to XML, never mind API, and certainly not RDFa. But reading Steph’s piece, and the ensuing comments, I can see a direct line between my 2004 proposal – which, let’s be honest, is ancient history in online terms – and today’s unveiling. If you ever wanted a precise metric for how slowly government moves, there it is.
Regardless of the history, it’s an excellent piece of work by the Cabinet Office team; and – I hope – having done the donkey work to set it up, someone is ready to take it to market, and make people aware of what the service can do for them. Some relatively straightforward PHP or ASP would be enough to put an automated list of all current vacancies on each department’s own homepage; perhaps the Cabinet Office team could go a step further, and deliver it via a Javascript-to-PHP call (as the LibDems do for their ‘campaign buttons’), making it child’s play for the recipient site. The requirement to obtain an API key doesn’t help their cause, though.
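To put some flesh on that suggestion, a departmental webmaster could drop in something like the sketch below. The feed URL, the ‘key’ parameter and the XML element names are all placeholders of mine – the real ones come from the API documentation.

<?php
// Sketch: list current vacancies on a departmental homepage.
// URL, query parameters and element names are placeholders.
$url = 'https://jobs.example.gov.uk/api/vacancies?department=HMRC&key=YOUR_KEY';
$xml = simplexml_load_file( $url );
if ( $xml !== false ) {
    echo '<ul>';
    foreach ( $xml->vacancy as $vacancy ) { // element name assumed
        printf( '<li><a href="%s">%s</a> - %s</li>',
            htmlspecialchars( $vacancy->link ),
            htmlspecialchars( $vacancy->title ),
            htmlspecialchars( $vacancy->salary ) );
    }
    echo '</ul>';
}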

Barely a third of Tweeting is via the website

Some fascinating data published on TechCrunch reveals the usage patterns behind Twitter. Less than a third of updates (I think that’s what they’re measuring?) – just 32% – are posted via the web interface. The two leading Adobe Air-based clients, Tweetdeck and Twhirl, account for 23% between them; Twitterfeed‘s automated RSS postings put it fourth, ahead of (wow!) a paid-for iPhone app, Tweetie. And although Twitter doesn’t seem an entirely natural fit with most Blackberry users, Twitterberry is at no. 6.
I see all sorts of implications in this ranking: the fact that a clear majority of use of ‘a website’ isn’t via the web, showing what good things can happen when you offer an API; an endorsement of Adobe Air’s cross-platform approach, coupled (potentially) with Air’s relative friendliness to the less technical, more creative developer; and the fact that people really are prepared to pay actual cash for something like Tweetie, when there are perfectly decent alternatives (like Twitterfon or Twitterrific) in the iPhone app store. (And for the record: two of the top five are UK-based – Tweetdeck and Twitterfeed.)

API promised for 2011 Census data

Chances are, you missed last month’s publication of the Cabinet Office’s white paper on the 2011 Census. ‘Modern times demand modern approaches,’ declares Sir Michael Scholar, chair of the UK Statistics Authority: you’ll be able to complete your census form online, and ‘all standard outputs will be publicly accessible online, and free of charge, from the National Statistics website’ (whatever that is – as I understand it, the name disappeared in the UKSA rebranding).
The Census represents a marvellous opportunity. We’re now many years into the post-web world, and online is now the main distribution channel for data. We’ve got several years to learn from the best practice of others, be they fellow statistical organisations around the world, or heavy-duty data disseminators like the financial markets. There’s no issue as regards a business model: the commitment to free availability has already been made. It’s an open goal.
Unfortunately, I probably wrote something almost identical to the preceding paragraph seven years ago, when I started working for ONS as Web Editor in Chief, full of optimism at what magic we could weave with the 2001 census data. It didn’t last; there was virtually zero consideration of public usage in the output plans, and I couldn’t persuade the key people of the cultural shift happening outside. There were some blazing rows. I left ONS in 2004; it says something that the website I built as a six-month stopgap in 2002 is still their main web presence – reskin aside, almost exactly as I left it.
A quick skim through the white paper provides little reason to restore my optimism. It has more to say about printed books of preformatted tables than it does about electronic methods – there’s no fleshing-out of what ‘online dissemination’ might mean. Instead, there’s a commitment to produce CDs and DVDs… seriously? in 2012?
But there may yet be hope. Back in December, ONS quietly launched a 2011 UK Census Output consultation – based, remarkably, on a Wiki platform. They’ve published initial survey findings from 500+ respondents, half of whom were in government; it’s a bit disappointing to see so little input from potential new customers (only 2%), as opposed to the ‘usual suspects’. Yet a clear majority of this normally conservative (small ‘c’) audience said they would be happy with electronic output alone.
And hallelujah! – elsewhere on the wiki there’s even mention of an intention ‘to support a variety of electronic dissemination options through the use of an internet-based API [said on another page to be ‘publicly-available’] that can access the full range of aggregated Census statistics’. There’s also a link to a list of the API calls to be offered – but it ‘does not (yet) exist’. Many a slip twixt cup and lip, as they say… but they’re undoubtedly talking the right talk here, and perhaps that’s all we can ask at this stage.
My only plea is that they remember the huge potential value for new users. Things have moved on dramatically since 2001; I can think of countless websites which would adore a system they could hook into, with fantastic potential benefits to ordinary web users. The wiki’s list of planned response formats betrays the ‘insiders first’ instinct again: nothing your average masher will be familiar with. Consult your community by all means, guys; but recognise there’s an even wider potential community these days.

  • PS: It’s not the Census group’s first venture into social media: two years ago, they took part in the Hansard Society’s Digital Dialogues initiative, with a blog centred on consultation on small area geography policy. Ten blog posts in three months (over Christmas) isn’t great, and the Hansard Soc was politely critical of the blogger’s failure to engage with the readership, and the organisation’s failure to take the initiative forward. Interestingly, the site has been wiped from the record books: the Hansard Soc’s graphics have been replaced by Flickr errors, and the onsgeography.net domain name appears to have lapsed. There’s always web.archive.org though… 🙂

Puffbox's Project MyTube: hooray for APIs


A few days ago, I bought an iPod Touch; and I can finally understand the fuss. I didn’t really want it; I’m not short of portable media players, and my Android phone gave me a perfectly good touchscreen to play with. But I’m very excited about mobile-optimised web interfaces at the moment, and felt I needed an iPod/iPhone to do some proper testing (as opposed to educated guesswork).
I’ve been especially blown away by the quality of videos streamed from YouTube. For example, I’m a big ice hockey fan – and the NHL (the big league in North America) is kind enough to put full highlights of every game on YouTube. But as you can probably guess, a flying puck isn’t easy to see in a heavily pixellated non-HD video stream. It’s a completely different story on the iPod Touch – crystal clear.
But – unless I’ve missed it? – there’s no easy way in the built-in YouTube applications, either on the iPod or Android, to log into your YouTube account and see your various ‘subscriptions’. On the face of it, it’s an extraordinary omission. Subscriptions are effectively your personalised EPG, allowing you to cut through the chaos, and get to the content you want. Isn’t that exactly what you want/need? So I did it myself.
If you go to mytube.puffbox.co.uk, you’ll see an intro page with a dropdown list of various YouTube channels: these are being called in dynamically via Javascript, from the puffboxtv account on YouTube, courtesy of Google’s astonishingly comprehensive API. (I got the list of HMG YouTube channels from Steph’s digitalgovuk catalogue!) When you choose a channel from the dropdown, it makes a further API call, drawing a list of the last 10 videos posted to that account, with upload dates and thumbnails. Click on a title, and you’ll see the clip description, plus an embedded player. On a normal browser, the clip will play on the page; on an iPhone/iPod or Android unit, it’ll play in the native YouTube app, full-screen. The ‘back’ button in the top left corner (not the browser back button!) returns you to the list of videos.
That’s pretty cool… but here’s the really clever bit. If you have made your YouTube subscriptions publicly visible, you can call your own favourite channels into the dropdown – go to http://mytube.puffbox.co.uk/?account=yourname and you should see a familiar list. I should stress, my site never holds any personal information: it’s all coming in dynamically from YouTube.
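The live site does all of this client-side in Javascript, but the same feeds are just as easy to consume server-side. Here’s a rough PHP equivalent of the ‘last 10 videos’ call – the GData feed URL is the pattern as I understand it, and the JSON field names are worth double-checking against Google’s documentation:

<?php
// Sketch: last 10 uploads from a YouTube account via the GData feed -
// broadly the same call mytube.puffbox.co.uk makes in Javascript.
$account = 'puffboxtv';
$url = 'http://gdata.youtube.com/feeds/api/users/'
     . urlencode( $account ) . '/uploads?alt=json&max-results=10';
$feed = json_decode( file_get_contents( $url ), true );
foreach ( $feed['feed']['entry'] as $entry ) {
    // GData's JSON output wraps text values in '$t' fields.
    echo $entry['title']['$t'] . ' - ' . $entry['published']['$t'] . "\n";
}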
As with most of my experimental stuff, it comes with zero guarantees. There are rough edges, and it could be a little prettier. But here’s the important point: I knocked this together in 24 hours*, thanks principally to (a) Google’s wonderful API and (b) the free jQuery Javascript library to process the responses.
Coincidentally, as I was putting the finishing touches to the site, I came across Charles Arthur’s piece in today’s Guardian about the Home Office crime mapping problems – which concluded thus:

The Free Our Data campaign thinks the practices outlined in the memo do not go far enough: what external developers especially are looking for is pure data feeds of events, rather than static maps… Ironically, the police’s efforts to meet the deadlines might be better aimed at producing those data feeds with time, location and crime form data which could then be used by external developers – who would be able to produce maps more quickly than in-house efforts.

I couldn’t agree more – and I hope my efforts over the last 24 hours prove the point. I’m amazed by how easy (relatively speaking) such things are becoming. The common thread across all the really successful web 2.0 properties is the availability of an API, allowing developers to work their own unique magic. As I’ve said before… Government needs to recognise this, and get in the API game. Not just as a ‘nice to have’, but as an absolute priority.
* 24 hours? Well, put it this way. It was working perfectly in Firefox, Safari (desktop and mobile), Chrome, Android… but not IE. It’s taken me the best part of a day to make it work in IE, and I still can’t really understand what I’ve done differently to finally make it work. Opera’s acting really strangely, but I’ve spent long enough playing with it for now.

'Linking here' lists with Google Feed API

Time for some tech talk. A few weeks back, I wrote about Google’s new AJAX Feed API. Having played with it last week on behalf of a client, and having liked what I saw, I decided to implement it myself.
If you’re reading this on the puffbox.com website itself, you might see a list in the sidebar headed ‘Who’s linking here?’. (If not, see here for an example.) It’s something I introduced a while back, powered by feeds from Google’s Blogsearch engine, and processed using the excellent SimplePie. But I’ve now switched over to doing it client-side, through the Google API.
Once the blog post finishes loading, the Javascript calls in the RSS feed (actually, it’s Atom format) from Google. If it finds any blogs linking to that specific post, it writes a title into one previously empty DIV, and a disclaimer into another. In between, it generates up to 10 <LI> list items, each active as a link to the linking blog.
If you want to see how it’s done, and copy the idea for yourself, a quick glance at the page’s source code will reveal how straightforward it was. (You’ll need your own API key, obviously.)
Why go client-side? It’s less effort for the server to process; and it doesn’t build up a mass of cached feeds. It should also be marginally more secure on paper, which is important to some clients. And whilst the function is dependent on Javascript being available, it’s dead easy to offer a ‘noscript’ alternative – a link to a pre-formatted Google search query. It’s nowhere near as slick, but it ensures the information is still available to those without Javascript, so it passes accessibility requirements (W3C guidelines, checkpoints 6.3 and 11.4).
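For the curious, the server-side version this replaces was essentially as follows – a sketch from memory, assuming SimplePie and a Blogsearch ‘link:’ query, so the exact feed URL and parameters may differ from what I actually ran:

<?php
// Sketch of the old server-side approach: ask Google Blogsearch for
// blogs linking to a given post, and render the results via SimplePie.
require_once 'simplepie.inc';

$post_url = 'http://puffbox.com/2008/06/02/example-post/'; // illustrative
$feed = new SimplePie();
$feed->set_feed_url( 'http://blogsearch.google.com/blogsearch_feeds?q='
    . urlencode( 'link:' . $post_url ) . '&output=atom' );
$feed->init();
echo '<ul>';
foreach ( $feed->get_items( 0, 10 ) as $item ) {
    printf( '<li><a href="%s">%s</a></li>',
        $item->get_permalink(), $item->get_title() );
}
echo '</ul>';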

Stop what you're doing and sign up

I’m not sure I need to waste my time explaining why you need to go to TheyWorkForYou and sign up to mySociety’s campaign to Free Our Bills – or rather, to have Parliamentary data marked up in mashup-friendly XML. Just compare ‘proper’ Hansard to TheyWorkForYou, and imagine the same process being done on all Parliamentary paperwork.
You may or may not be interested in the intricacies of XML parsing, or even in the uglier workings of the Houses of Parliament. But the fact is, TheyWorkForYou has become a living case study for what we want from e-government. It’s the best-practice example everyone quotes. And if they can persuade/force Parliament to work with them, it sets a valuable precedent for everyone else.
Quick update: Tom Steinberg has been in touch to say it’s not a petition, it’s ‘an action list, proper online campaign style’. Duly noted.
And when you’re done there… log into Facebook (come on, you remember) and join the campaign to allow clips of Parliament on YouTube. Useful in itself, but helpful to MPs who want to show their constituents what they’re up to. My thanks to Lynne Featherstone for the tipoff.