Directgov starts seeking feedback


To their great credit, Directgov have added a feedback box to the bottom of nearly every page on the site, asking how useful you find the page contents. Responses are anonymous, and there’s some nice JavaScript to aid usability: a jQuery-based character count and validation check prior to submission. The language is maybe a bit formal, and it looks like the rating is mandatory without actually saying so; but I’m nitpicking.
My suspicion is, it’s actually a pretty modest email form: nothing particularly advanced. But it’s a significant step forward, and perhaps, a step closer to the Tories’ stated vision of government websites as ‘places where people can come together to discuss issues and solve problems’. (And don’t forget, Directgov’s API has been promised ‘by the end of May’.)

Live text commentary in WordPress

I don’t usually blog about projects until after they’ve happened; but I’m going to make an exception for something that’s going to happen later today.
For just about a year, we’ve been looking after the website for The Big Care Debate, the government’s large-scale consultation on the funding of long-term social care. We’ve had a great relationship with the team at the Department of Health, and we’ve done some fun, innovative and highly effective things: commentable documents, Facebook activity, online questionnaires, even user-submitted photo galleries.
The consultation process is reaching its conclusion, with the publication of the government White Paper on the subject. (For those who don’t know the jargon: a ‘green paper’ presents options or starts a debate, often leading to a ‘white paper’, which is a declaration of government policy.) Oh, and as you might have noticed, there’s an election on the cards, and we’ve already had a few skirmishes on this very subject.
When we first met to discuss plans for the White Paper publication, one idea was to ‘live tweet’ the launch event on Twitter; but I’ve never been a fan of sudden, frantic bursts of tweeting by one of the hundred-odd accounts I follow. (And indeed, I’ve ‘unfollowed’ certain people for doing precisely that.) So we reworked the plan, taking as our inspiration the undoubted success of the BBC’s ‘live text commentaries’ – seen at its best on the sports site on a Saturday afternoon, but used with increasing frequency on the news site, for set-piece events like PMQs.
So over lunchtime, we’ll be supplementing our live video stream with a live text commentary – using ajax and some custom WordPress wizardry. It’s a very simple concept at heart. A live commentary is just a chronologically-presented series of short text chunks… just like a list of comments on a post. So that’s what we’re going to use.
The site editor will be entering his comments via a hidden, ajax-powered comment form: and, as with any WordPress comment, he’ll benefit from features like automatic text formatting, including conversion of URLs into clickable links. Meanwhile, users will see each new comment appended to the bottom of the list, with a cute colour highlight, but without the need for a full page refresh.
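For the technically curious, here’s a rough sketch of the server side of that idea – a WordPress ajax handler which returns, as JSON, any approved comments newer than the last one the browser has seen. The function and action names here are purely illustrative; this isn’t the exact code we’re running.

    <?php
    // Sketch only: each 'commentary entry' is an ordinary WordPress comment
    // on the post, and the front end polls this handler for anything new.
    add_action( 'wp_ajax_live_commentary', 'live_commentary_fetch' );
    add_action( 'wp_ajax_nopriv_live_commentary', 'live_commentary_fetch' );

    function live_commentary_fetch() {
        $post_id = isset( $_GET['post_id'] ) ? absint( $_GET['post_id'] ) : 0;
        $since   = isset( $_GET['since'] ) ? absint( $_GET['since'] ) : 0; // last comment ID seen

        $comments = get_comments( array(
            'post_id' => $post_id,
            'status'  => 'approve',
            'order'   => 'ASC',
        ) );

        $fresh = array();
        foreach ( $comments as $comment ) {
            if ( $comment->comment_ID > $since ) {
                $fresh[] = array(
                    'id'   => (int) $comment->comment_ID,
                    'time' => mysql2date( 'H:i', $comment->comment_date ),
                    // The comment_text filter gives us wpautop() and make_clickable()
                    // for free: the automatic formatting mentioned above.
                    'html' => apply_filters( 'comment_text', $comment->comment_content ),
                );
            }
        }

        header( 'Content-Type: application/json' );
        echo json_encode( $fresh );
        die();
    }

A few lines of jQuery then poll that handler every few seconds, append anything new to the bottom of the list, and apply the colour highlight.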
Naturally, this means a much increased workload for the web server, particularly if – as we expect – we attract a sizable audience for what looks like being front-page news. WordPress and its plugin collection can do a lot to help; but we’ve taken a few additional server-level steps to ensure all runs smoothly. All the credit for this goes to my regular collaborator Simon Wheatley, who knows a thing or two about these things, thanks in part to his work for Stephen Fry.
There are plenty of options for running live text commentaries like this, such as the excellent CoverItLive. But there are a number of benefits to running it within WordPress: not least the fact that afterwards, you’ll instantly have a bullet-point summary of the key points at your disposal. And as we’ve been building the functionality, we’ve been getting quite excited at other ways we could use it.
If you’re at a keyboard at lunchtime, please drop by, and let me know how you find it.

Code your own BBC News homepage

The BBC has announced plans to switch off its low-graphic websites:

The low graphics version of the site was designed as a low bandwidth alternative to the full website at a time when most users of the site were using slow dial-up connections. Now, most of our users are on much faster broadband connections and as a result, the percentage of users of this service has steadily declined to a current level around 2%.

Fair enough, I suppose. Except that I was one of those 2% of users. Why? Because I had it set to load in a Firefox sidebar. With one click of a browser button, I got my instant news fix. I used it constantly throughout the day.
For obvious reasons, the full-size homepage doesn’t render especially well in a 200px-wide space; but the low-graphics version did pretty well. Not perfect, but pretty good.
For a few days now, I’ve tried following the BBC’s advice, by switching to the mobile interface. But it just didn’t do it for me. So I’ve taken matters into my own hands, and spent the last half hour ‘coding my own’. (And most of that time was spent just making it look a little prettier.)
It’s a fairly simple PHP/RSS thing, with a dash of jQuery thrown in. I fetch the BBC’s homepage RSS feed via SimplePie, dress it up all pretty, then run a very quick jQuery routine to ‘zebra stripe’ the stories for easier reading. For each story, I give myself the headline, timestamp, summary – and the thumbnail image, something the low-graphic version couldn’t give (beyond the top three items).
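The guts of it amount to little more than the following – a sketch rather than the exact code, with the styling left out, and the feed URL and cache directory shown purely as examples:

    <?php
    // Sketch: pull the BBC News front-page feed via SimplePie and print each
    // story as a list item, with thumbnail, headline, timestamp and summary.
    require_once 'simplepie.inc';

    $feed = new SimplePie();
    $feed->set_feed_url( 'http://feeds.bbci.co.uk/news/rss.xml' ); // illustrative URL
    $feed->set_cache_location( './cache' ); // must be writable; saves hammering the feed
    $feed->init();

    echo '<ul id="stories">';
    foreach ( $feed->get_items() as $item ) {
        // Media RSS thumbnail, where the feed provides one
        $thumb = $item->get_item_tags( 'http://search.yahoo.com/mrss/', 'thumbnail' );
        $src   = $thumb ? $thumb[0]['attribs']['']['url'] : '';

        echo '<li>';
        if ( $src ) {
            echo '<img src="' . htmlspecialchars( $src ) . '" alt="" /> ';
        }
        echo '<a href="' . $item->get_permalink() . '">' . $item->get_title() . '</a> ';
        echo '<span class="time">' . $item->get_date( 'H:i' ) . '</span>';
        echo '<p>' . $item->get_description() . '</p>';
        echo '</li>';
    }
    echo '</ul>';

The ‘zebra striping’ is then a one-liner in jQuery, adding an alternate class to every other list item.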
Why am I telling you this? Because it’s a perfect case study for the ‘raw data now’ concept. The BBC supplies the data, I bang out a hasty rendering routine based on free code… and I’ve got the service I want, regardless of what they want to do themselves.
It’s running in my development web space; I’ve got no intention of making it public. But if you really think it would be useful for you, let me know, and I’ll maybe share the address details.

Telegraph calls No10 site 'a technical mess'

Last night, the Telegraph published a piece by their head of audience development, Julian Sambles, accusing the Downing Street website of being ‘a technical mess’. This damning conclusion was based on the following criticisms:

  • It wasn’t in the top search results for a few randomly-selected Budget-related search terms.
  • It doesn’t have a ‘link canonical’ tag in its code header.
  • It has a pretty curious set of ‘meta keywords’ – including ‘piercings’, ‘tattoos’ and ‘polish armed forces’. (Update: apparently not random at all – see comment below.)
  • The page templates aren’t especially well structured for SEO purposes.
  • It has inconsistent names on various external sites like Twitter, YouTube and Flickr.

None of which, to my mind, constitutes a ‘technical mess’. So it’s interesting to see that, this morning, the headline has been watered down to one merely mocking the keyword selection.
Some of the criticisms are valid. The site could do a few simple things to improve its SEO standing, probably taking barely a few minutes. And yes, I have trouble remembering which specific configuration of ‘downing’ and ‘st(reet)’ it uses to make up its various usernames. But some of the accusations are way over the top, and some don’t stand up at all.
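To give a sense of how small those simple fixes would be: the Number10 site runs on WordPress, so the missing canonical tag, for instance, could be handled with a few lines in the theme’s functions.php. (A hedged sketch, obviously not their actual code – and newer versions of WordPress do something similar for single posts out of the box.)

    <?php
    // Illustrative only: output a rel=canonical link for each post or page.
    function sketch_rel_canonical() {
        if ( is_singular() ) {
            echo '<link rel="canonical" href="' . esc_url( get_permalink() ) . '" />' . "\n";
        }
    }
    add_action( 'wp_head', 'sketch_rel_canonical' );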
The ‘meta keywords’ criticism, for example. In the old days, search engines respected the keywords you entered in your page header as a guide to the page’s substance. But then people, possibly working in the field of ‘audience development’, began abusing them. So what does Google, with 90% of the UK search market, think about meta keywords?
Let’s ask Google’s Matt Cutts, shall we?
His answer: they don’t use it. ‘Basically not at all… Even in the least little bit.’ Not worth spending much time on then, I’d say.
And then there’s the failure to rank highly for certain budget-related search terms. But would you want or expect Number10 to be a high-ranking result, when it has very little material on the subject – and isn’t the ‘lead site’ on the subject, from either a policy (HM Treasury) or a citizen-facing (Directgov) perspective?
If you search Google right now for ‘budget’, you’ll get both HMT and DG in the top few results. That’s the appropriate outcome.
I’m not saying there aren’t improvements I’d want to make to the Number10 site. As regular readers may know, I contributed some advice in the early days of their migration to WordPress – but I didn’t have any hands-on involvement in the build itself. If I had, for the record, certain things would have been done differently.
PS: Thankfully, someone at the Telegraph saw sense, and dropped the ‘technical mess’ line. Otherwise I’d be forced to point out that their article page scores 88 HTML validation errors in the W3C checker, compared to the Number10 homepage’s zero.

Tories' commentable Budget


Following the apparent success, back in December, of presenting a leaked draft of the government’s IT strategy for reader comments, the Conservatives have repeated the trick by laboriously scanning every page of the Budget book, and presenting them on commentable WordPress pages.
They aren’t asking for email addresses on comments, and aren’t publishing the comments when they’re submitted – citing a desire to protect the ‘anonymity [of] those who have sensitive insights’. In effect, it turns WordPress into an inbox-filtering application: recording people’s submissions against the page to which they relate, but not really doing anything more than that. Nothing wrong with that approach; it’s just a little curious.
Again, I applaud the Tory team’s ingenuity here. But… writing on the Conservatives’ Blue Blog yesterday, Jeremy Hunt said:

We will be publishing it online in an easy-to-read format (not like the enormous PDF documents so beloved of the Treasury) as soon as possible after its release.

Now, there’s a lot wrong with publishing stuff purely in PDF files – and there’s a lot right about doing this site in WordPress. But PDFs have several huge benefits which this image-based site can’t match: copy-and-paste, search, screen-reading, search engine indexing, and so on. Plus, without wishing to be too pedantic: if ‘enormous’ is a reference to file sizes, the Treasury’s 3.5MB PDF is considerably smaller than 230 JPG images of roughly 150kB each – which adds up to something like 34MB, nearly ten times the size.
Leaving aside the technicals, this is a very interesting initiative on several levels. There’s the ‘crowdsourcing’ aspect, of course; but there’s also an underlying message – that Labour will be trying to sneak the nasty things through in small print on page 186. They do, after all, have a certain amount of form on this.
So is this a declaration that under the Conservatives, they’ll tell it to us straight – good and bad? I sincerely hope so.

Number10's iPhone app


I finally gave in, and upgraded the company’s iPod Touch for the purposes of testing the brand new iPhone app from 10 Downing Street. And then, as I spent an hour randomly resetting and restoring, I promptly remembered why I hadn’t upgraded for so long. Anyway…
On a technical level, the Number10 app is actually quite modest – just a pretty front end on its website’s RSS feeds, and the feeds from its YouTube, Flickr and Twitter accounts. But it’s really very pretty – and that kind of thing matters in the world of the iPhone. It feels like a perfect blend of native iPhone interface and the parent website’s house style.
It follows, coincidentally I’m sure, in the wake of recently-launched apps by both Labour and the Conservatives – and I’d say it’s the best of the three. The Tories’ somewhat dazzling effort may have more glitz, but the Number10 app feels better in terms of information delivery: and I like its one-click sharing button, sending details to your Twitter and Facebook chums. (It’s quite surprising that neither the Labour nor the Tory app has one.)
Not entirely sure who it’s aimed at, or what specific purpose it serves, other than providing an iPhone-optimised interface on those various web presences: but the same criticism can be levelled at many such ‘corporate’ iPhone apps.

Brown's big picture of the digital future


Gordon Brown’s speech, describing a vision of Britain’s digital future, is stirring stuff, with its pledges to make Britain a world leader in terms of digital jobs, public service delivery and ‘the new politics’.
The announcements and commitments came thick and fast – from the £30m to create an Institute of Web Science, to be headed by Tim Berners-Lee and Nigel Shadbolt, to confirmation of the release of ‘a substantial package of information held by Ordnance Survey … without restrictions on reuse’, to a ‘Domesday Book for the 21st century’ listing all non-personal datasets held by government and arm’s-length bodies, to an iPhone app for Number10, to an API on Directgov content ‘by the end of May’.
And then there’s MyGov – ‘a radical new model making interaction with government as easy as internet banking or online shopping.’ On the face of it, this seems – finally – like recognition that citizens’ expectations have jumped ahead of government’s delivery over the last decade. There wasn’t much detail in the speech, but it sounds to me like the first hint of Vendor Relationship Management, where the citizen holds his/her own data, and shares it upwards with suppliers. That’s certainly where the Times seemed to be pointing on Saturday, when it described the creation of a ‘paperless state’:

The aim is that within a year, everybody in the country should have a personalised website through which they would be able to find out about local services and do business with the Government. A unique identifier will allow citizens to apply for a place for their child at school, book a doctor’s appointment, claim benefits, get a new passport, pay council tax or register a car from their computer at home. … Over the next three years, the secure site will be expanded to allow people to interact with their children’s teachers or ask medical advice from their doctor through a government version of Facebook.

As I’ve written here before, I’m convinced this has to happen at some point. We build up personal profiles on Facebook, and allow Amazon and Tesco to analyse our purchasing habits – in return for much improved service. I just don’t think it’s sustainable on any level for government to continue to demand that we fill in lengthy forms, whether on paper or online, to get what we’re due.
But of course, that’s a huge government IT project, isn’t it? And by definition, that’s doomed? Well, there’s a blink-and-you’ll-miss-it line which suggests things might be changing:

This does not require large-scale government IT Infrastructure; the ‘open source’ technology that will make it happen is freely available. All that is required is the will and willingness of the centre to give up control.

Blimey: recognition that open source is ready to deliver the most visionary of government policy.
And with my WordPress hat on – do I ever take it off? – I can’t help smiling at his pledge that ‘no new [government] website will be allowed unless it allows feedback and engagement with citizens themselves.’
Of course, the speech has to be seen in context. Without ever mentioning the Tories, it was quite unashamedly party political in places: setting out the differing views on broadband expansion, and trying to match or trump Tory pledges on data transparency. It was also the speech of a Prime Minister staring at a huge public debt problem: and with neither tax rises nor spending cuts being palatable, that really only leaves technology-driven efficiency savings.
And it’s the context that’s stopping me getting too excited about it all. We’re probably a fortnight away from government pulling down the shutters for a month. In six weeks, Brown may or may not be Prime Minister, and may or may not be in a position to deliver on these promises.
Comparisons with the Tories’ technology manifesto are inevitable. In this speech, Brown blended small-scale but symbolic measures, like a Directgov API within weeks, with big-picture principles such as VRM. It’s both shorter- and longer-term than the Conservative document – attempting, perhaps, to outflank Cameron, Maude, Hunt et al on both sides at once.
But whilst they may differ on certain matters of implementation, both are heading – rushing actually – in the same basic direction. On the face of it, no matter who wins, we can’t lose.

SEO as a political campaigning tool

I’ve mentioned this before, but it still brings a smile to my face.
One consequence of the rebuild of Lynne Featherstone MP’s website, which we launched last September, has been a marked improvement in Google performance. And it’s arguably my greatest personal triumph that if you search Google for ‘haringey council’ – the top suggested search query if you just type in ‘hari’ – here’s what you (currently) get:

So the first five results on a standard Google results page are: two pages from the council itself – the council’s own homepage and one of its most popular individual pages (as you’d normally expect for such a targeted query); a page from Wikipedia; a page from Directgov; and at slot number 5, LibDem MP Lynne’s automated page detailing everything that’s wrong with the Labour-run council… with a particularly arresting excerpt.
SEO, or Search Engine Optimisation, isn’t something I typically find myself paying much (conscious) attention to. In my experience, it’s usually enough to have followed the basics of web page construction: and I’ve been coding HTML for 15 years now, so it’s all fairly instinctive. WordPress helps by encouraging you to use significant elements such as the page title – presumably including significant keywords – in both the HTML <title> and the page URL; plus there are a couple of plugins I tend to activate for all clients which help Google ranking, install instantly, and never trouble you again.
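By way of illustration (and this is a generic theme fragment, not any particular client’s code), the ‘basics’ mostly come down to the templates doing the obvious thing:

    <?php
    // The post title - presumably containing the significant keywords - ends up
    // in the <title> element; with 'pretty permalinks' switched on, the same
    // words also form the URL slug, e.g. example.com/your-post-title/
    ?>
    <title><?php wp_title( '-', true, 'right' ); bloginfo( 'name' ); ?></title>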
But because it’s baked into the process, albeit subconsciously, the results are there to be seen: and will come to the fore over the next few weeks.
Naturally, with an election imminent, MPs and candidates are looking for every possible opportunity to get their messages in front of voters and journalists. For zero extra effort, and at zero cost, we’re getting one of Lynne’s core messages in front of the tens of thousands of people searching for ‘haringey council’ each month. (According to Google’s AdWords keyword tool, 22,200 people searched for ‘haringey council’ in February 2010… far more than the 1,300 who searched for ‘lynne featherstone’ specifically.)
Lynne is defending a relatively modest majority of 2,395 – notionally making hers Labour’s no.39 target seat. We’ve had plenty of favourable feedback regarding her website already: Iain Dale, I’m reliably informed, called it one of the best political websites he’d ever seen. But it won’t surprise you to learn that we’re looking at a couple of possible enhancements for the election campaign period. Stay tuned.

Another version of IE in circulation. Great.

Oh fantastic. I return from a few days holiday to discover that Microsoft has issued a ‘platform preview’ of Internet Explorer v9. So now that’s four major releases of Microsoft’s monopolistic browser in circulation: and I can’t even install IE9 for testing purposes, because it doesn’t – and won’t – run on XP, the version of Windows I currently prefer to use. (Well, ‘prefer’ is maybe a bit strong.)
This is just getting ridiculous.
PCWorld.com proposes that Microsoft should make IE open source – noting that it wouldn’t lead to any loss of revenue, as it’s a free product anyway, and might bring some benefits. I still find myself leaning towards the equal and opposite solution: that Microsoft should adopt an existing open source rendering engine, and compete on the basis of the functionality built upon and around it. Google did exactly that, building its Chrome browser around the open-source WebKit originally developed by Apple – which itself had its roots in the open-source KHTML.
It’s somewhat remarkable to see the Microsoft website heralding its 55% score on the generally accepted Acid3 test: partial compliance is just another way of saying ‘not compliant’. The current versions of Chrome, Safari and Opera (all running satisfactorily on my apparently ‘not modern enough’ machine) already score 100%. Others – Firefox, even Opera Mini and Android – are already scoring in the 90s.
As a result, it’s depressing to see us moving further and further away from a single global standard, for the frankly pretty mundane task of getting the right elements in the right place on the page. This is all just wasting my time, and your money.

From my presentation to new civil servants

On Tuesday this week, I gave a presentation at the Government Communication Network’s foundation course for new entrants – talking about the current online and social media landscape, and highlighting a few specific implications for those working in government. I haven’t received the feedback questionnaire summary yet; but the initial signals look encouraging. (Thanks Sue.)
For a while now, I’ve been doing slides which don’t make a lot of sense unless I’m standing up in front of them; so I’m not going to share my slideshow per se. However, I thought I’d share the key information sources I used: you’ll find the facts and figures I quoted, plus – undoubtedly – some other gems I missed.
The best sources of numbers were the Ofcom Communications Market Report and the National Statistics release on Internet Access, both published last August. The Ofcom report is particularly good for numbers and charts on all aspects of media consumption. I also used a few figures from the BBC’s iPlayer press pack, to illustrate the growth of high-bandwidth activity, and the use of non-traditional devices (specifically in that case, PS3s and Wiis).


The most useful visual was probably this one, from Hitwise – using data originally produced for the BBC’s Virtual Revolution series, it visualises the UK’s top 30 web presences, and the traffic flows between them.
It’s particularly useful to show just how significant Google, Facebook and Hotmail (or strictly nowadays, Windows Live Mail) are in the UK online experience – and hopefully made people think a bit more about tools which government (and indeed, party politics) often seems to ignore.
I used to do similar sessions for GCN (or as it was then, GICS) courses on a fairly regular basis; and it was great to be doing them again, forcing me to stay up-to-date on the latest statistics. The biggest single change between this one and the last one I did, back in 2006? The language I found myself using.
I was perfectly comfortable using ‘industry terms’ which I’d felt the need to avoid (or certainly, explain) last time: and the audience knew what I was talking about. But perhaps most striking of all, I was conscious of the fact that most of what I was saying was in the present tense, where last time it was future.
My thanks to Jon Worth for recommending me for the course; and to course leader Sue Calthorpe for going far beyond the call of duty, when I stupidly left my laptop power cable behind. I really enjoyed it all; I hope the attendees did too; and I’m dead keen to do more.