Why the fork does the BBC need its own jQuery?

Of course it’s good news that the BBC’s in-house Javascript library, Glow, has been released as open source. It’s a very respectable chunk of code, with some quite nice built-in widgetry. But why on earth should the BBC have its own Javascript library in the first place? Its ‘lead product manager’ – itself a worrying job title – justifies its existence as follows:

The simple answer can be found in our Browser Support Standards. These standards define the levels of support for the various browsers and devices used to access bbc.co.uk: some JavaScript libraries may conform to these standards, but many do not, and those that do may change their policies in the future. Given this fact, we decided that the only way to ensure a consistent experience for our audiences was to develop a library specifically designed to meet these standards.

They’re clearly sensitive to this question, as there’s a whole section about it on the Glow website itself, specifically referencing my own current favourite, jQuery. ‘On reviewing the major libraries we found that none met our standards and guidelines, with browser support in particular being a major issue,’ they explain.
So why not contribute to something like jQuery, to make up for its deficiencies? Isn’t that the whole point of open source? ‘Many of the libraries had previously supported some of our “problem” browsers, and actively chosen to drop that support… Forking an existing library to add the necessary browser support was another option, and one that might have had short term benefits. However, as our fork inevitably drifted apart from the parent project we would be left with increasing work to maintain feature parity, or risk confusing developers using our library.’
I’ve written here in the past in praise of the BBC’s browser standards policy, and I stand by that. But I’m afraid I’m not buying this defence of their decision to reinvent the wheel – ending up, it must be noted, with results remarkably close to jQuery. The best argument seems to be the risk that libraries which currently meet their standards might not in future, or that they might have to do work to keep a fork in sync. And even if that should happen, the worst-case scenario is that they’d have to churn out a load of new Javascript. Which is what they’ve chosen to do anyway.
Plus, crucially, this isn’t about a bunch of geeks directing their spare-time volunteering efforts in one direction, rather than another. These are people being paid real money, taxpayers’ money, to do this, at a time when the BBC is supposed to be trimming its ambitions. If they’re at a loose end, perhaps they might want to address the News homepage’s 416 HTML validation errors, and abandon the ‘table’ markup.

Putting Google geo-location to the Twitter test

Google’s Javascript API has an exciting and somewhat under-reported little feature built in: each time a call is initiated, it attempts to establish where the browser is physically located – and reports back a town, ‘region’ (county) and country. I was wondering whether it was accurate enough to be used to ‘personalise’ a website automatically, so I ran a quick experiment among my Twitter following.
I set up a quick test page on puffbox.com, which included a call to the Google API, and asked people to leave a comment as to whether or not the response was accurate. Within an hour I’d had 30 responses, from all around the UK.
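For anyone wanting to try the same thing, the guts of such a page are tiny. Here’s a minimal sketch using the standard Google loader – not my exact page, and the API key is a placeholder for your own:

```html
<script type="text/javascript" src="http://www.google.com/jsapi?key=YOUR-API-KEY"></script>
<script type="text/javascript">
// The loader populates google.loader.ClientLocation as it arrives;
// no further call is needed. It's null if Google can't make a guess.
var loc = google.loader.ClientLocation;
if (loc && loc.address) {
  document.write('Google puts you in: ' + loc.address.city + ', '
    + loc.address.region + ', ' + loc.address.country);
} else {
  document.write('No location available: blocked, or simply unknown.');
}
</script>
```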
The results revealed that the function is sometimes bang-on, sometimes blocked, sometimes curious, and sometimes plain wrong… occasionally by hundreds of miles. I can forgive the occasional placing of towns in the wrong county; but several people in the north of England, using the same ISP – itself located up north – were getting responses of ‘London’. So my conclusion, disappointingly, is that it’s not really good enough to make meaningful use of.
A wasted effort? Hardly. It actually saves me the effort of building something reliant on the geo function, only to discover it’s useless for large numbers of people. And it’s a nice case study in the value of Twitter: a crowd of good folk and true, located all over the country, of whom I could ask a five-second favour… with a good expectation of getting responses. Thanks, team.

Gordon Brown on your Wii

One of the more inspiring developments at the BBC recently has been the extension of iPlayer away from the desktop PC. Back in April, they launched iPlayer on the Wii – but it wasn’t the breakthrough moment it might have been. Leaving aside the fact it didn’t stream especially smoothly on my machine, the interface was optimised for a screen resolution which the Wii couldn’t deliver, making for a horrid user experience. Last week they made amends, with a Wii-optimised screen setup – and it’s truly brilliant. Try it on your desktop PC, but to appreciate its full glory, you need to be sitting on the living room sofa, in proper telly-watching mode.
I’ve been a bit surprised that people haven’t done more to optimise content for ‘games consoles’ – particularly the current generation, with their online capabilities. And with the Wii (again) selling like hot cakes, and set to get even hotter, it has tremendous potential for video-on-demand in the living room.
Inspired by the Beeb’s efforts, I wondered how much effort it would take to put a Wii-friendly front end on some YouTube content. So I took a few hours last night to build a prototype – and here it is: wii10.puffbox.co.uk

It’s basically the same concept as the BBC’s design, rebuilt from scratch using a combination of PHP, RSS and Javascript (specifically, jQuery). The code pulls in the last 10 items from Downing Street’s YouTube account, and puts them into a jQuery-driven carousel. When you click on a clip, a popup fades into view, and the embedded YouTube player autoplays. The big buttons left and right make the playlist scroll beautifully from side to side.
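Stripped to its essentials, the Javascript side amounts to something like this – a simplified sketch rather than the prototype’s actual code, with the element IDs, player dimensions and scroll distance all illustrative:

```javascript
$(document).ready(function() {
  // Assumes PHP has already written the ten clips into #strip as a row of
  // <div class="clip"> thumbnails (positioned relatively, so 'left' animates),
  // each element's id carrying its YouTube video ID.
  var offset = 0;
  var step = 200; // pixels per nudge - purely illustrative

  // The big left/right buttons slide the whole strip sideways
  // (bounds checking omitted for brevity)
  $('#scroll-left').click(function() {
    offset += step;
    $('#strip').animate({ left: offset + 'px' }, 'slow');
  });
  $('#scroll-right').click(function() {
    offset -= step;
    $('#strip').animate({ left: offset + 'px' }, 'slow');
  });

  // Clicking a clip fades in the popup, with the embedded player set to autoplay
  $('.clip').click(function() {
    $('#player').html('<embed src="http://www.youtube.com/v/' + this.id
      + '&autoplay=1" type="application/x-shockwave-flash"'
      + ' width="512" height="296"></embed>');
    $('#popup').fadeIn('slow');
  });
  $('#popup-close').click(function() {
    $('#popup').fadeOut('slow');
  });
});
```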
I want to stress: I’ve done this completely off my own bat. Although we have a continuing working relationship, I wasn’t asked to do this by Number10. It’s purely a proof-of-concept, using publicly available (publicly funded) material. It’s a bit rough round the edges: some of the link highlighting isn’t too smooth on the Wii, the word wrapping isn’t polished, and it doesn’t seem to work properly on (desktop) Firefox for some reason – although curiously, all other browsers seem OK, even IE! But having proven the concept, to be honest, I may not bother going back to fix these issues. There’s also a risk of YouTube changing their code, as has happened before: the Wii’s Flash player is a bit behind the times, and YouTube’s improvements have caused problems in the past.
But for now – it works, really quite nicely, and I’m dead pleased with it. You need never again say the words ‘there’s nothing on telly.’ 🙂 And with more and more government content going on YouTube, if anybody thinks this might be useful in a proper business context, please get in touch.

'Linking here' lists with the Google Feed API

Time for some tech talk. A few weeks back, I wrote about Google’s new AJAX Feed API. Having played with it last week on behalf of a client, and having liked what I saw, I decided to implement it myself.
If you’re reading this on the puffbox.com website itself, you might see a list in the sidebar headed ‘Who’s linking here?’. (If not, see here for an example.) It’s something I introduced a while back, powered by feeds from Google’s Blogsearch engine, and processed using the excellent SimplePie. But I’ve now switched over to doing it client-side, through the Google API.
Once the blog post finishes loading, the Javascript calls in the RSS feed (actually, it’s Atom format) from Google. If it finds any blogs linking to that specific post, it writes a title into one previously empty DIV, and a disclaimer into another. In between, it generates up to 10 <LI> list items, each one an active link to the linking blog.
If you want to see how it’s done, and copy the idea for yourself, a quick glance at the page’s source code will reveal how straightforward it was. (You’ll need your own API key, obviously.)
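In outline, it amounts to something like this – a sketch rather than a verbatim copy, with the element IDs invented for illustration and the Blogsearch feed URL shown in its approximate shape:

```javascript
// Assumes the Google loader script (www.google.com/jsapi?key=...) is already on the page
google.load('feeds', '1');
google.setOnLoadCallback(function() {
  var postUrl = window.location.href; // this post's permalink
  // Ask Blogsearch for a feed of posts linking to this specific post
  var feed = new google.feeds.Feed(
    'http://blogsearch.google.com/blogsearch_feeds?q=link:'
    + encodeURIComponent(postUrl) + '&output=atom');
  feed.setNumEntries(10); // at most ten linking blogs
  feed.load(function(result) {
    if (result.error || result.feed.entries.length == 0) return; // stay silent
    document.getElementById('linkers-title').innerHTML = 'Who\'s linking here?';
    var list = document.getElementById('linkers-list'); // a previously empty <ul>
    for (var i = 0; i < result.feed.entries.length; i++) {
      var entry = result.feed.entries[i];
      var item = document.createElement('li');
      item.innerHTML = '<a href="' + entry.link + '">' + entry.title + '</a>';
      list.appendChild(item);
    }
    document.getElementById('linkers-note').innerHTML
      = 'Results courtesy of Google Blogsearch.';
  });
});
```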
Why go client-side? It’s less effort for the server to process, and it doesn’t build up a mass of cached feeds. On paper, it should also be marginally more secure, which is important to some clients. And whilst the function is dependent on Javascript being available, it’s dead easy to offer a ‘noscript’ alternative – a link to a pre-formatted Google search query. It’s nowhere near as slick, but it ensures the information is still available to those without Javascript, so it passes accessibility requirements (W3C guidelines, checkpoints 6.3 and 11.4).
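And the ‘noscript’ fallback really is as simple as it sounds – something along these lines, with the post URL obviously a placeholder for the real permalink:

```html
<noscript>
  <p><a href="http://blogsearch.google.com/blogsearch?q=link:http://puffbox.com/your-post/">
    Who's linking here? Search Google Blogsearch for this post.
  </a></p>
</noscript>
```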