2. The challenge is not to make humans computer
literate, but computers human literate.
Liverpool Street station crowd blur http://www.flickr.com/photos/victoriapeckham/164175205/
Sunday, March 15, 2009
3. The BBC has historically created a series of
microsites – each coherent in their own right but
not across the breadth of BBC content.
Radio 4 Big Bang http://www.bbc.co.uk/radio4/bigbang/
4. I can’t carry on my journey to find everything Brian
Cox...
7. Things are changing: URIs, data and things
instead of webpages and Photoshop.
8. We’re talking Linked Data.
Linked Data cloud diagram http://www4.wiwiss.fu-berlin.de/bizer/pub/lod-datasets_2009-03-05_colored.png
9. Adopting LOD principles makes sense because
you create coherent usable services – human
literate services.
10. Tom Scott
derivadow.com
Colon Slash Slash http://www.flickr.com/photos/jeffsmallwood/299208539/
Editor's notes
Stephen Fry recently noted that the challenge is not to make humans computer literate, but to make computers human literate.
And when one considers the revolution we've seen over the last 20 years I think we are making great progress towards that goal. Access to information has become democratized in a way never seen before.
So what for the next 20 years? Well obviously I don't know, but I will steal an idea from William Gibson and suggest the future is here it's just not evenly distributed yet.
I work for the BBC and it’s a big place - we produce and publish an amazing volume and diversity of content, and I would like to suggest that in some ways it represents a microcosm of the wider web.
We produce so much content that I suspect a traditional, centralised design-and-build would never work. It wouldn’t work from a UX point of view, nor from a coordination and governance point of view.
You simply couldn’t sit down, gather requirements and build a left-hand-nav-style website. The coordination effort alone would kill you.
As a result the BBC has historically created a series of microsites, each coherent in its own right but not across the breadth of BBC content.
Consider, for example: I can navigate around a Radio 4 site about the opening of the LHC... but...
I can’t carry on my journey to find everything the BBC knows about Brian Cox... though it’s nothing personal against Brian. You can’t...
find everything the BBC knows about lions or any other species...
or even one of our presenters, like Jeremy Clarkson.
But things are changing...
It has been my honour to work on a few projects where we took a different approach: starting with the data and how people think about it, rather than starting with the web page or, worse, a Photoshop document.
And when I say data I really mean starting with understanding what concepts and things people care about and giving each of those things a URI.
/programmes - ensures every programme the BBC broadcasts has a web presence, has a URI. And that URI can be dereferenced to return an HTML document, an RDF document, JSON, iCal or mobile views.
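That one-URI-per-thing, many-representations pattern can be sketched as a simple suffix mapping. The programme identifier below is a made-up placeholder, and the exact suffixes are illustrative rather than a guaranteed BBC API:

```python
def representation_url(programme_uri, fmt):
    """Map a canonical programme URI to a format-specific view
    by appending a suffix, in the spirit of /programmes.
    The suffix table here is an assumption for illustration."""
    suffixes = {"html": "", "rdf": ".rdf", "json": ".json", "ical": ".ics"}
    return programme_uri + suffixes[fmt]

# "b0000000" is a hypothetical programme identifier, not a real one.
base = "http://www.bbc.co.uk/programmes/b0000000"
print(representation_url(base, "rdf"))   # the RDF view of the same resource
print(representation_url(base, "json"))  # the JSON view of the same resource
```

The point is that the HTML page and the machine-readable views hang off one canonical URI, rather than living behind a separate API.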
/music - (currently in beta) is built with MusicBrainz and gives us a page for every artist the BBC plays; in due course it will give us a page per track. The plan is then to integrate this with /programmes so that, from an episode page, you can click on an artist in a tracklisting and find out more about that artist, including other programmes that have played that artist - hopefully introducing people to new music and new programmes.
And because it’s built with MusicBrainz and integrated with DBpedia, not only do we get a URI per artist, we also get links into the rest of the web and lovely web-scale identifiers that make it easier for others to integrate.
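One shared identifier buys you URIs in several datasets at once. As a sketch (the UUID and artist name below are placeholders, not real data, and the URI patterns are the publicly documented shapes for these sites):

```python
def bbc_artist_uri(mbid):
    # BBC /music keys artist pages on MusicBrainz identifiers (MBIDs),
    # so the same ID locates the artist in both datasets.
    return "http://www.bbc.co.uk/music/artists/" + mbid

def musicbrainz_uri(mbid):
    return "http://musicbrainz.org/artist/" + mbid

def dbpedia_uri(label):
    # DBpedia resource URIs are derived from Wikipedia article titles,
    # with spaces replaced by underscores.
    return "http://dbpedia.org/resource/" + label.replace(" ", "_")

# A placeholder UUID, not any real artist's MBID.
mbid = "00000000-0000-0000-0000-000000000000"
print(bbc_artist_uri(mbid))
print(musicbrainz_uri(mbid))
print(dbpedia_uri("Example Artist"))
```

Because the MBID is a web-scale identifier rather than an internal key, anyone else can join their data to ours without asking permission.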
I’m now working on a new project, BBC Earth, which is seeking to bring the BBC’s Natural History content online in a similar fashion. A page per species, habitat, behaviour and adaptation that the BBC cares about - all linked to the programme space and the wider web through DBpedia.
And of course as with programmes and music the API is the website - the URIs can return RDF, JSON etc. as well as HTML.
Of course what I’m talking about is Linked Data... even if we didn’t quite realise that when we started.
But the idea that we should care about our URIs, care about having one URI per concept, and care about having machine representations of those resources instead of a separate API has helped us build a coherent, scalable, sane service. One that we hope is a bit more human literate.
The semantic web project has helped the BBC to start to move away from caring about the document and towards the ideas, concepts and things we as people care about.
So you can find all things Brian Cox, Lion or Jeremy Clarkson.
It is my hope that the future of the web is human literate and my belief that the way of achieving this is by following the principles of Linked Open Data.
HTTP URIs for concepts and things that make sense to people, linked to related things and dereferenceable to the appropriate document.
I say it is my hope because it has been my experience at the BBC that this approach scales in a way no other can in delivering coherent usable services - human literate services.
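As a rough sketch, that shape can be written down in RDF/Turtle. The Programmes Ontology and Music Ontology prefixes below are real, published vocabularies, but every identifier and title is hypothetical:

```turtle
@prefix po:  <http://purl.org/ontology/po/> .
@prefix mo:  <http://purl.org/ontology/mo/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix dc:  <http://purl.org/dc/elements/1.1/> .

# A hypothetical episode: one HTTP URI for the concept itself.
<http://www.bbc.co.uk/programmes/b0000000#programme>
    a po:Episode ;
    dc:title "An example episode" .

# A hypothetical artist, linked out to the wider web via DBpedia.
<http://www.bbc.co.uk/music/artists/00000000-0000-0000-0000-000000000000#artist>
    a mo:MusicArtist ;
    owl:sameAs <http://dbpedia.org/resource/Example_Artist> .
```

Each URI identifies a thing rather than a page, and each dereferences to whichever document, human- or machine-readable, the client asks for.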