RDF is Too Hard

This is a great quote from Peter Norvig, and it explains why RDF has failed to take the world by storm, something the RDF community continually fails to understand:

“We deal with millions of Web masters who can’t configure a server, can’t write HTML. It’s hard for them to go to the next step. The second problem is competition. Some commercial providers say, ‘I’m the leader. Why should I standardize?’ The third problem is one of deception. We deal every day with people who try to rank higher in the results and then try to sell someone Viagra when that’s not what they are looking for. With less human oversight with the Semantic Web, we are worried about it being easier to be deceptive,” Norvig said.

If you can’t configure a webserver, can’t get the timezone right, and can’t even get the encoding correct for your content, what makes you think you can deploy RDF?

  1. If you can’t configure a webserver, can’t get the timezone right, and can’t even get the encoding correct for your content, what makes you think you can post to a blog?

Norvig’s being rather disingenuous. There’s no reason publishing data should be any harder than publishing content, given the right tools. The tools are increasingly available. RDF is no harder than SQL/relational languages, just considerably more web-friendly.

    (I had a little rant: http://dannyayers.com/2006/07/19/incompetents-revolt )

Nah, *deploying* RDF can be done by anyone who can write a blog post or tag a page. One reason why RDF hasn’t taken the world by storm is simply that it’s not meant to do that. It’s an evolution on top of existing Web architecture. It took time before people realised the value of XML and CSS, too.

    RDF’s main problem is that implementing toolkits for the whole stack of specs takes ages and tutorials for average coders are rare (personal rant). Well, and there are clearly too many AI folks who discovered SemWeb stuff as a new playground, leaving other people with the impression that RDF equals AI, although it’s actually much more down to earth in most cases (usually more like a universal data store that allows you to remember posts, bookmarks, tags, profiles, microcontent or whatever else you come across while surfing the Web, and which then lets you repurpose a freely queryable subset of the collected information).

    But you are of course free to stick to your claim until we manage to come up with more convincing examples ;) (One might be IBM’s Queso server which shows nicely how RDF tech can be combined with stuff like Atom and metadata embedded in HTML.)


  3. Apples and oranges. The tasks you cite are for sysadmins. Deploying RDF may be unnecessarily difficult, but developers and designers routinely accomplish tasks conceptually more difficult than setting a timezone. If that was really a bar the web would barely exist, let alone 2.0.

  4. Hey guys.

Let me clarify. Building RDF graphs is not something the average web developer creating HTML is capable of doing.

This isn’t theoretical. RSS adoption has been hindered by web developers who don’t understand encoding and the other issues involved in producing a valid feed.

    RDF complicates things by an order of magnitude (maybe a little less).

    I’ve been on both sides of this fence. When I wrote NewsMonster I spent WAY too much time making sure my RDF graphs were isomorphic to the examples provided by Mozilla.

    In Feedparser (which I wrote at Rojo) I had to spend way too many hours making sure it was liberal enough to handle broken feeds produced in the wild.

    I really think the only way forward here could be either microformats or simple XML feeds for vertical specs…
