Twice a year, I help run a MATLAB programming contest in which contestants try to write the fastest code to solve a math puzzle, using as a resource any work done by previously submitted entries. In other words, you’re welcome to steal from those who came before you. It’s a free intellectual-property open-source barbecue. It works surprisingly well and results in seriously optimized code. So the first question I get when I explain it to people is this: can’t we apply this technique to some useful real-world problem? But here’s the thing: the programming contest has a great advantage in its unambiguous figure of merit. If you can make the code run faster, that’s all I need to know, and I only need a clock to figure that out.
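The "clock as judge" idea can be sketched in a few lines. This is a hypothetical illustration, not the contest's actual scoring machinery: the puzzle (sum of squares), the entry names, and the correctness check are all invented here, and the only assumption is that lower wall-clock time means a better entry.

```python
import timeit

# Hypothetical contest entries solving the same invented puzzle:
# the sum 0^2 + 1^2 + ... + (n-1)^2.

def entry_naive(n):
    # A straightforward loop.
    total = 0
    for i in range(n):
        total += i * i
    return total

def entry_optimized(n):
    # A later entry "steals" the idea and replaces the loop
    # with the closed-form answer (n-1)n(2n-1)/6.
    return (n - 1) * n * (2 * n - 1) // 6

def score(entry, n=10_000, repeats=5):
    # The unambiguous figure of merit: wall-clock time, lower is better.
    return min(timeit.repeat(lambda: entry(n), number=100, repeat=repeats))

if __name__ == "__main__":
    entries = {"naive": entry_naive, "optimized": entry_optimized}
    # Gate on correctness first, then rank purely by speed.
    assert entry_naive(1000) == entry_optimized(1000)
    leaderboard = sorted(entries, key=lambda name: score(entries[name]))
    print("leaderboard:", leaderboard)
```

The point of the sketch is that the judge is mechanical: any submission that agrees with the reference answer and posts a lower time takes the lead, no human opinion required.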
Suppose you wanted to make a similar contest to write a great poem. You’re immediately faced with a big problem: there’s no automatic poem analyzer. Who gets to judge whether or not your poem is better than the current leader’s? The Wikipedia approach comes close to the programming contest idea here, in that lots of people are busy making improvements (or changes, at any rate) to the same “code,” or Wikipedia entry. But you can’t make a running report of the “goodness” of the article over time. It may get longer, but is it getting better? That’s a matter of opinion.
Into this space comes an interesting startup called Helium. They solve the problem of quantitative evaluation by letting members vote. So the same topic (say, “How to find the best mortgage rate”) may have multiple articles, but only one of them will be voted number one. The lowest-rated article for the mortgage question was this: “go to http://www.google.com and write down How can I get the best rate on a mortgage? you will get the best rate.” Good advice, but wouldn’t it be cool if the top-rated result were this very page?
I think there’s still an optimal mix of Wikipedia and Helium that doesn’t exist yet, but we’re getting closer. (Helium spotted on TechCrunch)