Plan to 'crowdsource' federal policymaking unveiled

Throwing tough questions out to the larger community has worked for some pilot product innovation projects, and for cleaning up the Alaska oil spill. Now for the real test - the federal government.
Written by Joe McKendrick, Contributing Writer

We're starting to see the possibilities of applying crowdsourcing to tough questions. Rather than rely on a limited staff of experts, some organizations are turning to far larger online communities, with their ranks of potentially undiscovered experts.

Consider the recent crowdsourcing experiment at GE Research, in which 85 employees bought and sold "stock" in 62 new product ideas. The project with the greatest value at the end of this prediction-market experiment would receive $50,000 in research funding. In another example, Dell employed a voting platform that enabled customers to vote on the features they would like to see in the next line of PCs.

Then there's the Oil Spill Recovery Institute, established by Congress after the Exxon Valdez oil spill, which issued a $20,000 challenge to a network of 175,000 "solvers" from 200 countries around the world to come up with a solution for cleaning up Prince William Sound. After three months, a construction engineer from the Midwest came up with the winning solution.

Now, Anil Dash, director of Expert Labs, a project of the American Association for the Advancement of Science, proposes to extend the crowdsourcing approach to the public policy sphere. Federal agencies, he says, could benefit from the wisdom of crowds. At the recent Supernova conference held in San Francisco, Anil spoke with David Weinberger about the possibilities.

Anil hopes Expert Labs will serve as a catalyst to boost federal policy decision-making with an "entrepreneurial model that reaches out to places like Silicon Valley and the tech community as well as the policy community."

By opening up perplexing questions to a broader community, Anil hopes to increase the diversity of opinions available to policymakers. "There's a real opportunity for moving past the model today, in which we get expert opinions from a half dozen or a dozen people in a room for an hour or two," he explains. "Today on the Web we float a question to a couple thousand contacts. Imagine what we could do on a national scale."

In the process, policymakers may have access to expert opinion that formerly was out of reach for various reasons. "There are many experts out there among us who may not be accredited through the traditional means of earning a degree in a certain topic, or being a member of a certain organization. We can expose their views, collaboratively."

Of course, crowdsourcing federal policy decisions to a wide online audience will produce an unmanageable volume of answers to be sorted and filtered. This will be the second phase of the challenge, Anil says. "There will literally be too much to know. I think we're going to be confronting questions such as 'How do we define expertise?' and 'How do we define authority?'"

To some degree, such a community will have an inner circle, Anil states. While the federal policymaking collaboration platform will be open to all, preference will be given to members of AAAS, for example, Anil points out. "We're never going to constrain who can respond," he explains. "But there are a lot of issues that are naturally going to do better with answers from the scientific community. And that is where being a part of AAAS gives us an 'unfair advantage' that I'm really happy to make use of."

Other criteria that will develop over time include previous contributions and votes from other community members. Expert Labs is currently working on tools to launch its crowdsourcing platform, as well as address filtering issues.

This post was originally published on Smartplanet.com
