Information Commons: A bright star for the future of information

Summary: In the eyes of its creators, the World Wide Web was never designed to take on the role of the be-all, end-all architecture for a truly distributed global information system. But while large vendors, standards groups and technologists have grown dependent on the Web and treat it that way, some researchers are taking a revolutionary approach to the problem and addressing it at the very core of information design.

In the eyes of its creators, the World Wide Web was never designed to take on the role of the be-all, end-all architecture for a truly distributed global information system. But while large vendors, standards groups and technologists have grown dependent on the Web and treat it that way, some researchers are taking a revolutionary approach to the problem and addressing it at the very core of information design. A newly published white paper from Harbor Research (a firm specializing in pervasive computing), entitled "Designing the Future of Information: The Internet Beyond the Web," looks at two initiatives: the "Information Commons" of Maya Design, and "Internet Zero" from MIT's Center for Bits and Atoms.

The Information Commons is a universal database to which anyone can contribute, and which liberates information by abandoning the relational database and client-server computing models, according to the white paper. It has been under development at Maya Design for over 15 years as the result of a $50 million research contract from several federal agencies, including DARPA, to pursue "information liquidity," or the flow of information in distributed computing environments. The goal is to build a scalable information space that can support trillions of devices.

I spoke today with Josh Knauer, director of advanced development at Maya Design, about the Information Commons and how it is progressing. According to Knauer, Maya (which stands for Most Advanced Yet Acceptable) is using P2P technology—in the sense of information sharing and not file sharing—to link together repositories of public and private datasets in the public information space created by Maya. These data and data relationships are stored in universal data containers called "u-forms," each of which is tagged with a UUID, or universally unique identifier. These are the basic building blocks of the company's Visage Information Architecture (VIA), which allows data repositories to effortlessly link or fuse together to achieve "liquidity" (the paper has more details).
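
The paper and Knauer describe u-forms abstractly, so here is a minimal sketch in Python of the general idea—illustrative only, not Maya's actual API or the VIA implementation: a bag of attributes addressed by a universally unique identifier, so that independent repositories can reference and fuse one another's data by ID rather than by location.

```python
import uuid

class UForm:
    """Illustrative stand-in for a u-form: a universally identified
    container of attributes that any repository can reference."""

    def __init__(self, attributes=None):
        self.uuid = uuid.uuid4()                  # universally unique identifier
        self.attributes = dict(attributes or {})  # arbitrary key/value payload

    def link_to(self, other, role):
        """Relate this u-form to another by storing its UUID, not a copy."""
        self.attributes.setdefault(role, []).append(other.uuid)


# Two independent repositories can "fuse" simply by exchanging u-forms
# and resolving UUID references (example data is invented).
epa_site = UForm({"name": "Toxic release site", "lat": 29.95, "lon": -90.07})
command_view = UForm({"name": "Incident overview"})
command_view.link_to(epa_site, role="hazards")

print(command_view.uuid, command_view.attributes)
```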

After explaining the technology, Knauer gave examples of implementations. The U.S. military's "Command Post of the Future" (CPoF), a system for real-time situational awareness and collaborative decision-making in use in Iraq, is based on Maya's architectural and visualization innovations. When hurricanes Katrina and Rita devastated the Gulf Coast recently, the system was bridged to the Information Commons for the first time, as military officials needed to incorporate publicly available EPA data on toxic hazards in affected communities, said Knauer.

[Screenshot: Command Post of the Future (Credit: Maya Viz)]

In the public sector, among several non-profit and government agency deployments, Knauer spoke about how Maya combined data from hundreds of government and private-sector sources into the Information Commons to create a location-aware directory of services, with always-current transit information, for the Allegheny County Department of Human Services.

"The biggest area of concern for us now is pushing the debate into the user interface," said Knauer. The more mixing and mashing of information the greater the challenge will be to comprehend it.

Below are some comparisons of the Information Commons to other approaches, according to the white paper: 

Google:

But though it is a remarkable achievement, Google remains a blunt instrument, not a precision tool, for dealing with the world’s information. No matter how clever its indexing schemes and searching algorithms, Google cannot get around the fact that the Web does not store information in a fashion conducive to sophisticated retrieval and display. The Web can’t genuinely fuse data from multiple sources, and so Google can’t build an information model that visualizes the answer to a complex, multi-dimensional question.

Semantic Web:

The Semantic Web does recognize the dilemma presented by the Web’s limitations, and it has many passionate proponents—perhaps precisely because it does not propose any particularly radical or disruptive steps. However, the Semantic Web has had very few real-world deployments. Besides being forbiddingly arcane and captive to existing Web technologies, RDF metadata often grows significantly larger than the data it is trying to describe.
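
That size complaint is easy to reproduce with a toy example. The sketch below uses the Python rdflib library and made-up namespace and property names (neither comes from the white paper) to describe a single sensor reading as RDF, then compares the serialized RDF/XML to the few bytes of data it actually describes.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import XSD

# Hypothetical namespace and property names, for illustration only.
EX = Namespace("http://example.org/sensor#")

g = Graph()
g.add((EX.reading42, EX.measuredValue, Literal(21.5, datatype=XSD.float)))
g.add((EX.reading42, EX.unit, Literal("celsius")))
g.add((EX.reading42, EX.takenAt, Literal("2005-09-01T12:00:00", datatype=XSD.dateTime)))

rdf_xml = g.serialize(format="xml")

raw_value = "21.5"
print(f"raw datum: {len(raw_value)} bytes")
print(f"RDF/XML describing it: {len(rdf_xml)} bytes")
```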

Talkback

1 comment
  • The Web IS the New Database!!!

    I like the idea of distributed data via P2P, but that doesn't solve the fundamental problem that most data and content is contained in billions of web pages seen by billions of people, and served up by millions of databases, most relational, but increasingly organized around XML. I hope we get to a world where we can experience this "liquid" nature of data online via alternate software and protocols, but I suspect it will never move in that direction, because we are increasingly swimming in a web-page world, where most of the content is either displayed, stored or distributed via HTTP. Most of us don't participate, nor will we want to, via an alternate medium such as P2P, so the answer won't evolve there as long as the traditional Web, HTTP page access and the search medium continue to evolve as fast and furiously as they do.

    My vision is the same as the W3C's, where the Web becomes the universal database, which won't happen as long as we are stuck in the world of HTML, where precious content is locked down in presentational markup. We need a web instead that embraces the new standard of XHTML, where all web pages become XML. This will only happen if and when the vendors and browser makers begin to build Web Standards-based browsers (i.e., Internet Explorer) that will support the XHTML standard and its MIME type. We have a LONG way to go before that will happen because so much of the web is designed around HTML and tables.
    But the closer we get to forcing all web pages into XHTML, the closer they get to being XML. At that point, they ARE databases, and yes, Google and a whole host of engines can begin to analyze those nodes as data and strip the content out of the presentational markup.
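
    For example, once a page is well-formed XHTML, any ordinary XML parser can treat it as data. A rough sketch in Python (the markup and class names are invented for illustration):

```python
import xml.etree.ElementTree as ET

# A well-formed XHTML fragment (invented for illustration): once pages
# are XML, content can be pulled back out of the presentational markup.
page = """<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <div class="product">
      <span class="name">Widget</span>
      <span class="price">19.99</span>
    </div>
  </body>
</html>"""

ns = {"x": "http://www.w3.org/1999/xhtml"}
root = ET.fromstring(page)

# Treat the page as a record set rather than a rendering.
for product in root.findall(".//x:div[@class='product']", ns):
    name = product.find("x:span[@class='name']", ns).text
    price = product.find("x:span[@class='price']", ns).text
    print(name, price)
```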

    Also, at some point, I think parts of the Semantic Web stuff will infiltrate the schemas of most pages, and then we are closer to seeing the Web itself become the fluid database we seek. When people move away from HTML tables in their designs and start using CSS and XSLT, we will be sending XML to the browsers with XSLT, as we can kinda do now, and you are then using the World Wide Web's HTTP and its XML data framework as the massive database you seek. As people start designing richer XML schemas, like RSS, and custom tags in their pages, then we will be even closer to experiencing the largest known distributed database in the world...the World Wide Web itself!

    That's something NOBODY can own or sell to you...it's something the world will own, which will be a beautiful thing! People will still keep finding ways to capture and sell you that data, but it will be free! And the data available will be bigger than any P2P technology or relational database anywhere...

    WILDRANGER