Web 2.0? It's more like Computer 2.0.

Or even better, the uncomputer.

When I think about what today's operating systems are -- Windows, OS X, Linux, etc. -- I mostly see them as collections of application programming interfaces (APIs) that give developers easy access to resources (displays, networks, file systems, user interfaces, etc.).

Separately, I've been following the Web 2.0 discourse between ZDNet bloggers Richard McManus, Russell Shaw, and Joe McKendrick. By the very name of his blog (Web 2.0 Explorer), McManus appears to be in the "Web 2.0 exists" camp. However, in saying "Web 2.0 is a marketing slogan," Shaw (our IP Telephony, VoIP, Broadband blogger) is in the "Web 2.0 doesn't exist" camp. Rising to Web 2.0's defense is our Service Oriented Architecture blogger McKendrick, who recently wrote Yes Russell, There is a Web 2.0. But in his "other blog," even McManus is now questioning the wisdom of the Web 2.0 moniker (see Web 2.0 is dead, R.I.P.), and Shaw, judging by the art on his most recent post, apparently feels as though he has started a World Wide Web War.

Web 2.0? Not that I want to get into the middle of a WWWW, but if you ask me, it's more like Computer 2.0. The computer that we've come to know and love is quickly becoming a thing of the past (thus, the "uncomputer"), and taking its place (and drawing developers in droves) is a new collection of APIs (this time, Internet-based ones) and database interfaces being offered by outfits like Google, Yahoo, Microsoft, Salesforce.com, eBay, Technorati, and Amazon (as well as smaller private enterprises, governments, and other businesses).

Whereas the old collections of APIs (the operating systems) were the platforms upon which the most exciting and innovative application development took place, the new collection is where the action is, spawning a whole new, compelling breed of applications. Barely a day goes by without some new mashup -- the creative merger of one or more of these APIs with each other and/or with a public or private database -- appearing on the Web. Three of my recent innovative favorites (with real value to Internet users) come from ParkingCarma.com (for parking availability in San Francisco), ZipRealty (for merging interactive mapping with the Multiple Listing Service's list of homes for sale), and (as a parent) mapsexoffenders.com (you can guess what it does).
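To make the pattern concrete, here's a minimal sketch (in TypeScript) of the kind of glue a mashup like these relies on. The geocoding endpoint, the listings feed, and the field names are all hypothetical stand-ins, not the actual APIs those sites use:

```typescript
// A minimal mashup sketch: join a (hypothetical) geocoding API with a
// (hypothetical) listings feed and emit map-ready pins.
// The URLs and field names below are placeholders, not real endpoints.

interface Listing {
  address: string;
  price: number;
}

interface GeoPoint {
  lat: number;
  lng: number;
}

// Stand-in for a call to a public geocoding API.
async function geocode(address: string): Promise<GeoPoint> {
  const res = await fetch(
    `https://api.example-geocoder.com/v1/geocode?q=${encodeURIComponent(address)}`
  );
  if (!res.ok) throw new Error(`Geocoder returned ${res.status}`);
  return (await res.json()) as GeoPoint;
}

// Pull a listings database (think MLS-style feed) and mash it up with the
// geocoder, producing one map pin per listing.
async function buildMapPins(listingsUrl: string) {
  const res = await fetch(listingsUrl);
  const listings = (await res.json()) as Listing[];
  return Promise.all(
    listings.map(async (listing) => ({
      label: `${listing.address} -- $${listing.price}`,
      position: await geocode(listing.address),
    }))
  );
}

// Example usage (against whatever feed you have access to):
// buildMapPins("https://example.com/listings.json").then(console.log);
```

The point isn't the particulars; it's that the "application" is little more than two API calls and a join.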

In many ways, this new collection of APIs fulfills the old Sun tagline that the "network is the computer."  Not only is the old computer becoming a relic of the past, so too is the manner in which classical operating systems -- even open source ones -- are updated and maintained.  Not that I want to start a religious war, but even the "official" kernels of open source operating systems like Linux are subject to the oversight of tight-knit councils of digerati.  Relatively speaking, proprietary operating systems (again, collections of APIs) like Windows are far more cathedral-like in their development than are open source operating systems like Linux. 

But when you look at the new operating system -- this new collection of Internet-bound APIs -- isn't it even less cathedral-like than Linux? After all, anybody (and I mean ANYBODY) can enhance this "new OS" by adding their own API at any time. Either by way of internal development or acquisition (e.g., del.icio.us and Flickr), some Internet titans are not only growing their API portfolios by leaps and bounds, but also using the word "platform" to describe those portfolios. While those short-tail platforms will undoubtedly be very compelling to developers, I wouldn't rule out some excitement in the long tail of APIs. Harking back to the '80s, some of the best stuff (including uber, cross-"platform" APIs) will come not from the multi-million-dollar labs in Silicon Valley or China, but from someone's garage. There is some great disruption ahead of us. No one should be resting on their laurels.
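And because the barrier to entry is an HTTP endpoint rather than a kernel patch, here's a rough sketch of what "adding your own API" can amount to. The /api/parking route and its payload are invented purely for illustration:

```typescript
// Sketch: "adding an API to the new OS" can be as little as standing up an
// HTTP endpoint that returns JSON. The /api/parking route and its payload
// are invented for illustration.
import { createServer } from "node:http";

const spots = [
  { lot: "5th & Mission", open: 12 },
  { lot: "Union Square", open: 3 },
];

createServer((req, res) => {
  if (req.method === "GET" && req.url === "/api/parking") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(spots));
  } else {
    res.writeHead(404, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ error: "not found" }));
  }
}).listen(8080, () => {
  // Any mashup artist can now point a browser or script at this endpoint.
  console.log("Toy API listening on http://localhost:8080/api/parking");
});
```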

Compared to what we're used to, this rapid proliferation of easily accessible APIs is so, uh, so uncomputer-like.  Whether or not those new APIs get used is a different story.  But the point is that there's no roundtable of Jedi Knights through which all proposed kernel changes must pass.  And, much the same way new mashups keep showing up every day, so too, as TechCrunch editor Mike Arrington constantly reports, do the APIs (Arrington is demonstrating a knack for getting the scoop on new APIs, blogging about them almost as soon as they become available).

Still not convinced of the uncomputer? Well, then consider this: not only is anybody free to add a new API at any time, the primary user interface -- a browser -- almost never needs updating to take advantage of those new APIs. Pretty uncomputer-like. Compare that to what happens when a classic operating system takes on new APIs. The upgrade cycle can be incredibly painful, requiring all sorts of special hardware, new software, and budget exercises. Years from now, when millions of mashed-up applications are available to anybody -- regardless of what technology they have in front of them -- there will be a lot of people looking back at the old way of doing things and saying, "What in the world were we thinking? Why didn't we do this sooner?"
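For the curious, here's a small sketch of why the browser gets away without an upgrade: calling a brand-new API is just another HTTP request from a plain script. The endpoint below is hypothetical.

```typescript
// Sketch: the same unmodified browser can call an API that didn't exist
// yesterday -- "taking on a new API" is just another HTTP request from a
// plain script. The endpoint below is hypothetical.
function callNewApi(url: string, onResult: (data: unknown) => void): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", url);
  xhr.onload = () => {
    if (xhr.status === 200) {
      onResult(JSON.parse(xhr.responseText));
    } else {
      console.error(`API call failed with status ${xhr.status}`);
    }
  };
  xhr.send();
}

// No new client software, no upgrade cycle -- just aim at the new endpoint.
callNewApi("https://api.example.com/v1/brand-new-service", (data) => {
  console.log(data);
});
```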

Finally, there's an ecosystem story here that needs to be told -- the reason this new collection of APIs is so much more compelling to developers than the old collection. It's called "reach," otherwise known as target market. Given that the new collection of APIs is so much more enabling to application developers than the old collection (not only do they abstract resource access, they abstract real information access as well), and given how rapidly "mashup artists" (a term coined by Mary Hodder in one of my recent conversations with her) can roll out their applications (try mapbuilder.net; you'll be amazed at what it does for a specific type of rapid mashup development [RMD]), surely there must be some catch.

Can developers possibly have their cake and eat it too? With this ecosystem, the answer is yes. Why? Because out there in the world, there are a lot of people who don't have OS X to run OS X apps, a lot of people who don't have Windows to run Windows apps, and a lot of people who don't have Linux to run Linux apps. But there's hardly anyone who doesn't have a browser. And not until this new collection of APIs showed up -- expanding daily -- have application developers been exposed to such a rich and easy ecosystem to work in; one that offers such incredibly broad access to virtually the entire market. No computer required. The uncomputer.

[Update 12/23/05: If you're a mashup artist or one of the many new API providers contributing to the uncomputer, then Mashup Camp may be for you. For more information, check out my first blog entry about Mashup Camp or contact me at david.berlind@cnet.com.]

Talkback

  • Closed Source

    The neat thing: if you implement the API on your server, you need not worry at all about piracy!
    No open source, no copies in the wild.
    BTW, you can use all the open source code you want without incurring a requirement to redistribute.
    jimbo_z
    • GPL 3

      The GPL 3 may change that -- it is one of the issues that's under consideration.
      Yensi717
  • It depends on your definition of Web 2.0

    I think most definitions of "Web 2.0" are off the mark. "Read/Write Web"? We've had the ability to post to Slashdot for years. I think your definition of the uncomputer is really my definition of Web 2.0. There is no doubt there has been a new wave of innovation on the web over the last year or so. And these things will find their way into web-based enterprise software interfaces in the near future.
    A more interactive, more useful web, whether we individually choose to participate or not, is reality.
    meh1309
    • 1 Vote for the uncomputer!

      Meh,

      I agree. Read/Write is simply facilitated by some of the "new collection" of APIs. Not all APIs are read/write. And as for Web 2.0? Well, as far as I can tell, HTTP (version 1.1) hasn't changed since the year 2000 (see http://www.w3.org/Protocols/Activity.html#current).

      db
      dberlind
  • They are trying to turn computing into a service model, NO THANKS!!!

    5 good reasons why Web 2.0 sux:

    1) I like my computer/programs to run even when I am not connected to the internet (usually on my laptop).
    2) I don't want to pay a monthly service charge for using a program I have already bought.
    3) Programs running locally will always be faster and more feature-rich than a web-based program. (Just look at Gmail vs. Thunderbird.)
    4) I like my privacy; with the service computing model, it's too easy for companies to invade my privacy.
    5) Today, when you buy a program, you OWN it. With the service model, they can change the license agreement at will. DRM sucks for the consumer.
    xunil skcor
    • Yes, just look at

      5) ...they can change the license agreement at will...

      Just look at pensions. They can change the rules at any time. 30 years later...whoops...you don't have any retirement. Go back to work for another 30 years.

      You owned a copy of Word for 10 years and it enabled you to write and save documents all that time? Whooooops. Well, now we need more money, so if you still want that capability, you need to pay more money. PAY MORE MONEY.
      ordaj9
      • Big windows

        ..they saw you coming.

        ..You owned a copy of Word for 10 years and it enabled you to write and save documents all that time? Whooooops.....

        Who exactly are you paying to use Word 6.0?
        Or any other version?

        A fool and his/her/its money.......

        Joe
        seosamh_z
    • Sort of...but slightly mistaken...

      I agree with you on the undesirability of the new services model. But such services are only one embodiment of the "new APIs" being mentioned. There's no reason you can't define some such "API" that is used both locally (either within a system or across systems on a LAN) and as a service on the net. That's precisely the power of the thing: it's flexible enough to be used (or not) wherever, whenever, without requiring ubiquity or having other system prerequisites in order to become functional.
      Techboy_z
  • The chokepoint...

    The ISPs. They control access, price, availability, and speed.

    We'll see really whiz-bang stuff when we get speeds of 100Mbps for $19.95 per month, 99% uptime, and availability wherever you want to go. Like in other countries.

    Fat chance.
    ordaj9
  • MERRY CHRISTMAS/HAPPY NEW YEAR!!

    Just thought I'd take time out from the usual IT bashing and discussion to say have a great and wonderful holiday and I hope that Santa Geek got you what you wanted for Christmas!!

    Cheers!! :)
    itanalyst
  • Why it will not work...

    http://news.zdnet.com/2100-9588_22-6004625.html
    No_Ax_to_Grind
    • One size fits all

      type arguments usually don't work.

      There is, however, some applicability for the async/AJAX model.
      I've been doing this since MS released its Active Scripting technology last century, and the users usually like the results.


      Joe
      seosamh_z
      • Doesn't matter if you lose the network

        Really matters very little what technology you use if you lose connectivity and can't get to your data.
        No_Ax_to_Grind
    • That was just a database glitch.

      Salesforce has 2 colocation facilities, but if a database has a problem (and it is sync'd between the 2 colos on 2 coasts), then the whole system comes down (for a bit). SBC has had Frame Relay problems that have taken out connectivity in several states. So what? In both cases, the relevant parties took the effort to fix the problem, as would any other service provider. Everything is moving to the network; it is called convergence (and glitches happen).
      B.O.F.H.
      • The excuse doesn't matter when you need the info.

        It really doesn't matter to anyone what the excuse is; it failed.
        No_Ax_to_Grind
        • Gotta agree...

          If the remote service goes down when customers need their data, all the excuses in the world don't mean a hill of legumes and a rolling donut. Then again, the same applies to locally held and administered data -- no excuses from the data center: when data is needed, it is absolutely needed.

          Doesn't this all come down to cost vs. reliability and the trade-off the customer is willing to make between them? If the remote service provider can give access with the same reliability, but at a lower cost, then perhaps the remote services have a chance to survive after all?
          John Le'Brecage
          • I agree, local is no guarantee...

            But then, when you compare the probability of losing your server to that of losing your internet connection, they don't come anywhere close.

            Ever stood in line at the checkout line because the credit card machine can't connect? Seems to happen to me a lot...
            No_Ax_to_Grind
          • Again, I agree.

            [i]Ever stood in line at the checkout line because the credit card machine can't connect? Seems to happen to me a lot...[/i]

            Funny you should mention that event. Such happened to me quite recently, and I remarked to the counter staff, "First time in fifteen years." Seriously, the event is so rare I don't recall it ever happening before. Laughably, they didn't even have a manual press on which to fall back. We waited five minutes for the lines to clear and the connection was made, thankfully.

            Yes, network reliability should definitely be a consideration: prudence dictates, "The more important the data, the more redundant the backup." Likewise, the same goes for network connectivity. So, add redundant connectivity as yet another factor in the cost of remote servicing -- or, failing redundancy, a local cache of updates that can be dumped when the connection resumes. At Goddard we used both techniques, depending on the timing and mission-critical nature of the data flow.
            John Le'Brecage