Proof that the search for "great search" isn't over just yet

Summary: Anybody who has ever used a search site like Google or Yahoo knows that there's room for improvement in search. But just how much?

Anybody who has ever used a search site like Google or Yahoo knows that there's room for improvement in search. But just how much? Is the room that's left only for incremental enhancements, or might we still see some quantum leaps? Until last week, when I got a demonstration of the work being done in Sun's Labs, I was thinking "incremental." But now, I'm thinking quantum leap. And the really strange thing is that Sun isn't even thought of as a search company. After all, when you think about quantum leaps in search, the expectation is that if anyone can deliver such a "leap," it's probably Google. But Sun? If you had told me before last week that Sun would be hatching some cool search technologies in its labs, I would have declared it bunk. But I would have been wrong. Dead wrong.

It's not like Sun wasn't already shipping search technologies. As it turns out, Sun ships a search engine with its portal server and Web server products. Now, in its Labs, Sun is trying to figure out how to take its existing search products to the next level. According to Sun's research, 85 percent of the information produced by businesses is of the sort that inspired search engines like Google's and Yahoo's in the first place: unstructured information. But the other key bit of data that Sun cites is how, on average, "information workers" spend 25 percent of their time looking for information. In other words, searching.

So, quality search isn't just about which search giant can win over the most Internet users. To businesses with a lot of information workers, any technological advancement that can whittle that 25 percent figure down to 20, 15, 10 or even 5 percent means that, respectively, those workers can spend 5, 10, 15 or even 20 percent more of their time on tasks that contribute more directly (more directly than searching) to competitive advantage. In fact, freeing up time to focus on those activities that contribute to competitive advantage -- long-hand for what I'm going to start calling "competitive productivity" (versus plain ole' "productivity") -- is more in vogue than it has ever been. The subject routinely comes up in the context of outsourcing, where passing off commodity tasks and automation (ones that run little chance of differentiating one business from the next) to "specialists" that make a business out of providing certain deliverables makes far more sense than insourcing the provision of those same deliverables.

For example, for most companies, using the browser-based Salesforce.com (or something like it) and letting its CEO Marc Benioff worry about system reliability, software updates, and security makes far more sense than bearing the burden of those headaches yourself. And, although there are some studies (often commissioned by the providers of insourced solutions) that try to demonstrate how outsourcing to application service providers like Salesforce.com might cost more in the long run, the question really comes back to where the majority of your employees' time should be spent: on things like running salesforce automation or human resource management systems that ultimately don't differentiate one business from the next? Or on the things that are clear opportunities to differentiate? Automatable tasks aren't the only ones that can be passed off this way, either. If you haven't seen Mechanical Turk -- Amazon's service for matching buyers and providers of more manual work -- give it a try.

So, there's no question in my mind that, if you can focus more of your information workers' time on competitive productivity, you should have an easier time achieving your business' goals. And that's apparently where Sun's head is at when it comes to search. If I had to boil down what I saw during Sun's presentation, the advancements fall into two categories that can't really be disassociated from each other. The first is relevance of the results. The second is user interface. We've all seen the sorts of results that Google and Yahoo spit back at you when you visit either site to search for something. The problem, from a productivity point of view, is that there are so many results to weed through, and they're not necessarily organized in a fashion that makes them very navigable. Suppose, for example, that the information you're looking for is actually there in the search results, but buried 10 pages down. Just clicking through and scanning that many pages is unproductive (competitively unproductive, too).

The science of search needs to focus on how to get the relevant information closer to the surface. Today, the "surface" is often thought of as being the first page of search results. So, not surprisingly, a lot of search science is dedicated to relevance. In other words, how to make sure the most relevant links appear at the top, on the first page. The folks at Sun's Labs are clearly concerned with relevancy. According to Sun's Steve Green:

Google gets you a list of documents. We look for the answer to a question in a document. With Google, you're guessing what words someone would have used in a document to get the documents that you're looking for. Some people are good guessers. Some are not. In my house, I'm the better guesser.

It's so true. How many times have you seen someone else searching in futility for something, trying all sorts of search terms, and then you walk up and find it on the first try? Most people who can do this know something about how the answer being sought is often framed, and how search engines think. In some cases, they're experts on search syntax. For example, with most search engines, using the minus sign (or hyphen) before a search term instructs the search engine to exclude documents that mention that term.
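
To make the mechanics concrete, here's a toy sketch in Python. It is my illustration of how a minus-sign operator generally behaves, not any vendor's actual implementation:

    # Toy matcher: terms prefixed with "-" exclude documents that mention
    # them; every other term is required. Not how Google, Yahoo, or Sun
    # actually implement the operator -- just the general behavior.
    def matches(query, document):
        doc_words = set(document.lower().split())
        for term in query.lower().split():
            if term.startswith("-"):
                if term[1:] in doc_words:    # excluded term present: reject
                    return False
            elif term not in doc_words:      # required term missing: reject
                return False
        return True

    docs = ["python programming language tutorial",
            "python snake care and feeding"]
    print([d for d in docs if matches("python -snake", d)])
    # ['python programming language tutorial']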

In an effort to help users "competitively produce" an answer, Sun's Labs have come up with a linguistic analysis technology that the company's researchers affectionately refer to as the "Blurbalyzer." The example the researchers gave cited the way Amazon recommends books on its Web site today. If you've ever shopped on Amazon, you've probably seen the feature that says "Customers who bought this item also bought," along with a list of items. The feature seems to suggest that, theoretically, if you liked The Da Vinci Code by Dan Brown, then, based on what other buyers of The Da Vinci Code also bought, you might want to read Michael Baigent's Holy Blood, Holy Grail. This recommendation feature relies heavily on the social nature of the data that Amazon keeps in its database. You're essentially relying on what others have done, and in some ways, the feature can't help but self-perpetuate its own recommendations.

If, for example, it suggests Holy Blood, Holy Grail to buyers of The Da Vinci Code, and those customers act on that recommendation, which in turn helps Holy Blood, Holy Grail stay on the list of recommended reading, does that mean you'll like the book just because you liked The Da Vinci Code? It's hard to say, but Sun thinks there's a better way, based on a linguistic analysis of the reviews that people have written for The Da Vinci Code and other books. Copyright prevents Sun from digitizing, storing, and surfacing the full text of a book, but reviews are already in a digital format on the Web that Blurbalyzer can probe. In the demonstration, after digesting reviews of The Da Vinci Code and comparing them with the language found in other reviews on Amazon's site, Blurbalyzer turned up Paul Christopher's Michelangelo's Notebook. Reviews for both The Da Vinci Code and Michelangelo's Notebook had similar references to religious mystery with ties to old Europe, the Louvre, symbologists, and the Priory of Sion. Yet neither book appears on Amazon's list of recommended reading for the other.
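
Sun didn't reveal how Blurbalyzer works under the hood, but the general idea (scoring books by how similar the language of their reviews is) can be sketched in a few lines of Python. The review snippets, titles, and library choice below are my stand-ins, not Sun's code or Amazon's data:

    # Pool each book's review text, build TF-IDF vectors, then rank the
    # other books by cosine similarity to the first one. Purely
    # illustrative; Blurbalyzer's actual linguistics are not public.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    reviews = {
        "The Da Vinci Code": "religious mystery symbologist Louvre priory secret societies old Europe",
        "Michelangelo's Notebook": "art world mystery Louvre religious secrets old Europe hidden symbols",
        "Holy Blood, Holy Grail": "grail bloodline priory church history speculation Europe",
        "Weeknight Pasta": "easy recipes pasta dinner kitchen quick meals",
    }

    titles = list(reviews)
    vectors = TfidfVectorizer().fit_transform([reviews[t] for t in titles])
    scores = cosine_similarity(vectors[0:1], vectors).ravel()  # book 0 vs. all

    for title, score in sorted(zip(titles, scores), key=lambda p: -p[1])[1:]:
        print(f"{title}: {score:.2f}")  # the cookbook should land at the bottom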

So far so good? Right. But wait, there's more. Maybe something like Blurbalyzer can discover relevant information based on its linguistics approach. But is that what you really want? To be fair, other approaches to relevancy may actually return better results in certain contexts. At some point, the end-user always ends up making the final decision about what's most relevant. In the name of competitive productivity, it's up to the search experience to make end-users as productive as possible...helping them to find that proverbial straight line that's the shortest distance between two points. Enter clusters (and eventually, the aforementioned "surface").

In the world of search, clustering is a technique that keeps similar items in close proximity to each other. If, prior to searching for something, a search engine has an idea of what documents are similar to each other in terms of their content, it can "cluster" them. Once clustered, a hit on any one document in a cluster could very well turn up the others. But now comes the surface problem. Let's say, because of its content, a single document ends up being a part of 8 distinctly different clusters. Most people doing search wouldn't get to see those clusters. In other words, they cannot visualize the eight clusters and then pick which one is closest to the vein of information they're seeking.  So, regardless of what technique (linguistics or otherwise) is used to drive the formation of these clusters, there still exists the problem of surfacing that structure (one that has been wrapped around what is mostly unstructured data).
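
For readers who want to see the mechanics, here is a minimal clustering sketch in Python. It uses hard k-means, so each result lands in exactly one cluster; the overlapping case described above (one document in eight clusters) would call for a soft or fuzzy assignment, but the basic idea is the same. The documents, query, and library choice are mine, not Sun's:

    # Cluster the hits for an ambiguous query by the similarity of their
    # text. Illustrative only; Sun's actual technique isn't disclosed.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    results = [  # imagine these are hits for the query "jaguar"
        "jaguar the big cat of south american rainforests",
        "jaguar xj sports sedan road test and review",
        "habitat loss threatens the wild jaguar population",
        "used jaguar cars for sale dealer listings",
    ]

    X = TfidfVectorizer().fit_transform(results)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    for cluster in sorted(set(labels)):
        print(f"Cluster {cluster}:")
        for doc, label in zip(results, labels):
            if label == cluster:
                print("   ", doc)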

Unless, of course, you change the nature of the surface. For example, instead of a list of text, how about a three-dimensional translucent sphere (something similar to this crappy photograph that I took of one while it was being displayed on a projection screen):


In this photo, it doesn't look very 3D-esque. But trust me, on the screen, it had a great 3D feel and, if I recall correctly, the sphere could be "grabbed" and rotated.

More importantly, the sphere changes our notion of what the surface is. With this visual representation, we can see each of the clusters (imagine each bubble as an individual search result and the colors representing clusters). Then, imagine how, as the mouse passes over each bubble, something pops up, like a book or a CD cover. In relatively short order, with a few mouse-overs, the end-user could easily get a sense of what each cluster is about and then zero in on that cluster. It's sort of like being able to advance to the 20th page of Google's search results (and knowing in advance that the cluster you're looking for actually starts on the 20th page).
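
The real demo was an interactive, rotatable sphere. As a crude stand-in, here is how the clusters from the sketch above (reusing its X and labels) could be projected into three dimensions and drawn as color-coded bubbles; again, my illustration, not Sun's code:

    # Project the TF-IDF vectors down to three dimensions and scatter-plot
    # them, one bubble per result, colored by cluster. A static stand-in
    # for the rotatable sphere shown in the demo.
    import matplotlib.pyplot as plt
    from sklearn.decomposition import TruncatedSVD

    coords = TruncatedSVD(n_components=3, random_state=0).fit_transform(X)

    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    ax.scatter(coords[:, 0], coords[:, 1], coords[:, 2],
               c=labels, s=300, alpha=0.6)
    ax.set_title("Search results as cluster-colored bubbles")
    plt.show()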

Once you have an idea of what each bubble represents (from the mouse-overs), double-clicking on one of them could take you to the search result. Or, imagine encircling a cluster with a selection tool (like the ones that come with graphics programs) and double-clicking on it to yield Google- or Yahoo-like search results, with headlines and blurbs for just that cluster (blurbs based on the discoveries of the linguistics technology, not just simple text hits). Sun didn't show that particular feature, but it would be child's play for the person who coded this 3D sphere to come up with it.
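
Still working from that hypothetical sketch, the "select a cluster, see only its results" step really would be child's play:

    # Filter the hit list to the selected cluster's members before
    # rendering the familiar headline-and-blurb results page.
    def results_for_cluster(results, labels, selected):
        return [doc for doc, label in zip(results, labels) if label == selected]

    print(results_for_cluster(results, labels, selected=0))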

A few paragraphs back, my mention of the phrase "CD cover" was a deliberate plant. Scouring text and applying some linguistics technology almost sounds par for the course in terms of what search can do now and where it should be heading. The 3D sphere also seems like a natural evolutionary step, given where we've seen user interfaces in today's operating systems going. But text isn't the only content Sun can probe in a unique way. It thinks it has figured out how to probe music as well. In other words, instead of studying the linguistic quality of the content, it studies its audio properties. The net result is that the clusters shown in a sphere are clusters of similar music rather than similar bodies of text. And not just a cluster of classical music here, blues there (rock would be positioned relatively close to blues), and jazz on the other side. Within the general classical cluster might be the various sub-genres of classical music, such as classical guitar music (this was demonstrated).

Searching wasn't all that the technology was capable of. With audio as the context, we saw a demonstration of how, in connect-the-dots fashion, the small bubbles could be strung together to form a playlist. The technology shown was even capable of automatically plotting a path (a playlist) from a quiet song to one with more energy if, say, you wanted a playlist for your workout that started off mellow and worked up to something with more intensity.
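
Sun didn't describe how that path gets plotted. But assuming each track already carries a scalar "energy" score extracted from its audio, a crude version is easy to sketch; the tracks and scores below are made up:

    # Start at a mellow track, end at an energetic one, and order
    # everything whose energy falls in between so the playlist ramps up.
    tracks = {
        "Quiet Nocturne": 0.10,
        "Acoustic Morning": 0.25,
        "Midtempo Groove": 0.45,
        "Driving Rock": 0.70,
        "Full Sprint": 0.95,
    }

    def ramp_playlist(tracks, start, end):
        lo, hi = tracks[start], tracks[end]
        chosen = [t for t, energy in tracks.items() if lo <= energy <= hi]
        return sorted(chosen, key=tracks.get)

    print(ramp_playlist(tracks, "Quiet Nocturne", "Full Sprint"))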

Relevance, "surface," and overall user experience. After seeing the demonstrations of what Sun has in the works in its Labs, my next visit to a search engine was, well, dull and not terribly productive. Perhaps there's hope and room for improvement in search, after all.

Disclosure: In the spirit of media transparency, I want to disclose that in addition to my day job at ZDNet, I’m also a co-organizer of Mashup Camp, Mashup University, and Startup Camp. Yahoo, Google, and Sun, all of which are mentioned in this story, were sponsors of one or more of those events. For more information on my involvement with these and other events, see the special disclosure page that I’ve prepared and published here on ZDNet.


Talkback

22 comments
  • Side point, but not irrelevant...

    "although there are some studies (often commissioned by the providers of insourced solutions) that try to demonstrate how outsourcing to application service providers like Salesforce.com might cost more in the long run"

    That's why OSS CRM solutions are taking hold. They offer the best of both worlds: lower cost than commercial offerings such as salesforce, while letting your employees do their real jobs. One example is SugarCRM.
    Techboy_z
    • Disagree on two points

      1. The majority of the cost of insourcing is not in the licensing fees. It's in the cost of running the solution. A business can outsource salesforce automation to someone like Salesforce and never have to hire an IT guy to figure out how to make it work. Or update it (non-issues when outsourcing to an ASP). Over time, more and more of a business' app infrastructure can be outsourced in this fashion.

      2. SugarCRM is open source according to SugarCRM. There are many open source experts who disagree. Not only that, the supposed open source license under which Sugar releases its code today is not one that is going to get certified as open source by the OSI in its current state. If an attribution provision does get certified, my prediction is that the wording of it will be different from the wording currently found in the SugarCRM Public License.

      db
      dberlind
    • re: Side point, but not irrelevant...

      Disagree. Customer relationship management (CRM) covers methods and technologies used by companies to manage their relationships with clients. It is a CORPORATE level strategy, focusing on creating and maintaining relationships with customers.

      Information stored on existing customers (and potential customers) is analyzed and used to this end. Automated CRM processes are often used to generate automatic personalized marketing based on the customer information stored in the system. One example of a company that has taken strides forward to better itself for the customer is HP, and that was not done with open source.

      When it comes to open source, what information do you want to share with the world? Much of OSS CRM gives short shrift to privacy and ethical concerns. It's downright frightening what is written (if anything) in user agreements and licenses. It is definitely buyer beware.

      There are serious concerns when using SugarCRM. The number one being responsibility and ownership.

      Even though Oracle (which currently supplies technology and applications to over 90% of communications companies worldwide, and 17 of the top 20 most profitable communications companies run Oracle Applications) announced on 10/23/06 that it had entered into an agreement to acquire MetaSolv Software, it is only doing so to enable automated service fulfillment and possibly disable some of the competition.

      I make this point to illustrate how encouraging it is to know the product will still be backed by a single global vendor. A company that will ensure privacy and ethical concerns are addressed and resolved by providing sound user agreements and licenses. OSS CRM has its place but ...
      DesertOutlaw
      • Ethical concerns?

        I guess that knocks Microsoft CRM out of the ballpark!
        nomorems
  • Percentages?

    If you spend the 20% of the total work time you would normally use "searching" on "productive" work, you have increased productive work by 25%. I suppose if you hadn't used the term "respectively" you could have gotten by with the error--mistake!

    Are you a programmer?
    Captain America
  • Search Engines

    Never mind the big boys such as Google or Yahoo; why not try Every Click, where each time you search, 50% of the take goes to a charity of your choice? Check it out. http://www.everyclick.com/
    frankacne
  • Technical "solution

    "After all, when you think about quantum leaps in search, the expectation is that if any one can deliver such a "leap," then that anyone is probably Google. But Sun? If you told me before last week that Sun would be hatching some cool search technologies in its labs, I would have declared it bunk. But I would have been wrong. Dead wrong."

    Why in the world do you continue to think that Google is an innovator in search? Their last major innovation in that field was Page Rank, which is what? 8 years old? Somewhere around there? This is just part of the Google Delusion that so many people suffer from, the idea that Google is just filled with geniuses. If Google's search is so great, why is it that most of their results for technical queries now are duplicate archives of mailing lists, newsgroups, and vendor documentation that serves as keyword bait for AdSense sites? Ever wonder? Google has not done a single thing to significantly improve their search system since it first came out, their results have gone nowhere but downhill, and their index is increasingly filled with useless spam.

    If Google is so innovative, where are the patents? Regardless of your opinion on current patent law, the innovation level of a company can be directly measured by the number of patents they file. Microsoft and IBM are research and innovation leaders by that measure; Google is nowhere near them. Want to talk "innovation"? Take a look at Microsoft Research and what they do; it is a real shame that very little of what they do actually makes it into shipping products, except in a diluted or dumbed down form.

    Sun, on the other hand, while not being a flashy company, having the most incompetent marketing department in history, and having management that understands bytes and bits much better than dollars and cents, is a real engineering company. If there is a technical challenge, I'll give Sun better odds at solving it than any other major company out there.

    "So, quality search isn't just about which search giant can win over the most Internet users. To businesses with a lot of information workers, any technological advancements that can whittle that 25 percent figure down to 20, 15, 10 or even 5 percent means that respectively, those workers can be spending 5, 10, 15 or event 20 percent more of their time on tasks that contribute more directly (more directly than searching) to competitive advantage."

    I have two words for you: "document management." Companies that employ document management systems that tie directly to the applications they use (such as the old Wang systems, and more modern systems that law offices use that tie into Word) DO NOT HAVE THIS PROBLEM. Companies that just have a network share where a bazillion and one people are creating directories and dropping documents all over the place have this problem.

    Workers spend 25% of their time searching for the internal information they need because their employers do not feel like implementing proper document management, training users how to use it, and enforcing the rules.

    What you are looking for is a technical solution to a non-technical problem. In other words, what is going on is, "gee, our users are completely undisciplined and our systems are incapable of handling data. On top of that, users have scattered data all over the place, from their local hard drives on multiple machines, IM history logs, email files stored on their local machines, documents in network shares with no comprehensible directory structure, and so on. Well, either they can spend 25% of their time searching for this stuff, or we can attempt to give them some sort of automagical, semi-AI search system with automatic tagging, manual tagging, blah blah blah." The alternative is, "we can move to a more mainframe-like system, get the users oriented with process and procedure, make sure that process and procedure is usable by the users in a way that makes them not rebel against it, and get our data on our servers in a usable format." Why does the second one sound so much better to me?

    I admit, I am biased; I cut my teeth on Wangs and VAXes. I think that wikis are just about the dumbest way possible to store critical information. I cannot stand HTML (or *most* other XML/SGML systems) for content, as it makes not just the content itself but the format as well impossible to work with (don't believe me? Read the RSS spec and compare it to the old spec for "channels" that Microsoft pushed in IE4, which did the same thing more or less).
    Justin James
    • Argh!

      Accidentally hit "Submit" before I was done...

      "After all, when you think about quantum leaps in search, the expectation is that if any one can deliver such a "leap," then that anyone is probably Google. But Sun? If you told me before last week that Sun would be hatching some cool search technologies in its labs, I would have declared it bunk. But I would have been wrong. Dead wrong."

      Why in the world do you continue to think that Google is an innovator in search? Their last major innovation in that field was Page Rank, which is what? 8 years old? Somewhere around there? This is just part of the Google Delusion that so many people suffer from, the idea that Google is just filled with geniuses. If Google's search is so great, why is it that most of their results for technical queries now are duplicate archives of mailing lists, newsgroups, and vendor documentation that serves as keyword bait for AdSense sites? Ever wonder? Google has not done a single thing to significantly improve their search system since it first came out, their results have gone nowhere but downhill, and their index is increasingly filled with useless spam.

      If Google is so innovative, where are the patents? Regardless of your opinion on current patent law, the innovation level of a company can be directly measured by the number of patents they file. Microsoft and IBM are research and innovation leaders by that measure; Google is nowhere near them. Want to talk "innovation" Take a look at Microsoft Research and what they do; it is a real shame that very little of what they do actually makes it into shipping products, except in a diluted or dumbed down form.

      Sun, on the other hand, while not being a flashy company, having the most incompetant marketing department in history, and having management that understands bytes and bits much better than dollars and cents, is a real engineering company. If there is a technical challenge, I'll give Sun better odds at solving it than any other major company out there.

      "So, quality search isn't just about which search giant can win over the most Internet users. To businesses with a lot of information workers, any technological advancements that can whittle that 25 percent figure down to 20, 15, 10 or even 5 percent means that respectively, those workers can be spending 5, 10, 15 or event 20 percent more of their time on tasks that contribute more directly (more directly than searching) to competitive advantage."

      I have two words for you: "document management." Companies that employ document management systems that tie directly to the applications they use (such as the old Wang systems, and more modern systems that law offices use that tie into Word) DO NOT HAVE THIS PROBLEM. Companies that just have a network share where a bazillions and one people are creating directories and dropping documents all over the place have this problem.

      Workers spend 25% of their time searching for the internal information they need because their employers do not feel like implementing proper document management, training users how to use it, and enforcing the rules.

      What you are looking for is a technical solution to a non-technical problem. In others words, what is going on is, "gee, our users are completely undisciplined and our systems are incapable of handling data. On top of that, users have scattered data all over the place, from their local hard drives on multiple machines, IM history logs, email files stored on their local machines, documents in network shares with no comprehensible directory structure, and so on. Well, either they can spend 25% of their time searching for this stuff, or we can attempt to give them some sort of automagical, semi-AI using search system with automatic tagging, manually tagging, blah blah blah." The alternative is, "we can move to a more mainframe like system, get the users oriented with process and procedure, make sure that process and procedure is usable by the users in a way that makes them not rebel against it, and get our data on our servers in a usable format." Why does the second one sound so much better to me?

      I admit, I am biased; I cut my teeth on Wangs and VAXs. I think that wikis are just about the dumbest way possible to store critial information. I cannot stand HTML (or *most* other XML/SGML systems) for content, as it makes not just the content itself but the format as well impossible to work with (don't beleive me? Read the RSS spec and compare it to the old spec for "channels" that microsoft pushed in IE4, which did the same thing more or less).

      ***************************

      Here is where I continue from my mispost

      ***************************

      Law firms are a great example of this. So are a lot of things related to the legal industry. They have had these problems beat for a long, long time, because they needed to, and because they devoted the resources to IT. They have disciplined IT, and users who understand that using the IT systems properly is "make or break" for them. Maybe it is because the stakes are so high. I do not know.

      On a side note, I find it amazing that tech writers applaud outsourcing so many systems to third-party vendors, with the data hosted remotely... and never once ask "how in the world do I combine all of these separate vendors' systems into a unified view?" Let's say that you have your CRM with Salesforce.com, your email with Centerbeam, and use Google Docs for spreadsheets, and some 3rd party wiki company for "groupware" or "collaboration". How in the world do you expect to have 1 search box that you can enter a term in and get meaningful results from all of these sources? Could you explain that? Because I can't.

      I do note that the kinds of systems that you and many other tech pundits push to "help users" are the exact same systems that create this mess which you turn around and deplore. It's ridiculous. You have two choices. You can either let users run all over the place creating data all over the place, and then have them not be able to find it. Or you can stick closely to a tightly managed client/server model (or mainframe model) where users have few options, little choice, but their data can be located. Choose your poison.

      J.Ja
      Justin James
  • Cluster search engines have existed for years

    The idea of clustering search results is not new. Years ago, while in grad school, I started using Vivisimo. The search results were grouped in a pane, and the user didn't need to hover their mouse over a visually attractive (but otherwise opaque) cluster to see what results it contained.

    Currently, Vivisimo seems to have made its search tool available free only to non-profit organizations, government and schools.

    http://vivisimo.com/html/sitesearch-20060320

    A list of other meta-search engines:
    http://searchenginewatch.com/showPage.html?page=2156241
    VBoycheva
  • Search improvement

    Personally, I don't want my search engine making assumptions about which things should be grouped together. This approach will work *against* discoveries if my search is especially novel and groupings do not apply. I prefer a search engine that's unbiased. Instead, I would like to see the user interface improved. As much as I like Google, I'm tired of scrolling down the damn window to get to the 'next page' button. Everything should fit on one page. Two columns would be nice. I should also be able to shift-click or alt-click on any given word to automatically exclude it, forcing an instant refresh of the search results. Those are the sorts of improvements I'd like to see. Simple UI improvements.

    gary
    gdstark13
    • Yes - simple improvements are needed

      I wholly agree. Simple improvements that make the user experience faster and easier are where Google and other search sites should go.

      I have used vivisimo and like its clustering, but found that it was not nearly as comprehensive as Google's results. I'm not sure why Google has not implemented some form of clustering, at least as an option. I think it would help a lot with "deep" searches.

      When I am doing a "shallow" search (e.g., looking for a specific web site or simple information), I almost always find my result in the first 3 results. But, when I am "deeply" searching, I almost always go to the "Advanced search page" and set the number of hits per page to 99 and sometimes do more complex queries. I'm sure I'm not the only one. Gee, wouldn't it be neat to have on the first results page a button to say "I want gobs more results, please..." or simple ways to enhance a secondary search.

      I like Gary's ideas to have simple shortcuts to exclude words or "sites like this". I have found that the Google "Similar Pages" does not really help much. I also agree that there has not been much visible innovation by Google in the basic search arena and am frustrated by the "spam" hits by sites that are just useless links looking for clicks.
      Scrappy T
  • Still embryonic - next big step is Semantic Web

    I agree with the author that there are sources of genuine innovation in search other than Google. To me, it would be both defeatist and narrow-minded to think otherwise. It's almost axiomatic that the larger and more established a company becomes, the less innovative it gets. Innovation gives way to considerations like revenue growth, EPS and share price, i.e., how to maximize what you've got, not necessarily create something new.

    The next big step is the Semantic Web, which for search will essentially mean that you don't have to guess what words are likely to appear in a document that has the info you're looking for. Simplistically expressed, this means you'll be able to search on a topic more like an encyclopedia, using terms that mean something similar to the words in the document, such as you would find in a thesaurus.

    There will be room for some bridging technologies to add more structure to the mass of unstructured information on the web before the time when all documents contain the necessary Semantic tags. I developed a prototype for doing semantic organization and presentation of search results about 3 years ago. I applied it to the issue of organizing search advertising results so that way more advertisers could be represented on page 1 of the search results. I pitched to all the major players, analysts and several VCs, got lots of interest and positive comments, but ultimately didn't get funded in the environment of the day.

    Maybe your article will convince a few people that innovation can happen outside of Google and I can resurrect it.
    Kopete
  • Personal Relevance

    Great article. There is much more that can be done on the search side. For me, I see personal relevance as a key issue with current search engine models. Currently, the relevance is dictated by others. Other links to that site, other people that recommend the link, and the ones with the highest page rank. This is good for some types of topical searches.

    The big win is figuring out how to make search relevant to me. If I type in a search, the results need to be more targeted to what I find relevant. This would shift current advertising revenue models, but potentially for the better. If the results are targeted to the individual, the ads can be better suited for higher click-through.

    In short, it's a combination of what we have today and what we need. I don't think we'll have a revolution here but improvements in relevance will take search engines to the next level.

    We're working on a web application (www.convos.com) that allows you to have your relevant information in one place so you can interact with it. While much different from search, it aims to improve the relevance of and interaction with the results.
    trush_convos
    • RE: Personal Relevance

      Can you give an example of a search where you would want the results "personalized"? I'm not sure I understand.
      gdstark13
  • Liked the "Competitive Productivity" term

    The other way to improve "competitive productivity" (great term) is to remove some of the mindless work most information workers do - approvals they have to give that could be automated, for instance, or time spent helping a front-line worker calculate something that is complex but not impossible to automate. Hopefully there's a trackback on the article from mine.
    jamet123
  • IS SEARCH ENGINE GOVERNMENT?

    It could be that all of the information you type in for your website ends up in a government computer. The ISP links to Federal telephone lines. The Internet is computers on the phone system. Pulses of DC.
    BALTHOR
    • re: IS SEARCH ENGINE GOVERNMENT?

      Don't say "could be"; the question is where the information is stored. Not paranoia, it's a fact. Our government does not create anything new, especially money; it only collects information, spends, and regulates.

      Kudos for Sun.
      DesertOutlaw
  • Proof is when I can search without....

    Proof that the great search engine search is a bit closer to reality is when I can search:

    * with true Boolean expressions (not Google's guesstimates)

    * when I can search within restricted time spans (Google's is just a joke, but then they want you to have lots of results even if irrelevant.)

    * when I can search on a topic and get results without Amazon clogging up thousands of entries.

    * when I can search without eBay or others clogging up my results.

    * when I can search for information only and never get an advertisement in the search results.
    Irritated_User
  • Google is the bright shiny object, but beyond it there is "great search."

    When it comes to searching within an organization, i.e., enterprise search, there are several very strong companies, FAST & Autonomy most notably, that do a better job of indexing content of various types and formats regardless of location (databases, file servers, legacy apps, CMS, etc.), and have a stronger, more open Relevancy Model than, say, Google, provide Results Categorization/Clustering, Static or Dynamic Navigation and much more; all of which helps a user find answers, not just links to possible matches.
    michael.malone@...
  • Clusters...

    I did a search for Vivisimo and discovered Clusty. Pretty nifty looking.

    http://clusty.com/
    bghost