
February 13, 2012

NASA switches off last mainframe

NASA has shut down its last mainframe, the space agency's chief information officer announced over the weekend. Linda Cureton said on Saturday that some organisations still needed the transaction-oriented capabilities of Big Iron, which allows for huge numbers of inputs and outputs, but NASA was not one of them.

November 17, 2011

Google's chance to plug iTunes gap in Asia

The battle in the mobile space between Google and Apple has no doubt been intensifying, with various numbers emerging as each claims market leadership over the other. The popular consensus, for now, is that Android is seeing strong growth while Apple's iPhone rules the smartphone market.

April 29, 2009

How many telecommuters does it take to save a rainforest?

Some hard-core managers may still be against people working outside their direct oversight, but the economic climate and the push toward sustainable business practices are convincing more companies to consider the benefits of telecommuting. One of the technology vendors supporting this space, SonicWALL, has jumped on that opportunity, creating a calculator tool that businesses can use to run some numbers on whether their organization would benefit from telecommuting.

November 5, 2008

Blackbox, SWaP, and the z10

During last week's discussion of IBM's z10 numbers, one contributor claimed that it is both space and power efficient, so I suggested he compare the z10's numbers to those for Sun's Project Blackbox. Since I doubt he bothered, I've done it for all of us - and the results are appalling, absolutely appalling.

September 18, 2008

Palm posts Q1 FY09 financial results, slightly better than last quarter

Palm just posted its Q1 FY09 results on its investor relations site, and it continues to face challenges in the mobile space. TreoCentral has a post comparing the numbers with last quarter's: the loss was smaller and the revenue higher this quarter, so there is a slightly positive sign there. Palm is hosting a conference call on the results and will probably talk about the continued success of the Centro and the newly released Treo 800w, which should be coming soon to Verizon.

April 5, 2015 By CssCompany

The Game of the Goose

The Game of the Goose, or Goose game, is a board game with uncertain origins. The board consists of a track with consecutively numbered...

November 5, 2007

A simple picture of Web evolution

The picture above shows a simple abstraction of web evolution.

The traditional World Wide Web, also known as Web 1.0, is a Read-or-Write Web: authors of web pages write down what they want to share and then publish it online, and web readers view these pages and subjectively comprehend their meanings. Unless writers willingly release their contact information in their authored web pages, the link between writers and readers is generally disconnected on Web 1.0. By leaving public contact information, however, writers have to disclose their private identities (such as email addresses, phone numbers, or mailing addresses). In short, Web 1.0 connects people to a public, shared environment: the World Wide Web. But Web 1.0 essentially does not facilitate direct communication between web readers and writers.

The second stage of web evolution is Web 2.0. Though its definition is still vague, Web 2.0 is a Read/Write Web: not only writers but also readers can both read and write to the same web space. This advance allows friendly social communication among web users without obligatory disclosure of private identities, and hence significantly increases users' interest in participating. Ordinary web readers (who are not necessarily standard web authors at the same time) gain a handy way of voicing their viewpoints without disclosing who they are. The link between web readers and writers becomes generally connected, though many of the specific connections are still anonymous. Whether there is direct communication between web readers and writers by default is a fundamental distinction between Web 1.0 and Web 2.0. In short, Web 2.0 not only connects individual users to the Web, but also connects these individual users together. It fixes the previous disconnection between web readers and writers.

We don't know precisely what the very next stage of web evolution is at this moment.
However, many of us believe that the semantic web must be one of the future stages. Following the last two paradigms, an ideal semantic web is a Read/Write/Request Web. The fundamental change is still to the web space. A web space will no longer be a simple web page, as on Web 1.0; nor will it be a Web-2.0-style blog/wiki that facilitates only human communication. Every ideal semantic web space will become a little thinking space: it contains owner-approved, machine-processable semantics, and based on these semantics it can actively and proactively execute owner-specified requests by itself and communicate with other semantic web spaces. Through this augmentation, a semantic web space is simultaneously a living machine agent. We have named this type of semantic web space an Active Semantic Space (ASpace). (An introductory scientific article about ASpaces can be found here for advanced readers.) In short, the Semantic Web, when it is realized, will connect virtual representatives of the real people who use the World Wide Web, and thus will significantly facilitate the exploration of web resources.

A practical semantic web requires every web user to have a web space of their own. Though this requirement looks unusual at first glance, it is indeed fundamental. It is impossible to imagine that humans would still need to perform every request themselves on a semantic web; if there are no machine agents to help humans process the machine-processable data, why build this type of semantic web in the first place? Every semantic web space is a little agent, so every semantic web user must have a web space. The emergence of the Semantic Web will eventually eliminate the distinction between readers and writers on the Web. Every human web user will simultaneously be a reader, a writer, and a requester; or maybe we should rename them web participators.

In summary, Web 1.0 connects real people to the World Wide Web.
Web 2.0 connects the real people who use the World Wide Web. The future semantic web, however, will connect virtual representatives of the real people who use the World Wide Web. This is a simple story of web evolution. This article was originally posted at Thinking Space.

November 1, 2007

Web 2.0 Summit - reflections after a trans-Atlantic flight and a day off!

Last week's Web 2.0 Summit in San Francisco was pretty intense, all things considered. It's therefore lucky that this week is the Half Term school holiday in this particular corner of the UK, and peppered with days off to do various non-work things.

During the conference (sorry, 'summit') I managed to live-blog most of the sessions I attended, and the corpus can be found here. O'Reilly/CMP are also doing a great job of getting session videos up.

Now I've had time to reflect without the need to type and listen and keep an eye on the office, what were the trends and highlights for me?

I noticed two big switches since 2005, when I last attended this particular gathering. Firstly, although I didn't see much evidence of a credible alternative, there was far less of an assumption that Google AdWords were the business model of choice. And secondly, the lobby conversations just seemed much less desperate than last time, when everyone and everything was frenetically for sale.

The iPhone was everywhere. I saw lots of people using Apple's latest, but I don't think I saw anyone actually talking into the thing, which means that Nokia's phone-less (?) alternative may do well. We get iPhones in the UK in a couple of weeks, and Talis will be raffling one at our conference the week before that launch. Something tells me that my chances of winning that iPhone are about as high as those of Nokia sending me an N810.

There seemed less of an emphasis upon scheduled evening entertainment than previously. Richard MacManus comments on this, too. From my perspective it was a good thing, as it made my packed schedule of dinner engagements (and a trip to a real San Francisco home) so much easier to manage. In many ways, these (including one with Mr MacManus) were the highlight of the trip.

The main auditorium was a truly unpleasant place to spend time; way too crowded.
The overflow room upstairs was a far better bet, complete with comfy sofas, power, wifi (which you could also get downstairs, if your battery was up to the job), and easy access to food and drink. It would have been nice to be able to ask questions via a bi-directional video link to the sweatshop auditorium downstairs, though. A second display showing the whole stage would also have been good; the main monitor kept zooming in to provide detail on faces/slides etc, and it wasn't always focussed on the thing I considered important.

So what about the meat?

Well, in case you hadn't noticed, Facebook is going to be big. I don't just mean suggestions that Zuckerberg may be 'selling himself short' at a mere $15bn, or evidence that Facebook's platform is delivering profit for third-party developers. More than both of those, there was an underlying - often implicit - recognition that growth opportunities lie in pushing content and functionality off our individual websites and into the cloud. Although I've argued before that Facebook is a very long way from being open, its 'Platform' remains a compelling example of ways in which external content can be aggregated and consumed elsewhere. Imagine what would be possible in a more open ecosystem, an ecosystem of which Facebook could be a part. Were others (MySpace, anyone?) to seed such an ecosystem whilst Facebook remained off to one side, would the rate of fall in Facebook numbers equal or exceed their recent growth?

'Semantic' has arrived; the Metaweb/Radar Networks/Powerset pow wow with Tim O'Reilly (pictured) on the final afternoon was great, and was just beginning to go places when they ran out of time. More debate and analysis would have been nice, with (a lot) less demo. This was followed up by John Doerr recognising the whole space as a compelling investment opportunity, echoing trends that Brad Feld highlighted in his recent podcast with me.
I found Danny Hillis' explicit distancing of himself from the Semantic Web odd (Shelley just found it funny...); I'll admit that I've done a little of the same, but more to demonstrate that there is plenty that the Semantic Web's building blocks (RDF, GRDDL, etc) can do right now, without needing to await the arrival of The Semantic Web. We do need to find better ways to describe this space, though; 'Web 3.0' can be unnecessarily confrontational/epochal, and 'Semantic Web' carries way too much baggage...

Jonathan Zittrain had some interesting things to say, and they're not nearly as contrarian as they might at first have appeared.

Mary Meeker was good value, as always... although impossible to blog! I was surprised by the lack of reaction to her figures illustrating the fall in US growth relative to competitors to the east.

The Launch Pad, that gathering of exemplary startups, was hugely disappointing. I can't believe that was the cream of the crop.

Gene sequencing needs to be watched... very closely.

Real people don't think (quite) like geeks and venture capitalists! Craigslist, rejoice...

(Almost) everyone had a Platform, with some more black-hole-sucking-ish than others. It does appear, all too often, that the web is actually becoming less open than it has been of late. All these Platforms are sucking data and users and developers to themselves, and letting very little flow back out. It certainly fulfils short-term goals around eyeballs, advertisers, and the like. But it's bad for the web and, in the long term, it's got to be bad for (most of?) the guilty.

(Almost) everyone was recognising the power of intention/attention, and seeking ways to implicitly or explicitly harness both. Social and semantic graphs have something to say here.

Photograph © James Duncan Davidson/O'Reilly Media

October 31, 2007

The Semantic Web - is everyone confused?

The Economist. Tim O'Reilly. Nova Spivack. Danny Ayers. Read/Write Web's Alex Iskold. Kingsley Idehen. Brad Feld. Over the last few days all of them have been amongst those writing to clarify their understanding of the Semantic Web and where it's going.

Each piece is thoughtful, each piece is well worth a read, and each differs somewhat from the others in outlook as they delve into 'ontologies', 'classic approaches', 'machine intelligence', 'SPARQL', 'Turtle' and other geekiness [meant in the nicest possible way]. I do wonder, though, if all of them are bypassing some fundamental points as they seek to clarify their own perspectives to themselves, to one another, and to the world; points with which I suspect that each may actually agree.

First, I definitely don't think that a company, technology or approach can only be either 'Web 2.0' or 'Semantic Web'. Sure, some companies will see themselves (or pitch themselves) in one space or the other, but there's going to be an ever-increasing number that reside firmly in both. Ultimately, of course (and figures in the FT this week, suggesting that “The pull-back was particularly acute in Silicon Valley, as big Web 2.0 investors such as Benchmark Capital, Kleiner Perkins Caufield & Byers and Omidyar Networks, the private financing vehicle of Ebay founder Pierre Omidyar, cut back on their investments.”, might more logically be interpreted as supporting this argument), companies won't be Web 2.0 or Semantic Web. They will be companies that solve a particular set of problems for a particular set of audiences. Some of the tools in the toolbox they use to do this will be Web 2.0-ish, some will be Semantic Web-ish, some will be both, and some will be neither. Those things that currently differentiate us - and to which we apply labels in order to reinforce the differentiation - will become mainstream, run of the mill, mundane, and simply expected. That's progress, and it's a good thing. Web 2.0 won't go away.
The Semantic Web won't go away. Shouting about either might, and it doesn't have to mean that their importance has diminished.

Second, 'collective intelligence' applies equally to both. Tim O'Reilly's absolutely right that it's been a key differentiator of many Web 2.0 darlings: “By contrast, I've argued that one of the core attributes of 'web 2.0' (another ambiguous and widely misused term) is 'collective intelligence.' That is, the application is able to draw meaning and utility from data provided by the activity of its users, usually large numbers of users performing a very similar activity. So, for example, collaborative filtering applications like Amazon's 'people who bought this item also bought' or Last.fm's music recommendations use specialized algorithms to match users with each other on the basis of their purchases or listening habits. There are many other examples: digg users voting up stories, or wikipedia's crowdsourced encyclopedia and news stories.”

It's also front and centre in Semantic Web work, though - for example, work from ourselves, Radar Networks and others. See this white paper [PDF] for one, and watch here and here for public sight of internal developments... soon. The connections that RDF makes so manifest are a perfect way to express, traverse, and mine the habits, behaviours and desires of the collective.

Third, 'a formal ontology' is not a requirement, and nor is pushing structure in the face of the user. Tim makes a good point here: “The Semantic Web is a bit of a slog, with a lot of work required to build enough data for the applications to become useful. Web 2.0 applications often do a half-assed job of tackling the same problem, but because they harness self-interest, they typically gather much more data. And then solve for their deficiencies with statistics or other advantages of scale.”

I'm not sure, though, that SemWeb/Web 2.0 is the dichotomy here.
Rather, it's a split between purist, all-encompassing, and hugely flexible on the one hand and pragmatic and 'good enough' on the other. I would agree that stereotype often places Semantic Web developers on one side of that divide and Web 2.0 startups on the other. The technology is not the point there, though, so much as the mindset. Believe me, we can do some great stuff to harness self-interest, gather much more data, and solve the deficiencies with statistics and other advantages of scale in a Semantic Web-ey Platform... :-)

“But I predict that we'll soon see a second wave of social networking sites, or upgrades to existing ones, that provide for the encoding of additional nuance. In addition, there will be specialized sites -- take Geni, for example, which encodes genealogy -- that will provide additional information about the relationships between people. Rather than there being a single specification capturing all the information about relationships between people, there will be many overlapping (and gapping) applications, and an opportunity for someone to aggregate the available information into something more meaningful.”

Too right, Tim. But I'd definitely suggest that those building the second wave should be talking to Talis, to Radar Networks, to Metaweb and to some of the other proponents of a new and far more Web 2.0-inspired Semantic Web paradigm. There are way too many synergies there to ignore...

Dan Brickley's comments in response to one aspect of Danny's argument are also interesting: “Let me clear something up. Danny mentions a discussion with Tim O’Reilly about SemWeb themes. Much as I generally agree with Danny, I’m reaching for a ten-foot bargepole on this one point: 'While Facebook may have achieved pretty major adoption for their approach, it’s only very marginally useful because of their overly simplistic treatment of relationships.'
Facebook, despite the trivia, the endless wars between the ninja zombies and the pirate vampires; despite being centralised, despite [insert grumble], is massively useful. Proof of that pudding: it is massively used. 'Marginal' doesn’t come into it.”

Too true. I've complained about Facebook, too [for example here and here]. But I use it, and millions of others use it. And it serves a purpose. That doesn't mean it can't be better.

Turning, finally, to Alex's post: “The first problem is that RDF and OWL are complicated. Even for scientists and mathematicians these graph-based languages take time to learn and for less-technical people they are nearly impossible to understand. Because the designers were shooting for flexibility and completeness, the end result are documents that are confusing, verbose and difficult to analyze.”

Well, yes and no. That's what tools are for. And in a large number of cases the RDF may actually be auto-generated as part of some process of aggregation or value addition of which the data creator or manager need have no explicit awareness. The RDF may very well be an aggregation of tiny snippets of data from large numbers of transactions; the interaction of a single user with a single resource doesn't have to result in a whole RDF document of its own. More on that later.

And, also from Alex: “Going back to John Markoff's example of a computer booking a perfect vacation, one can't help but think of a travel agency. In the good old days, you would go to the same agent over and over again. Why? Because just like your friends, your doctor, your teacher, the travel agent needs to know you personally to be able to serve you better. The travel agent remembers that you've been to Prague and Paris, which is why he offers you a trip to Rome. The travel agent remembers that you're a vegetarian and orders the pasta meal for you on your flight. Over time people learn and memorize facts about life and each other.
Until machines can do the same, knowledge of semantics, limited or full, is not going to be enough to replace humans.”

Exactly. And that's where network effects, collective intelligence, behavioural observation and all the rest kick in. The knowledge comes from observation of an awful lot of behaviour, not from having the traveller fill in some long-winded and tedious form detailing an RDF graph representation of their travel preferences for all situations. Context matters. I, for example, want a window seat on short-haul flights and an aisle seat on long-haul flights; it's not a simple preference one way or the other. I don't have a preferred airport to depart from, as so many other factors come into play. I'll go to a more distant departure airport for a better departure or travel time, for example. I won't always travel with the airlines I've got frequent-flier cards for... but they don't have to be cheapest before I can or will. It's more complex than that. Current systems don't understand.

“Perhaps the worst challenge facing the semantic web is the business challenge. What is the consumer value? How is it to be marketed? What business can be built on top of the semantic web that can not exist today? Clearly the example of instant travel match is not a 'wow.' It's primitive and, in a way, uninteresting because many of us are already quite adept at being our own travel agent using existing tools. But assuming that there are problems that can be solved faster, there is still a question of specific end user utility.”

Talis. Radar Networks. Joost. Metaweb. Garlik. Need I go on? (I can... :-) )

“The way the semantic web is presented today makes it very difficult to market. The 'we are a semantic web company' slogan is likely to raise eyebrows and questions. RDF and OWL clearly need to be kept under the hood. So the challenge is to formulate the end user value in ways that will resonate with people.”

Absolutely right! SWEO is part of the answer.
Companies like ours getting out and showing what can be done, and why it's valuable, is crucial too... and we're getting there.

And to answer my initial question: no, I don't think everyone is confused by or about the Semantic Web. We do, though, have a lot of different niche views of value (or lack thereof) clamouring for attention. These overlapping - and not necessarily incorrect - perspectives certainly could appear to be a result of confusion, if viewed from the outside. Language is a complicated thing, and these are complex ideas. Describing one with the other requires a number of iterations to arrive at clarity, but we're getting there.

There's a lot more to say, but this post has now gone on long enough (especially as I initially meant simply to point you at some interesting blog posts...).
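Alex's complaint about RDF verbosity is easier to weigh with a concrete fragment in view. As a purely illustrative sketch (the URIs and names below are hypothetical, not drawn from any of the posts discussed), a couple of triples in Turtle look like this:

```turtle
@prefix ex:   <http://example.org/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# Two triples describing one hypothetical user: a name and a social connection.
ex:alice foaf:name  "Alice" ;
         foaf:knows ex:bob .
```

The data model itself is compact like this; much of the verbosity being complained about tends to come from heavyweight OWL ontologies and the RDF/XML serialisation, which fits the argument above that tooling and auto-generation can keep RDF under the hood.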

May 10, 2007

Is no news good news for Red Hat?

Red Hat is no longer the only big dog in the Linux space. It faces a lot of competition. But at the heart of its market, the enterprise space, what I might call the IBM space, Red Hat remains strong. Its relationship with IBM seems strong, and IBM itself has mainly kept quiet this last year, preferring to let its numbers do the talking. So, does Red Hat need to make headlines in order to stay on top?

April 29, 2007

Is there any "wow" left in the Tablet PC?

Rob Bushway wrote a thoughtful piece the other day pondering where the "wow" has gone in the Tablet PC segment. It's a well-balanced look at what he sees as a mature product segment lacking in significant new innovations that are likely to drive large numbers of new adopters to the platform. I've been using Tablet PCs for about as long as Rob – I first got my hands on the original Toshiba entry into the space (the Portege 3500) back in 2003, and I remember the "wow" experience it delivered.

March 31, 2005

At last, SOA gets some credit

It's always good to get some hard numbers around the results of a technology implementation, and such measurements are still extremely rare in the SOA space. At last, there is a big tangible ROI story to tell.

January 9, 2002

Time Warner Telecom launches NASA contract

Time Warner Telecom on Wednesday announced it has won a five-year contract to deliver local telecom services to NASA's Kennedy Space Center in Florida, according to a Time Warner Telecom statement. The broadband provider says it is extending its network to provide the service through a private, secure connection. As part of the deal, Time Warner Telecom will provide more than 20,000 phone numbers, 21 lines located throughout Central Florida that are used to measure sound and seismic activity during launches, as well as other services and equipment. Financial terms were not disclosed. --Lara Wright, Special to ZDNet News

March 20, 2001

Break those bottlenecks: Performance test your network with Iperf

Getting a grip on network performance numbers is one of the most difficult tasks for network managers. Unlike many information technology statistics such as disk space utilization and available memory, network performance is a very subjective issue--what's slow to you might be a zippy network to someone else.
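For context on the how-to teased above: the basic Iperf measurement takes just two commands, one per host. This is a minimal sketch using classic iperf (version 2) flags; `server.example.com` is a placeholder for your own receiving host.

```shell
# On the receiving host: start iperf as a server (TCP, port 5001 by default).
iperf -s

# On the sending host: run a 10-second TCP throughput test against it.
iperf -c server.example.com -t 10

# UDP variant: offer a fixed 10 Mbit/s load; the report then also includes
# packet loss and jitter, which helps separate capacity problems from
# congestion or queueing ones.
iperf -c server.example.com -u -b 10M
```

Because the tool measures end-to-end between two specific hosts, running it from several vantage points is what turns the subjective "it feels slow" into comparable numbers.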

September 5, 2000

Do you live in the Internet's Rust Belt?

It's being called the Internet's rust belt, that space between the old economy and the new economy, where jobs grow old, then disappear. Today, it's occupied by travel agents, bank tellers and bookstore clerks, whose ranks are being reduced in increasing numbers by their automated Internet substitutes.

November 3, 1998

Election Day Web traffic no shooting Starr

Predictions that Election Day would drive blockbuster traffic to news Web sites didn't quite pan out Tuesday night and Wednesday morning. Officials at several of the major news sites said that, while traffic exceeded the average weeknight's draw, Netizens didn't go online Tuesday night in anywhere near the numbers they did for other recent blockbuster events -- such as the release of the Starr report and John Glenn's return to space.

