
Future Net: Six technologies to make a bulletproof Web

The Internet didn't just magically appear, and it won't just magically get better. Whether it's creating a simple appliance to dial into the Net, or distributing memory in unseen places to speed up content delivery, a network of networks is a living, breathing thing. And it is given life by living, breathing beings -- human engineers, programmers and technologists. Here are six technologies that will transform the Net in the next four years -- and the people who are helping speed their arrival.

  • Are you who you say you are?

When Victor Chang was working on Apple Computer's Open Collaboration Environment in 1991 and 1992, one of the things he needed was a way to enable participants to identify themselves in an online session without giving anyone the opportunity to pose as somebody else.

In solving that problem, he adopted RSA Data Security's approach to establishing digital identities as a way to authenticate collaborators. The basic method was laid down by Whitfield Diffie and Martin Hellman at Stanford University in 1975-78, when they proposed matching a public key, assigned to an individual and published openly, with a private key held in strict confidence. A message locked with the public key could be unlocked only by the holder of the matching private key -- that is, only at its intended destination.
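
To make the mechanism concrete, here is a minimal sketch of public key encryption using the modern Python cryptography package rather than RSA's Bsafe toolkit; the key size and padding are illustrative choices, not details from Chang's work.

```python
# A minimal sketch of public key encryption: anyone may lock a message with
# the published public key, but only the holder of the private key can
# unlock it. Uses the modern Python "cryptography" package, not Bsafe.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The recipient generates a key pair and publishes only the public half.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# A sender locks the message with the recipient's public key...
ciphertext = public_key.encrypt(b"authorise payment of $100", oaep)

# ...and only the matching private key, at the intended destination,
# can recover it.
assert private_key.decrypt(ciphertext, oaep) == b"authorise payment of $100"
```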

RSA was an early implementer of such key encryption, relying on a certificate server -- a trusted service that vouches for the binding between a public key and its owner -- to verify the identity of a message recipient. Chang became conversant in its uses as he worked on the Apple project. Two years later, RSA President Jim Bidzos hired him to supervise RSA's staff of six engineers.

Chang and his team undertook the work that would give the Internet several of the tools needed to implement credit-card orders and financial transactions, and to secure exchanges of information. The RSA tool kit, Bsafe, gave application developers a means of implementing RSA public key encryption in their systems. Among the users of the tool kit were Microsoft and Netscape Communications, which built a certificate recognition capability into their respective Internet Explorer and Navigator browsers.

Among other things, the tool kit implemented RC4, a stream cipher used within Secure Sockets Layer (SSL), the security protocol developed by Netscape. SSL used the RSA-based authentication method of recognising a party's digital identity, and RC4 to encrypt and decrypt the accompanying transaction or communication. SSL has grown to become the leading security protocol of the Net. Adopted by the Internet Engineering Task Force as a core technology, it has been merged with other protocols and authentication methods into Transport Layer Security.
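
The descendants of that SSL work are visible in any modern language runtime. Here is a brief sketch of the certificate check and encrypted channel using Python's standard ssl module; the host name is merely illustrative.

```python
# A sketch of SSL's standardised successor, TLS, via Python's standard
# library. The host name is illustrative; any HTTPS server would do.
import socket
import ssl

context = ssl.create_default_context()  # loads the trusted root certificates

with socket.create_connection(("www.example.com", 443)) as raw_sock:
    # The handshake: the server presents its certificate, the client checks
    # it against the trust store, and only then does application data flow
    # over the encrypted channel.
    with context.wrap_socket(raw_sock, server_hostname="www.example.com") as tls:
        print(tls.version())                  # e.g. 'TLSv1.3'
        print(tls.getpeercert()["subject"])   # the verified identity
```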

Chang and his development team, which now numbers 60 engineers, "indirectly created the framework for people to conduct e-commerce over the Internet," he says.

- Charles Babcock

  • Crashes unacceptable

When Steven Weinstein was general manager at game manufacturer Spectrum HoloByte, his job was to ship games that focused on destruction. Today, he's one of the people in charge of building and shaping the future of devices and appliances that will connect to the Internet. And he's still having fun. He's senior vice president of engineering at Liberate Technologies, a company born of Oracle founder Larry Ellison's idea that the network -- not the appliances that attach to it -- will be the source of most computing intelligence.

Liberate, formerly Network Computer, is not really in the business of creating those appliances; rather, it provides the underlying software and servers that will let cellular phones, pagers, cable set-top boxes and other devices attach easily to the Net and execute applications smoothly on it.

In Weinstein's view, the easiest way to make these appliances ubiquitous is to concentrate on making standards universal, something he and Liberate are actively involved with.

Liberate is working on making its own software as bullet-proof as possible. "No one ever thinks of [Sony's] PlayStations or Nintendos crashing. They just work. We're not designing computerware; we're designing consumer electronics, and they need to be perfect," Weinstein says. "To me, crashes are unacceptable."

They're also unacceptable to the users of these appliances.

"About 65 percent of the US is connected to cable. Over the next few years, those same 65 percent of households are going to have email and impulse purchasing using the set-top box," Weinstein says. "The way we see it, people don't know how [cellular] phones work; they just like the service. People won't know how their set-top boxes work or what's inside. The technology is being sublimated by content."

- Karen J. Bannan

  • Is smarter cache a better mousetrap?

In 1994, when Solom Heddaya was an associate professor in Boston University's computer science department, the Internet was just starting its breathtaking ascent -- with some gurus, such as Ethernet inventor Robert Metcalfe, declaring that the Internet would not be able to sustain its growth without crashing.

"At the time, no one expected the Internet to hold up," Heddaya says.

Intrigued by this engineering challenge, Heddaya and some of his graduate students started a research project to answer the question: What specific technical problems will the Internet encounter?

The most serious shortcoming they found with the Internet's architecture was that in order to keep up with demand, a Web site's infrastructure would have to increase in proportion to the growth of the Net itself.

In addition, the Internet presented the problem of "flash crowds" -- an unpredictable, spontaneous convergence of users that could suddenly overwhelm a site. "You have 100 million potential clients that could come to your site," Heddaya says. "The whole population of the world might suddenly be interested in the St. Louis Cardinals' Web site."

Heddaya and his BU crew concluded that the Internet needed some help. Their solution: a new infrastructure based on widespread, coordinated pockets of memory, called caches. Instead of simple caching, in which servers operate in isolation, the distributed caching system Heddaya developed with BU researcher David Yates lets cache servers communicate with each other to maximise their efficiency.

"In order for the cache to help the network in any significant way, it was going to require placing caches all over the network," Heddaya says.

So, what was next? Naturally, to make money off the idea. In 1997, Heddaya, David Yates and Ian Yates -- David's brother -- founded InfoLibria to develop and sell products based on the distributed caching concept.

InfoLibria is now a leading start-up in the caching market, which will grow from about $300m (£186m) this year to a multibillion-dollar segment within four years, says consultancy Collaborative Research. The company sells its system to Internet service providers and enterprise network operators on the merits of providing more reliable service to their users -- and, at the same time, improving the resiliency of the Internet, network by network.

"The infrastructure of the Internet will look significantly different in two years," Heddaya says. "The network all of a sudden isn't just a point-to-point connectivity network, but a storage network that is ensuring the quality of content delivery."

- Todd Spangler

  • Addressing the Internet head-on

Stephen Deering speaks of the Internet Protocol layer of the modern model for public exchange of data over communications networks as if it were a faceted jewel, mystical and yet matter-of-fact. "The IP [Internet Protocol] layer is the most simple layer in many ways. But it's a never-ending source of work and opportunity and challenge," he says.

In his eyes, the telephone network as we know it is essentially dead. What is coming alive is the Internet.

His invention even makes the Internet a competitor to broadcasting networks, not just phone networks. He devised a way to allow one stream of data to serve an almost unlimited number of users on a network. Such "multicasting" has been used to transmit everything from Rolling Stones concerts to news events.

With that contribution under his belt, Deering could be considered one of the Internet's elusive gods. But he passes as a regular guy on the campus of Cisco Systems, the dominant supplier of routers and other gear for connecting companies and service providers to the Internet.

"There are no gods of the Internet, only sceptical engineers," he says.

But as a technical leader -- one of the few workers at Cisco who isn't a slave to a product line -- Deering gets to look at the long-term issues of the Internet. He consults with the many strategists inside Cisco, focusing his attention on one of the most vital pieces of the Internet protocol: the header.

"The idea of multicasting is pretty simple and special," Deering says. IP multicasting enables IP networks to support one-to-many communication along with regular one-to-one communication. "Imagine how limiting it would be if all human communication were only one-to-one; i.e., there were no radio or TV stations, and you could never speak to more than one person at a time. Well, that's what a network without multicast is like," Deering explains.

But Deering didn't invent the notion of multicasting. The idea and technology have been around for a long time in local area networks, like Ethernet and Token Ring. "My contribution was to define Internet multicasting -- that is, a multicast service that spans more than just a single local area network -- and to invent routing protocols capable of supporting multicast delivery in a general-topology, store-and-forward network like the Internet," he says.
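
Applications reach that service today through ordinary sockets. What follows is a minimal Python sketch -- the group address and port are illustrative -- of one datagram reaching every member of a group.

```python
# A minimal sketch of IP multicast through standard sockets. The group
# address and port are illustrative; 224.0.0.0/4 is the IPv4 multicast
# range that Deering's work defined.
import socket
import struct

GROUP, PORT = "224.1.1.1", 5007

# Receiver: bind to the port and ask the kernel to join the group.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
recv.bind(("", PORT))
membership = struct.pack("4s4s", socket.inet_aton(GROUP),
                         socket.inet_aton("0.0.0.0"))
recv.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

# Sender: one datagram, delivered to every member of the group --
# one-to-many, with no per-recipient copies made at the source.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay local
send.sendto(b"one stream, many listeners", (GROUP, PORT))

print(recv.recv(1024))  # every joined receiver sees the same packet
```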

Deering says his initial work took lots of thought. He says he first studied the literature on routing protocols and on earlier proposals for doing store-and-forward broadcast. "Then I thought hard about the problem, came up with a few good new ideas and a few bad new ideas, and refined it all down to something that would work. You know -- basic engineering."

Deering made a prototype implementation in Unix and deployed it first in a small test bed at Stanford, then tested it among six US research sites and finally let it loose in "the MBone," a virtual network overlaid on the public Internet. However, Deering warns: "It's still too early to declare it perfect."

It goes back to Deering's career-long expectations. Before Cisco, Deering worked at Xerox on technology projects like mobile networking. His manager at the time would ask him what his career plans were.

"I never had an answer, but for some reason problems don't disappear," he says. "I've never worried about running out of problems to solve."

- Kathleen Cholewka

  • Directing Internet traffic

Getting data to traverse the labyrinth of routers and servers that make up the Internet has never really been a problem. Architects of the Internet painstakingly designed transmission protocols and hardware to automatically route information from source to destination, even if that means transmitting the data multiple times.

Completing that trek in a timely, predictable manner, however, is another story.

As the Internet increasingly becomes a vehicle for large-scale commerce where sounds and sights must be delivered immediately and transactions executed instantly, engineers and scientists have focused on overriding the Net's automated traffic-forwarding attributes, which too often direct an information packet down congested paths.

Borrowing the concept of virtual circuits from Asynchronous Transfer Mode (ATM) technology, Internet engineers are refining Multiprotocol Label Switching (MPLS), a pending routing standard designed to provide service providers with a mechanism for determining the fastest -- if not the shortest -- route across a network. Unlike ATM, which requires information to be reformatted, MPLS is designed to transmit data in its original format.

MPLS works by assigning a label, or tag, to a standard Internet Protocol packet. The label acts as a shorthand system, providing network hardware with routing instructions without requiring it to complete the time-consuming task of examining every packet in a communications session.
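
A toy model of that shorthand, sketched in Python with invented labels and hops, shows the trade: each hop performs one exact-match lookup on a short label and swaps it for the next hop's label, rather than re-examining the full packet header.

```python
# A toy model of label switching (labels and hops invented for
# illustration). A conventional router does a longest-prefix match on the
# destination address at every hop; a label-switching router does a single
# exact-match lookup on a short label, then swaps in the next hop's label.
label_table = {
    # incoming label: (outgoing label, next hop)
    17: (42, "router-B"),
    42: (99, "router-C"),
    99: (None, "egress"),  # None means pop the label: deliver as plain IP
}

def label_switch(label, payload):
    """Forward one labelled packet along its label-switched path."""
    while label is not None:
        out_label, next_hop = label_table[label]  # one table lookup per hop
        print(f"label {label} -> {next_hop} (new label {out_label})")
        label = out_label
    return payload  # arrives at the egress router as an ordinary IP packet

label_switch(17, b"ip packet bytes")
```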

"MPLS is going to be very important in large backbone networks," says Paul Doolan, chief technology officer at start-up Ennovate Networks. "In those networks it will be used mainly for traffic engineering, which may help the operators run them more efficiently, reliably and cost-effectively."

Traffic engineering and MPLS are key components of the Internet's transformation into a reliable and robust medium for conducting business, Doolan adds. Without the ability to optimise delivery routes, network operators would be unable to guarantee the timely delivery of audio and video or transaction information.

As the lead author of the Tag Distribution Protocol developed at Cisco Systems, where he worked alongside world-class engineers, Doolan helped create the routing technology that would become the foundation of the MPLS spec. Seeing his work through the standards process, he also co-authored the MPLS framework document in the MPLS working group of the Internet Engineering Task Force. MPLS will also figure prominently in the delivery of next-generation services, such as virtual private networks, says Doolan, who is applying some of the principles of MPLS to streamlining the delivery of traffic over all-optical links in a network infrastructure. "You may see MPLS as the foundation for doing VPNs or provisioning voice trunks," he adds.

MPLS is a work in progress. Doolan anticipates the technology will be stretched and moulded to various applications in the next few years. "I think the inventiveness of the community involved with developing networking technology is expanding in a very nonlinear way," Doolan says. "The network connects all these bright people, so it's easier for them to collaborate to improve it."

- Joe McGarvey

  • Fashioning the next-generation Internet

Some engineers do not believe the adoption of the next generation of Internet Protocol is a prerequisite for the Internet's emergence as an ironclad environment for electronic commerce.

Robert Hinden is not part of that camp. "I think IPv6 [Internet Protocol version 6] is really critical for the Internet's development," says Hinden, who is spearheading the IP-related research efforts of mobile computing giant Nokia. "I don't see how it can grow and scale to meet the requirements of new devices without having the capabilities that IPv6 brings to the table."

Hinden's advocacy for moving service providers and companies on the Internet from IPv4, the current version of the packet transmission protocol, to IPv6 is to be expected. In addition to serving as co-chairman of the IP Next Generation working group of the Internet Engineering Task Force (IETF), Hinden helped to design the new technology.

But his involvement in the development of the Internet goes back much further than IPv6. In the early days of the ARPAnet, the Defense Department-sponsored predecessor to the Internet, Hinden worked with a group that developed one of the first Transmission Control Protocol/IP applications and created the first operational router.

The major reason IPv4 must be replaced, Hinden says, is the much-publicised fact that it is running out of unique IP addresses. Limited to just a few billion addresses, IPv4, which uses a 32-bit address system, has forced service providers and enterprises to resort to artificial addressing schemes to ensure that enough IP addresses are available to go around. The primary mechanisms for stretching IPv4 are network address translation (NAT) devices, which create unofficial IP addresses that can be used inside an enterprise. As gateways sitting between the Internet and its users, NATs, in Hinden's view, undermine security. "You don't want security in a gateway," he says. "You want to know you have a secure link all the way to the user. You can't do that unless you have global addresses."
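
A schematic of what such a gateway does -- the addresses and ports below are invented -- makes the objection plain: inside hosts share one public address and are never reachable by a global address of their own.

```python
# A schematic of network address translation (addresses and ports are
# invented). Inside hosts share one public address; the gateway rewrites
# each outgoing packet and remembers the mapping.
PUBLIC_IP = "203.0.113.7"

nat_table = {}      # (inside_ip, inside_port) -> public port
next_port = 40000

def outbound(inside_ip, inside_port):
    """Rewrite an outgoing packet's source to the shared public address."""
    global next_port
    key = (inside_ip, inside_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return PUBLIC_IP, nat_table[key]  # what the outside world actually sees

# Two private hosts appear to the Internet as one public address:
print(outbound("10.0.0.5", 1025))  # ('203.0.113.7', 40000)
print(outbound("10.0.0.9", 1025))  # ('203.0.113.7', 40001)
```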

In addition to the vastly larger address space -- 128-bit addressing yields roughly 3.4 x 10^38 possible addresses, against the 4.3 billion of 32-bit IPv4 -- the protocol is fortified with automatic configuration technology that takes most of the hassle out of manually assigning a unique address to every device on the Internet.
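
The scale of the change is easy to check with Python's standard ipaddress module; the sample address below comes from the 2001:db8::/32 range reserved for documentation.

```python
# Comparing the two address spaces with Python's standard ipaddress module.
import ipaddress

print(2 ** 32)   # IPv4: 4,294,967,296 possible addresses
print(2 ** 128)  # IPv6: about 3.4e38 -- address scarcity effectively ends

addr = ipaddress.ip_address("2001:db8::1")  # documentation-range address
print(addr.version)   # 6
print(addr.exploded)  # 2001:0db8:0000:0000:0000:0000:0000:0001

# A single /64 subnet -- the size typically handed to one LAN for stateless
# autoconfiguration -- already holds 2**64 addresses on its own:
print(ipaddress.ip_network("2001:db8::/64").num_addresses)
```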

The move to IPv6, Hinden says, will also ensure that countries outside the US receive a more equitable distribution of Net addresses. "If we allow the Internet to stagnate and create separate islands of communication, the great promise of global communications will be lost," he says.

- Joe McGarvey

