IBM and the challenge of systems design

Summary: In the past, systems could be designed for a single purpose. Although every system design is a set of compromises, it was possible to partially optimize a design for a specific purpose. Today, system architects are forced to build "do-everything-well" machines. IBM's announcements reflect its long history in system design.

Computationally intensive tasks

Systems must offer either a single very high-performance processor or a large number of conventional processors. Monolithic tasks are best served by a single, very powerful computing resource, while tasks that can be decomposed into parallel subtasks can be spread across a number of lower-cost, lower-performance processors. Although this compute resource is the heart of a system design, it may no longer be the most important feature.
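The second approach can be sketched in a few lines. This is an illustrative example only, assuming a task that decomposes cleanly into independent subtasks (here, summing chunks of a list); the function names are invented for the sketch.

```python
from concurrent.futures import ThreadPoolExecutor

def sum_chunk(chunk):
    # Each worker handles one independent subtask.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Decompose the monolithic task into roughly equal subtasks.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # A thread pool keeps the sketch portable; CPU-bound work in CPython
    # would use ProcessPoolExecutor to sidestep the global interpreter lock.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum_chunk, chunks))
```

The decomposed version produces the same answer as the monolithic `sum(data)`; the win, on suitable hardware, is that the subtasks run on cheaper processors at the same time.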

The system must include a very sophisticated caching/memory mechanism so that the processors can execute applications efficiently without being forced to wait for code or data to be delivered from the system's main memory. This is even more important for today's highly pipelined architectures, which include several cores, each capable of executing several processing threads simultaneously. If any one of these threads can't access code or data when needed, today's operating systems will save its state, flush the cache and start working on another task. This can drag a multi-gigahertz processor down to a quarter or an eighth of its rated processing speed. System designers have been forced to create what amount to "memory computers" to keep the processing computers fed and happy. These "memory computers" might offer compression and deduplication capabilities to make the best use of available memory technology.
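The deduplication side of such a "memory computer" can be sketched as content hashing of fixed-size pages: identical pages are stored once, and each stored page is compressed. This is a toy illustration of the general technique, not any vendor's implementation; the page size and function names are assumptions.

```python
import hashlib
import zlib

PAGE_SIZE = 4096  # assumed page granularity for the sketch

def dedup_and_compress(buffer):
    """Store each distinct page once, compressed; duplicates become references.

    Returns (store, refs): store maps a page's hash to its compressed bytes;
    refs lists the hash of each logical page in order.
    """
    store, refs = {}, []
    for off in range(0, len(buffer), PAGE_SIZE):
        page = buffer[off:off + PAGE_SIZE]
        digest = hashlib.sha256(page).hexdigest()
        if digest not in store:                  # deduplication
            store[digest] = zlib.compress(page)  # compression
        refs.append(digest)
    return store, refs

def rebuild(store, refs):
    # Reassemble the original buffer from the deduplicated store.
    return b"".join(zlib.decompress(store[d]) for d in refs)
```

A buffer containing eight identical zeroed pages and one distinct page would keep only two compressed pages in the store, while the nine references preserve the original layout.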

The system must have a very sophisticated storage subsystem that pre-fetches code and data that are likely to be needed and brings them into memory so that the massive processing capability is not forced to stall. As organizations deploy multiple tiers of storage, including DRAM, SSDs and rotating media, the storage subsystem has had to become its own very intelligent processing facility. In other words, storage systems have become what amount to storage computers designed to keep the processing computer fed and happy.
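The pre-fetching idea can be sketched as a read-ahead cache: whenever a block is read, the next few blocks are pulled in speculatively so a sequential reader never stalls. This is a minimal sketch under assumed names; `backing` here is a plain mapping standing in for a real device.

```python
class ReadAheadCache:
    """Toy storage tier that prefetches the next blocks on every read."""

    def __init__(self, backing, depth=2):
        self.backing = backing   # block number -> data (stand-in for a device)
        self.depth = depth       # how many blocks to read ahead
        self.cache = {}
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
        else:
            self.misses += 1
            self.cache[block] = self.backing[block]
        # Speculatively pull in the likely-next blocks.
        for ahead in range(block + 1, block + 1 + self.depth):
            if ahead in self.backing and ahead not in self.cache:
                self.cache[ahead] = self.backing[ahead]
        return self.cache[block]
```

For a strictly sequential scan, only the very first read misses; prefetching stays ahead of the reader for every block after that.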

The system is likely to need access to several high-speed networks so that data awaiting processing and already-processed data can be moved to where they are needed next. As with memory and storage, this function has evolved into its own separate computing resource. This "network computer" applies compression and de-duplication and manages multiple network connections without involving the "processing computer" at every step.
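The offload idea can be sketched as a helper that compresses each outgoing payload and spreads the traffic round-robin across several links, all without the main processor touching the data path. This is an illustrative sketch only: `links` is a list of per-connection output queues standing in for open sockets, and the function name is invented.

```python
import zlib
from itertools import cycle

def offload_send(payloads, links):
    """Compress each payload and spread traffic round-robin across links.

    Returns the total number of bytes actually queued "on the wire",
    so the caller can see the savings from compression.
    """
    for link, payload in zip(cycle(links), payloads):
        link.append(zlib.compress(payload))
    return sum(len(frame) for link in links for frame in link)
```

With four highly compressible 1,000-byte payloads and two links, each link carries two frames and the queued byte count is far below the 4,000 raw bytes handed in.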



Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He is responsible for research, publications, and operations. Mr. Kusnetzky has been involved with information technology since the late 1970s and has been responsible for research operations at the 451 Group.
