IBM and the challenge of systems design

Summary: In the past, a system could be designed for a single purpose. Although every system design is a set of compromises, it was possible to optimize a design, at least partially, for a specific workload. Today, system architects are forced to build "do-everything-well" machines. IBM's announcements reflect its long history in system design.

Highly interactive tasks

Highly interactive tasks make different use of processing power than batch-oriented or computation-focused workloads. Here, having a greater number of processors can help a great deal, but as with computationally intensive tasks, the processor is no longer the most important component. Interactive tasks make extensive use of a system's input and output capabilities.

Since interactive tasks do a bit of processing and then send the result either to the end user's device over the network or to the storage system, tasks will often stall, and the system must be very efficient at saving an application's state, flushing the processor's cache and starting work on another task. This bursty processing pattern requires a different approach to programming the "memory computers." Features such as compression and deduplication are deployed largely to reduce the amount of memory required while still offering excellent task-switching performance.
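To make the idea concrete, here is a minimal Python sketch of that suspend-and-compress pattern. The Task class, its methods and the use of zlib are assumptions made purely for illustration; they do not describe IBM's or any vendor's actual memory-management software.

    import zlib
    import pickle

    # Hypothetical sketch: when an interactive task stalls waiting on I/O,
    # its state is serialized and compressed so it occupies less memory
    # while another task runs on the processor.

    class Task:
        def __init__(self, name, state):
            self.name = name
            self.state = state          # live, uncompressed working state
            self.saved = None           # compressed snapshot while stalled

        def suspend(self):
            """Save and compress state while the task waits on I/O."""
            self.saved = zlib.compress(pickle.dumps(self.state))
            self.state = None           # free the uncompressed copy

        def resume(self):
            """Restore state when the awaited I/O completes."""
            self.state = pickle.loads(zlib.decompress(self.saved))
            self.saved = None

    task = Task("render-page", {"session": "abc123", "buffer": "x" * 10_000})
    task.suspend()                      # bursty pattern: park this task...
    task.resume()                       # ...and pick it up again later
    print(task.state["session"])        # -> abc123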

As with computationally intensive tasks, the system must have a very sophisticated storage subsystem that can pre-fetch code and data that are likely to be needed and bring them into memory. In this case, however, it must be able to find and deliver instructions and data to many different applications rather than keeping a single task, or a small number of tasks, fed. Increasingly, this means managing DRAM, SSD and rotating-media tiers of storage.
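The sketch below shows one way such tiering and pre-fetching can be expressed. The three-tier layout and the simple access-count heuristic are assumptions for illustration only, not a description of any particular storage product.

    # Hypothetical three-tier store (DRAM, SSD, rotating disk) with a
    # simple prefetch step that promotes data toward faster tiers.

    class TieredStore:
        def __init__(self):
            self.dram, self.ssd, self.disk = {}, {}, {}
            self.hits = {}                         # crude popularity counter

        def write(self, key, value):
            self.disk[key] = value                 # new data lands on the slowest tier

        def read(self, key):
            self.hits[key] = self.hits.get(key, 0) + 1
            for tier in (self.dram, self.ssd, self.disk):
                if key in tier:
                    return tier[key]
            raise KeyError(key)

        def prefetch(self, likely_keys):
            """Promote blocks an application is expected to need soon."""
            for key in likely_keys:
                if key in self.disk:
                    self.ssd[key] = self.disk.pop(key)
                if key in self.ssd and self.hits.get(key, 0) > 2:
                    self.dram[key] = self.ssd.pop(key)

    store = TieredStore()
    store.write("index-block", b"...")
    store.prefetch(["index-block"])                # moved disk -> SSD ahead of need
    print("index-block" in store.ssd)              # -> True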

High-speed network access is even more important in this case. The system is likely to need access to several high-speed networks so that data awaiting processing, and data already processed, can be moved to wherever it is needed next. As with both memory and storage, this function has evolved into its own separate computing resource. This "network computing" resource applies compression and deduplication and manages multiple network connections without involving the "processing computer" at every step.
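A short Python sketch of what such a "network computing" resource might do on the send path follows. The cache, the message format and the function name prepare_for_wire are invented for this example and are not taken from any real offload engine's API.

    import hashlib
    import zlib

    # Hypothetical send-path offload: deduplicate payloads already shipped
    # and compress new ones, without the main processor touching the data.

    sent_chunks = {}                                  # digest -> already-shipped flag

    def prepare_for_wire(payload: bytes):
        """Return a short reference to a previously sent payload,
        or a compressed copy of a new one."""
        digest = hashlib.sha256(payload).hexdigest()
        if digest in sent_chunks:
            return ("ref", digest)                    # dedup: send a reference only
        sent_chunks[digest] = True
        return ("data", digest, zlib.compress(payload))

    first = prepare_for_wire(b"report body " * 100)
    second = prepare_for_wire(b"report body " * 100)
    print(first[0], second[0])                        # -> data ref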

About

Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. In his spare time, he's also the managing partner of Lux Sonus LLC, an investment firm.
