Computationally intensive tasks
Systems must offer either a single very-high-performance processor or a large number of conventional processors. Monolithic tasks that cannot be subdivided are best served by a single processor with enormous computing capability, while tasks that can be decomposed into parallel subtasks can be spread across a number of lower-cost, lower-performance processors. Although this processing resource sits at the heart of any system design, it may no longer be the most important feature.
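The decomposition idea can be sketched in a few lines. The example below is a hypothetical illustration, not taken from the article: it splits a sum-of-squares computation into independent subtasks handed to a pool of workers. It uses a thread pool for simplicity; CPU-bound Python code would normally use a process pool instead, because of the interpreter's global lock, but the decomposition pattern is identical.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    """One independent subtask: sum of squares over a half-open range."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    """Decompose [0, n) into `workers` chunks and combine the results."""
    step = max(1, n // workers)
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))
```

The point is not this particular arithmetic but the structure: once a task splits into independent ranges, each range can run on a cheaper processor and the results are merged at the end.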
The system must include a very sophisticated caching/memory mechanism so that the processors can execute applications efficiently without being forced to wait for code or data to be delivered from the system's main memory. This is even more important for today's highly pipelined architectures, which include several cores, each capable of running several threads simultaneously. If any one of these threads can't access code or data when needed, today's operating systems will save its state, flush the cache and switch to another task. This churn can drag a multi-gigahertz processor down to the point that it delivers only a quarter or an eighth of its rated processing speed. System designers have been forced to create what amount to "memory computers" to keep the processing computers fed and happy. These "memory computers" might offer compression and deduplication capabilities to make the best use of available memory technology.
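The compression and deduplication mentioned above can be illustrated with a toy page store. This is a hypothetical sketch, not any real memory controller's design: identical pages are detected by hashing and stored once, and each unique page is compressed, in the spirit of features like same-page merging and compressed memory.

```python
import hashlib
import zlib

def dedup_and_compress(pages):
    """Store each unique page exactly once (deduplication), compressed.

    Returns the backing store (digest -> compressed bytes) and a page
    table mapping each logical page index to its content digest.
    """
    store = {}
    page_table = []
    for page in pages:
        digest = hashlib.sha256(page).hexdigest()
        if digest not in store:          # duplicate pages share one copy
            store[digest] = zlib.compress(page)
        page_table.append(digest)
    return store, page_table

def read_page(store, page_table, index):
    """Reconstruct the original bytes of logical page `index`."""
    return zlib.decompress(store[page_table[index]])
```

With three logical pages of which two are identical, the store holds only two compressed entries, yet every logical page reads back its original contents.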
The system must have a very sophisticated storage subsystem that allows code and data likely to be needed to be pre-fetched into memory so that the massive processing capability is not forced to stall. As organizations deploy multiple tiers of storage, including DRAM, SSDs and rotating media, the storage subsystem has had to become its own very intelligent processing facility. In other words, storage systems have become what amount to "storage computers" designed to keep the processing computer fed and happy.
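One simple form of this intelligence is sequential read-ahead. The sketch below is an assumption-laden toy, not a real storage controller: on a cache miss it fetches not just the requested block but the next few blocks as well, so that a sequential reader finds most of its data already in the cache.

```python
from collections import OrderedDict

class PrefetchingCache:
    """Toy read-ahead cache over a list-like backing store.

    On a miss, the requested block plus the next `readahead` blocks
    are pulled in; least-recently-used blocks are evicted first.
    """
    def __init__(self, backing, capacity=64, readahead=4):
        self.backing = backing
        self.capacity = capacity
        self.readahead = readahead
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)    # mark as recently used
            return self.cache[block]
        self.misses += 1
        end = min(block + 1 + self.readahead, len(self.backing))
        for b in range(block, end):          # fetch run of blocks
            self.cache[b] = self.backing[b]
            self.cache.move_to_end(b)
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)
        return self.cache[block]
```

Reading twenty consecutive blocks with a read-ahead of four costs only four trips to the backing store; the other sixteen reads are cache hits, which is exactly the effect that keeps the processor from stalling.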
The system is likely to need access to several high-speed networks so that data awaiting processing, and data already processed, can be moved to where it is needed next. As with both memory and storage, this function has evolved into its own separate computing resource. This "network computer" applies compression and deduplication and manages multiple network connections without involving the "processing computer" at every step.