
Parallel computing takes a step forward

Written by Manek Dubash, Contributor

Is there a shift towards parallelism going on? You could fairly argue that, obviously, with multi-core CPUs and GPUs now standard fare on desktops and in servers, the answer is yes.

But another sign of these changing times is the advent of Google-style multi-node computing and storage on Isilon's latest rev of its OS, OneFS 6.5, which it is targeting at the business analytics market.

The company, which was acquired by EMC last year, has just announced what it calls "scale-out NAS with HDFS [Hadoop Distributed File System], combined with EMC Greenplum HD", aimed at delivering "powerful data analytics on a flexible, highly-scalable and efficient storage platform". It appears to be the first mainstream storage OS to incorporate HDFS, the Hadoop file system designed to handle big, unstructured data. Hadoop works by dividing the data across hundreds or even thousands of commodity servers, each of which does a piece of the analysis and reports back.
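
Hadoop's canonical example of that divide-and-report-back pattern is a word count: each server runs a mapper over the blocks of data it holds locally, and reducers aggregate the partial results. A rough sketch against the standard Hadoop MapReduce Java API (the input and output paths are supplied on the command line and are purely illustrative):

```java
// Minimal sketch of Hadoop's divide-and-report-back model: each node runs
// the Mapper over the HDFS blocks it stores locally, and Reducers aggregate
// the partial results. Uses the standard org.apache.hadoop.mapreduce API.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Runs on each node against the data block stored locally.
  public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      for (String token : value.toString().split("\\s+")) {
        if (!token.isEmpty()) {
          word.set(token);
          context.write(word, ONE);   // emit (word, 1) for later aggregation
        }
      }
    }
  }

  // Gathers the partial counts "reported back" by the mappers.
  public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) {
        sum += v.get();
      }
      context.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenMapper.class);
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. an HDFS directory
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```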

HDFS is loosely based on Google's file system, GFS. Java-based, it is designed to comb through large lumps of data housed on multiple computers, none of which needs to be fully resilient, because each piece of data is stored on multiple nodes. In other words, it's designed to accommodate failures, which means you can use cheaper hardware.
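
Underneath, HDFS chops each file into large blocks and writes replicas of every block to several machines, which is what lets individual nodes fail without losing data. A small sketch using Hadoop's Java FileSystem client asks where a file's blocks live; the cluster URI and file path here are placeholders:

```java
// Lists which datanodes hold each block of an HDFS file, illustrating how
// a single file is spread (and replicated) across many commodity machines.
// The cluster URI and file path below are placeholders for illustration.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockMap {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);

    Path file = new Path("/data/logs/sample.log");
    FileStatus status = fs.getFileStatus(file);

    // One BlockLocation per block; each lists the hosts holding a replica.
    BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
    for (BlockLocation block : blocks) {
      System.out.printf("offset %d, length %d, hosts %s%n",
          block.getOffset(), block.getLength(), String.join(", ", block.getHosts()));
    }
    fs.close();
  }
}
```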

Its Achilles heel is that the name node stores the directory and, if it fails, the whole system is down. Isilon reckons it has fixed this by distributing the directory across all of its nodes, making the system much more resilient. According to senior marketing director Brian Cox: "with our [name node] protection you don't need three copies of all the data so we make it more efficient."
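
Isilon hasn't published the mechanics here, so purely as an illustration of the general idea -- spreading directory metadata across every node instead of pinning it all to one name node -- the sketch below simply hashes each path to an owning node. It is not how OneFS actually does it:

```java
// Purely illustrative: spread directory metadata across nodes by hashing each
// path to an owner, so no single "name node" holds the whole directory.
// This is NOT Isilon's actual mechanism; it only shows the general idea.
import java.util.List;

public class DistributedDirectory {

  private final List<String> nodes;

  public DistributedDirectory(List<String> nodes) {
    this.nodes = nodes;
  }

  // Map a file path to the node responsible for its directory entry.
  public String ownerOf(String path) {
    int bucket = Math.floorMod(path.hashCode(), nodes.size());
    return nodes.get(bucket);
  }

  public static void main(String[] args) {
    DistributedDirectory dir = new DistributedDirectory(
        List.of("node-1", "node-2", "node-3", "node-4"));
    for (String path : List.of("/data/a.csv", "/data/b.csv", "/logs/web.log")) {
      System.out.println(path + " -> " + dir.ownerOf(path));
    }
  }
}
```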

What makes this interesting, apart from making HDFS and general distributed computing more mainstream, is that, according to Herb Sutter, this is where computing is heading in the longer term (if we can ever meaningfully use that phrase in this business). Sutter is a Microsoft developer and author, and has been the lead designer of C++/CLI, C++/CX, C++ AMP, and other technologies.

In a fascinating blog entry, Sutter argues that, as the advances of Moore's Law start to diminish (as we are now seeing), parallelism takes up the slack, enabling scalable cloud-based computing using a combination of general-purpose and specialised cores.
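
To make that concrete: if single-threaded speed-ups are drying up, the same workload only gets faster by being spread across the cores that are there. A trivial, self-contained Java sketch of that shift (the workload itself is made up for illustration):

```java
// Trivial sketch of "parallelism takes up the slack": the same workload run
// sequentially and then spread across all available cores. The workload
// (counting primes in a range) is made up purely for illustration.
import java.util.stream.LongStream;

public class ParallelSlack {

  // Deliberately CPU-bound work per element.
  static boolean isPrime(long n) {
    if (n < 2) return false;
    for (long d = 2; d * d <= n; d++) {
      if (n % d == 0) return false;
    }
    return true;
  }

  public static void main(String[] args) {
    int cores = Runtime.getRuntime().availableProcessors();
    System.out.println("Cores available: " + cores);

    long t0 = System.nanoTime();
    long sequential = LongStream.rangeClosed(2, 2_000_000)
        .filter(ParallelSlack::isPrime).count();
    long t1 = System.nanoTime();
    long parallel = LongStream.rangeClosed(2, 2_000_000)
        .parallel().filter(ParallelSlack::isPrime).count();
    long t2 = System.nanoTime();

    System.out.printf("sequential: %d primes in %d ms%n", sequential, (t1 - t0) / 1_000_000);
    System.out.printf("parallel:   %d primes in %d ms%n", parallel, (t2 - t1) / 1_000_000);
  }
}
```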

For him, this is where software developers need to be concentrating their efforts -- something I remember being called for back in the days of HeliOS, a Unix-like operating system developed by Perihelion Software for parallel machines built around the Inmos Transputer. That was 1986 -- and parallelism was distinctly niche, and very hard to do.

Sutter concludes that: "Mainstream hardware is becoming permanently parallel, heterogeneous, and distributed. These changes are permanent, and so will permanently affect the way we have to write performance-intensive code on mainstream architectures."

Isilon's latest release arguably marks another step along that path. I'd be intrigued to hear if parallel programming is any easier today.
