Parallel computing takes a step forward

Summary: Is there a shift towards parallelism going on? You could fairly argue that, obviously, with multi-core CPUs and GPUs now standard fare on desktops and in servers, the answer is yes.

Is there a shift towards parallelism going on? You could fairly argue that, obviously, with multi-core CPUs and GPUs now standard fare on desktops and in servers, the answer is yes.

But another sign of these changing times is the advent of Google-style multi-node computing and storage on Isilon's latest rev of its OS, OneFS 6.5, which it is targeting at the business analytics market.

The company, which was acquired by EMC last year, has just announced what it calls "scale-out NAS with HDFS [Hadoop Distributed File System], combined with EMC Greenplum HD" aimed at delivering "powerful data analytics on a flexible, highly-scalable and efficient storage platform". It seems to be the first mainstream storage OS to incorporate HDFS, the file system Hadoop uses to handle big, unstructured data. Hadoop works by dividing the data across hundreds or even thousands of commodity servers, each of which does a piece of the analysis and reports back.
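
To make that divide-and-report-back idea concrete, here is a minimal sketch of a Hadoop MapReduce job in Java, along the lines of the canonical word-count example; the class names and the input and output paths are illustrative, not anything specific to Isilon or Greenplum.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Each map task runs against a slice of the input held by a node
    // and emits (word, 1) pairs for its piece of the data.
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // The reduce tasks gather the partial results and "report back"
    // a single total per word.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // e.g. an input directory in HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // e.g. an output directory in HDFS
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

In practice this would be packaged into a jar and launched with the hadoop jar command, with the input and output directories passed as the two arguments.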

HDFS is loosely based on Google's GFS file system. Java-based, it is designed to comb through large lumps of data housed on multiple computers, none of which needs to be fully resilient because each piece of data is stored on multiple nodes. In other words, it's designed to accommodate failures, which means you can use cheaper hardware.
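
As a rough illustration of how that replication is exposed to applications, the Java sketch below writes a file through the standard HDFS client API and asks for three copies of each block; the name-node address and file path are placeholders rather than anything specific to Isilon's implementation.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicatedWrite {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder name-node address; on a real cluster this comes
        // from the cluster's core-site.xml.
        conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");
        // Ask HDFS to keep three copies of each block on different nodes,
        // so any single cheap server can fail without losing data.
        conf.set("dfs.replication", "3");

        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/data/example.txt");   // illustrative path
        FSDataOutputStream out = fs.create(file);
        out.writeUTF("each block of this file is stored on three nodes");
        out.close();
        fs.close();
    }
}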

Its Achilles heel is that the name node stores the directory and, if it fails, the whole system goes down. Isilon reckons it has addressed this by distributing the directory across all nodes, making the system much more resilient. According to senior marketing director Brian Cox: "with our [name node] protection you don't need three copies of all the data so we make it more efficient."

What makes this interesting, apart from making HDFS and general distributed computing more mainstream, is that, according to Herb Sutter, this is where computing is heading in the longer term (if we can ever meaningfully use that phrase in this business). Sutter is a Microsoft developer and author, and has been the lead designer of C++/CLI, C++/CX, C++ AMP, and other technologies.

In a fascinating blog entry, Sutter argues that, as the advances of Moore's Law start to diminish (as we are now seeing), parallelism takes up the slack, enabling scalable cloud-based computing using a combination of general-purpose and specialised cores.
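
As a simple, deliberately trivial illustration of what taking up that slack looks like in code, the Java sketch below splits a computation across however many cores the machine reports and then combines the partial results; the workload itself is just a stand-in.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum {
    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        long n = 1_000_000_000L;          // stand-in workload: sum the numbers 1..n
        long chunk = n / cores;
        List<Future<Long>> parts = new ArrayList<>();

        // Divide the range into one chunk per core...
        for (int i = 0; i < cores; i++) {
            final long start = i * chunk + 1;
            final long end = (i == cores - 1) ? n : (i + 1) * chunk;
            parts.add(pool.submit(new Callable<Long>() {
                public Long call() {
                    long sum = 0;
                    for (long x = start; x <= end; x++) sum += x;
                    return sum;
                }
            }));
        }

        // ...then gather the partial results, much as a reduce step does.
        long total = 0;
        for (Future<Long> part : parts) total += part.get();
        pool.shutdown();

        System.out.println("cores=" + cores + " total=" + total);
    }
}

The pattern, divide the work, compute the pieces independently, then combine the results, is the same one Hadoop applies across machines rather than cores.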

For him, this is where software developers need to be concentrating their efforts -- something I remember being called for back in the days of HeliOS, a Unix-like operating system for parallel computers, mainly the Inmos Transputer, developed by Perihelion Software. That was 1986 -- and parallelism was distinctly niche, and very hard to do.

Sutter concludes that: "Mainstream hardware is becoming permanently parallel, heterogeneous, and distributed. These changes are permanent, and so will permanently affect the way we have to write performance-intensive code on mainstream architectures."

Isilon's latest release arguably marks another step along that path. I'd be intrigued to hear if parallel programming is any easier today.


Manek Dubash

Talkback

2 comments
  • Well, this is why I'm both fascinated and slightly worried; parallel computing and concurrency and complex architectures don't seem to be something that's a natural fit for the mindset of developers. I like to tell a story about thousands of developers at a conference walking between two conference halls and all going into the one door out of six that was already open rather than splitting into streams to go through all doors; if developers can't parallelise themselves, they won't naturally parallelise their code. The thing that might be missing from Sutter's excellent survey is the tools that get us from the TBB level we have today to the complex heterogeneous multicore multispeed multisystem async world we have to have to keep the horsepower curve going in the right direction. The tools will have to take care of a lot of this. Frameworks ahoy?
    Simon Bisson and Mary Branscombe
  • Yes, frameworks and smarter compilers - but I suspect a lot of the code will have to be written with parallel processing as one of its fundamental tenets in order to make best use of those tools. But, as you rightly say, developers hate it - because it is very hard to do and the assumption that increasing hardware horsepower will take care of inefficiencies is, I suspect, deeply ingrained.
    Manek Dubash