
Seven post holes and one thought

The real breakthrough we need in scientific computing is a general way to build machines capable of doing one sequential job many orders of magnitude faster than we can now. Holographic computing is one possibility and some people think quantum computing is another, but personally I think analog computing has the greatest potential.
Written by Paul Murphy, Contributor

As frequent readers know, we got a malamute puppy earlier this year - and, at the time, it was about minus 30 and snowing, so the fence I put up for his run was pretty basic. This week that caught up with me: specifically, we got a bargain price on a couple of wrought iron gates, and the plan to improve that fence went from the distant future of the put-off-able to right now.

Doing it involves placing seven 2.5" square steel posts, each of which needs about a 2' post hole in ground packed down by the most recent set of glaciers. But since one of our neighbors owns one of those old manual post hole diggers, I thought "hey, how hard can this be?"

Very. So there I was, sweating away, making about $0.50 per hour relative to renting a power post hole digger, and thinking about the old dictum that sixty men cannot dig a single hole faster than one man.

Of course I had seven to do, and parallelism would have paid off big time - but even if I had any friends, they'd have been smart enough not to volunteer - so instead I took frequent breaks and used some of that time to read a bunch of recent computer science stuff, looking specifically for new thinking on the problem of getting seven processors to dig one hole in significantly less time than it would take each of them to dig their own.

Mathematicians have worried about this kind of thing for hundreds of years in contexts like merging topologies; von Neumann and Morgenstern briefly considered some related issues in their 1940s work on game theory; and computer scientists of one form or another have nibbled at it since at least the 1950s - all with essentially no positive results.

We're really good at executing in parallel and reasonably good at breaking jobs featuring inherent parallelism into bits, getting them done, and then combining the results, but we have made (at least as far as I can see) absolutely no progress applying multiple processors to unitary tasks - like digging a single post hole using one tool.
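
To make that distinction concrete, here is a minimal sketch using Python's standard multiprocessing module - the workload and worker count are made up purely for illustration. A job with inherent parallelism splits, runs on several workers, and recombines; a strictly sequential chain of operations gains nothing from extra processes.

    from multiprocessing import Pool

    def partial_sum(chunk):
        """Each worker digs its own hole: it sums its own slice."""
        return sum(chunk)

    def parallel_sum(data, workers=7):
        """Split the job, run the pieces in parallel, combine the results."""
        size = (len(data) + workers - 1) // workers
        chunks = [data[i:i + size] for i in range(0, len(data), size)]
        with Pool(workers) as pool:
            return sum(pool.map(partial_sum, chunks))

    def sequential_chain(x, steps=1_000_000):
        """One hole, one tool: every step depends on the previous one,
        so no number of extra processes shortens the chain."""
        for _ in range(steps):
            x = (x * 3 + 1) % 1_000_003
        return x

    if __name__ == "__main__":
        print(parallel_sum(list(range(1_000_000))))  # parallelism helps here
        print(sequential_chain(1))                   # and not at all here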

One thing I did to improve my rate of progress on digging those holes was to fill them with water whenever I took a break - and, assuming you stand back from the details far enough to see water as water instead of as some fixed quantity of H2O molecules, you can see just how analog computing would implement a solution to the problem.

Assuming the water moves through the hose at a fixed speed and you can only drop one hose at a time into each hole, the delivery rate scales with the hose's cross-sectional area: to fill one hole in one seventh the time you need seven times the flow, which means increasing the hose diameter by a factor of the square root of seven - a bit more than two and a half times.

Flow, in other words, is analogous to diameter - but controlling diameter is something we can simulate but not actually do in digital computing.
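
A quick back-of-the-envelope check of that scaling, assuming flow is proportional to cross-sectional area (the function name and numbers here are illustrative only):

    import math

    def diameter_factor(speedup):
        """Factor by which the hose diameter must grow to multiply the
        flow rate by `speedup`, assuming flow scales with cross-sectional
        area - that is, with the square of the diameter."""
        return math.sqrt(speedup)

    # Fill one post hole in one seventh of the time: you need 7x the flow.
    print(f"diameter factor: {diameter_factor(7):.2f}")  # about 2.65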

IBM's Cell is a Linux grid supercomputer put on a single chip, Sun's CMT is an SMP design for one chip, and both can, in terms of the water hose analogy, be seen as seven separate pipes delivering water between a shared intake manifold at one end and a shared delivery manifold at the other. The hardware and software making up the manifolds for both are getting better: thus both Solaris for CMT and IBM's latest compilers for Cell offer significant improvements over their predecessors.

Both, however, concentrate on making maximal use of inherent parallelism - which for IBM means vector processing and for Sun means running hundreds of apparently concurrent jobs more or less independently. Both offer order-of-magnitude throughput improvements over x86 (which you can think of as now offering multiple pipes, but no manifolds), but neither really gets one hole dug with one tool much quicker.
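
One way to put a number on that limit is Amdahl's law. The sketch below uses made-up parallel fractions - not measurements of Cell or CMT - to show that the speedup from seven processors is capped by the sequential portion of the job, and is exactly one for a fully sequential job:

    def amdahl_speedup(parallel_fraction, processors):
        """Overall speedup when `parallel_fraction` of the job spreads
        across `processors` and the rest runs on a single one."""
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / processors)

    # Illustrative fractions only - not measurements of Cell or CMT.
    for frac in (0.95, 0.50, 0.0):
        print(frac, round(amdahl_speedup(frac, 7), 2))
    # 0.95 -> 5.38x, 0.5 -> 1.75x, 0.0 (one hole, one tool) -> 1.0x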

Both offer tremendous scope for future development of the more, smaller, faster variety - for example, more memory, shorter connectors, and faster co-processors are all in the immediate future for both lines.

People are working on digital implementations of the simple analog approach - mimicking the effect of increasing the hose radius in software - with some variations showing particular long-term promise. My bet, however, is that it won't be enough: that only revisiting the 1930s victory of digital computing over analog will ever allow us to build machines that we can grow to dig one hole, using one tool, arbitrarily fast.
