Yet another PCIe-connected flash vendor has joined what will, I suspect, become a fairly crowded market over the next year or so. And with this straw in the wind, it may be worth having a quick look at what this means for the future of applications and computer architectures.
Connecting flash storage directly to the processor over the machine's internal bus improves performance by eliminating the latency added by protocol conversions and network hops. The most visible vendor of this type of storage is Fusion-io which, with Apple co-founder Steve Wozniak as its chief scientist, has become a bit of a poster child for the concept.
A recent joiner to this market is Mushkin - what do you mean you've never heard of them? The company is primarily focused on providing memory products for PC enthusiasts and overclockers, and recently showed an SSD that plugs into the PCIe bus. It will, according to one report, allow users to plug SSD modules into the board rather than supplying them ready-soldered, presumably to allow buyers to add as much or as little memory as they are prepared to pay for.
In other words, this type of technology is about to reach consumers, not just large enterprises.
Meanwhile, Fusion-io has moved the game on by demonstrating "a billion input/output operations a second via eight HP servers, 64 ioDrive2 Duo cards and a new piece of software that dramatically boosts the performance of its non-volatile storage technology", as colleague Jack Clark reported recently. What this does is remove yet another layer of complexity between storage and processor, by bypassing the OS and allowing the CPU to address storage (if that is the right term in this context) directly. Expect others to follow.
What this means is that, instead of seeing storage as somewhere to fetch programs and data from, or somewhere to park items temporarily, flash storage becomes just another tier of memory, much like a swap file or partition.
Some handheld devices used to provide execute-in-place (aka XIP) functionality, which allowed programs to run directly from flash rather than first being copied from storage into main memory. It made sense when pretty much all the memory was flash (and of course there wasn't much of it in a 1990s handheld) but, as flash technology prices fall, it will become increasingly common for much larger, mainstream applications to make use of this concept -- providing of course that commercial barriers aren't inserted by interested parties. As if.
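The same idea survives in mainstream operating systems as memory-mapped files: a file on storage is mapped into a process's address space and its bytes are addressed like ordinary memory, with no explicit read into a separate buffer. A minimal sketch in Python (the file name and contents here are purely illustrative):

```python
import mmap
import os
import tempfile

# A throwaway temporary file stands in for data living on flash storage.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"payload resident on storage")

with open(path, "rb") as f:
    # Map the file into the process's address space...
    mapped = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # ...and address its bytes like ordinary memory -- no explicit
    # read() call copies the data into a separate buffer first,
    # which is the rough analogue of executing in place.
    first_word = bytes(mapped[0:7])
    mapped.close()

os.remove(path)
print(first_word)  # b'payload'
```

This is only an analogy, of course: with conventional disks the OS still pages the mapped data through RAM, whereas XIP on flash dispensed with the copy altogether.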
And it's yet another example of how computing architectures are converging: witness how Windows 8 will present a touch-screen interface that looks more like a smartphone's than the current WIMP paradigm. Such convergence can only be a good thing, in my view.