
To flash or not to flash in the datacentre

Even as solid-state memory overtakes the consumer sector, magnetic disk storage remains the standard in the enterprise. How long before flash takes over there, too?
Written by Drew Turney, Contributor

It's not news that the demands on datacentres are increasing exponentially. But while there have been advances in everything from cooling to networking speeds, there's one bottleneck we can't quite get around: hard drive technology.

A read/write head has to physically move from one point above a spinning disk platter to another in order to reconstruct a file from a series of magnetic pulses and then move it into working memory for action.

The spinning produces no small amount of heat, contact between the head and the spinning disk can result in catastrophic failure and data loss, and no matter how small we make them or how fast we spin them, the mechanics of the moving head are constrained by the laws of physics.
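
To put rough numbers on that constraint, here's a back-of-the-envelope sketch in Python. The figures -- a 7,200 RPM drive with an assumed 9 ms average seek, and an assumed 0.1 ms flash read -- are illustrative, not measurements of any particular product:

```python
# Rough latency arithmetic for a single random read on a spinning disk.
# All figures are illustrative assumptions, not measured values.

RPM = 7200                    # assumed drive rotation speed
AVG_SEEK_MS = 9.0             # assumed average head seek time

# On average, the head waits half a rotation for the right sector.
rotation_ms = 60_000 / RPM                     # one full rotation: ~8.33 ms
avg_rotational_latency_ms = rotation_ms / 2    # ~4.17 ms

hdd_random_read_ms = AVG_SEEK_MS + avg_rotational_latency_ms
ssd_random_read_ms = 0.1                       # assumed ~100 microseconds for flash

print(f"HDD random read: ~{hdd_random_read_ms:.1f} ms")
print(f"SSD random read: ~{ssd_random_read_ms:.1f} ms")
print(f"Random IOPS ceiling for one HDD: ~{1000 / hdd_random_read_ms:.0f}")
```

At roughly 13 ms per random read, a single spindle tops out below a hundred random IOPS, which is why datacentres gang many drives together to reach the I/O rates their workloads demand.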

In a stand-alone PC, that's a minor inconvenience. Extrapolate it up to the scale of a major datacentre, though, and the costs mount, and processes that need to perform better will eventually hit a brick wall.

As Robert Crooke, Intel's senior vice president and general manager of the Non-Volatile Memory (NVM) Solutions Group, said in an online video: "The truth is, hard disks cannot keep up with the processing power of the datacentre."

So why -- as those of us who know anything about storage technology ask -- can't we use solid-state drive (SSD) storage? It's tough, doesn't heat up, packs higher capacity into the same volume, and has no moving parts to get damaged. The physical, power, and carbon footprints of SSD-based datacentres would be far smaller than those based on HDD technology, so much so that they might cancel out SSD's higher cost per gigabyte compared with the older technology.

SSD has exploded in the consumer sector just as cloud computing has in the enterprise. So why hasn't the profile of smaller, denser, higher-performance SSDs gone through the roof along with the rise of cloud computing at the big end of town?

Firstly, your goal shouldn't simply be to replace your HDD datacentre or infrastructure with SSD, any more than you should adopt cloud computing just because everyone else is. You're after the best performance, not the hottest buzzwords.

As Jeramiah Dooley, cloud architect at next-generation flash array provider SolidFire, told ZDNet, not all of the information in a datacentre needs the high performance of SSDs.

"There are only two kinds of storage: Production and archive," he said. "For most customers, production will be an all-flash block storage platform, and archive will be an object storage that will most likely be provided as a service from an external provider."

Instead of a wholesale swap, a deployment of both storage technologies might represent the sweet spot.

"A proper combination of SSDs as data cache and traditional HDDs as storage volumes is more suitable for datacentres and businesses without breaking the bank," said Michael Wang, global product manager of network-attached storage provider Synology.

Quiet growth

It might surprise you to learn how much SSD technology already exists in datacentres. ZDNet approached two of the major providers, Amazon and Facebook, and both declined to comment on how much of their capacity is one kind or the other.

But Steve Eschweiler, director of operations at hosting provider Hivelocity Hosting, suspects that they aren't investing as much in SSD as other providers are, because of the common use cases of their customers.

"For datacentres like Google, Amazon, Facebook, and Microsoft, which have a large focus on cloud storage where top speed and performance isn't critical, it still makes sense to use spinning disk where the cost per GB is often $0.03 and less," he said.

Dooley guesses that around 30 percent of data in enterprise datacentres is stored on SSD media, and said it's been that way for two or three years, even if it's only deployed as a caching medium to get the data from traditional disk platters to a more responsive, "ready for action" state.

Unlike laptops and tablets, there's a lot more to think about when you get to the scale of a datacentre, and just going for "fast" might trip you up, according to Dooley.

"The goal is to provide better performance, reliability, and predictability to the workloads that their customers rely on," he said. "While flash can help, there are many vendors who implement it in a way that it doesn't do anything more than go faster, which misses out on much of the potential value."

Just one of those values is what Dooley calls a more predictable performance envelope, and he thinks "predictable" is a better term than "fast" when you're making performance promises to customers or users.

"Spinning disk has a very wide range of performance, depending on where the block is on the physical platter, how busy the disk is, how much cache is available," he said. "SSD-based storage has a very tight, very predictable response time or latency no matter the available capacity. This predictability is a huge benefit when setting SLAs and expectations with application users."

Aside from speed, SSDs are also more power efficient, and with no moving parts, they don't throw off the waves of heat we're all familiar with after years of owning desk-bound PCs.

Then there's the idea that the software can be as powerful as the hardware.

"Three to five years ago, when SSDs were amazingly expensive, the startup community turned to compression, deduplication, and thin provisioning technologies to mask some of that cost to customers," Dooley said. "What started as an economic play turned into one of the key differentiators. On top of that first generation of data services, we added quality of service and the ability to provision capacity separately from performance."

SolidFire said that replacing racks and racks of gear with a half cabinet of its SSD equipment allowed one customer to recoup the purchase price in environmental savings alone within three years.

So with the decreasing cost of SSD storage per gigabyte, Dooley believes customers can buy twice the capacity every six to 12 months for the same price. Add that to the power of software data services, and Dooley said the best-case scenario he's seen was a cost reduction of over 50 percent in the first three years.

From the ground up?

So what would a datacentre composed entirely of SSD technology look like? The classic server farm can be the size of a sports field: high demand for input/output operations per second (IOPS) means connecting many hard drives together, which takes up a lot of space, which in turn means a lot of space for cooling, maintenance, and security. The same capacity in SSD is far denser, which means far less space is needed.
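
As a worked example of that density argument, consider the drive counts needed to hit an arbitrary target of one million random IOPS, using assumed per-device figures (around 75 random IOPS for a single spindle, 50,000 for a datacentre-class SSD):

```python
import math

# Hypothetical sizing exercise: devices needed for 1 million random IOPS.
target_iops = 1_000_000
hdd_iops = 75        # assumed: one 7,200 RPM spindle
ssd_iops = 50_000    # assumed: one datacentre-class SSD

print(f"HDDs needed: {math.ceil(target_iops / hdd_iops):,}")   # 13,334
print(f"SSDs needed: {math.ceil(target_iops / ssd_iops):,}")   # 20
```

Three orders of magnitude fewer devices means fewer racks to power, cool, cable, and secure.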

"Faster self-healing leads to less cost of outages and less time spent in maintenance," Michael Wang from Synology said. "More granular provisioning of capacity and storage leads to less over-provisioning and better efficiency. Better time between failures and failure handling leads to many enterprises forgoing the cost of quick response service."

According to SolidFire, SSD manages seven times as many IOPS, runs on around a fifth of the power, gets more IOPS per unit of power, and ultimately costs around 22 percent as much as HDD.
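
Taken at face value, those ratios compound. A quick check of the arithmetic, using SolidFire's figures as given:

```python
# SolidFire's claimed ratios, SSD relative to HDD, as quoted above.
iops_ratio = 7.0      # ~7x the IOPS
power_ratio = 0.2     # ~1/5 the power
cost_ratio = 0.22     # ~22 percent of the cost

print(f"IOPS per watt: ~{iops_ratio / power_ratio:.0f}x better on SSD")    # ~35x
print(f"IOPS per dollar: ~{iops_ratio / cost_ratio:.0f}x better on SSD")   # ~32x
```

By those numbers, flash delivers roughly 35 times the IOPS per watt and 30-odd times the IOPS per dollar.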

Eschweiler from Hivelocity said the cost of SSD storage has come down drastically over the last two years. He said the price of the Intel product that Hivelocity offers customers has dropped from $1.30 per GB to $0.63 per GB, resulting in his company "aggressively" steering customers towards SSD.

"The performance improvements our customers realise with SSD over spinning disk are nothing short of remarkable," he said. "With no moving parts, an SSD is both more reliable and more power efficient than SATA or SAS drives, which in turn reduces our OPEX. For all of these reasons, we have seen an increase in SSD usage of roughly five times year over year."

But as mentioned before, SSD doesn't suit everything. If you're building a datacentre from scratch and use SSD rather than HDD, you'll save on environmental controls and floor space, but you'll still have to recoup the few hundred million dollars it costs to build. If your customers don't have IOPS-hungry processes to push it to capacity, you might be throwing money away. As we've learned, SSD is still more expensive than HDD per gigabyte.

"SSD is suitable for hot data processing like the fast-paced e-commerce sector," said Wang. "It's also beneficial for speeding up processing of virtualisation, software-defined storage applications, ultra-high resolution video processing, and 3D computer-aided design [CAD] content creation."

The other side of the coin is that for services like backup, archiving, and recovery, where handling and security are more important than lightning-fast delivery or processing, anything faster than today's industry-standard HDD might be a waste of money.

There's also the matter of most of our data sitting on legacy systems that are very HDD-heavy, and moving it to an all-SSD environment is no easy task in terms of time, risk, and manpower. Dooley said the cost of migration is high, citing inconsistent SSD controllers as one reason.

"Not all SSDs are equally capable products, and have read and write request response times which vary dramatically," said James Myers, Intel director, SSD solutions. "That's due in part to the SSD controller and how those SSDs manage the internal memory to store and retrieve the data while still preserving the data integrity."

"SSDs are 1,000 times faster than hard disks at the physics level. Many rely on controllers that are inconsistent and let the work pile up. It undercuts the value of using them," Crooke said.

"Datacentre applications require this consistent and predictable performance, as the software generally needs to operate at maximum rates associated with the slowest command response times from the hardware," Myers said. He echoed Dooley's belief that the benefit SSD brings isn't necessarily being fast, but rather being "consistent and predictable".

Will SSD ever take over in the enterprise and datacentre to the extent it has in consumer computing? Wang from Synology was hesitant about this.

"I don't expect to see this happening in the short term," he said. "Perhaps in four to five years, we'll see SSDs dramatically come down in price with increased capacity."

But Jonathan Weech, SSD product manager of memory manufacturer Crucial Memory, has nevertheless seen it changing.

"As with many other markets and applications, cloud datacentres and big data services are increasingly transitioning to and relying on SSDs to meet their needs," he said. "When the nature and scale of cloud services and applications are taken into account, the measurable advantages are truly remarkable."

In the end, there might be a parallel with the way datacentres themselves are used. You can find any number of naysayers who say public cloud services aren't secure and private cloud services aren't affordable, but as the movement has matured, the smartest organisations are adopting the best of both worlds.

We might find the same in SSD infrastructure -- such as an SSD cache environment and an HDD storage volume working side by side. As with every other technology you invest in, the best approach is to do your homework and let your data needs tell you what's required; you'll likely end up with a blend of what does the job best.
