I'm hearing from company after company, each believing that it alone has the answer to storage performance problems. Usually, these companies select a specific market niche and creatively apply solid state storage devices (SSDs) in their solutions. They have examined typical workloads and have tried to apply storage acceleration techniques somewhere in the path between the server and the storage.
Let's look at where they apply this type of technology:
- Repurpose internal memory — usually this approach is based upon allocating portions of the system's memory and using that memory as if it were a disk drive. A similar approach uses an intelligent caching algorithm to speed data access to other storage devices without trying to make the cache appear to be yet another disk. Since in-system memory is typically faster than external storage devices, dramatic performance increases are possible. The trade-off, of course, is that this type of storage is more costly than traditional rotating media, so, unless the value of the processing is very high, this approach can make the cost of ownership and operation of the solution too high. Furthermore, this approach can become quite complex and costly if many systems are used to deliver a multi-tier, distributed workload.
- Add-in board — sometimes, solid state storage is delivered in the form of a single-card computer that fits in a host system's internal bus. As with the first approach, this one can become quite complex and costly if many systems are used to deliver a multi-tier, distributed workload. The board may contain very high performance, but expensive, DRAM or lower performance, but less costly, SSDs.
- Storage appliance — this approach is usually delivered as an appliance that is connected to the systems. The solid state storage, caching or storage software, and processing are used to improve overall performance in flight. The benefit of this approach is that it can be shared among many systems and improve overall performance of a complete workload. While it is less complex than managing many separate caches, it may not produce the extreme performance improvements that other approaches might offer. As with the second approach, the appliance may contain very high performance, but expensive, DRAM or lower performance, but less costly, SSDs.
- Storage server caching — some suppliers have developed storage servers having extremely large internal caches and intelligent operating software that can improve overall performance of all workloads using the storage the server is managing. This approach can be very simple to operate, because the storage supplier has tuned everything for best overall performance in a general purpose environment. Other suppliers offer similar caching, but provide many tools to adjust how the cache is used. As with the second and third approaches, the storage server cache may contain very high performance, but expensive, DRAM or lower performance, but less costly, SSDs.
- Solid state disks — some suppliers have developed storage modules designed to replace traditional rotating media. These devices may be used anywhere storage is attached to the workload today. They can either be shared resources or applied to a single server. These disks may be based upon DRAM or SSDs.
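The first approach above — using system memory to absorb reads that would otherwise hit slower storage — can be sketched with a few lines of Python. This is a minimal illustration, not any vendor's implementation: the "disk read" is a hypothetical stand-in that simulates rotating-media latency with a sleep, and the cache is the standard library's LRU decorator.

```python
import time
from functools import lru_cache

def read_from_disk(block_id: int) -> bytes:
    """Hypothetical stand-in for a slow read from external storage."""
    time.sleep(0.01)  # simulate rotating-media latency
    return b"data-%d" % block_id

@lru_cache(maxsize=1024)
def cached_read(block_id: int) -> bytes:
    """Serve repeated reads from system memory instead of 'disk'."""
    return read_from_disk(block_id)

# The first access pays the storage penalty; repeats come from memory.
start = time.perf_counter()
cached_read(7)
cold = time.perf_counter() - start

start = time.perf_counter()
cached_read(7)
warm = time.perf_counter() - start

print(warm < cold)  # the memory-backed repeat is faster
```

The same trade-off described above applies here: the cache helps only as long as the working set fits in the memory you can afford to dedicate to it.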
What's been interesting to watch is that storage companies have been using one or more of the above approaches to target different problems. Suppliers have focused on in-memory databases, accelerating traditional database processing, big data storage and retrieval, extreme transaction processing, and more recently, virtual desktops.
The technology is similar, but the marketing messages are designed to distinguish one competitor from another.
What's the real issue?
If we took the time to closely examine workloads, whether single-system or distributed, we'd learn that the following resources (in order) often fail to meet the requirements of the workload and thus become bottlenecks:
- Workload architecture
- The network
If the workload is distributed, it is likely that the network is the real bottleneck. If the workload executes on a single system, the bottleneck is likely to be the storage or processing power of the underlying system. If the bottleneck is storage and the system has sufficient memory capacity, using in-memory caching or storage is likely to be of help.
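A crude but instructive way to act on this advice is simply to time each phase of a workload and see which one dominates before buying any acceleration. The sketch below uses hypothetical phases (sleeps standing in for network round-trips and disk reads) purely to show the shape of the measurement:

```python
import time

def timed(label, fn):
    """Run one phase of a workload and record its wall-clock cost."""
    start = time.perf_counter()
    fn()
    return label, time.perf_counter() - start

# Hypothetical phases standing in for network, storage, and compute work.
phases = [
    timed("network", lambda: time.sleep(0.03)),   # simulated round-trips
    timed("storage", lambda: time.sleep(0.01)),   # simulated disk reads
    timed("compute", lambda: sum(range(100_000))),
]

bottleneck = max(phases, key=lambda p: p[1])[0]
print(bottleneck)  # the phase consuming the most time is the first fix target
```

If "network" comes out on top, as in this synthetic example, solid state storage won't help much — which is exactly the point the suppliers' pitches tend to gloss over.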
Storage suppliers often ignore the fact that their customers may not know where the bottlenecks are in their applications, and so those customers reach for storage acceleration as a quick and easy patch that will improve performance for a time.
So, the next time you read about some supplier offering the perfect solution to every performance problem, and that solution is based upon solid state storage, it would be good to remember that what they're offering is a stop-gap for what is likely to be a very complex problem. Is that good enough? It certainly can be. Why? Because applications today change rapidly, and finding the perfect solution may be beyond the time and expertise available.