After speaking with the folks at Fusion-IO back in June during their tour to announce a partnership with HP, I've been thinking about application performance issues and the various ways to address them.
Typically, application performance issues are found in this order:
1. Network throughput and latency — the biggest impact on network-based applications is typically the network itself. Faster network links, de-duplication, compression, and clever caching are often deployed to address this issue.
2. Data throughput and latency — the second biggest impact on applications is typically accessing and updating data. Faster storage devices (higher rotational speed, greater areal density), solid-state storage devices, and clever caching are often deployed to address this issue. This, by the way, is where Fusion-IO's technology comes in.
3. Memory — the performance and size of system memory come next in the list of bottlenecks. This issue has typically been addressed by deploying larger memory configurations, faster memory channels, and faster memory components. Other approaches have relied on intelligent distributed memory caches (so-called memory virtualization).
4. More and faster processors — faster processors can help monolithic applications perform better. Multiple cores or processors can dramatically improve the performance of applications designed to make good use of parallel processing capabilities.
5. Application architecture — thanks to the golden rules of IT, organizations seldom go to the trouble of re-architecting problematic applications, and yet dramatic performance improvements can be had when poorly architected applications are rebuilt using newer, better tools or a better basic application architecture.
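Several of the approaches above lean on "clever caching" — serving repeat requests from fast memory instead of a slower data path. As a minimal illustration (the function name, key, and counter are hypothetical, not anything from Fusion-IO's product), here is a Python sketch of a memory cache absorbing repeated reads:

```python
from functools import lru_cache

# Counter to show how often the slow path actually runs (illustrative only).
CALLS = {"count": 0}

@lru_cache(maxsize=256)
def read_record(key):
    """Stand-in for an expensive fetch from disk or across the network."""
    CALLS["count"] += 1
    return f"record-{key}"

# The first access pays the full cost; the repeats are served from the cache.
for _ in range(3):
    read_record("customer-42")

print(CALLS["count"])  # the slow path ran only once
```

The same idea, scaled up and moved closer to the hardware, is what storage-side and distributed memory caches do: the cheaper the repeat access, the less the underlying bottleneck matters.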
Snapshot Analysis

As more and more applications are deployed in multi-tier, multi-system configurations, some or all of the above approaches might be required to produce the highest possible performance. Unfortunately, organizations often focus on replacing systems with newer, faster models even though the real issue may be memory, data, or network delays. I suspect this approach is taken because it is easier, and because a system replacement often means that older, slower network, data, and memory components are replaced with newer, faster ones at the same time.
Fusion-IO is offering an interesting combination of some of these approaches. The company starts with a PCI-based flash memory board (a combination of approaches 2 and 3 above) and adds sophisticated caching technology (approach 3) that helps applications find and use needed data. HP and IBM are using Fusion-IO's technology to offer performance improvements to their customers.
Is the approach being offered by Fusion-IO and its partners the best for you? It all depends upon where the real bottleneck is in your applications.