I was speaking with Certeon about recently released performance figures for their network/application acceleration virtual appliance, which can be hosted on either VMware's ESX Server or Microsoft's Hyper-V (for more information, follow this link). The performance figures were impressive, but then again, having been part of Digital Equipment Corporation's Networks and Communications group, I've long known that tools that optimize the use of network links can help organizations achieve amazing performance increases for network-oriented workloads, or amazing network-related cost reductions. The conversation then wandered off topic and settled on optimizing complex workloads.
Many workloads are now structured as multiple services hosted on different physical servers in an organization's datacenters, rather than as monolithic blocks of software that run on a single machine. Developing these workloads requires quite a number of separate skills. Optimizing them can be nearly impossible for organizations whose staff lacks the right skill set or experience. So, finding "drop in" tools that make things work better is usually welcome.
Performance optimization is accomplished through a number of small steps, each contributing to the overall performance of a workload. Developers and operations staff often make the mistake of overlooking small improvements in their search for a single silver bullet.
When a network-oriented workload is examined for bottlenecks and performance problems, significant improvements can quite often be made by simply optimizing network traffic. Products such as Certeon's virtual appliance offer such things as caching network messages, removing duplicate information so the link carries only the data that is actually needed, and managing network connections so they can be reused whenever possible. Since inserting this type of network optimization requires little or no change to the workload's software or administration, the return on the investment can come very quickly.
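To make the deduplication idea concrete, here is a minimal sketch of payload deduplication, one of the WAN-optimization techniques described above. Both ends keep a cache keyed by a content hash; after the first transfer of a payload, only the short hash crosses the link. All class and variable names here are illustrative, not Certeon's actual implementation.

```python
import hashlib

class DedupSender:
    """Replaces repeated payloads with a short content-hash reference."""
    def __init__(self):
        self.seen = set()  # hashes the receiver is known to have cached

    def encode(self, payload: bytes):
        digest = hashlib.sha256(payload).hexdigest()
        if digest in self.seen:
            return ("ref", digest)           # duplicate: send only the hash
        self.seen.add(digest)
        return ("data", digest, payload)     # first time: send full payload

class DedupReceiver:
    """Rebuilds payloads from either full data or a cached reference."""
    def __init__(self):
        self.cache = {}  # hash -> payload

    def decode(self, message):
        if message[0] == "data":
            _, digest, payload = message
            self.cache[digest] = payload
            return payload
        return self.cache[message[1]]        # reconstruct from local cache

sender, receiver = DedupSender(), DedupReceiver()
first = sender.encode(b"report.pdf contents")
second = sender.encode(b"report.pdf contents")  # same data sent again
assert receiver.decode(first) == b"report.pdf contents"
assert second[0] == "ref"   # only a 64-character hash crossed the "link"
assert receiver.decode(second) == b"report.pdf contents"
```

The same cache-and-reference pattern underlies commercial WAN optimizers, which apply it transparently at the packet or stream level so applications need no changes.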
The next area in which improvements can be found is storage: optimizing its use can offer impressive performance improvements and overall cost reduction. Following storage, the next area is data management, where tuning the use of database engines can produce improvements as well. Optimizing system or virtual system memory is the next area to examine. Only after all of these other areas have been closely examined should the next step be to consider acquiring a faster system.
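The data-management step above often comes down to very small changes with outsized effects. As a hedged illustration (the table and column names are made up for the example), a single index can turn a full table scan into an index search, visible in SQLite's query plan:

```python
import sqlite3

# Build a small in-memory table to compare query plans before and
# after adding an index. Names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, f"cust{i % 100}", i * 1.5) for i in range(10_000)])

def plan(sql):
    # The plan detail text is in the last column of EXPLAIN QUERY PLAN rows.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT total FROM orders WHERE customer = 'cust42'"
before = plan(query)    # without an index: a full table scan
conn.execute("CREATE INDEX idx_customer ON orders (customer)")
after = plan(query)     # with the index: an index search
print(before)
print(after)
```

The exact plan wording varies by SQLite version, but the shift from a scan to an index-assisted search is the kind of low-cost tuning win the paragraph above has in mind.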
I've been amused by the fact that many organizations facing performance challenges immediately moved to replace their systems rather than trying to optimize the use of their current IT environment. On occasion, that move resulted in enough improvement that time-stressed IT staff would declare victory and move on to other issues. Had they taken a bit more time to examine network, storage, and memory usage, however, they might have gotten even more improvement, and at a lower overall cost.