SANRAD V-SWITCH XL: network-based data acceleration

SANRAD uses a flash-based network cache to accelerate data access in a virtual environment. Is this the next phase of storage virtualization and acceleration?
Written by Dan Kusnetzky, Contributor

Oded Ilan, CEO of SANRAD, introduced me to one of his company's new products, the SANRAD V-SWITCH XL. SANRAD has moved storage acceleration and virtualization into a host-based software appliance rather than hosting that technology in a storage server.

What SANRAD has to say about the V-SWITCH XL

SANRAD’s VXL software brings the company’s flash caching and virtualization technology to server virtualization platforms.

Key capabilities of the VXL include:

  • Efficiently and dynamically distributes host-based flash resources to guest virtual machines via an application-optimized cache engine
  • Supports key enterprise data center storage requirements such as high availability, storage virtualization, and resilience
  • Runs on VMware vSphere, Microsoft Hyper-V, and Xen-based hypervisors
  • Guarantees cache migration for vMotion: cached data is treated as a virtualized storage entity and can be migrated between ESX servers along with the virtual volumes without performance loss
  • Allows caching over highly available mirrored volumes, so a single flash resource accelerates both copies of the data, doubling the efficiency of flash utilization
  • Requires no agent on the application virtual machine
  • Provides central management, so IT does not need to manage each accelerated virtual machine separately

Snapshot analysis

Storage acceleration has been a focus for suppliers of all sizes for quite some time. In the past, attention centered on reducing storage latency and increasing data throughput by increasing the rotational speed of the storage media. The next step was adding a secondary cache to the storage volume and using sophisticated caching logic to enhance the performance of the storage device.

The volume-by-volume approach found its logical limits. It was simply too expensive to load up each storage volume with cache memory. And when many storage volumes were combined into a RAID array, the caching algorithms on each volume weren't aware of the steps being taken to optimize overall storage system performance, and began to hinder performance rather than improve it.

The next step was to put the cache memory into the storage server so that the caching could be aware of all of the storage volumes and could take steps to optimize overall performance. Later, intelligence was added to the storage controllers allowing data to be moved from slower devices to faster devices (or moved back to a slower device) based upon actual usage patterns.
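The core mechanism described above, a fast cache sitting in front of slower storage volumes and serving repeat reads from fast media, can be sketched in a few lines. This is a minimal illustrative example, not SANRAD's implementation; the `ReadCache` class, its LRU eviction policy, and the dict standing in for the slow backing store are all assumptions chosen for clarity.

```python
from collections import OrderedDict

class ReadCache:
    """Minimal LRU read cache in front of a slower backing store.

    Hypothetical sketch: `backing_store` is any dict-like object
    standing in for the slower storage volumes behind the cache.
    """

    def __init__(self, backing_store, capacity=4):
        self.backing = backing_store
        self.capacity = capacity
        self.cache = OrderedDict()  # block id -> data, in LRU order

    def read(self, block_id):
        if block_id in self.cache:
            # Cache hit: refresh recency and serve from fast media.
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        # Cache miss: fetch from the slow store and populate the cache.
        data = self.backing[block_id]
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            # Evict the least recently used block.
            self.cache.popitem(last=False)
        return data

store = {n: f"block-{n}" for n in range(10)}
cache = ReadCache(store, capacity=2)
cache.read(1)
cache.read(2)
cache.read(1)   # hit: block 2 becomes least recently used
cache.read(3)   # miss: evicts block 2
print(sorted(cache.cache))  # → [1, 3]
```

A cache at the storage-server level, unlike the per-volume caches it replaced, sees requests for all volumes at once, so its eviction decisions reflect overall access patterns rather than a single device's.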

Then large quantities of flash or DRAM were added to the storage servers as storage media in their own right, not just as a caching mechanism. This very fast storage allowed data access performance to closely approach that of local system memory.

SANRAD has taken a different step, placing the large flash cache on the host itself and managing it with software executing in a virtual machine on one or more hosts. This allows the caching to enhance storage performance across many different storage devices and to tune storage performance for the virtualized environment hosted on a physical server.

This is both a clever extension of traditional approaches and a logical next step for storage acceleration in a virtual environment. If your storage infrastructure is showing signs of becoming a bottleneck, it would be worth taking the time to learn about SANRAD and its products.
