
Tecplot - a PernixData customer profile

Tecplot, Inc. explains what they did when storage became the bottleneck in their development process.
Written by Dan Kusnetzky, Contributor

I believe it is important to publish what customers are saying about a company and its products. This time, Pete Koehler of Tecplot, Inc. was kind enough to discuss why his company is using PernixData FVP and why that product was selected.

Please introduce yourself and your organization.

My name is Pete Koehler, and I’m the IT Manager and Virtualization Architect for Tecplot, Inc. Tecplot, Inc. is a leading provider of data visualization and analysis software. Located in Bellevue, Washington, Tecplot empowers engineers and scientists to discover, analyze, and understand information in complex data, and to communicate results with professional images and animations. Our company is focused on providing high-quality visualization products that help users be more creative, efficient, and productive. With 30 years of experience and thousands of users worldwide, Tecplot, Inc. has become a trusted name in data visualization.

What were you doing that needed this type of technology?

Tecplot uses an automated code-compiling infrastructure to support the needs of the software development team. The faster systems compile code, the more quickly feedback can be returned to the team. These faster turnaround times translate into better, higher-quality code in our products. Traditionally, code compiling is a CPU-bound process. In early 2013, we invested in more compute resources for our infrastructure to support these virtualized build systems. This provided a significant benefit, but tests showed that much of the bottleneck had shifted to our storage infrastructure. We were giving back some of that performance increase because of storage contention. This was also impacting other business-critical systems that ran on that same shared storage.

What products did you consider?

Our initial plan was to invest in another traditional storage array: more spindles, faster spindles, or a hybrid flash/spinning-disk solution built into an array. With this approach, the additional costs were going to be significant, because our legacy 1GbE storage fabric also needed to be upgraded to support the increased traffic. Even with a significant financial investment, we were probably going to end up with only modest performance improvements.

Why did you select this technology?

We found ourselves in the same dilemma as many organizations. We had outgrown some of the storage performance capabilities our virtualized environment was originally designed for. The options all seemed to point toward a complete overhaul of our storage infrastructure, which can be a financial challenge for an organization of any size. There are many great storage solutions out there that offer good performance, but they are proprietary by nature and require quite a financial commitment to a given solution.

With the advent of flash-based storage, the idea of decoupling storage performance from storage capacity by way of a caching mechanism had been an intriguing one. However, this meant either investing heavily in array-based caching solutions that still put a burden on our fabric, or going with a host-based caching solution. Software-based caching solutions ride the “software defined” wave and take it one step further, abstracting the intelligence away from the physical storage. Up until PernixData FVP, all host-based, server-side caching solutions were read-cache only. Read caching can be very effective, but our heavily write-biased workload (85% writes) required a solution that would address the challenges related to write I/O: taxing RAID penalties and the high latency that can come along with them.
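
To illustrate why a write-heavy workload is so taxing on a traditional array, here is a rough back-of-the-envelope sketch (not from the interview; the 1,000 IOPS figure and the RAID-5 write penalty of 4 are assumptions used only for illustration) of how front-end writes get amplified into back-end disk I/O:

```python
# Illustrative sketch: estimating back-end disk IOPS for a write-heavy workload
# behind a parity RAID array. The inputs below are assumptions, not Tecplot's data.

def backend_iops(frontend_iops: float, write_ratio: float, write_penalty: int) -> float:
    """Reads pass through roughly 1:1; each front-end write costs `write_penalty`
    disk I/Os (e.g. 4 for RAID-5: read data, read parity, write data, write parity)."""
    reads = frontend_iops * (1.0 - write_ratio)
    writes = frontend_iops * write_ratio
    return reads + writes * write_penalty

# Example: 1,000 front-end IOPS at 85% writes on RAID-5.
print(backend_iops(1_000, 0.85, 4))  # -> 3550.0 back-end IOPS
```

In other words, an 85%-write workload can multiply the load on the spindles several times over, which is why read-only caching alone would not have relieved the pressure.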

We tested out PernixData FVP because of its ability to support true write-back caching. It is tightly coupled with the hypervisor, providing a highly available pool of cache that delivers the safety needed when accelerating writes, and it does not compromise the vSphere feature sets we are all familiar with. Just as a virtual machine is unaware of the hypervisor beneath it, a VM being accelerated by PernixData FVP has no idea it is being accelerated. No in-guest agents. No VM configuration changes. It just works.

What tangible benefit have you received through the use of this technology?

Our measures of success with PernixData FVP were improved build times and reduced load on our backing storage arrays, and both have seen significant improvements. An interesting side effect of running FVP in a production environment is how its analytics can help you understand your workload much better. In the process of using the product, we discovered some internal workflows that were creating an unnecessary burden on our storage. We identified and eliminated the problem, which eased some of the burden on our storage. This pattern was only visible through PernixData FVP.

What advice would you offer others?

Know your workloads. I cannot emphasize this enough. All workloads are different. In order to know what solution might be a good fit for an organization, one must understand the requirements of the underlying applications and the systems that support those applications. Duty cycles, I/O transfer sizes, and read/write ratios play a part in this understanding, but they are not the only factors. The type of read-and-write dialog that occurs within a given workload plays a key part as well. I would also recommend taking IOPS numbers from storage manufacturers with a grain of salt. Oftentimes those numbers are generated from an unrealistic workload and do not accurately translate into how the products will perform in your environment.
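
As a small companion to that advice, here is a minimal sketch of how a read/write ratio and average I/O sizes could be derived from two samples of cumulative I/O counters. The counter field names are hypothetical and do not correspond to any specific vendor or hypervisor API; the point is only the arithmetic behind "knowing your workload."

```python
# Minimal sketch: profiling a workload from two samples of cumulative I/O counters.
# Field names ("read_ops", "write_bytes", ...) are illustrative, not a real API.

def workload_profile(before: dict, after: dict) -> dict:
    reads = after["read_ops"] - before["read_ops"]
    writes = after["write_ops"] - before["write_ops"]
    read_bytes = after["read_bytes"] - before["read_bytes"]
    write_bytes = after["write_bytes"] - before["write_bytes"]
    total = reads + writes
    return {
        "write_ratio": writes / total if total else 0.0,          # e.g. 0.85 = 85% writes
        "avg_read_kb": read_bytes / reads / 1024 if reads else 0.0,
        "avg_write_kb": write_bytes / writes / 1024 if writes else 0.0,
    }

# Example with made-up sample data taken an interval apart.
sample_1 = {"read_ops": 10_000, "write_ops": 50_000, "read_bytes": 640e6, "write_bytes": 1.6e9}
sample_2 = {"read_ops": 13_000, "write_ops": 67_000, "read_bytes": 830e6, "write_bytes": 2.2e9}
print(workload_profile(sample_1, sample_2))
```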
