From time to time, I have an opportunity to speak with Red Hat. This time, the conversation centered on Red Hat Storage (formerly Gluster). The company was launching a beta test of Red Hat Storage 2.0. Given the growth in industry interest in what is being called "Big Data," Red Hat's move is quite timely.
Here's how Red Hat describes the technology in Red Hat Storage 2.0:
The beta of Red Hat Storage 2.0 previews several key innovations, including:
- Unified File and Object Access: Red Hat Storage 2.0 now provides the industry’s first release of file storage designed to integrate seamlessly with object storage, offering organizations greater information accessibility within a single, centralized storage pool;
- Big Data Storage Infrastructure: Red Hat Storage 2.0 includes compatibility with Apache Hadoop, providing a new storage option for Hadoop deployments. This new functionality enables faster file access and opens up data within Hadoop deployments to other file-based or object-based applications;
- Built on Red Hat Enterprise Linux: Red Hat Storage 2.0 is built on Red Hat Enterprise Linux 6, which provides a secure, high-performing, flexible, enterprise-class operating environment. The appliance uses Red Hat Enterprise Linux's extended update support capabilities and the XFS file system (Red Hat's Scalable File System Add-On) to provide a core operating platform that is reliable, scalable, secure, and stable over an extended period of time;
- Performance Enhancements: Faster rebalancing, performance tuning enhancements and Network File System Version 3 (NFSv3) performance optimization;
- Red Hat Enterprise Virtualization Readiness: Enables organizations to use Red Hat Storage as a storage layer for Red Hat Enterprise Virtualization;
- Improved Manageability: New capabilities that make it even easier to manage a Red Hat Storage cluster, including enhanced data management with Network Lock Manager (NLM) compliance, new event history information availability, additional storage brick level information, and improved visibility into self-healing operations and status; and
- Improved Reliability: New capabilities, such as proactive self-healing, that make Red Hat Storage even more reliable and ready for an organization’s most demanding production workloads.
Cloud storage and Big Data appear to have become catchphrases that nearly every supplier in any branch of storage feels compelled to address. Most of them are tackling the same issues Red Hat mentions in its announcement of the Red Hat Storage 2.0 beta.
What's really different here is that Red Hat's technology is built on a foundation of community development expertise.
Open source software projects are often driven by irritation. Individuals become frustrated with a piece of software because of what it does or doesn't do, and they band together to fix it. As a result, the software directly addresses real-world problems that users actually have.
While the early editions of this code are often for organizations comfortable with being on the bleeding edge of technology, each subsequent version is more and more polished and usable.
It is clear that the community behind Red Hat Storage has run directly into the challenges of adopting virtualized computing environments, cloud computing, and Big Data technology without walking away from decades of investment in other approaches.
I'm looking forward to hearing stories from people using this technology as it rolls out.