Democratize big data: How to bring order and accessibility to the data lake

Hadoop has emerged as a corporate standard for big data management, but success depends on governance, cataloging, and accessibility. Here's a look at two camps of vendors in the Hadoop ecosystem that are bringing order and accessibility to big data.

The good news is that 10-year-old Hadoop is maturing quickly. The bad news is that many companies are still struggling to get beyond pilot projects and support many applications on this new data-management platform. That's why I wrote my latest report, "Democratize Big Data: How to Bring Order and Accessibility to Data Lakes."

First, let's consider the industry leaders that are doing great things with data lakes, drawing on examples from the recent Hadoop Summit:

In financial services, Capital One has been making waves this year, appearing at multiple events to talk about its Hadoop- and Spark-based fraud detection as well as its big data analytics, streaming, and security work.

In retail, Macy's embraced Hadoop more than five years ago to power insights for Macys.com. Today it's doing more sophisticated cross-channel analysis, driving personalized promotions that encourage online customers to shop in stores and in-store customers to order out-of-stock and online-only items at Macys.com.

In manufacturing, Ford relies on Hadoop for connected car capabilities. Ford does filtering and decision-making at the sensor and car level while uploading crucial data points for centralized insight and analysis. For example, FordPass app users can remotely check their car's fuel level, location, and diagnostic error codes, but detailed data used by service technicians remains in the car's black box.
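The edge-filtering pattern Ford describes — keep detailed telemetry in the vehicle, forward only the crucial data points for centralized analysis — can be sketched in a few lines. This is a simplified illustration with made-up field names, not Ford's actual implementation:

```python
# Simplified sketch of edge-side filtering for connected-car telemetry.
# Field names and the upload set are illustrative, not Ford's design.

# Only these data points are worth uploading for centralized analysis
UPLOAD_FIELDS = {"fuel_level", "location", "diagnostic_codes"}

def filter_for_upload(reading: dict) -> dict:
    """Keep only the crucial fields; detailed data stays in the car."""
    return {k: v for k, v in reading.items() if k in UPLOAD_FIELDS}

raw_reading = {
    "fuel_level": 0.42,
    "location": (42.33, -83.05),
    "diagnostic_codes": ["P0420"],
    "engine_vibration_trace": [0.01] * 1000,  # detailed data, stays local
}

upload = filter_for_upload(raw_reading)
# upload carries only fuel_level, location, and diagnostic_codes
```

The design choice is the one the article highlights: bandwidth- and privacy-sensitive detail never leaves the edge, while the data lake still receives enough to power apps like FordPass.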

In insurance, Progressive has been a pioneer of usage-based pricing with Progressive Snapshot. The company has more than 15 billion miles' worth of driving data in a Hadoop-based data lake, and it can drill down and offer discounts to individual policyholders based on factors such as their total miles driven, nighttime driving, and speed and braking habits.
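Usage-based pricing of this kind boils down to scoring per-policyholder aggregates pulled from the data lake. The toy function below illustrates the idea using the same three factors the article mentions; the thresholds and weights are invented, not Progressive's actual model:

```python
def usage_discount(total_miles: float, night_miles: float,
                   hard_brakes_per_100mi: float) -> float:
    """Toy usage-based discount: low mileage, little nighttime driving,
    and smooth braking each earn 10%, capped at 30%. Illustrative only."""
    discount = 0.0
    if total_miles < 8000:
        discount += 0.10          # low annual mileage
    if night_miles / max(total_miles, 1) < 0.05:
        discount += 0.10          # little nighttime driving
    if hard_brakes_per_100mi < 1.0:
        discount += 0.10          # smooth braking habits
    return min(discount, 0.30)

print(round(usage_discount(6000, 200, 0.5), 2))  # all factors favorable: 0.3
```

The hard part, and the reason the data lake matters, is computing those per-driver aggregates reliably from billions of miles of raw telemetry.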

These examples are inspiring, but behind every breakthrough there's been a lot of hard work. And many fast followers are still struggling. Here's a recent sampling of criticisms I've heard from Hadoop users:

  • The VP of platforms and architecture at a digital marketing company said, "Better data governance is the number-one priority on our Hadoop wish list."
  • The director of analytics at a logistics firm said, "Hadoop was messy on the data-lineage end. We spent months working out the details for data ingestion."
  • A BI solutions architect at an aerospace firm said, "We have three people working with Hadoop, but we have more than 150 business users who need access to the data. I'd like to see better ease of use for business users."

These and other comments led me to publish my latest research: "Democratize Big Data: How to Bring Order and Accessibility to Data Lakes." The report explores three areas where commercial vendors are filling gaps in the Hadoop stack: data management and governance, data cataloging and metadata management, and data discovery and self-service data prep.

These three gaps are being filled by two camps of vendors that are complementing what's available in Hadoop. Incumbent data integration vendors focusing on the data lake include IBM, Informatica, Oracle, Pentaho/Hitachi, SnapLogic, Syncsort, and Talend. Next-generation vendors that have emerged in the big data era include Alation, Collibra, Datameer, Podium Data, Paxata, Trifacta, Tamr, Waterline, and Zaloni.

Both camps are bringing automation and repeatability to data lake management and governance. They're also making the contents of the data lake more accessible, and many are shielding users from the complexities of manual coding in Pig, Hive, Spark, and other open source components.
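For a flavor of the manual work that self-service data prep tools automate, the snippet below hand-codes two of the most common cleanup steps — deduplication and type coercion — in plain Python. The file layout and column names are hypothetical; in practice this logic would run at scale in Spark or Hive, which is exactly the coding these vendors abstract away:

```python
import csv
import io

# Hypothetical raw extract landed in the data lake: duplicate rows,
# numeric values stored as strings.
raw = io.StringIO(
    "order_id,amount\n"
    "1001,19.99\n"
    "1001,19.99\n"  # duplicate ingest
    "1002,5.00\n"
)

seen, rows = set(), []
for rec in csv.DictReader(raw):
    key = rec["order_id"]
    if key in seen:
        continue                          # drop duplicate records
    seen.add(key)
    rec["amount"] = float(rec["amount"])  # coerce string to number
    rows.append(rec)

# rows now holds two cleaned records, ready for analysis
```

Multiply this by hundreds of sources, each with its own quirks, and the appeal of automated, repeatable data prep becomes clear.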

As the report explains, a data lake is not a replacement for a conventional enterprise data warehouse, but many data-processing and data-analysis workloads are shifting to this new platform. The choice between incumbent and next-generation vendors depends on the specifics of your deployment. The report offers vendor-, category-, and capability-specific descriptions and selection criteria as well as big-picture advice on setting your analytic direction. Here's a peek inside the table of contents:

  • Executive Summary
  • Broader Access Drives Insights and Actions
  • Hadoop Emerges as a Corporate Standard
  • Data Lake Success Demands a Mature Approach
  • Seek Ease of Use, Repeatability and Automation
  • Look to Next-Generation Vendors to Fill Data Lake Gaps
  • Consider Incumbents for Broader Needs
  • Recommendation: Target the Center of Analytical Gravity
  • Recommendation: Consider Consultants and System Integrators
  • Takeaways: Prevent Data Swamps and Make Big Data Accessible

Click here to download an excerpt of the report.

YOUR POV

How is your organization filling the gaps in the Hadoop stack? Get in touch with me on the Constellation Research website or on Twitter @DHenschen.