As noted a few weeks back, the future of Big Data is cloudy. There's simply too great a disconnect between the latent demand for insight from Big Data and the skills necessary to profit from it.
That's what's sparked (pun intended) interest in managed Big Data cloud services, a form of Platform-as-a-Service (PaaS). The guiding notion is that if skills, rather than capital or resources, are your constraint, simply buying time on some Amazon clusters to run Hadoop or another Big Data analytics store is not going to get you much. That explains the growth of managed data services, both big and small. No wonder Amazon moved over 14,000 databases with its new migration service during its first year, and that Aurora, followed by Redshift, have been the cloud provider's fastest-growing services.
The recent announcement by CenturyLink of a new Big Data-as-a-Service cloud offering with Cloudera spotlights the fact that there are not simply more choices for running Hadoop in the cloud, but that the choices are very diverse.
In a nutshell, the CenturyLink offering is clearly a premium product. It repackages Cloudera Enterprise, with options to take any of the five editions, and implements it on a bare-metal cloud infrastructure that does away with the virtualization common to most cloud platforms. The upside is that, without the virtualization layer, compute will be more efficient, and with local (rather than pooled) storage, data access will be much faster. The flip side is that virtualization is key to the elasticity and economies of scale that giant providers like AWS or Azure can offer. That's why virtualization tends to be the norm, with providers like Oracle offering bare metal only as an option to their customers.
The other piece of the CenturyLink offering is that it will provide options for both tactical and strategic services; this is largely attributable to capabilities that came to CenturyLink through its 2014 acquisition of Cognilytics.
For cloud providers, the need to differentiate is simple: They need an answer to the "nobody gets fired for buying Amazon (or Microsoft Azure)" mindset. Simply offering a carbon copy of what the big guy on the block already offers is self-defeating; you can try competing on price, but realistically, that option is only open to challengers the size of Google.
Significantly, when Hortonworks announced its offering on the AWS Marketplace, it did not try to port HDInsight, the Microsoft Azure service that features the complete Hortonworks Data Platform plus Spark. Neither did it try to emulate the similarly full-featured Amazon Elastic MapReduce (EMR). Instead, Hortonworks created a new SKU, Hortonworks Data Cloud, a subset product built around the most popular workloads: Spark and Hive. In so doing, Hortonworks gave AWS Marketplace customers a reason not to default to EMR.
For enterprises, there's also good reason for diversity. Some only require the ability to run Hadoop cheaply while leaving the driving (all the ugly technical stuff like deployment, management, patching, and updating) to the cloud provider. Some will be happy with stripped-down, plain-vanilla offerings, while others want more mix-and-match options. Some might even want the type of high-touch engagement that they would expect from an Accenture, a Deloitte, or a more specialized regional provider.
The new CenturyLink Big Data-as-a-Service cloud offering is a clear stake in the ground: this telecom provider will deliver the type of high-touch service that positions it as an alternative to the cloud's usual suspects. It is another sign that the cloud is morphing from deployment alternative to enterprise platform, with the diverse menu of options that enterprises expect from their strategic IT providers.