It's hard to ignore that in the IT world, Amazon is becoming a force of nature. The AWS cloud business, now on a trailing annual run rate exceeding $10 billion, has also morphed into the parent company's cash machine. As an IT vendor, Amazon is still a fraction of the size of the Microsofts, IBMs, and Oracles of the world, but its share of the enterprise IT wallet is growing harder to ignore as the cloud comes to be perceived as the new norm.
Today, Amazon and rival cloud providers are barely scratching the surface of addressable customer workloads. The cloud has been a popular place for dev/test and for hosting new externally focused apps, not to mention serving as the natural point of presence for IoT and mobile solutions. But what about the heartbeat workloads that keep enterprises running?
While it might be straightforward to use the cloud for that new customer engagement, social media, or IoT application, moving back office systems is another matter. Start with the dread that the words "version upgrade" evoke in organizations running 10- or 20-year-old back-end systems that have been heavily customized over the years. It's not hard to conclude that the notion of moving such fragile systems strikes fear into the hearts of even the most jaded DBA or application architect.
Nonetheless, just as Y2K pushed enterprises in an earlier era to replace back office systems not designed for the next millennium, security might become the cattle prod to the cloud on this go-round. There's a bit of irony there, given that a traditional criticism of the cloud was whether it was a safe enough place to entrust your data. With the growing regularity and mutating nature of hacks, the shoe's now on the other foot: can you trust your own IT group to muster the same degree of state-of-the-art security sophistication as cloud providers whose sole business is IT infrastructure?
The case becomes more compelling when you look at the cloud as a vehicle for application modernization. Elastic, highly scaled architectures change the notion of how you design and run databases and applications. When there's no more excuse about lag time for procuring infrastructure, IT can change from gatekeeper to enabler, especially if it embraces a DevOps mentality.
The cliché that change is not just about technology, but people and process as well, is truer than ever if you truly leverage the cloud. Customers we spoke with at Amazon re:Invent this week emphasized that the benefits of the cloud materialize when you change not only the technology, but the people and processes as well.
As the cloud becomes more mainstream, it becomes more than just a place to run your systems. The cloud has morphed into a cloud platform. Instead of running Oracle or SQL Server, you're running DynamoDB, Aurora, or Azure SQL Database. In place of your existing security and identity access management, you run the cloud provider's own counterparts.
To recap, Aurora, Amazon's home-grown relational database, was originally intended to draw MySQL workloads. With the newly announced PostgreSQL compatibility, you're looking at a more serious enterprise database with more mature, Oracle-like SQL and data types. There's little doubt that, as Oracle becomes cloud-first, it views Amazon as its chief rival; and as Amazon rounds out its database portfolio, it's taking dead aim at the core of Oracle's installed base.
One of the few places where Amazon and Oracle are on the same page is the matter of cloud inevitability; both predict that the bulk of on-premises servers will disappear by the middle of the next decade. So what happens then?
The answer is that even as the cloud becomes the default option, on-premises systems won't go away, because few if any enterprises with any legacy ever have 100% of their systems following a common blueprint, no matter how powerful or influential their enterprise architects are. And so, the more cloud, the more hybrid.
That will ultimately push Amazon into territory that is physically outside its comfort zone.
At re:Invent, Amazon made several announcements acknowledging that some of its capabilities (principally Lambda, which lets you run programs without worrying about servers) will have to live on premises for some use cases.
Amazon announced Greengrass, a new capability for managing IoT applications, and Snowball Edge, an enhancement to its Snowball appliance that is typically used for moving on-premises data to the cloud. The common thread in both cases is the need to preprocess data locally before it is brought to Amazon, rather than Amazon trying to get a foot literally inside the door of the enterprise.
These come atop other Amazon initiatives designed to bridge the on-premises world with the cloud, such as the Database Migration Service, which is, in essence, a bidirectional database replication service, and the recently announced VMware Cloud on AWS.
Because Amazon's core competence is building globally networked, Internet-scale, redundant data centers, we don't expect it to travel the road Oracle has taken with its Cloud at Customer portfolio, where Oracle manages a cloud appliance inside your four walls.
But in the years ahead, Amazon customers will still have assets on both sides of the cloud frontier, and they will aspire to a single view of all their assets and data. The last thing they want from a cloud transition is another silo. The question for Amazon is how far it will be willing to grow its on-premises footprint to accommodate this reality.