With a few exceptions, the announcements at Oracle OpenWorld last week were largely aimed at filling in the blanks of its cloud and data platform portfolio. ZDNet colleague Stephanie Condon was on the scene last week, covering the new SaaS AI enhancements, a new Autonomous Linux, a new Oracle Cloud free tier that includes the Autonomous Database, not to mention Big on Data bro Andrew Brust's report on the new analytics for Oracle Fusion ERP. And check out Larry Dignan's take, which came out this morning.
There was also the re-emergence into the spotlight of APEX, the low-code/no-code database development language that may finally get its moment in the sun after nearly 15 years of scant marketing promotion. Oracle also went more public in planting its stake in the ground for multi-model converged databases that expand the database footprint to encompass development tools, data integration, data virtualization, analytics, and machine learning, drawing the inevitable contrast with AWS's more specialized approach. And as we reported a few weeks back, Oracle is spreading its wings in machine learning and AI, suffusing it into application suites and building out an extension atop the database cloud service for managing the full lifecycle of machine learning and AI projects, from ingest to data lake management to consumption -- either through smart guided BI analytics or a data science collaboration and development environment.
It's a long laundry list, but in this report, we'll focus on the two areas that jumped out at us: Oracle's first use of persistent memory, and our observations on the Autonomous Database now that it has logged at least a year of production use in the field.
To us, the highlight was Oracle's announcement of the second refresh of Exadata this year: the new X8M model, which more than doubles transaction processing performance and chops latency by roughly a factor of ten. The enhancements come courtesy of several key design changes in storage, networking, and virtualization, of which the addition of a new tier of persistent memory storage takes the spotlight.
With the X8M announcement, Oracle becomes the next enterprise data platform player after SAP to embrace persistent memory. As we noted in our SAPPHIRE coverage last spring, persistent memory has been long on both promise and incubation. First hailed by Intel (today, the only supplier of persistent memory, which it brands Optane) as a new form of high-performance storage that was supposed to deliver almost the performance of memory at almost the price of Flash, the road from announcement to actual release took almost five years. And, as platform suppliers have learned the hard way, you can't just plop persistent memory into a storage server and get the promised performance. There are different operating modes that, in some cases, require database and application platform vendors to rewrite their software, and then there is the form factor: persistent memory performs best when you put it into memory (DIMM) slots rather than storage drive slots.
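To make the rewrite burden concrete: in persistent memory's application-direct style of use, software maps the persistent media straight into its address space and takes on responsibility for flushing stores itself, rather than relying on the storage stack. The minimal sketch below illustrates that pattern in Python, with an ordinary file standing in for a persistent region (a deliberate simplification: real deployments map a DAX device and flush with CPU cache-line instructions, and the path name here is hypothetical).

```python
import mmap

# Illustrative sketch only: real app-direct persistent memory is exposed as a
# DAX-mounted device and flushed with cache-line instructions (e.g. CLWB).
# Here an ordinary file stands in for the persistent region.
PMEM_PATH = "pmem_region.bin"   # hypothetical stand-in for a /mnt/pmem path
REGION_SIZE = 4096

# Create and size the backing region.
with open(PMEM_PATH, "wb") as f:
    f.truncate(REGION_SIZE)

# Map the region into the address space and store to it directly --
# no read()/write() system calls on the hot path.
with open(PMEM_PATH, "r+b") as f:
    region = mmap.mmap(f.fileno(), REGION_SIZE)
    region[0:13] = b"committed txn"
    # The explicit flush is the application's job in this mode; deciding
    # where these flushes go is exactly the kind of change database
    # engines must make when they adapt to persistent memory.
    region.flush()
    region.close()

# After the flush, the data outlives the mapping (and, on real persistent
# memory, a power loss).
with open(PMEM_PATH, "rb") as f:
    print(f.read(13))  # prints b'committed txn'
```

The point of the sketch is the division of labor: loads and stores hit the media directly, so durability becomes an explicit, software-managed step rather than a side effect of a write call.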
Oracle has taken these lessons to heart and then added a few other tricks. They include a move from InfiniBand to a new converged 100-gigabit Ethernet fabric, which moves Exadata networking onto a more mainstream standard that is drawing the lion's share of development. Converged Ethernet in turn enables an extremely high-performance protocol, Remote Direct Memory Access (RDMA), which bypasses the operating system to access memory directly. It's an industry-standard implementation, appropriately called RDMA over Converged Ethernet, or RoCE. Oracle is offering the new model at the same price level as the existing Exadata X8 generation, with the new model primarily targeted at transaction processing workloads, which draw the most benefit.
It rounds out a line that, a few months ago, also added a new, low-cost Extended Storage (XT) model designed for active archiving use cases (making archival data directly accessible for analytics). While on the software side, Oracle has adopted a cloud-first strategy, the new Exadata models are debuting first with on-premises customers.
The addition of persistent memory is more than just a transaction turbocharge. If persistent memory delivers on its promises, it could replace, not Flash, but DRAM, as you could accommodate far more persistent memory in a memory slot than DRAM. But getting there has required platform developers like Oracle and SAP to optimize, and in some cases rewrite, their databases and applications to take advantage of it. Because we're still in the early chapters of this story, the emergence of persistent memory will be a sleeper rather than a sudden disruption of the database and application server market, because for most vendors, it will require trial and error to get it right.
The other thing we were keeping an eye out for this year was actual production results from the Autonomous Database, now that customers have had it in their hands for over a year.
The results had a common theme -- as a managed cloud service, an autonomous database instance could be spun up in less than five minutes on average, performance was fast, and DBAs had a lot less to do because autonomous databases automated housekeeping chores such as provisioning, patching, and tuning. As we saw with the crowds lined up for a "DBA vs. the Autonomous Database" breakout session at OpenWorld 2018, there is a lot of fear, uncertainty, and doubt among this group about becoming obsolete.
We drilled down on DBA roles a bit further with a few customers. In most cases where organizations already had DBAs, their numbers were either reduced or redirected to related areas: dealing with integration issues with data sources (e.g., other databases); designing, managing, and performing testing; advising application developers on schema design (because poorly designed schemas will bring the performance of even the most autonomous platform down to a groan); or learning related new disciplines such as blockchain.
Our other impression was that the early public references tended to be small companies or other SaaS providers. In other words, we're not talking about the core of the Oracle installed base. That's not surprising: large users are not likely to publicize early proofs of concept, and they are also more likely to take a wait-and-see approach, not adopting v1 of any new platform.
But there is also another missing link: for now, Oracle does not yet have a migration path to motivate existing Exadata customers, which would be the sweet spot of the installed base, to move to the Autonomous Database service. Specifically, today there are neither software upgrades nor economic incentives for the core of the Exadata base to make the move.
But Oracle has a unique opportunity here given that the architecture of Exadata is identical for on-premises racks that customers manage and autonomous database services in the Oracle Public Cloud that Oracle manages. And Oracle has publicly stated its intent to add the Autonomous Database in an upcoming refresh of its new Exadata Gen 2 Cloud at Customer. Adding an autonomous service for Cloud at Customer should provide a golden opportunity for Oracle to open a migration path for existing late-model Exadata on-premises customers, because it should be technically feasible to provide a software upgrade that at least gets them most of the way there.
Since Larry Ellison keeps invoking the Tesla self-driving analogy to describe the Autonomous Database (well, he's on the board), his company should follow that example full circle. Tesla keeps its product fresh, not so much through new model introductions as through software updates. It's a lesson that Oracle could take to heart to quickly scale the Autonomous Database footprint by opening a path for the core of the Exadata customer base.