Over the last few weeks I've been talking to database companies on both sides of the SQL divide, and the more we've talked about how their databases are developing - and how their users are using them - the clearer it's become that the divide between SQL and NoSQL is shrinking.
You just have to look at how even the largest databases are changing, with Oracle adopting NoSQL patterns in its in-memory tools and Microsoft's SQL Server doing the same with its new column stores. The big boys are learning from the new entrants, adopting their ways of working as customers demand that their existing tools support new use cases and new data models.
As the amount of data we have to work with carries on exploding, the tools we're using have to change. A massive NoSQL system may work well as a stream processing tool for the floods of data from billions of sensors in tomorrow's Internet of Things, but how are we going to work with terabytes of historical data stored on slow spinning discs somewhere in the cloud? By adopting techniques from SQL, NoSQL is able to support more of these new scenarios - and the same is true of SQL.
We're moving to a world where it doesn't matter if you're using SQL or NoSQL or, well, whatever. What matters is that you're using the appropriate database engine for your use case and for your applications. It might mean spending a little time evaluating tools before you go into production - or it might just mean using a different set of features in your database of choice. Neither is the right answer, but, more importantly, neither is the wrong answer either.
The truth is, all most users want is a place to store their data and a way to make basic queries on its content, searching for the information they need. That data can be in any format; it just needs to be accessible. This small data world is the opposite of the massive databases that power big data. For small data, the value is in the data itself, while for big data the value is in what the data can tell us.
I know I'm making a simplistic divide here: we all want both types of value from our data. But the underlying concept makes sense of the way our databases are changing and merging. I can have a key/value pair in SQL Server, with a column store for rapid in-memory queries. I can add a JSON query engine to MySQL, turning my data into an API for my apps. These are two very traditional relational databases, but with features we'd normally consider to be NoSQL.
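That blend - schemaless documents stored and queried inside a relational engine - can be sketched in a few lines. This is an illustrative example only, not SQL Server's or MySQL's actual syntax: it uses Python's built-in sqlite3 module and assumes a SQLite build with the JSON functions enabled (the default in recent versions). The table and key names are made up for the demo.

```python
# Sketch: a plain relational table used as a key/value document store,
# then queried inside the documents with SQL's JSON functions.
# Illustrative only - names and schema are hypothetical.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)")

# Store schemaless JSON documents as values, NoSQL-style.
conn.execute("INSERT INTO kv VALUES (?, ?)",
             ("user:42", json.dumps({"name": "Ada", "plan": "pro"})))
conn.execute("INSERT INTO kv VALUES (?, ?)",
             ("user:43", json.dumps({"name": "Grace", "plan": "free"})))

# ...then reach inside those documents with a relational query.
rows = conn.execute(
    "SELECT key FROM kv WHERE json_extract(value, '$.plan') = 'pro'"
).fetchall()
print(rows)
```

The same pattern is what the JSON columns and functions in MySQL and SQL Server give you: the storage stays relational, but the application sees documents.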
Things are changing on the NoSQL side too. As banks and the finance industry build trading platforms around fast, in-memory NoSQL systems like Couchbase, they're having to add query languages of their own. The transition from the unstructured world of the original NoSQL systems to the structured data at the heart of the finance industry is forcing significant change on the NoSQL engines, turning them into highly tuned near-real-time data engines.
Blending NoSQL and SQL makes sense. We can store data in whatever way makes sense for our projects, and pick and choose the engines we use. It doesn't matter if we use SQL Server to hold a key/value store, or if we build a complex set of queries into Couchbase, or use a CRUD API to work with MySQL - what matters is that we get the results we need to do our jobs.
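The "CRUD API in front of a SQL database" idea can be sketched the same way: a thin wrapper that lets an application create, read, update, and delete documents without ever writing SQL itself. The class and method names here are hypothetical, not any particular library's API, and SQLite again stands in for a production database.

```python
# A hypothetical minimal CRUD layer over a relational key/value table,
# showing how an app can treat a SQL engine as a document store.
import json
import sqlite3

class KVStore:
    def __init__(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)")

    def create(self, key, doc):
        self.conn.execute("INSERT INTO kv VALUES (?, ?)",
                          (key, json.dumps(doc)))

    def read(self, key):
        row = self.conn.execute("SELECT value FROM kv WHERE key = ?",
                                (key,)).fetchone()
        return json.loads(row[0]) if row else None

    def update(self, key, doc):
        self.conn.execute("UPDATE kv SET value = ? WHERE key = ?",
                          (json.dumps(doc), key))

    def delete(self, key):
        self.conn.execute("DELETE FROM kv WHERE key = ?", (key,))

store = KVStore()
store.create("order:1", {"status": "open"})
store.update("order:1", {"status": "shipped"})
print(store.read("order:1"))
```

The caller never sees a table or a join - which is exactly the point: the choice of engine underneath becomes an implementation detail.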
We're at an intriguing inflection point in the history of the database, with a wide choice of engines, APIs, query types, and data schemas. Put them all together, and the future of the database is an interesting one. Databases are going to remain at the heart of our business processes, delivering the right-time information we need to do our jobs - while at the same time, they're the engines that power the machine-learning AIs that are automating much of the world.