Microsoft's SQL Server 2014: More than just in-memory OLTP

Summary: Microsoft's SQL Server 2014 is slated to deliver in-memory OLTP capabilities, plus a handful of other new and enhanced database features, early next year.

Last year, Microsoft officials said the next version of its SQL Server database would include built-in in-memory online transaction processing (OLTP) technology. That was all they'd say at that point about the next version of SQL Server.

Last week, company officials reconfirmed plans to incorporate in-memory OLTP -- via a new engine codenamed "Hekaton" -- in the next version of SQL Server, known officially as SQL Server 2014. But they also expanded on some of the other features that will be in the coming release.

A first Community Technology Preview (CTP) of SQL Server 2014 is due in late June 2013. The final product is expected to ship in early 2014. (Those interested in testing CTP1 can sign up for notification now.)

The Hekaton in-memory capabilities are being designed to complement the in-memory data-warehousing and business-intelligence (BI) capabilities already in SQL Server, officials said during TechEd last week.

Officials reiterated that even though Microsoft is changing the core engine, the Hekaton technology will continue to work with traditional SQL Server tables, so that users will see performance gains even on existing hardware.
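
Based on the syntax Microsoft has shown in early previews, creating a memory-optimized table is a variation on the familiar CREATE TABLE statement. The table, column and bucket-count values in this sketch are illustrative:

    CREATE TABLE dbo.ShoppingCart
    (
        -- Memory-optimized tables use hash indexes, sized by an explicit bucket count.
        CartId INT NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        UserId INT NOT NULL,
        CreatedDate DATETIME2 NOT NULL
    )
    -- SCHEMA_AND_DATA keeps the table fully durable even though it lives in memory.
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

Queries can then join such a table against ordinary disk-based tables in the same database, which is what lets existing applications pick up the gains incrementally.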

With SQL Server 2012, Microsoft introduced columnstore capabilities into its database. But that columnstore was a read-only index processed in memory. With SQL Server 2014, the columnstore becomes updatable, with faster query speeds and greater data compression, yielding more real-time analytics-processing capability.
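
As a rough illustration (the table name here is hypothetical), the 2014-era columnstore is created as a clustered index, replacing the row store outright rather than sitting alongside it as a read-only copy:

    -- Converts the table's storage to columnar format; the table remains updatable.
    CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales
        ON dbo.FactSales;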

SQL Server 2014 also will include new buffer-pool-extension support for solid-state drives, enabling faster paging. Microsoft is enhancing its "AlwaysOn" technology, also introduced with SQL Server 2012, so that it delivers "mission-critical" availability, with up to eight readable secondaries and no downtime during online indexing.
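
The buffer-pool extension is expected to be a single server-level setting; in this sketch, the file path and size are placeholders:

    -- Spills clean buffer-pool pages to a file on an SSD volume instead of evicting them.
    ALTER SERVER CONFIGURATION
    SET BUFFER POOL EXTENSION ON
        (FILENAME = 'E:\SSDCACHE\sqlbuffer.bpe', SIZE = 64 GB);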

SQL Server 2014 will back up more simply and seamlessly to Windows Azure, enabling users to back up their on-premises data to the cloud at the instance level for disaster-recovery purposes. Backups can be automatic or manual, and a backup can be restored to a Windows Azure Virtual Machine if need be.
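
The cloud backup goes through the familiar BACKUP statement, pointed at a URL rather than a disk or tape device. A sketch, with hypothetical storage-account and database names:

    -- A credential holds the Windows Azure storage account name and access key.
    CREATE CREDENTIAL AzureBackup
        WITH IDENTITY = 'mystorageaccount',
             SECRET = '<storage access key>';

    BACKUP DATABASE SalesDB
        TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDB.bak'
        WITH CREDENTIAL = 'AzureBackup';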

When used in conjunction with Windows Server Blue -- a.k.a. Windows Server 2012 R2, due out later this calendar year -- SQL Server 2014 will deliver increased scale in terms of compute, network virtualization and storage virtualization, officials said.

SQL Server is one of Microsoft's billion-dollar businesses. According to Microsoft officials, 46 percent of the databases deployed worldwide are now SQL Server, and customers are running 300,000 SQL Azure databases in Windows Azure.

About

Mary Jo has covered the tech industry for 30 years for a variety of publications and Web sites, and is a frequent guest on radio, TV and podcasts, speaking about all things Microsoft-related. She is the author of Microsoft 2.0: How Microsoft plans to stay relevant in the post-Gates era (John Wiley & Sons, 2008).

Talkback

7 comments
  • Backing up on prem to the cloud is a great feature, if they make it as easy

    ....as adding a new tape or file system .trn file.

    I think most will not move their main databases to the cloud, due to speed and latency issues. But doing transaction backups to the cloud with point-in-time restore saves a lot of tape-management hassle....
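
    Presumably it would just be the regular BACKUP LOG statement pointed at a URL, something like this (guessing at the names, with a credential set up the same way as for a full backup):

        BACKUP LOG SalesDB
            TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDB_log.trn'
            WITH CREDENTIAL = 'AzureBackup';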
    Mac_PC_FenceSitter
    • We stopped using tapes years ago

      We back up to a share, but not to the cloud, as a third-party cloud isn't under our control, and I personally doubt many companies will consider it. Of course, if you run your business on Azure this is of no concern.
      sjaak327
    • reply @Mac_PC_FenceSitter

      The backup to cloud is actually quite integrated and easy to use: it's essentially the same SSMS tooling, backing up to a URL instead of a local file/device, and that's it. Also note that WA inbound data transfer is free while outbound is not, but since restores happen far less often than backups, it makes a lot of sense.
      kevinliu173
  • Now if the in-memory feature

    allows sharding across multiple servers, I know I'll take a look. If they get serious about the $50k minimum licensing for SQL Enterprise (needed for the interesting features), I may actually buy.
    happyharry_z
  • SQL 2012 Column Store Index Correction

    CSIs are NOT "in-memory" constructs in either SQL 2012 or SQL 2014. They are non-clustered, they PREVENT modifications to their associated base table, and they are HIGHLY compressed in SQL 2012 -- but they DO exist on disk and get pulled into memory as needed, just like other index structures.
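
    In SQL 2012, the usual workaround for that read-only behavior is to disable the columnstore index, modify the data, then rebuild it -- index and table names here are illustrative:

        ALTER INDEX ncci_FactSales ON dbo.FactSales DISABLE;
        -- ... perform the inserts/updates/deletes on the base table ...
        ALTER INDEX ncci_FactSales ON dbo.FactSales REBUILD;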
    IndiciumSQLGuru
  • Comparable to HBase or Cassandra?

    Hi,

    Can some of you share some insights if you are familiar with these matters?

    Q1. Does SQL Server 2014's column store technology make it comparable to HBase or Cassandra?

    Q2. Does "in memory" mean the entire DB size + indexes + transaction logs must now be stored entirely in memory? Or is it data only? In other words, if my server has 32GB of RAM and hosts 10 DBs, does this mean the total size of all 10 DBs should be 32GB max?

    Thanks in advance for any clarification.
    RelaxWalk
    • In-memory DB size limit - reply@RelaxWalk

      The answer to Q2 is no. The integrated approach allows partial migration to In-Memory; that is, if you have a 2TB database but only part of it needs the extra performance, you can move that specific area to In-Memory. That is true for both Hekaton (OLTP) and the columnstore.
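
      To make that concrete: enabling In-Memory on an existing database just means adding a special filegroup, after which individual hot tables can be migrated one at a time (the database and file names below are made up):

        ALTER DATABASE SalesDB
            ADD FILEGROUP SalesDB_mod CONTAINS MEMORY_OPTIMIZED_DATA;

        ALTER DATABASE SalesDB
            ADD FILE (NAME = 'SalesDB_mod', FILENAME = 'C:\Data\SalesDB_mod')
            TO FILEGROUP SalesDB_mod;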
      kevinliu173