FoundationDB: Back to the future with key-value store

Summary: Organizations that have maintained their investment in traditional SQL-based databases -- and are seeing their limitations -- have started adding NoSQL and Big Data databases to their portfolios. FoundationDB believes its database can do it all.

FoundationDB's Dave Rosenthal and Ori Hernstadt dropped by to provide an update on the company and a future product. The discussion of that future product will have to wait a bit, however. Like my first conversation with the company (see "FoundationDB: A new take on an established data structure" for more information), the chat was fast-moving and very interesting. I got to momentarily re-live my glory days as a software engineer at a supplier that developed a complete library automation system based on a similar data structure.

Key-value store

FoundationDB's product is a database in which the data is stored as part of the index rather than in a separate storage space. This design, known as a "key-value store," allows the database to be significantly smaller than a traditional relational database and to perform much faster.

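To make the idea concrete, here is a minimal sketch of writing and reading raw key-value pairs through FoundationDB's Python binding. It assumes the client library is installed and a cluster is reachable through the default cluster file; the key naming and the API version requested are illustrative choices, not anything FoundationDB prescribes.

    import fdb

    fdb.api_version(630)   # request an API version supported by the installed client
    db = fdb.open()        # connects using the default cluster file

    @fdb.transactional
    def set_title(tr, isbn, title):
        # keys and values are plain byte strings; the store keeps keys in order
        tr[b'title/' + isbn] = title

    @fdb.transactional
    def get_title(tr, isbn):
        return tr[b'title/' + isbn]

    set_title(db, b'0131103628', b'The C Programming Language')
    print(get_title(db, b'0131103628'))
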
This approach isn't new, however. Database systems such as Pick and MUMPS (now called simply M) used it in the 1970s. At that time, a DEC PDP-11/34 could support over 150 users when running MUMPS; running one of DEC's own operating systems, the same machine could support only 16 to 32 users.

FoundationDB just took this thought and ran in a new direction with it.

What's the new direction?

FoundationDB implemented the idea on a distributed cluster, or grid, of systems, allowing the concept to run across a large number of machines and to support very large, complex databases.

The database supports quite a number of different data types (a sketch of how such types map onto plain keys follows the list), including:

  • Text
  • Numbers
  • Binary large objects (Blob)
  • Graphs
  • Documents
  • Queues
  • Ranked sets
  • Simple indexes
  • Spatial indexes
  • Tables
  • Vectors

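None of these are separate storage engines; each is a convention for encoding structured data into ordered keys. As a hedged illustration of that idea, the sketch below uses the Python binding's tuple layer to build a time-ordered "simple index" of sensor readings; the 'reading' key scheme is invented for this example and is not part of the product.

    import fdb

    fdb.api_version(630)
    db = fdb.open()

    @fdb.transactional
    def add_reading(tr, sensor, ts, value):
        # ('reading', sensor, ts) packs into a single ordered binary key, so all
        # readings for a sensor sit next to each other, sorted by timestamp
        tr[fdb.tuple.pack(('reading', sensor, ts))] = fdb.tuple.pack((value,))

    @fdb.transactional
    def readings_for(tr, sensor):
        # one range read returns the sensor's readings in time order
        result = []
        for k, v in tr[fdb.tuple.range(('reading', sensor))]:
            _, _, ts = fdb.tuple.unpack(k)
            result.append((ts, fdb.tuple.unpack(v)[0]))
        return result

    add_reading(db, u'sensor-1', 1, 21.5)
    add_reading(db, u'sensor-1', 2, 22.0)
    print(readings_for(db, u'sensor-1'))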

What's fascinating is that this powerful data store can be accessed directly, and the company plans to add "personality modules" that will let developers familiar with relational and non-relational databases work with FoundationDB easily.

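Those "personality modules" (FoundationDB calls them layers) are essentially code that maps a higher-level data model onto the ordered key space inside ordinary transactions. The toy sketch below, again using the Python binding's tuple layer, shows how a schema-free document store might be expressed; it is an assumption-laden illustration, not FoundationDB's actual document layer.

    import fdb

    fdb.api_version(630)
    db = fdb.open()

    @fdb.transactional
    def save_doc(tr, collection, doc_id, doc):
        # each field becomes its own key, so documents need no fixed schema
        for field, value in doc.items():
            tr[fdb.tuple.pack(('doc', collection, doc_id, field))] = \
                fdb.tuple.pack((value,))

    @fdb.transactional
    def load_doc(tr, collection, doc_id):
        # a range read over the document's key prefix reassembles the fields
        doc = {}
        for k, v in tr[fdb.tuple.range(('doc', collection, doc_id))]:
            field = fdb.tuple.unpack(k)[3]
            doc[field] = fdb.tuple.unpack(v)[0]
        return doc

    save_doc(db, u'patrons', u'p-100', {u'name': u'Ada', u'books_out': 3})
    print(load_doc(db, u'patrons', u'p-100'))

Because every write in the sketch happens inside a single transaction, a layer built this way can also keep its own secondary indexes consistent without any special help from the storage engine.
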
Snapshot analysis

As I mentioned in my earlier article on FoundationDB, SQL and NoSQL database engines are all the rage for extreme transaction processing and Big Data applications. The problem is that organizations have maintained their investment in traditional SQL-based databases and -- seeing the limitations of those databases -- have started adding NoSQL and Big Data databases to their portfolios.

One of FoundationDB's messages is that its database makes it possible to replace many different database engines with a single one, because all of the functions they offer are available in FoundationDB. The company points out that this would reduce both software license costs and the number of separate kinds of expertise a company would need on staff.

I was highly impressed with the cleverness of FoundationDB's approach and was interested to learn of their successes and customer installations. If they could only find a way to rise above the noise of a very competitive database market, they could address many customers' problems.

About

Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. In his spare time, he's also the managing partner of Lux Sonus LLC, an investment firm.

Talkback

  • Why do people think key value pair databases require a new product?

    Good grief, you can do that in SQL Server... put in a PK and either an XML or BLOB data column. Use the PK as your graph entry point and, if you're using the XML data type, whatever common elements are in your structure as edge nodes.

    All these database hucksters trying to convince us that a new paradigm means uninstalling everything we've got on-prem and then installing their crap... drives me crazy.
    Mac_PC_FenceSitter
    • No tables, no first normal form means easier development and support.

      The point is that an RDBMS requires that a complex schema be developed and that data streams be broken down into tables. This is not the best way to deal with many types of data, even though relational databases appear to be the standard way of doing things.

      Key-value store systems can deal with data that changes type on the fly -- a number today, a string of text tomorrow, a blob the day after that -- and still make sense of things. Development is simplified because the data is whatever you last set it to be. There is no need to force the data into a set of tables, and because related data is already together in the data store, there is no need to do a join to put it back together again.

      I don't believe that FoundationDB is recommending a rip and replace strategy. They are, however, suggesting that when plans are being made for something new, that their approach be considered.

      Dan K
      dkusnetzky