
The economics of computing

While I probably should have spent more of this weekend reading the Radio Times, I actually amused myself with an IDC white paper about the use of in-memory database technology. Here’s the reason why.
Written by Adrian Bridgwater, Contributor

The paper accompanied Sybase's ASE product. Originally created for UNIX platforms in 1987, Adaptive Server Enterprise (ASE) has been around for quite some time now. So I was primarily interested to find out what factors could make a technology leap forward when it has already been in existence (in one form or another) for more than two decades.

After all, you can't fit a catalytic converter and a nitro-fuelled turbocharger onto a Hillman Imp, can you? So how do you super-boost an (albeit successful) product that is so long in the tooth?

In-memory database management system (or IMDB) usage emerged as a means of boosting performance and scalability while containing storage costs in extremely high-speed data systems. But something has changed, and this technology has become more mainstream. The reason for this change?

The economics of computing have changed.

Hardware has changed – we now have cheaper, faster multicore processors. Memory has changed – cheaper 64-bit memory makes large in-memory deployment affordable. Software has changed – internal DBMS data-management software exploits both of the above to form a combined offering that is greater than the sum of its parts.
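To put some rough numbers on that (and I should stress these are my own back-of-envelope assumptions, not vendor pricing), consider what 64-bit addressing and cheap DRAM do to the sums:

    # Back-of-envelope sketch of the "new economics". All figures here
    # are illustrative assumptions, not vendor pricing or benchmarks.
    dataset_gb = 500                      # assumed working set for a sizeable database
    ram_price_per_gb = 10.0               # assumed cost of server DRAM, in $/GB
    addressable_32bit_gb = 2**32 / 2**30  # the old 32-bit ceiling: 4 GB

    print(f"32-bit address ceiling: {addressable_32bit_gb:.0f} GB, "
          f"so a {dataset_gb} GB working set simply cannot fit")
    print(f"With 64-bit addressing, holding it all in RAM costs roughly "
          f"${dataset_gb * ram_price_per_gb:,.0f} at the assumed price")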

Analysts, industry commentators and even vendors themselves are keen to talk up these trends, arguing that these developments leave us better positioned for virtualised desktops and cloud-based environments.

IDC points out that data centres today need to reduce their physical footprint and operational cost while keeping the door open for growth and still delivering better performance. So how do they do it?

“The new economics of computing, which derive from large memory models, 64-bit addressability, fast processors and cheap memory, make it possible to design core database technology that is far faster and more scalable than was possible when the only option was to base data management on spinning disks,” says IDC.
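The raw arithmetic behind that claim is worth a glance. Using commonly cited ballpark latencies (assumed here purely for illustration), the gap between a spinning disk and memory looks like this:

    # Ballpark latencies, assumed for illustration only.
    disk_seek_s = 5e-3       # roughly 5 ms for a random seek on a spinning disk
    dram_access_s = 100e-9   # roughly 100 ns for a DRAM access

    ratio = disk_seek_s / dram_access_s
    print(f"Memory is roughly {ratio:,.0f}x faster per random access")
    # Roughly 50,000x, which is why taking the disk out of the critical
    # path changes what "fast and scalable" can mean for a database engine.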

Clustering and virtualisation have already been employed to help achieve scalability and data consolidation – and now in-memory database technology joins the party too.
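For anyone who would like to see the basic principle in running code, here is a minimal sketch using Python's built-in sqlite3 module. SQLite is obviously not Sybase ASE, but its ":memory:" mode illustrates the core idea: the same SQL engine, with the storage layer lifted off the disk and into RAM.

    import sqlite3

    # Disk-backed: writes must eventually reach the disk.
    disk_db = sqlite3.connect("orders.db")

    # In-memory: the entire database lives in the process's address space;
    # nothing touches the disk (and the data vanishes when the process exits).
    mem_db = sqlite3.connect(":memory:")

    for db in (disk_db, mem_db):
        db.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, total REAL)")
        db.executemany("INSERT INTO orders (total) VALUES (?)",
                       [(n * 1.5,) for n in range(1000)])
        db.commit()

    print(mem_db.execute("SELECT COUNT(*), SUM(total) FROM orders").fetchone())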

I have merely touched on the fringe of this subject, but I think it points to a few telling considerations.

1) We do not talk about database technology enough in the context of software application development, as we tend to focus on the periphery and the allure of GUI or mobile handset-based stories.

2) In-memory database technology may be quite a beautiful coming together of hardware and software development, one that brings power, economic, ecological and operational advantages. Yet it is unlikely to make many IT headlines this week.

3) Significant IT shifts generally only occur when a combination of technologies, usage patterns and market conditions all align at the same time. Andy Grove would call this a strategic inflexion point, I think. This could well be one. I could be wrong, but it kept me from reading up on Lark Rise to Candleford, that's for sure!
