
SAP HANA sales fly but there's more to the in-memory story

While figures just out suggest impressive sales growth for SAP's in-memory database technology, rivals and critics are painting a more complex picture.
Written by Toby Wolpe, Contributor
IT director Bill Powell on HANA: "It came back so fast we would spend more time just checking those numbers because we just didn't believe them."

In-memory computing is riding high, with more vendors offering products based on the technology, and new sales figures from SAP showing healthy growth for its HANA product.

However, amid cries of hype, some are questioning the economics of relying purely on memory chips rather than a hierarchy of storage tiers — even though businesses seem happy with the fast analytics the technology delivers.

Vehicle fleet-management company ARI says it's seen significant speed improvements from SAP HANA, which it adopted 18 months ago.

The Oracle shop also uses IBM for its financial systems, as well as some Microsoft products, and wanted to improve analytics — particularly predictive analytics — for customers of web-based services ranging from fuel and maintenance management to telematics and leasing.

"We were able to, within the first few weeks, bring in financial information right away and perform analyses that before would just time out — it would time out after 24 hours. Sometimes it might return, sometimes it just wouldn't return. Those analyses now return in three to three and a half seconds," said ARI IT director Bill Powell.

"It came back so fast that we would spend more time just checking those numbers because we just didn't believe them, as opposed to just passing them on to the business," he said.

SAP describes HANA as the fastest-growing product in the company's history. It can already count more than 1,500 customers, up from 900 in March, and is expected to generate revenues of between €650m and €700m in 2013.

As well as speed, SAP HANA also provides ARI with a new flexibility, according to Powell. "You can just now do things and create new products and services that you were just not capable of doing before because you were restricted by the technology," he said.

Rates of data growth

While ARI says it doubles its information roughly every 14 months, Stephen Brobst, CTO at database software company Teradata, says current high rates of data growth raise an underlying issue with the pure in-memory-based approach.

"What's happened is there's been a huge amount of hype about in-memory. The discussion has been along the lines of memory is getting cheaper by 30 percent every 18 months — it's compounding — and if memory gets cheap enough we can store all your data," Brobst said.

"It sounds very attractive but it's marketing math because in real mathematics there are two sides of the equation. The other side is, 'How fast is your data growing?' In an analytically sophisticated organisation, I would suggest that the number of customers and orders and accounts doesn't grow by more than 30 percent in a stable industry," he said.

"But when you look at going from transactions to interactions, when you look at the sophistication of organisations that are competing on analytics, the data is typically growing by at least 50 percent every 12 months. So data grows faster than memory gets cheaper."
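Brobst's "two sides of the equation" argument is easy to check with compound growth. The sketch below is illustrative only: it assumes, per his quoted figures, that memory cost per gigabyte falls 30 percent every 18 months while data volume grows 50 percent every 12 months; the starting price and data volume are arbitrary placeholders.

```python
# Illustrative sketch of Brobst's argument: memory prices compound downward
# more slowly than data volumes compound upward, so the cost of keeping
# everything in memory still rises. Starting values are arbitrary assumptions.

def cost_per_gb(years, start=10.0, drop=0.30, period=1.5):
    """Memory price after `years`, falling by `drop` every `period` years."""
    return start * (1 - drop) ** (years / period)

def data_volume(years, start=100.0, growth=0.50):
    """Data volume in GB after `years`, growing by `growth` per year."""
    return start * (1 + growth) ** years

for years in (0, 5, 10):
    spend = data_volume(years) * cost_per_gb(years)
    print(f"year {years:2d}: {data_volume(years):8.0f} GB at "
          f"${cost_per_gb(years):5.2f}/GB -> ${spend:,.0f} to hold it all in memory")
```

Under these assumptions the all-in-memory bill keeps climbing even as the per-gigabyte price falls, which is the crux of Brobst's objection.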

Brobst thinks the other side of the problem is that, compared with Teradata's Intelligent Memory approach, putting all data into memory is not only unnecessarily expensive but simply wasteful.

"If you look at the access patterns, it turns out that in any given time period a small percent of the total data is accessed the vast majority of the time. If you put everything in memory, probably 85 percent of it is actually relatively cold data. Less than 15 percent of the data is typically going to be more than 90 percent of the I/Os," Brobst said.
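The skew Brobst describes can be simulated with a Zipf-like access distribution, a common model for this kind of hot/cold pattern. The sketch below is an illustration, not his methodology: the block count, access count, and Zipf exponent are all assumptions, chosen only to show how a small fraction of blocks can absorb the bulk of the I/Os.

```python
# Illustrative only: simulate skewed accesses over 10,000 data blocks and
# measure how small the "hot" fraction covering 90% of I/Os turns out to be.
# The Zipf exponent (1.3) and the sizes here are assumptions, not measurements.
import random
from collections import Counter

random.seed(42)
N_BLOCKS = 10_000
N_ACCESSES = 200_000

# Zipf-like weights: block i is accessed with probability ~ 1 / (i + 1)^1.3.
weights = [1.0 / (i + 1) ** 1.3 for i in range(N_BLOCKS)]
accesses = random.choices(range(N_BLOCKS), weights=weights, k=N_ACCESSES)

# Count accesses per block, then walk from hottest to coldest until we have
# covered 90% of all I/Os.
counts = sorted(Counter(accesses).values(), reverse=True)
covered, hot_blocks = 0, 0
for c in counts:
    covered += c
    hot_blocks += 1
    if covered >= 0.90 * N_ACCESSES:
        break

print(f"{hot_blocks / N_BLOCKS:.1%} of blocks receive 90% of accesses")
```

With a skew like this, tiering software can keep only the hot minority in memory and leave the cold majority on cheaper storage, which is the case Brobst is making against all-in-memory designs.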

"So storing all the data in memory actually doesn't make any sense — except to the hardware vendors or whoever is trying to sell it to you," he said.

The key point, according to Brobst, is the software: getting the right data into the right place based on usage patterns, with no human involvement.

"These 100 percent in-memory technologies — I understand why they do it, because the software can be very stupid. It's just total brute force — it's very easy to just bring all the data into memory. Fine. Great. But there's no intelligence in that," he said.

Memory pricing and data growth

Brobst said there is little likelihood of the economics of memory pricing and data growth changing in the foreseeable future.

"If data stopped growing and memory got cheap enough then, yeah, you can just put it all on memory. But that's not going to happen any time soon, if ever," he said.

"If you look at the predictions on data, there's an exponential growth. Data is growing faster than memory is getting cheaper. In any horizon, using any reasonable assumptions, from any reasonable analysts, that statement is going to be true for my career — and beyond that I don't care."

Aneel Bhusri, chairman, co-founder and co-CEO of enterprise cloud apps company Workday, also believes that much of the stir around SAP's HANA in-memory technology is hype.

"All the noise around HANA is a lot about nothing. From day one our system was in-memory object databases. We've been doing it since 2005. If you're in Silicon Valley, this kind of technology has been around for five or six years — it's not novel," Bhusri said.

"It might be novel to SAP but it's not novel in the marketplace; Google's been doing this, Facebook's been doing it, Workday's been doing it. The way I look at it is that it's technology that's not anywhere near as modern and as battle-tested as ours is. It's great for us. They raise the profile and people say, 'How do you respond?' We say, 'Well, we've had one for eight years'," he said.
