SAP HANA: Not The Only In-Memory Game In Town - InformationWeek




SAP HANA is not the only option for those seeking an in-memory database platform. Big rivals such as Microsoft and Oracle offer similar technology.


In the world of in-memory computing, SAP's HANA has the big name, but it's not the only game in town. Other databases can do all or part of their work in memory, though the definitions can get a little fuzzy around the edges of the market.

Let's be clear: Whenever the phrase "in-memory computing" comes up, the more accurate phrase might be "in-memory database."

Compact applications running against limited data sets aren't a big problem. When the application sits on top of an enterprise database, that's when the data's location starts to matter significantly.

Microsoft's SQL Server 2014 provides in-memory computing … sort of.

Microsoft is careful not to call what it does in-memory computing, referring to it instead as "In-Memory OLTP." (OLTP is online transaction processing.) According to a page on the MSDN website, "In-Memory OLTP is a memory-optimized database engine integrated into the SQL Server engine, optimized for OLTP."

What this means is that you can define part of the application data as being specifically for transactions -- typically, the high-speed, intensive reads and writes that come with market segments like retail and banking.

The defined parts of the database are kept in memory, where they benefit from low latency and high overall performance. On a regular basis, though, the transaction records will be rolled into a portion of the database that's reserved for analysis -- analysis that is typically performed through pre-defined reports run on a scheduled basis.
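The pattern described above -- keep the hot transactional tables in RAM, then periodically roll their records into a disk-based store reserved for analysis -- can be sketched in a few lines. This is a minimal illustration only: it uses SQLite's `:memory:` mode as a stand-in, since SQL Server's memory-optimized tables use their own T-SQL syntax, and the table and file names here are invented for the example.

```python
import sqlite3

# Hot transactional store lives entirely in RAM (SQLite's ':memory:'
# mode stands in here for SQL Server's memory-optimized tables).
hot = sqlite3.connect(":memory:")
hot.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

# High-speed writes hit only the in-memory table.
hot.executemany("INSERT INTO orders (amount) VALUES (?)",
                [(19.99,), (5.25,), (42.00,)])

# On a schedule, roll the transactions into a disk-based analysis store.
hot.execute("ATTACH DATABASE 'analysis.db' AS cold")
hot.execute("DROP TABLE IF EXISTS cold.orders_history")
hot.execute("CREATE TABLE cold.orders_history AS "
            "SELECT * FROM orders WHERE 0")  # copy schema only
hot.execute("INSERT INTO cold.orders_history SELECT * FROM orders")
hot.execute("DELETE FROM orders")  # hot store is emptied after the roll-off
hot.commit()
```

After the roll-off, the in-memory table is empty and the disk-based history table holds the transactions, which is why reports against the analysis store only see data as of the last scheduled transfer.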

There are many possibilities for organizations looking for in-memory options for database applications.

(Image: OpenClips via Pixabay)


Oracle Database In-Memory, available with Oracle Database 12c, takes an approach much more like that of SAP HANA. It is, according to Oracle, designed to run both OLTP and OLAP (online analytical processing) workloads from the same database. From the database application perspective, this is important because it boosts performance and capabilities in two critical ways.

First, the single-database approach eliminates the need to move data from one database (or part of a database) to another before analysis can be performed. Because that data movement is a performance-sapping process generally run at times when OLTP demands are lower, eliminating it means that queries can be made and reports run at any time, rather than on the next business day after the data has been moved.

Second, because the OLTP and OLAP databases are one and the same, queries can be run against the entire data set at any time. The ability to perform these "ad-hoc queries" has long been a holy grail for application designers -- and at the top of database administrators' nightmare lists.
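The single-store idea can be shown in miniature: transactions land in the database, and an ad-hoc aggregate runs against the same live data set with no extract-and-load step in between. Again this is only a sketch using SQLite's in-memory mode as a stand-in (Oracle Database In-Memory actually maintains a columnar copy alongside its row store), with invented table and column names.

```python
import sqlite3

# One in-memory store serves both workloads in this sketch.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (region TEXT, amount REAL)")

# OLTP side: individual transactions land as they happen.
for region, amount in [("east", 100.0), ("west", 250.0), ("east", 50.0)]:
    db.execute("INSERT INTO sales VALUES (?, ?)", (region, amount))
db.commit()

# OLAP side: an ad-hoc aggregate runs against the same live data set,
# with no nightly move-the-data step required first.
rows = db.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('east', 150.0), ('west', 250.0)]
```

The point of the example is the absence of a step: nothing is exported, transformed, or reloaded before the analytical query sees the freshest transactions.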

There are other in-memory databases, as well. According to Wikipedia, there are 47 different in-memory databases currently available.


Why all the interest and the options?

For Gary Orenstein, CMO of MemSQL -- one of the 47 listed options -- the answer is straightforward.

"I think that the ability to do transactions and analytics in the same database is critical. The market is betting on the need to do real-time information and answers," Orenstein said during a phone interview. "Companies now have to satisfy that demand for real-time information and more specifically real-time answers, and you simply don't have the option to move data around to reach an answer point," he explained.

The search for high-speed answers is running into the price of RAM in massive quantities. There's no question, though, that a growing number of companies are willing to pay the price for answers at the point of executive need -- whenever and wherever that need might occur.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and ...
User Rank: Apprentice
10/14/2015 | 11:22:20 AM
Reliability of In-Memory is key to successful implementations
While there are great performance gains in using in-memory databases, there is a significant requirement for reliable power availability. Most of the planet does not have better than 95% power availability, and a single power outage can wipe out your entire database. Not only does the power need to be on all the time, it also must not vary more than a given amount. A recovery from backup often results in lost transactions. So, before any in-memory database implementation takes place, it is key to allocate large battery backups that can shield your servers from a power loss, or at least dump the database to disk drives and shut down the servers gracefully. Large server farms often make use of solar- and fuel-based power generators backed by huge battery banks. These add significant costs to such implementations.
Blog Voyage,
User Rank: Strategist
7/6/2015 | 2:41:51 AM
Re: In-memory database started with TPF
Thanks for the precision, mates!
Curt Franklin,
User Rank: Strategist
6/4/2015 | 2:07:27 PM
Re: In-memory database started with TPF
@Charlie, you're right -- "in-memory computing" is, like so much of today's enterprise computing, built on a mainframe foundation. There are a couple of big differences, though: One is the sheer size of the databases involved. The other is that, as you point out, in the 70s the in-memory architecture was all about supporting transaction speed. Today, it's as much about analytics as transactions, and that's a pretty big deal.
Charlie Babcock,
User Rank: Author
5/28/2015 | 6:08:11 PM
In-memory database started with TPF
The original high-speed, in-memory system is not a recent phenomenon but IBM's Transaction Processing Facility, or TPF, used in the first airline reservation systems to speed response times to customers. It was first fired up in 1979, before the birth of some of today's NoSQL experts.