The Ehcache 2.2 elastic caching platform can manage up to a terabyte of data in server memory, helping applications meet higher traffic demand by speeding response times.

Charles Babcock, Editor at Large, Cloud

July 22, 2010

3 Min Read

Terracotta is bidding to step ahead of the competition in helping companies build capacity into applications for coping with high-demand situations. The latest release of its elastic caching platform, Ehcache 2.2, can manage up to a terabyte of data in server memory.

Having data in memory, as opposed to needing to retrieve it from disk, gives applications the ability to scale up and meet higher traffic demand, because requests can be served at memory speed rather than disk speed. Not only is the data being used by the application cached, but the application objects themselves run in memory, eliminating another source of calls to disk.

Terracotta is not the only player with this type of product. The field was established by another start-up, GigaSpaces, and Oracle with its Coherence caching system. IBM joined the lists with WebSphere eXtreme Scale and two Microsoft .Net specialists, Alachisoft and ScaleOut, are also players. GemStone Systems also has a respected caching product. GemStone was acquired by VMware in May for an undisclosed price.

In May, Forrester Research named GigaSpaces, Terracotta, Oracle and IBM as the leaders in the field in its report, "Forrester Wave: Elastic Caching Platforms Q2 2010."

Terracotta is seeking a terabyte of separation from the crowd. On Tuesday, the company said its Ehcache system can now manage "over a terabyte of data and hundreds of millions of entries in a single cache," Amit Pandey, CEO of Terracotta, said in an interview. Ehcache is the core Terracotta system, renamed after the popular open source caching product of the same name that Terracotta acquired in the fall of 2009.

"Most of the caches we see in operation today are less than 50 gigabytes," he said. The previous version of Ehcache could cache up to 200 GBs of data. Trying to create caches larger than the capacity of the caching system causes programmers to make many changes in an application. Extra large caches are generated by partitioning the data across the random access memories of many servers in a cluster. Any changes in the partitions results in more changes to the application, since it needs to know what each partition contains and where it is.

Terracotta found a way to get around that restriction and create a terabyte of cache, according to Terracotta VP of engineering Steve Harris.

Large caches are often built by assembling a large number of servers and linking their memories into a common pool. In the past, the Terracotta system used memory on both the application servers and other servers in a cluster, linking them in two separate layers of cache. Clients using the system, however, had to be told which server held the data, via a key sent down to them. Terracotta altered the way the two layers are linked, eliminating the need to keep clients primed with a key. "The way we communicate between the two layers has changed. Clients don't need to know anything about where the data they want to use is located," Harris said in an interview.

When a user asks the application to act on certain data, any server in the cluster knows where that data is located and how to route the request, he added.
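As a rough sketch of that contrast (again illustrative, not Terracotta's actual protocol), the routing knowledge moves from the client into the cluster: a client can hand its request to any node, which answers locally or forwards it to the key's owner.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only; not Terracotta's implementation. The directory of
// key owners lives inside the cluster, so clients carry no placement key.
public class ClusterNode {
    private final Map<Object, Object> localStore = new ConcurrentHashMap<>();
    private final Map<Object, ClusterNode> ownerDirectory; // cluster-side routing map

    public ClusterNode(Map<Object, ClusterNode> ownerDirectory) {
        this.ownerDirectory = ownerDirectory;
    }

    public void putLocal(Object key, Object value) {
        localStore.put(key, value);
        ownerDirectory.put(key, this); // record this node as the key's owner
    }

    // A client may call get() on any node; the cluster resolves the location.
    public Object get(Object key) {
        ClusterNode owner = ownerDirectory.get(key);
        if (owner == null) return null;              // unknown key
        return owner == this ? localStore.get(key)   // served locally
                             : owner.get(key);       // forwarded to the owner
    }
}
```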

Pandey says users enter two lines of configuration code in Ehcache and, if they choose, their applications then have a terabyte of cache to work with. Such a cache can run on standard Intel- or AMD-based servers and, if the customer chooses, can be created with a cluster of only six to 10 servers, he said.
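The article does not reproduce those two lines. As a hedged sketch, application code against the open source Ehcache 2.x API (net.sf.ehcache) looks like the following, with the Terracotta clustering stanza confined to ehcache.xml; the cache name and the XML elements shown in the comment are illustrative, drawn from Ehcache 2.x documentation conventions rather than from this article:

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class BigCacheDemo {
    public static void main(String[] args) {
        // Loads ehcache.xml from the classpath. Clustering a cache through a
        // Terracotta server array is the two-line configuration change the
        // article mentions; conventionally something like:
        //   <terracottaConfig url="terracotta-host:9510"/>  (top level)
        //   <terracotta/>                                   (inside the <cache> element)
        CacheManager manager = CacheManager.newInstance();
        Cache cache = manager.getCache("bigCache"); // assumed to be defined in ehcache.xml

        cache.put(new Element("user:42", "cached value"));
        Element hit = cache.get("user:42");
        System.out.println(hit == null ? "miss" : hit.getObjectValue());

        manager.shutdown();
    }
}
```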

Pandey acknowledged that a terabyte cache can be created with competitors' systems as well, but doing so requires more work than the two lines of configuration code. "We created a new data structure, a new mapping, on the cluster that lets you create such a large cache," he said.

Ehcache 2.2 is priced at $4,000 to $8,000 per server node in a cluster.

About the Author

Charles Babcock

Editor at Large, Cloud

Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld and former technology editor of Interactive Week. He is a graduate of Syracuse University, where he obtained a bachelor's degree in journalism. He joined the publication in 2003.
