Tuesday, September 27, 2011

Use Instance Caches to Save Money: Latency == $$$

In the post Using memcache might be free, but it's costing you money with instance pricing! Use instance caches if possible, made on the Google App Engine group, Santiago Lema brings up an oldie but a goodie: an idea once used to improve performance is now being used to save money:

  • Santiago's GAE application went from about $9 to about $177 per month. 
  • Memcache is slow enough that, under higher loads, the scheduler creates extra instances to handle the load.
  • For static or semi-static data, a way around the cost of those extra instances is to keep a cache in the instance itself, so requests can be served out of local memory rather than going to memcache or the database. A simple hashtable makes a good in-memory cache.
  • This solution made his app affordable again by reducing the number of instances back to 1 (sometimes 2).
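The idea above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (the class and method names are ours, not Santiago's): a module-level dict with per-entry TTLs that serves hits from local RAM and only calls out to a slower backend (memcache, the datastore) on a miss. On App Engine, a module-level object like this survives across requests handled by the same instance.

```python
import time

class InstanceCache:
    """A minimal in-instance cache: a dict with per-entry TTLs.

    Hypothetical sketch. Each instance holds its own copy, so data
    may be stale for up to `ttl` seconds after a write elsewhere.
    """

    def __init__(self, default_ttl=60):
        self._store = {}  # key -> (expires_at, value)
        self._default_ttl = default_ttl

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.time() >= expires_at:
            del self._store[key]  # expired: evict and report a miss
            return None
        return value

    def set(self, key, value, ttl=None):
        ttl = self._default_ttl if ttl is None else ttl
        self._store[key] = (time.time() + ttl, value)

    def get_or_load(self, key, loader, ttl=None):
        """Serve from local RAM; on a miss, call loader() -- standing
        in for a memcache or datastore fetch -- and cache the result."""
        value = self.get(key)
        if value is None:
            value = loader()
            self.set(key, value, ttl)
        return value

# Module-level, so it persists across requests on the same instance.
cache = InstanceCache(default_ttl=60)
```

A request handler would then call something like `cache.get_or_load("home", render_home)`, and repeated requests to the same instance never touch memcache or the database until the entry expires. The usual caveats apply: the cache eats into instance memory, and each instance can serve a stale copy until its TTL runs out.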

Where have we seen this before?

This is a variant of the old Sticky Session idea, where web sessions are stored in the RAM of an application server and all further interaction with a user on that session is routed back to that same server. This approach has gone out of fashion in favor of storing session state in the database or in memcache, or of having no sense of session state at all.

It's not completely out of fashion, however. The StackExchange folks, for example, use Sticky Sessions for speed reasons and to take load off the network. All those cache requests put a lot of stress on the network, so it's better to avoid them when possible.

There's also a parallel to the idea of replicating staticish tables across servers so that joins can be local, avoiding the expense of remote access. VoltDB uses this strategy to great effect.

It sounds like this technique may be making a comeback. All the usual cache consistency and memory limitation issues apply, but when latency is so strongly linked to cost, serving data out of local RAM in the web tier is as fast and cheap as it gets. 

Reader Comments (3)

"A simple hashtable makes a good in-memory cache."

What a discovery!

October 1, 2011 | Unregistered CommenterVladimir Rodionov

"rather than going to memcache or the database"

It merely shows that most developers don't really think for themselves anymore but are being tricked into believing that there are some magical solutions like Memcache that solve their problems straightaway.
Although Memcache is being praised for its simplicity, it is that same simplicity that prevents it from being what it could/should have been.

October 3, 2011 | Unregistered CommenterJack Bauer

For platforms like GAE where you can't run arbitrary processes on an instance, this makes sense. For platforms like EC2,
why wouldn't you run memcache on the local instance itself and do away with sticky sessions? Sure, in-process access is faster than a local process call, but you would still save costs and utilize a proven infrastructure.

October 9, 2011 | Unregistered CommenterKG
