Wednesday
Mar 11, 2009

Sharding and Connection Pools

Hi, we are looking at sharding our existing Java/Oracle based application. We are looking to make the app servers able to process requests for multiple (any?) shards. The concern that has come up is the amount of memory that would be consumed by having so many connection pools on one app server. Additionally, there is concern about having so many physical connections to the database server coming from all the various app servers that may talk to that particular shard. I was wondering if anyone else has dealt with this issue and how you resolved it? Thanks, Scott
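One common way to keep both app-server memory and per-shard connection counts bounded (a sketch of the general approach, not a prescription for Scott's setup): create each shard's pool lazily on first use and give it a small hard cap, so an app server only ever holds connections to the shards it is actively serving. The shard-to-JDBC-URL catalog below is hypothetical, and in practice you would bind this to an existing pooling library configured with small max/idle sizes rather than roll your own; the code is only meant to make the idea concrete.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Semaphore;

// Toy per-shard pool registry: pools are created lazily and capped, so an app
// server only pays for the shards it actually touches, and each shard's database
// sees at most maxPerShard physical connections from this server.
public class ShardPools {
    private final Map<String, ShardPool> pools = new ConcurrentHashMap<>();
    private final Map<String, String> shardJdbcUrls; // hypothetical shardId -> JDBC URL catalog
    private final int maxPerShard;

    public ShardPools(Map<String, String> shardJdbcUrls, int maxPerShard) {
        this.shardJdbcUrls = shardJdbcUrls;
        this.maxPerShard = maxPerShard;
    }

    public Connection borrow(String shardId) throws SQLException, InterruptedException {
        return pools.computeIfAbsent(shardId,
                id -> new ShardPool(shardJdbcUrls.get(id), maxPerShard)).borrow();
    }

    public void release(String shardId, Connection c) {
        pools.get(shardId).release(c);
    }

    private static final class ShardPool {
        private final String jdbcUrl;
        private final Semaphore permits; // caps physical connections to this shard
        private final ConcurrentLinkedQueue<Connection> idle = new ConcurrentLinkedQueue<>();

        ShardPool(String jdbcUrl, int max) {
            this.jdbcUrl = jdbcUrl;
            this.permits = new Semaphore(max);
        }

        Connection borrow() throws SQLException, InterruptedException {
            permits.acquire(); // block if this shard is already at its cap
            Connection c = idle.poll();
            if (c != null) return c;
            try {
                return DriverManager.getConnection(jdbcUrl);
            } catch (SQLException e) {
                permits.release(); // don't leak the permit if the connect fails
                throw e;
            }
        }

        void release(Connection c) {
            idle.offer(c); // real code would validate the connection and evict idle ones
            permits.release();
        }
    }
}

If the fan-in of physical connections to each shard's database is still too high, a shared connection concentrator or a small routing tier in front of each shard is the usual next step, so only a few processes hold physical connections.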


Wednesday
Mar 11, 2009

13 Screencasts on How to Scale Rails

Gregg Pollack has made 13 screencasts on how to scale Rails:

  • Episode #1 - Page Responsiveness
  • Episode #2 - Page Caching
  • Episode #3 - Cache Expiration
  • Episode #4 - New Relic RPM
  • Episode #5 - Advanced Page Caching
  • Episode #6 - Action Caching
  • Episode #7 - Fragment Caching
  • Episode #8 - Memcached
  • Episode #9 - Taylor Weibley & Databases
  • Episode #10 - Client-side Caching
  • Episode #11 - Advanced HTTP Caching
  • Episode #12 - Jesse Newland & Deployment
  • Episode #13 - Jim Gochee & Advanced RPM

For a good InfoQ interview with Gregg, take a look at Gregg Pollack and the How-To of Scaling Rails.


Wednesday
Mar 11, 2009

    The Implications of Punctuated Scalabilium for Website Architecture

Update: How do you design and handle peak load on the Cloud? by Cloudiquity. Gives a formula to try to predict and plan for peak load and talks about how GigaSpaces XAP, Scalr, RightScale and FreedomOSS can be used to handle peak load within EC2.

Theo Schlossnagle, with his usual insight, talks in Dissecting today's surges about how the nature of internet traffic has evolved over time. Traffic now spikes like a heart attack, larger and more quickly than ever, from traffic inflow sources like Digg and The New York Times. Theo relates that "at least eight times in the past month, we've experienced from 100% to 1000% sudden increases in traffic across many of our clients," and those spikes can happen in as little as 60 seconds. To me this sounds a lot like punctuated equilibrium in evolution, a force that accounts for much creative growth in species...

VMs don't spin up in less than 60 seconds, so your ability to respond to such massive, quick spikes is limited. This assumes, of course, that you've created an architecture that can automatically scale by adding VMs. Such elastic demand is usually met with a reservoir: you keep more VMs in reserve to soak up temporary spikes. But who would do this in reality? Money would be going to non-productive VMs, so you are likely to have already put those VMs into production.

Interestingly, Theo ties handling sudden unexpected spikes back to performance. We are always told performance and scalability are separate issues. And while I accept this notionally, in my heart of hearts I think they have more in common than not, and I think Theo nails why. A well-performing system acts as a kind of reservoir for handling spikes before you ever notice there's a spike. That gives you some time to add more resources to your site if a spike continues. Without that reservoir you are just crushed.

Theo gives four rules for handling spikes: be alert, be prepared, perform triage, and be calm. Please see his site for more discussion of these rules. A few things that might help:

  • Create fast-booting VMs. It's easy to create VMs that boot glacially (intentional irony). The more you leave to run time, like software downloads and configuration, the slower your VMs boot and the slower you can react to spikes.
  • Cloud vendors could offer a service to maintain an image cache. It would be useful to have a service that could guarantee faster provisioning of VMs and quicker downloads of images.
  • Would an in-cloud service offering stem cell VMs make sense? This is a VM that could quickly become any one of a number of different images on demand. A service could keep a reservoir of stem cell VMs up and running, shared by a number of customers, and an application could request the low-latency spin-up of one of the reserved VMs.

The idea that internet traffic patterns have evolved such that even our cloud architectures can't easily cope is an interesting one. I find it ironic that many of the techniques needed to build real-time systems are helpful in handling this new world too, when at first glance the problems look nothing alike. Sometimes piling on more resources isn't enough; efficiency matters too.


Wednesday
Mar 11, 2009

    Classifying XTP systems and how cloud changes which type startups will use

I try to group XTP into two main groups, type 1 and type 2, and then subdivide type 2 into 2a and 2b. I describe how I do this grouping and then amplify it a little in the context of cloud services.


Tuesday
Mar 10, 2009

    Paper: Consensus Protocols: Paxos  

Update: Barbara Liskov's Turing Award, and Byzantine Fault Tolerance.

Henry Robinson has created an excellent series of articles on consensus protocols. We already covered his 2 Phase Commit article, and he also has a 3 Phase Commit article showing how to handle 2PC under single node failures. But that is not enough! 3PC works well under node failures, but fails for network failures. So another consensus mechanism is needed that handles both network and node failures. And that's Paxos. Paxos correctly handles both types of failures, but it does this by becoming inaccessible if too many components fail. This is the "liveness" property of protocols. Paxos waits until the faults are fixed. Read queries can be handled, but updates will be blocked until the protocol thinks it can make forward progress.

The liveness of Paxos is primarily dependent on network stability. In a distributed heterogeneous environment you are at risk of losing the ability to make updates. Users hate that. So when companies like Amazon do the seemingly insane thing of creating eventually consistent databases, it should be a little easier to understand now. Partitioning is required for scalability. Partitioning brings up these nasty consensus issues. Not being able to write under partition failures is unacceptable. Therefore, create a system that can always write and work on consistency when all the downed partitions/networks are repaired.
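To make the prepare/accept machinery a bit more concrete, here is a minimal, illustrative Java sketch of the acceptor side of single-decree Paxos. This is my own simplification, not code from Henry's articles: message transport, proposer retry logic, majority counting, and durable logging of promises (all of which a real implementation needs) are left out.

import java.util.Optional;

// Acceptor state for one Paxos instance (one value to agree on).
public final class PaxosAcceptor {
    private long promisedN = -1;   // highest proposal number we have promised
    private long acceptedN = -1;   // proposal number of the value we accepted, if any
    private Object acceptedValue;  // null until something is accepted

    public static final class Promise {
        public final long acceptedN;
        public final Object acceptedValue;
        Promise(long acceptedN, Object acceptedValue) {
            this.acceptedN = acceptedN;
            this.acceptedValue = acceptedValue;
        }
    }

    // Phase 1 (prepare): promise to ignore proposals numbered below n, and report
    // any value already accepted so the proposer must adopt it instead of its own.
    public synchronized Optional<Promise> prepare(long n) {
        if (n > promisedN) {
            promisedN = n;
            return Optional.of(new Promise(acceptedN, acceptedValue));
        }
        return Optional.empty();   // reject: we already promised a higher proposal
    }

    // Phase 2 (accept): accept unless we have promised a higher-numbered proposal.
    // A value is chosen once a majority of acceptors accept the same proposal.
    public synchronized boolean accept(long n, Object value) {
        if (n >= promisedN) {
            promisedN = n;
            acceptedN = n;
            acceptedValue = value;
            return true;
        }
        return false;
    }
}

The liveness issue described above shows up right here: if competing proposers keep issuing higher-numbered prepares (for example during network instability), acceptors keep promising and nothing ever gets accepted, so the protocol stalls without ever returning a wrong answer.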

    Related Articles

  • Google's Paxos Made Live – An Engineering Perspective
  • ZooKeeper - A Reliable, Scalable Distributed Coordination System
  • Impossibility of Distributed Consensus with One Faulty Process by Fischer, Lynch and Paterson
  • Consensus, impossibility results and Paxos by Ken Birman
  • Paxos for System Builders by Jonathan Kirsch and Yair Amir


Friday
Mar 6, 2009

    Cloud Programming Directly Feeds Cost Allocation Back into Software Design

    Update 6: CARS = Cost Aware Runtimes and Services by William Louth.
    Update 5: Damn You Google, Damn You Yahoo! Why D'Ya Do This to Us? Free accounts on a cloud platform are a constant drain of money.
    Update 4: Caching becomes even more important in CPU based billing environments. Avoiding the CPU means saving money.
    Update 3: An interesting simple example of this idea showed up on the Google AppEngine list. With one paging algorithm and one use of AJAX the yearly cost of the site was $1000. By changing those algorithms the site went under quota and became free again. This will make life a lot more interesting for developers.
    Update 2: Business Model Influencing Software Architecture by Brandon Watson. The profitability of your project could disappear overnight on account of code behaving badly.
    Update: Amazon adds Elastic Block Store at $0.10 per 1 million I/O requests. Now I need some cost minimization storage algorithms!

At the GAE Meetup yesterday a very interesting design rule came up: Design By Explicit Cost Model. A clumsy name, I know, but it is explained like this:

     

If you are going to be charged for an operation, GAE wants you to explicitly ask for it. This is why some automatic navigation between objects isn't provided: leaving it out forces an explicit query to be written. Writing an explicit query is a sort of EULA for being charged. Click OK, in the form of a query, and you've indicated that you are prepared to pay for a database operation.

Usually in programming the costs we talk about are time, space, latency, bandwidth, storage, person hours, etc. Listening to the Google folks talk about how one of their explicit design goals was to require programmers to be mindful of operations that will cost money made me realize that in cloud programming, cost will be another aspect of design we'll have to factor in.

In addition to asking for the Big O complexity of an algorithm we'll also have to ask for the Big $ (or Big Euro) notation so we can judge an algorithm by its cost against a particular cloud profile. Maybe something like $(CPU=1.3, DISK=3, IN-BANDWIDTH=2, OUT-BANDWIDTH=3, DB=10). You could look at the Big $ notation for an algorithm and shake your head, saying that approach will never work for GAE, but it could work for Amazon. Can we find a cheaper Big $?
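As a toy illustration of the idea (the prices and resource counts below are made-up placeholders, not real cloud rates), a Big $ estimate is just the dot product of an algorithm's resource profile with a cloud's price sheet:

import java.util.Map;

public class BigDollar {
    // Hypothetical $ per unit for one cloud profile
    static final Map<String, Double> PRICE_SHEET = Map.of(
            "cpu-hour", 0.10,
            "gb-stored", 0.15,
            "gb-out", 0.18,
            "db-ops-million", 0.10);

    // Multiply each resource the algorithm consumes by its unit price and sum.
    static double estimate(Map<String, Double> resourceProfile) {
        return resourceProfile.entrySet().stream()
                .mapToDouble(e -> e.getValue() * PRICE_SHEET.getOrDefault(e.getKey(), 0.0))
                .sum();
    }

    public static void main(String[] args) {
        // Resource profile of one hypothetical batch run of the algorithm
        Map<String, Double> run = Map.of(
                "cpu-hour", 1.3, "gb-stored", 3.0, "gb-out", 3.0, "db-ops-million", 10.0);
        System.out.printf("Estimated cost per run: $%.2f%n", estimate(run)); // $2.12 here
    }
}

Swap in a different price sheet (say, one where CPU is scarce and expensive) and the same profile yields a very different Big $, which is exactly the porting problem discussed below.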

Typically infrastructure costs are part of the capital budget. Someone ponies up for the hardware, and software is then "free" until more infrastructure is needed. The dollar cost of a software design isn't usually an explicitly considered factor.

Now software design decisions are part of the operations budget. Every algorithm decision you make will have a dollar cost associated with it, and it may become more important to craft algorithms that minimize operations cost across a large number of resources (CPU, disk, bandwidth, etc.) than it is to trade off our old friends space and time.

Different cloud architectures will force very different design decisions. Under Amazon, CPU is cheap, whereas under GAE CPU is a scarce commodity. Applications will not be easily ported between the two niches.

    Don't be surprised if soon you go into an interview and they quiz you on Big $ notation and skip the dusty old relic that is Big O notation :-)

Friday
Mar 6, 2009

    Product: Lightcloud - Key-Value Database

Lightcloud is a distributed and persistent key-value database from Plurk.com. Performance is said to be comparable to memcached. It's different from memcachedb in that it scales out horizontally by adding new nodes, and different from memcached in that it persists to disk; it's not just a cache. Now you have one more option in the never-ending quest to ditch the RDBMS. Their website does a nice job explaining the system:

  • Built on Tokyo Tyrant. One of the fastest key-value databases [benchmark]. Tokyo Tyrant has been in development for many years and is used in production by Plurk.com, mixi.jp and scribd.com (to name a few)...
  • Great performance (comparable to memcached!)
  • Can store millions of keys on very few servers - tested in production
  • Scale out by just adding nodes (a generic routing sketch follows this list)
  • Nodes are replicated via master-master replication. Automatic failover and load balancing is supported from the start
  • Ability to script and extend using Lua. Included extensions are incr and a fixed list
  • Hot backups and restore: Take backups and restore servers without shutting them down
  • LightCloud manager can control nodes, take backups and give you a status on how your nodes are doing
  • Very small footprint (the LightCloud client is around 500 lines and the manager about 400)
  • Python only, but LightCloud should be easy to port to other languages
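As a generic illustration of how a client-side key-value store can scale out by just adding nodes, here is a consistent-hashing sketch in Java. This is not LightCloud's actual code, and its routing details may differ; the point is only that when keys map to the first node clockwise on a hash ring, adding a node remaps just a small fraction of keys instead of reshuffling everything.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.SortedMap;
import java.util.TreeMap;

public class HashRing {
    private final TreeMap<Long, String> ring = new TreeMap<>();
    private static final int VNODES_PER_NODE = 100; // virtual nodes smooth the key distribution

    public void addNode(String node) {
        for (int i = 0; i < VNODES_PER_NODE; i++) {
            ring.put(hash(node + "#" + i), node);
        }
    }

    public String nodeFor(String key) {
        if (ring.isEmpty()) throw new IllegalStateException("no nodes");
        // The first virtual node at or after the key's hash owns the key; wrap around if needed.
        SortedMap<Long, String> tail = ring.tailMap(hash(key));
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }

    // Take the first 8 bytes of an MD5 digest as a signed 64-bit ring position.
    private static long hash(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5").digest(s.getBytes(StandardCharsets.UTF_8));
            long h = 0;
            for (int i = 0; i < 8; i++) h = (h << 8) | (d[i] & 0xFF);
            return h;
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}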


Thursday
Mar 5, 2009

    Product: Amazon Simple Storage Service

Update: HostedFTP.com - Amazon S3 Performance Report. How fast is S3? Based on their own study, HostedFTP.com found 10 to 12 MB/second when storing and receiving files, and 140 ms per file stored as a fixed overhead cost.

Update: A Quantitative Comparison of Rackspace and Amazon Cloud Storage Solutions. S3 isn't the only cloud storage service out there. Mosso is saying they can save you some money while offering support. There are a number of scenarios in their paper, but for 5TB of cloud storage Mosso will save you 17% over S3 without support and 42% with support. For their CDN, on a global test Mosso says the average response time is 333ms for CloudFront vs. 107ms for Cloud Files, which means that globally Cloud Files is 3.1 times (211%) faster than CloudFront.

Amazon S3 is storage for the Internet. It is designed to make web-scale computing easier for developers. This service allows you to link directly to files at a cost of 15 cents per GB of storage and 20 cents per GB of transfer.


Thursday
Mar 5, 2009

    Strategy: In Cloud Computing Systematically Drive Load to the CPU

Update 2: Linear Bloom Filters by Edward Kmett. A Bloom filter is a novel data structure for approximating membership in a set. A Bloom join conserves network bandwidth by trading it for cheaper, more plentiful local CPU utilization and disk IO.
Update: What are Amazon EC2 Compute Units?. Cloud providers charge for CPU time in voodoo units like "compute units" and "core hours." Geva Perry takes on the quest of figuring out what these mean in real life.

I attended Sebastian Stadil's AWS Training Camp on Saturday, and during the class Sebastian brought up a wonderfully counter-intuitive idea: CPU (EC2) costs a lot less than storage (S3, SDB), so you should systematically move as much work as you can to the CPU. This is said to be the Client-Cloud Paradigm. It leverages the well-pummeled trend that CPU power follows Moore's Law while storage follows The Great Plains' Law (flat). And what sane computing professional would do battle with Sir Moore and his trusty battle sword of a law?

Embedded systems often make similar environmental optimizations. CPU rich and memory poor means operate on compressed, serialized data structures. Deserialized data structures use a lot of memory, so why use them? It's easy enough to create an object wrapper around a buffer. Programmers shouldn't care how their objects are represented anyway. Yet we waste ginormous amounts of time and memory uselessly transforming XML in and out of different representations. Just transport compressed binary objects around and use them in place. Serialization and deserialization happen only on access (Pimpl Idiom).

It never occurred to me that in the land of AWS plenty, similar "tricks" would make sense. But EC2 is a loss leader in AWS. CPU is plentiful and cheap. It's IO and storage that cost you... The implication is that in your system design you should try to use EC2 as much as possible:

  • Compress data. Saves on bandwidth and storage (the expensive bits) and uses cheaper CPU to compress/decompress (see the sketch after this list).
  • Slurp data. The latency cost of remote operations is higher than performing operations locally: SDB can take up to 400 msecs between data centers and 200 msecs inside the same data center (it's usually faster, but it can take that long), which is very slow. Following the more traditional serial processing path of "get a record, do a record" will take forever and cost more. Slurp up all your records from SDB and farm them out to your CPU nodes to be worked on in parallel.
  • Think parallel. Do multiple operations at once on your cheap CPUs rather than serially performing high latency operations on expensive storage. With enough nodes, total execution time approaches max latency.
  • Client side joins. Pull all data from the relatively expensive SDB and perform client side joins on relatively cheap EC2 nodes.
  • Leverage SQS. It's a relatively cheap part of the ecosystem. Keeping a work queue in SDB would be far more expensive.

When all the implications are fully explored it's a little different take on designing a system. I found some interesting numbers in a Slashdot thread comparing values. Under the heading "No persistent storage; not great value," one commenter writes:

And it's still not a great value. It seems cheap. $72/mo for a 1.7GB RAM server. Well, look at Slicehost and you can get a 2GB RAM Xen instance (same virtualization software as EC2) for $140 WITH persistent storage and 800GB of bandwidth. That doesn't sound like a great deal UNTIL you calculate what EC2 bandwidth costs. 800GB would cost you $144 at $0.18 per GB, bringing the total cost to $216 ($76 more than Slicehost). That 18 cents doesn't sound like much, but it adds up. The same situation happens with Joyent. For $250 you get a 2GB RAM server from them (running under Solaris Zones) with 10TB of bandwidth. That would cost you $1,872 with EC2. Even if you assume that you'll only use 10% of what Joyent is giving you, EC2 still comes in at a cost of $252 - and without persistent storage!
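To make the "compress data" point above concrete, here is a small JDK-only sketch (my example, not code from the post) that gzips a payload on the cheap CPU before it is stored or shipped, and reports the savings:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public final class Gzip {
    // Compress bytes in memory; for large payloads you would stream instead.
    public static byte[] compress(byte[] raw) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(raw);
        }
        return bos.toByteArray();
    }

    public static byte[] decompress(byte[] compressed) throws IOException {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            return gz.readAllBytes();
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] record = "some,repetitive,record,data,".repeat(1000).getBytes(StandardCharsets.UTF_8);
        byte[] packed = compress(record);
        System.out.printf("raw=%d bytes, gzipped=%d bytes (%.0f%% smaller)%n",
                record.length, packed.length,
                100.0 * (record.length - packed.length) / record.length);
    }
}

Whether this actually saves money depends on the billing model: under bandwidth- and storage-priced services it usually does, while under CPU-metered billing (as in the GAE post above) the trade-off has to be measured.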


Wednesday
Mar 4, 2009

It's time for auto scaling – avoid peak load provisioning for web applications

Many web applications, including eBanking, Trading, eCommerce and Online Gaming, face large, fluctuating loads. In this post I will describe how to achieve Right Sizing using virtualization and cloud computing, and will use a standard JEE web application to demonstrate how auto-scaling works on the AWS Cloud without changing your application code.
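As a tiny illustration of the kind of decision an auto-scaling setup automates (my sketch with hypothetical thresholds, not the configuration used in the post), a threshold policy maps observed load to a desired instance count:

public final class ScalingRule {
    // Hypothetical policy: scale out above 70% average CPU, scale in below 30%,
    // always staying within a configured min/max fleet size.
    static int desiredInstances(int current, double avgCpuUtilization, int min, int max) {
        if (avgCpuUtilization > 0.70) return Math.min(max, current + 1);
        if (avgCpuUtilization < 0.30) return Math.max(min, current - 1);
        return current;
    }

    public static void main(String[] args) {
        System.out.println(desiredInstances(4, 0.85, 2, 10)); // 5: add an instance
        System.out.println(desiredInstances(4, 0.20, 2, 10)); // 3: remove one
    }
}

The point of running such a rule in the infrastructure rather than in the application is exactly what the post promises: the JEE code itself does not change.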
