Friday, Dec 28, 2007

Amazon's EC2: Pay as You Grow Could Cut Your Costs in Half

Update 2: Summize computes computing resources for a startup. Lots of nice graphs show that Amazon is hard to beat for small machines but becomes less cost efficient for well-used larger machines. Long term storage costs may eat your savings away, and out-of-cloud bandwidth costs are high.

Update: via ProductionScale, a nice Digital Web article on how to set up S3 to store media files, and how Blue Origin was able to handle 3.5 million requests and 758 GB in bandwidth in a single day for very little $$$. Also a RightScale article on network performance within Amazon EC2 and to Amazon S3: 75 MB/s between EC2 instances, 10.2 MB/s from S3 to EC2 for download, and 6.9 MB/s for upload.

Now that Amazon's S3 (storage service) is out of beta and EC2 (elastic compute cloud) has added new instance types (the class of machine you can rent) with more CPU and more RAM, I thought it would be interesting to take a look at how their pricing stacks up. The quick conclusion: the more you scale, the more you save. A six node configuration in Amazon is about half the cost of a similar setup using a service provider. But cost may not be everything...

EC2 gets a lot of positive pub, so if you would like a few other perspectives take a look at Jason Hoffman of Joyent's blog post on why EC2 isn't yet a platform for "normal" web applications and Hostingfu's Shortcomings of Amazon EC2. Both are well worth reading and tell a much needed cautionary tale. The upshot is that batch operations clearly work well within EC2 and S3, but the jury is still out on deploying large database-centric websites completely within EC2. The important sticky issues seem to be: static IP addresses, load balancing/failover, lack of data center redundancy, lack of custom OS building, and problematic persistent block storage for databases. The lack of large RAM and CPU machines has been solved with the new instance types.

Assuming you are OK with all these issues, will EC2 cost less? Cost isn't the only consideration, of course. If dynamically scaling VMs is a key feature, if SQS (message queue service) looks attractive, or if S3's endless storage is critical, then weigh those factors accordingly. My two use cases are my own VPS, for selfish reasons, and a quote from a leading service provider for a six node setup for a startup. Six nodes is small, but since the architecture featured horizontal scaling, the cost of expanding is pretty linear and incremental. Here's a quick summary of Amazon's pricing:

  • Data transfer: $0.10 per GB for all data transfer in; $0.18 per GB for the first 10 TB/month of data transfer out; $0.16 per GB for the next 40 TB/month out; $0.13 per GB out over 50 TB/month. You don't pay for data transfer between EC2 and S3, so that's an advantage of using S3 within EC2.
  • S3: $0.15 per GB-Month, $0.01 per 1,000 PUT or LIST requests, $0.01 per 10,000 GET and all other requests. I have no idea how many requests I would use.
  • Small Instance at 10 cents/hour: 1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of instance storage, 32-bit platform. The CPU capacity is that of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor.
  • Large Instance at 40 cents/hour: 7.5 GB of memory, 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each), 850 GB of instance storage, 64-bit platform.
  • Extra Large Instance at 80 cents/hour: 15 GB of memory, 8 EC2 Compute Units (4 virtual cores with 2 EC2 Compute Units each), 1690 GB of instance storage, 64-bit platform.

    You don't have to run these numbers by hand. To calculate the Amazon costs I used their handy dandy calculator, assuming, per Amazon, 732 hours per month.
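    Since I'll reuse the same arithmetic a couple of times below, here's a minimal Python sketch of it. This is my own back-of-the-envelope estimator, not Amazon's calculator; the rates are the December 2007 prices above, the tiered transfer-out logic is my reading of them, and all function names are made up.

        # Back-of-the-envelope EC2/S3 monthly cost estimator (Dec 2007 prices).
        # A sketch, not Amazon's official calculator; request fees are ignored
        # because at $0.01 per 1,000-10,000 requests they round to pennies here.
        HOURS_PER_MONTH = 732  # per Amazon's own calculator
        INSTANCE_RATES = {"small": 0.10, "large": 0.40, "xlarge": 0.80}  # $/hr

        def transfer_out_cost(gb):
            # Tiered: $0.18/GB first 10 TB, $0.16/GB next 40 TB, $0.13/GB after.
            tiers = [(10000, 0.18), (40000, 0.16), (float("inf"), 0.13)]
            cost, remaining = 0.0, gb
            for size, rate in tiers:
                chunk = min(remaining, size)
                cost += chunk * rate
                remaining -= chunk
                if remaining <= 0:
                    break
            return cost

        def monthly_cost(instances, storage_gb=0, gb_in=0, gb_out=0):
            compute = sum(INSTANCE_RATES[kind] * count * HOURS_PER_MONTH
                          for kind, count in instances.items())
            storage = storage_gb * 0.15   # S3: $0.15 per GB-month
            transfer = gb_in * 0.10 + transfer_out_cost(gb_out)
            return compute + storage + transfer

        # Example: one small instance, 10 GB storage, 1 GB in, 10 GB out.
        print(monthly_cost({"small": 1}, storage_gb=10, gb_in=1, gb_out=10))
        # -> ~$77/month; request fees nudge it toward the $80 quoted below.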

    Single VPS Configuration

    I was very curious about the economics of moving this simple site (http://highscalability.com) from a single managed VPS to EC2. Currently my plan provides:
  • 1GB RAM (with no room for expansion).
  • 50 GB of storage. I use about 4 GB.
  • 800 GB monthly transfer, of which I use 1GB/month in and 10GB/month out.
  • 8 IP addresses. Very nice for virtual hosts.
  • 100Mbps uplink speed.
  • Very responsive support. Very poor system monitoring.
  • 1 VM backup image. Would prefer two.
  • The CPU usage is hard to characterize, but it's been more than sufficient for my needs.
  • Cost: $105 per month.

    From Amazon:
  • The small instance looks good to me. What I need is more memory, not more CPU, so that's attractive. VPS memory pricing is painfully high.
  • 10 GB storage, 1 GB transfer in, 10 GB transfer out, 2000 requests.
  • Cost: about $80 per month.

    Will I switch? Probably not. I don't know how well Drupal runs on EC2/S3 and it's not really worth it for me to find out. Drupal isn't horizontally scalable, so that feature of EC2 holds little attraction. But the extra RAM and affordable disk storage are attractive. So for very small deployments using standard off-the-shelf software, there's no compelling reason to switch to EC2.

    Six Node Configuration for Startup

    This configuration is targeted at a real-life Web 2.0ish startup needing about 300GB of fast, highly available database storage. Currently there are no requirements for storing large quantities of BLOBs. There are 6 nodes overall: two database servers in failover configuration, two load balanced web servers, and two application servers. From the service provider:
  • Two database servers are $1500/month total: dual processor quad core Xeon 5310, 4GB of RAM, and 4 x 300GB 15K SCSI HDDs in a RAID 10 configuration, with 5 IP addresses, 10 Mbps public and private networks, and 2000 GB of public bandwidth.
  • The other 4 servers have 2GB RAM each, a single quad core Xeon 5310, and 1 x 73GB 10K RPM SAS drive, for about $250 each.
  • For backup, 500GB costs $200/month.
  • I'm not including load balancer or firewall services as these don't apply to Amazon, which may be a negative depending on your thinking. Plus, the provider has an excellent service and management infrastructure.
  • Cost: $2700/month.

    From Amazon:
  • Two extra large instances for the database servers. Your architecture here is more open and could take some rethinking. You could rent just one and bring another online from the pool on failure, which would save about $500 a month. I'll assume we load balance read and write traffic here, so we'll have two. Using one extra large instance is about the same price as two large instances.
  • Four small instances for the other servers. Here is another place the architecture could be rethought. It would be easy enough to start with just one or two instances and then add more in response to demand. That might save about $140/month under low load conditions. Adding another 4 servers adds about $300.
  • 300 GB of storage. Doubling to 600 GB only adds about $50/month. If storing large amounts of data does become a requirement this could be a big win.
  • 200 GB transfer in, 1800 GB transfer out. This is a guesstimate. Doubling the numbers adds another $400.
  • 40,000 requests. No idea, but these are cheap so being wrong isn't that expensive.
  • Cost: about $1300/month using two large instances for the database.
  • Cost: about $1900/month using two extra large instances for the database. (A quick back-of-the-envelope check of both totals is sketched after this section.)

    The cost numbers really stand out. You pay half for a similar setup, and the cost of incrementally scaling along any dimension is relatively inexpensive. You could in fact start much smaller and much cheaper and simply pay as you grow. The comparison is not apples to apples, however. All the potential problems with EC2 have to be factored in as well. As someone said, architecting for EC2/S3 takes a different way of thinking about things. And that's really true. Many of the standard tricks don't apply anymore. Deploying customer facing production websites in a grid is not a well traveled path. If you don't want to walk the bleeding edge then EC2 may not be for you.

    For example, in the service provider scenario you have blisteringly fast disks in a very scalable RAID 10 setup. That will work. Now, how will your database work over S3? Is it even possible to deploy your database over S3 with confidence? Do you need to add a ton of caching nodes? Will you have to radically change your architecture in a way that doesn't fit your skill set or schedule? Will the extra care and monitoring needed by EC2 be unacceptable? Is the single data center model a problem? Does the lack of a hardware firewall and a load balancer seem like too big a weakness? Can you have any faith in Amazon as a grid provider? Only the Shadow may know the answers to these questions, but the potential cost savings and the potential ease of scaling make the questions worth answering.
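    As promised, here's the check on the two Amazon totals. This is the same hedged arithmetic as the estimator near the top of the post, made self-contained: the assumed December 2007 rates, and only the first transfer-out tier since 1800 GB is well under 10 TB.

        # Sanity check of the two database options for the six node setup.
        hours = 732
        small, large, xlarge = 0.10, 0.40, 0.80   # $/hour
        web_and_app = 4 * small * hours           # four small instances
        storage = 300 * 0.15                      # 300 GB at $0.15/GB-month
        transfer = 200 * 0.10 + 1800 * 0.18       # in + out, first tier only
        print(web_and_app + 2 * large * hours + storage + transfer)   # ~$1267
        print(web_and_app + 2 * xlarge * hours + storage + transfer)  # ~$1853

    Both land close to the rounded $1300 and $1900 figures above, with request fees making up most of the gap.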

    Related Articles

  • Build an Infinitely Scalable Infrastructure for $100 Using Amazon
  • An Unorthodox Approach to Database Design : The Coming of the Shard


    Wednesday, Dec 26, 2007

    Golden rule of web caching

    Effective content caching is one of the key features of scalable web sites. Although there are several out-of-the-box options for caching with modern web technologies, a custom built cache still provides the best performance.
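    To make the "custom built" claim concrete, here's a minimal sketch of the kind of in-process content cache a site might roll itself: a dictionary with a TTL and an LRU size bound. All names are illustrative, and a production cache would need at least thread safety and hit/miss metrics on top of this.

        # Minimal custom content cache: TTL expiry plus LRU eviction.
        import time
        from collections import OrderedDict

        class ContentCache:
            def __init__(self, max_items=10000, ttl_seconds=300):
                self.max_items = max_items
                self.ttl = ttl_seconds
                self._items = OrderedDict()   # key -> (expires_at, value)

            def get(self, key):
                entry = self._items.get(key)
                if entry is None:
                    return None
                expires_at, value = entry
                if time.time() > expires_at:
                    del self._items[key]      # stale entry; drop it
                    return None
                self._items.move_to_end(key)  # mark as recently used
                return value

            def put(self, key, value):
                self._items[key] = (time.time() + self.ttl, value)
                self._items.move_to_end(key)
                if len(self._items) > self.max_items:
                    self._items.popitem(last=False)  # evict least recently used

        cache = ContentCache()
        cache.put("/front-page", "<html>...</html>")
        page = cache.get("/front-page") or "render, then put() the result"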


    Wednesday, Dec 26, 2007

    Finding an excellent LAMP developer

    Hi... I have this idea to start a really great and scalable website, and I am building it! So far I'm doing everything myself - coding, networking, architecture planning, everything. I haven't even gotten into the legal aspects yet... It would be MUCH easier if I had a technical person to handle that end of the operation. I'm a good coder, but like Bill Gates at Harvard for math, I'm not the very best. I'd like to FIND that very best person available to handle the technical aspects. For better or worse, I don't presently know somebody who fits this bill. I've posted a bazillion ads on Craigslist, with no really qualified responses. I've put out feelers among my own network, same result. Not sure what else I can do. Shoestring budget, so it's sweat equity in the beginning. That can actually be a plus, as it forces people to focus. Any ideas about what else I can do to attract the right person? Thanks, Jason


    Tuesday, Dec 25, 2007

    IBMer Says LAMP Can't Scale

    A very entertaining and somewhat educational article: IBM Poopheads say LAMP Users Need to "grow up". The physical three tier architecture turns out to be the root of all evil, and shared nothing architectures bring simplicity and light. In the comments Simon Willison makes an insightful point on why fine grained caching works for personalized pages and proxies don't:

    Great post, but I have to disagree with you on the finely grained caching part. If you look at big LAMP deployments such as Flickr, LiveJournal and Facebook, the common technology component that enables them to scale is memcached - a tool for finely grained caching. That's not to say that they aren't doing shared-nothing, it's just that memcached is critical for helping the database layer scale. LiveJournal serves around 50% of its page views "permission controlled" (friends only), so an HTTP proxy on the front end isn't the right solution - but memcached reduces their database hits by 90%.
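    The pattern Simon is describing is the classic read-through use of memcached: try the cache, fall back to the database, then populate the cache. A rough sketch using the python-memcached client; the load_friends_page function is a made-up stand-in for the expensive query-plus-render step.

        # Fine grained read-through caching with memcached.
        import memcache

        mc = memcache.Client(["127.0.0.1:11211"])

        def load_friends_page(user_id, viewer_id):
            return "rendered page"        # stand-in for DB query + render

        def get_friends_page(user_id, viewer_id):
            # Permission controlled pages are cached per (user, viewer) pair,
            # which is why a front-end HTTP proxy can't do this job.
            key = "friendspage:%d:%d" % (user_id, viewer_id)
            page = mc.get(key)
            if page is None:              # cache miss: do the expensive work
                page = load_friends_page(user_id, viewer_id)
                mc.set(key, page, time=60)  # keep it hot for a minute
            return page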


    Sunday, Dec 23, 2007

    Synchronizing Memcached application

    I have an application with a couple of web servers that uses memcached. How can I synchronize concurrent puts to the cache? The value of the entry is a list. An atomic append operation could have been helpful, but unfortunately memcached doesn't support atomic append.
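    One workaround, assuming your client and server support memcached's check-and-set protocol (CAS shipped with memcached 1.2.4): read the list with gets, append locally, and write it back with cas, retrying if another web server got there first. A sketch along those lines, using python-memcached style calls:

        # Emulating an atomic list append with memcached CAS.
        import memcache

        mc = memcache.Client(["127.0.0.1:11211"], cache_cas=True)

        def append_to_list(key, item, max_retries=10):
            for _ in range(max_retries):
                current = mc.gets(key)       # fetch value + CAS token
                if current is None:
                    if mc.add(key, [item]):  # add is atomic: fails if key exists
                        return True
                    continue                 # lost the creation race; retry
                if mc.cas(key, current + [item]):
                    return True              # nobody wrote between gets and cas
            return False                     # too much contention; give up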


    Saturday, Dec 22, 2007

    This was a porn/spam post

    Seems as though anonymous users can edit old posts w/o any authentication. This post was loaded with spam/porn links. Now it is not. /anonymous


    Friday, Dec 21, 2007

    Strategy: Limit Result Sets

    Release It! author Michael Nygard tells a tale of two web sites, both brought low by unexpectedly huge unbounded result sets that slowed their sites to the speed of a Christmas checkout line. I've committed this error more than a few times. During testing the result sets are often small, so you don't see problems. Or when a product is new you don't have a lot of data, so everything is fine until some magic line is crossed and you get that dreaded 2AM fix-it call.

    My most embarrassing bug of this type caused a rather spectacular failure at a customer site: the variance in response times was out of spec, and this kicked in penalty clauses. What happened was the customer had a larger network than we could even test (customers always get the good stuff). I took a lock and went to get all the data. Because the result set was so much larger in their larger system, I held the lock for many more milliseconds than I should have. Unknown to me, a chunk of code on the critical path was also in the lock path, and all hell broke loose. I had to change the logic to process the result set in fixed size deterministic chunks, releasing locks as I went (sketched in code after the list below). I even had to measure CPU usage and back off after a certain amount of CPU was used. But all was well again. I then hunted down every other place I made the same mistake. And there were a few. To solve this problem in general I developed an architecture supporting scheduling work by CPU usage.

    A common theme in many of the profiles on this site is protecting your system from requests that can bring down the system. Mailinator has a lot of resource exhaustion problems and does a good job solving them. eBay has an interesting strategy of doing as little work as possible in the database, which leads them to do joins in application space. That is exactly the opposite of this strategy's conclusion, but I think it may be going too far. With proper indexes, performing selects in the database to minimize the result sets would seem to be a win, as databases are good at this sort of thing. Yah, relational databases suck at doing top 10 type logic, so calculate that on the fly and cache it. How can you bound result sets?

  • Michael's strategy: You should make your apps be paranoid about their data. If your app processes one record at a time, then looping through an entire result set might be OK - as long as you're not making a user wait while you do. But if your app turns rows into objects, then it had better be very selective about its SELECTs. The relationships might not be what you expect. The data producer might have changed in a surprising way, particularly if it's not under your control. Purging routines might not be in place, or might have gotten broken. Definitely don't trust some other application or batch job to load your data in a safe way.
  • A diagnostic tactic is to benchmark all page response times and look for pages that take too long. This should be automated, emailing you or raising alerts in your dashboard. And because you logged everything, you can figure out why the response was slow and then take measures to fix it.
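    As a concrete illustration of the fixed-size-chunk fix described in the story above, here's a minimal sketch: pull rows in bounded, keyset-paged chunks and hold the lock for only one chunk at a time, so the critical section stays deterministic no matter how big the customer's data set grows. Table, column, and handler names are made up for illustration.

        # Process an unbounded table in bounded chunks, releasing the lock
        # between chunks instead of holding it across the whole result set.
        import sqlite3
        import threading

        lock = threading.Lock()
        CHUNK = 500   # deterministic amount of work per lock hold

        def handle(row_id, status):
            pass      # per-row work, kept short

        def process_all_devices(conn):
            last_id = 0
            while True:
                with lock:  # critical section now covers at most CHUNK rows
                    rows = conn.execute(
                        "SELECT id, status FROM devices WHERE id > ? "
                        "ORDER BY id LIMIT ?", (last_id, CHUNK)).fetchall()
                    for row_id, status in rows:
                        handle(row_id, status)
                if len(rows) < CHUNK:
                    break                 # done; every chunk was bounded
                last_id = rows[-1][0]     # resume just after the last row seen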


    Wednesday, Dec 19, 2007

    How can I learn to scale my project?

    This is a question asked on the YCombinator list, and there are some good responses. I gave a quick response, but I particularly like neilk's knock-it-out-of-the-park insightful answer:

  • Read Cal Henderson's book. (I'd add in Theo's book and Release It! too)
  • The center of your design should be the data store, not a process. You transition the data store from state to state, securely and reliably, in small increments.
  • Avoid globals and session state. The more "pure" your function is, the easier it will be to cache or partition.
  • Don't make your data store too smart. Calculations and renderings should happen in a separate, asynchronous process.
  • The data store should be able to handle lots of concurrent connections. Minimize locking. (Read about optimistic locking; a minimal sketch follows this list.)
  • Protect your algorithm from the implementation of the data store, with a helper class or module or whatever. But don't (DO NOT) try to build a framework for any conceivable query. Just the ones your algorithm needs.

    Viewing an application as a series of state transitions instead of a blizzard of actions and events is a much under-appreciated design perspective. This is one of the key design approaches for making robust embedded systems. A great paper talking about this sort of thing is Mission Planning and Execution Within the Mission Data System - an effort to make engineering flight software more straightforward and less prone to error through the explicit modeling of spacecraft state. Another interesting paper is CLEaR: Closed Loop Execution and Recovery - High-Level Onboard Autonomy for Rover Operations. In general I call these Fact Based Architectures. I'm really glad neilk brought it up.
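    Since optimistic locking came up in the list above, here's the minimal sketch I'd point to: each row carries a version number, and a write only succeeds if the version hasn't moved since you read it, so nobody holds long locks. Schema and names are illustrative.

        # Optimistic locking: compare-and-swap on a version column.
        import sqlite3

        def save_balance(conn, account_id, new_balance, version_read):
            cur = conn.execute(
                "UPDATE accounts SET balance = ?, version = version + 1 "
                "WHERE id = ? AND version = ?",
                (new_balance, account_id, version_read))
            conn.commit()
            if cur.rowcount == 0:
                # Someone else committed first. Re-read the row and retry the
                # state transition instead of blocking everyone with locks.
                raise RuntimeError("stale write; re-read and retry")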


    Friday, Dec 14, 2007

    The Current Pros and Cons List for SimpleDB

    Not surprisingly, opinions on SimpleDB vary from "it sucks, don't take my database" to "it will change the world, who needs a database anyway?" From a quick survey of the blogosphere, here's where SimpleDB stands at the moment:

    SimpleDB Cons

  • No SLA. We don't know how reliable it will be, how fast it will be, or how consistent the performance will be.
  • Consistency constraints are relaxed. Reading data immediately after a write may not reflect the latest updates. To programmers used to transactions, this may be surprising, but many people think this is one of the tradeoffs that needs to be made to scale.
  • Database is a core competency. If you don't control your database you can't outcompete your competition.
  • When your database is out of your control you can't guarantee it will work properly. You can't create the proper indexes and other optimizations.
  • No join or IN operator. You'll need to do multiple client side calls to simulate joins, which will be slow.
  • No stored procedures, referential integrity, and other relational goodies. This is not a professional product.
  • Attribute size limited to 1024 bytes. It's not designed for content serving.
  • Latency from outside Amazon will be high.
  • Setting up and maintaining a database is cheap and easy these days, so why bother? It costs too much compared to running your own servers.
  • What happens when you need to super scale to very large datasets?
  • No API support from common languages like PHP, Ruby, etc.
  • All your existing code and infrastructure needs to be rewritten.
  • Not geographically distributed with nearest datacenter routing.
  • Queries are lexicographic, so you'll need to store data so that string order matches logical order. This means (says Inside Looking Out): zero-padding your integers, adding positive offsets to negative integer sets, and converting dates into something like ISO 8601. (A small encoding sketch follows this list.)
  • Attribute values are typeless, which could lead to a lot of typing related errors and inefficient queries.
  • The 10 GB maximum per domain is too limiting.
  • It's not Dynamo. Amazon is keeping the really good stuff to themselves.
  • Text searching is not supported. You'll need to construct your own fast search indexes.
  • Queries are limited to 5 seconds running time. It's only for getting and setting, nothing more SQLish.
  • No cloned APIs for unit testing. Need to be able to develop locally against other data stores.
  • Your data is under Amazon's control, so there could be security and privacy problems.
  • The XML based protocol unnecessarily increases overhead, latency, and cost.
  • Lockin. If you decide to leave Amazon’s cloud how do you move all your data and get a similar system up and working outside the cloud?
  • Open cash register. Since SDB charges per use, a malicious user can simply set up a loop to query your site, which costs you an unbounded amount of money.
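    On the lexicographic query point above, the standard encodings are easy to sketch: shift integers positive and zero-pad them, and store dates as ISO 8601, so that string comparison order matches logical order. The offset and width here are assumptions you'd size to your own data.

        # Encodings that make lexicographic comparison match logical order.
        from datetime import datetime

        OFFSET = 10 ** 9     # assumed bound on integer magnitude
        WIDTH = 10           # enough digits for OFFSET + OFFSET

        def encode_int(n):
            return str(n + OFFSET).zfill(WIDTH)       # -5 -> "0999999995"

        def encode_date(dt):
            return dt.strftime("%Y-%m-%dT%H:%M:%SZ")  # ISO 8601 sorts correctly

        assert encode_int(-5) < encode_int(3) < encode_int(42)
        assert encode_date(datetime(2007, 12, 14)) < encode_date(datetime(2008, 1, 2))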

    SimpleDB Pros

  • SimpleDB is not a relational database. Relational databases are too complex and don't scale well. Keeping data access simple is a selling point, not a weakness.
  • Low setup costs and pay-as-you-go expansion make it perfect for startups. The price is reasonable given the functionality and the hands off admin.
  • Setting up and maintaining a highly available clustered database that is constantly growing is extremely difficult. Building your application on a building block that does all this for you adds a lot of value.
  • Setting up a database inside EC2 is a pain. This makes getting basic database functionality trivial. No need to worry about scaling, capacity planning, or partitioning.
  • It has a decent query language, which is unusual for this type of data store.
  • Data is stored across multiple nodes, which supports parallel query execution.
  • It's built on Erlang and that's cool.
  • You don't need to seek funding to hire a database team and buy hardware.

    Depending on how you weight each factor, SimpleDB could be way behind or way ahead of other options. What's interesting is to see what people think is important. For many people the only real database is relational, and if it doesn't have transactions, joins, and so on, it's not real. Databases, like beauty, seem to be in the eye of the beholder.


    Thursday, Dec 13, 2007

    Amazon SimpleDB - Scalable Cloud Database

    Amazon has announced the limited beta of Amazon SimpleDB - a simple web services interface to create and store multiple data sets, query your data easily, and return the results. Together with the Simple Storage Service (S3), the Elastic Compute Cloud (EC2), and other web services, Amazon offers a complete utility computing platform. SimpleDB was the missing piece of AWS - the scalable structured database. Check out my blog entry: http://innowave.blogspot.com/2007/12/amazon-simpledb-scalable-cloud-database.html I was waiting for this one :-) Geekr
