Some Questions from a newbie

Hello highscalability world. I just discovered this site yesterday in a search for a scalability resource and was very pleased to find such useful information. I have some questions regarding distributed caching that I was hoping the scalability intelligentsia trafficking this forum could answer. I apologize for my lack of technical knowledge; I'm hoping this site will increase said knowledge! Feel free to answer all or as many as you want. Thank you in advance for your responses and thank you for a great resource!

1. What are the standard benchmarks used to measure the performance of memcached, or of MySQL and memcached working together (from web 2.0 companies etc.)?
2. The little research I've conducted on this site suggests that most web 2.0 companies use a combination of MySQL and a hacked memcached (and potentially sharding). Does anyone know if any of these companies use an enterprise vendor for their distributed caching layer? (At this point in time I've only heard of Jive Software using Coherence.)
3. In terms of a web 2.0 oriented startup, what are the database/distributed caching requirements typically needed to get off the ground and grow at a fairly rapid pace?
4. Given the major players in the web 2.0 industry (Facebook, Twitter, MySpace, PoF, Flickr, etc.; I'm ignoring Google/Amazon here because they have a proprietary caching layer), what is the most common, scalable back-end setup (MySQL/memcached/sharding etc.)? What are its limitations and problems? What features does said setup lack that it really needs?

Thank you so much for your insight!



Map-Reduce for Machine Learning on Multicore

We are at the beginning of the multicore era. Computers will have increasingly many cores (processors), but there is still no good programming framework for these architectures, and thus no simple and unified way for machine learning to take advantage of the potential speed up.
In this paper, we develop a broadly applicable parallel programming method, one that is easily applied to many different learning algorithms. Our work is in distinct contrast to the tradition in machine learning of designing (often ingenious) ways to speed up a single algorithm at a time.
Specifically, we show that algorithms that fit the Statistical Query model can be written in a certain “summation form,” which allows them to be easily parallelized on multicore computers. We adapt Google’s map-reduce paradigm to demonstrate this parallel speed up technique on a variety of learning algorithms including locally weighted linear regression (LWLR), k-means, logistic regression (LR), naive Bayes (NB), SVM, ICA, PCA, gaussian discriminant analysis (GDA), EM, and backpropagation (NN). Our experimental results show basically linear speedup with an increasing number of processors.
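
To make the "summation form" idea concrete, here is a toy sketch (not the paper's code): each mapper computes the partial sums it can over its own shard of the data, and the reducer simply adds the partial sums element-wise before a final closed-form fit. The data set and shard count below are invented for illustration.

```ruby
# Toy "summation form" map-reduce: univariate least squares from partial sums.
data   = (1..1000).map { |x| [x.to_f, 3.0 * x + 7.0 + rand] }  # [x, y] pairs
shards = data.each_slice(250).to_a                             # pretend: 4 cores

# Map: each shard independently produces its local sums (sum x, sum y, sum xy, sum x^2, count).
partials = shards.map do |shard|
  shard.reduce([0.0, 0.0, 0.0, 0.0, 0]) do |(sx, sy, sxy, sxx, n), (x, y)|
    [sx + x, sy + y, sxy + x * y, sxx + x * x, n + 1]
  end
end

# Reduce: the global sums are just the element-wise sum of the partial sums.
sx, sy, sxy, sxx, n = partials.transpose.map { |col| col.reduce(:+) }

# Closed-form least-squares fit from the aggregated sufficient statistics.
slope     = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
puts "slope=#{slope.round(3)} intercept=#{intercept.round(3)}"
```

The same pattern covers the algorithms listed above whenever their core computation can be phrased as a sum over data points, which is exactly the paper's Statistical Query observation.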

Read more about this study here (PDF - you can also download it)



Scale-up vs. Scale-out: A Case Study by IBM using Nutch/Lucene

Scale-up solutions in the form of large SMPs have represented the mainstream of commercial computing for the past several years. The major server vendors continue to provide increasingly larger and more powerful machines. More recently, scale-out solutions, in the form of clusters of smaller machines, have gained increased acceptance for commercial computing.
Scale-out solutions are particularly effective in high-throughput web-centric applications. In this paper, we investigate the behavior of two competing approaches to parallelism, scale-up and scale-out, in an emerging search application. Our conclusions show that a scale-out strategy can be the key to good performance even on a scale-up machine.
Furthermore, scale-out solutions offer better price/performance, although at an increase in management complexity.

Read more about scaling up versus scaling out and about the conclusions here (PDF - you can also download it)



Poem: Partly Cloudy

As any reader of this site knows we're huge huge supporters of the arts. To continue that theme here's a visionary poem by Mason Hale. Few have reached for inspiration and found their muse in the emotional maelstrom that is cloud computing, but Mason has and the results speak for themselves:

Partly Cloudy

We have a dream
A vision
An aspiration
To compute in the cloud
To pay as we go
To drink by the sip
To add cores at our whim
To write to disks with no end
To scale up with demand
And scale down when it ends
Elasticity
Scalability
Redundancy
Computing as a utility
This is our dream
Becoming reality

But…
There’s a hitch.
There’s a bump in the road
There’s a twist in the path
There’s a detour ahead on the way to achieving our goal
It’s the Database
Our old friend
He is set in his ways
He deals in transactions to keep things consistent
He maintains the integrity of all his relations
He eats disks for breakfast
He hungers for RAM
He loves queries and joins, and gives each one a plan
He likes his schemas normal and strict
His changes are atomic
That is his schtick
He’s an old friend as I said
We all know him well
So it pains me to say that in this new-fangled cloud
He doesn’t quite fit
Don’t get me wrong, our friend can scale as high as you want
But there’s a price to be paid
That expands as you grow
The cost is complexity
It’s more things to maintain
More things that can go wrong
More ways to inflict pain
On the poor DBA who cares for our friend
The one who backs him up and, if he dies, restores him again
I love our old friend
I know you do too
But it is time for us all to own up to the fact
That putting him into the cloud
Taking him out of the rack
Just causes us both more pain and more woe

So…
It’s time to move on
Time to learn some new tricks
Time to explore a new world that is less ACIDic
It’s time to meet some new friends
Those who were born in the cloud
Who are still growing up
Still figuring things out
There’s Google’s BigTable and Werner’s SimpleDB
There’s Hive and HBase and Mongo and Couch
There’s Cassandra and Drizzle
And not to be left out
There’s Vertica and Aster if you want to spend for support
There’s a Tokyo Cabinet and something called Redis I’m told
It’s a party, a playgroup of newborn DB’s
They scale and expand, they re-partition with ease
They are new and exciting
And still flawed to be sure
But they’ll learn and improve, grow and mature
They are our future
We developers should take heed
If our databases can change, then maybe
Just maybe
So can we



INFOSCALE 2009 in June in Hong Kong

In case you are interested, here's the info: INFOSCALE 2009: The 4th International ICST Conference on Scalable Information Systems, 10-12 June 2009, Hong Kong, China. In the last few years we have seen the proliferation of heterogeneous distributed systems, ranging from simple networks of workstations to highly complex grid computing environments. Such computational paradigms have been preferred due to their reduced costs and inherent scalability, but they pose many challenges to scalable systems and applications in terms of information access, storage, and retrieval. Grid computing, P2P technology, data and knowledge bases, distributed information retrieval technology, and networking technology should all converge to address the scalability concern. Furthermore, with the advent of emerging computing architectures (e.g. SMTs, GPUs, multicores), designing techniques that explicitly target these systems is becoming more and more important. INFOSCALE 2009 will focus on a wide array of scalability issues and investigate new approaches to tackle problems arising from the ever-growing size and complexity of information of all kinds. For further information, visit the conference website.



Heroku - Simultaneously Develop and Deploy Automatically Scalable Rails Applications in the Cloud

Update 4: Heroku versus GAE & GAE/J

Update 3: Heroku has gone live! Congratulations to the team. It's difficult right now to get a feeling for the relative cost and reliability of Heroku, but it's an impressive accomplishment and a viable option for people looking for a delivery platform.

Update 2: Heroku Architecture. A great interactive presentation of the Heroku stack. Requests flow into Nginx, which is used as an HTTP reverse proxy. Nginx routes requests into a Varnish-based HTTP cache. Requests are then injected into an Erlang-based routing mesh that balances them across a grid of dynos. Dynos are your application "VMs" that implement application specific behaviors. Dynos themselves are a stack of: POSIX, Ruby VM, App Server, Rack, Middleware, Framework, Your App. Applications can access PostgreSQL. Memcached is used as an application caching layer.
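
To make the dyno stack a little more concrete, here is a minimal, hypothetical Rack application of the kind that sits at the top of that stack. The class name and response are invented for illustration; this is not Heroku's code.

```ruby
# config.ru - the smallest possible Rack app a dyno-style stack could serve.
require 'rack'

class HelloApp
  # Every Rack application responds to #call and returns [status, headers, body].
  def call(env)
    [200, { 'Content-Type' => 'text/plain' }, ["Hello from a Rack app\n"]]
  end
end

# Middleware layers in front of the app, mirroring the
# App Server -> Rack -> Middleware -> Framework -> Your App ordering above.
use Rack::ContentLength
run HelloApp.new
```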

Update: Aaron Worsham Interview with James Lindenbaum, CEO of Heroku. Aaron nicely sums up their goal: Heroku is looking to eliminate all the reasons companies have for not doing software projects.

Adam Wiggins of Heroku presented at the lollapalooza that was the Cloud Computing Demo Night. The idea behind Heroku is that you upload a Rails application to Heroku, it automatically deploys to EC2, and it automatically scales using behind-the-scenes magic. They call this "liquid scaling." You just dump your code and go. You don't have to think about SVN, databases, mongrels, load balancing, or hosting. You just concentrate on building your application. Heroku's unique feature is their web-based development environment that lets you develop applications completely from their control panel. Or you can stick with your own development environment and use their API and Git to move code in and out of their system.

For website developers this is as high up the stack as it gets. With Heroku we lose that "build your first lightsaber" moment marking the transition out of apprenticeship and into mastery. Upload your code and go isn't exactly a hero's journey, but it is damn effective...

I must confess to having an inherent love of Heroku's idea because I had a similar notion many moons ago, but the trendy language of the time was Perl instead of Rails. At the time though it just didn't make sense. The economics of creating your own "cloud" for such a different model weren't there. It's amazing the niches utility computing will seed, fertilize, and help grow. Even today when using Eclipse I really wish it were hosted in the cloud so I didn't have to deal with all its deployment headaches. Firefox-based interfaces are pretty impressive these days. Why not?

Adam views their stack as:
1. Developer Tools
2. Application Management
3. Cluster Management
4. Elastic Compute Cloud

At the top level developers see a control panel that lets them edit code, deploy code, interact with the database, see logs, and so on. Your website is live from the first moment you start writing code. It's a powerful feeling to write normal code, see it run immediately, and know it will scale without further effort on your part. Now, will you be able to toss your Facebook app into the Heroku engine and immediately handle a deluge of 500 million hits a month? It will be interesting to see how far a generic scaling model can go without special tweaking by a certified scaling professional. Elastra has the same sort of issue.

Underneath, Heroku makes sure all the software components work together in Lennon-McCartney style harmony. They take care of (or will take care of) starting and stopping VMs, deploying to those VMs, billing, load balancing, scaling, storage, upgrades, failover, etc. The dynamic nature of Ruby and the development and deployment infrastructure of Rails are what make this type of hosting possible. You don't have to worry about builds. There's a great infrastructure for installing packages and plugins. And the big hard one, database upgrades, is tackled with the new migrations feature.
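
For readers who haven't seen them, here is a minimal migration in the classic Rails style (the table and column names are invented). Because each schema change is captured as versioned code, a hosting platform can apply pending migrations in order as part of a deploy.

```ruby
# A hypothetical ActiveRecord migration; Rails tracks which ones have run.
class AddAuthorToPosts < ActiveRecord::Migration
  def self.up
    add_column :posts, :author, :string   # forward change
    add_index  :posts, :author
  end

  def self.down
    remove_index  :posts, :author         # how to roll the change back
    remove_column :posts, :author
  end
end
```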

A major issue in the Rails world is versioning. Given the precambrian explosion of Rails tools, how does Heroku make sure all the various versions of everything work together? Heroku sees this as their big value add. They are in charge of making sure everything works together. We see a lot of companies on the web taking on the role of curator ([1], [2], [3]). A curator is a guardian or an overseer. Of curators Steve Rubel says: They acquire pieces that fit within the tone, direction and - above all - the purpose of the institution. They travel the corners of the world looking for "finds." Then, once located, clean them up and make sure they are presentable and offer the patron a high quality experience. That's the role Heroku will play for their deployable Rails environment.

With great automated power comes great restrictions. And great opportunity. Curating has a cost for developers: flexibility. The database they support is Postgres. You're out of luck if you want MySQL. Want a different Ruby version or Rails version? Not if they don't support it. Want memcache? You just can't add it yourself. One forum poster wanted, for example, to use the command line version of ImageMagick but was told it wasn't installed and to use RMagick instead. Not the end of the world. And this sort of curating has to be done to keep a happy and healthy environment running, but it is something to be aware of.
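
For instance, the poster's thumbnailing use case maps onto RMagick with something like the sketch below; the file names are invented and this is only an illustrative translation of the command-line workflow, not Heroku documentation.

```ruby
# Resize an image with RMagick instead of shelling out to ImageMagick's CLI.
require 'RMagick'

image = Magick::Image.read('photo.jpg').first   # read returns a list of frames
thumb = image.resize_to_fit(200, 200)           # preserves the aspect ratio
thumb.write('photo_thumb.jpg')
```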

The upside of curation is that stuff will work. And we all know how hard it can be to get stuff to work. When I see an EC2 AMI that already has most of what I need, my heart goes pitter patter over the headaches I'll save because someone already did the heavy curation for me. A lot of the value in services like those rPath offers, for example, is in curation. rPath helps you build images that work, that can be deployed automatically, and that can be easily upgraded. It can take a big load off your shoulders.

There's a lot of competition for Heroku. Mosso has a hosting system that can do much of what Heroku wants to do. It can automatically scale up at the webserver, data, and storage tiers. It supports a variety of frameworks, including Rails. And Mosso also says all you have to do is load and go.

3Tera is another competitor. As one user said: It lets you visually (through a web UI) create "applications" based on "appliances". There is a standard portfolio of prebuilt applications (SugarCRM, etc.) and templates for LAMP, etc. So, we build our application by taking a firewall appliance, a CentOS appliance, a gateway, and a MySQL appliance, gluing them together, customizing them, and then creating our own template. You can specify, down to the appliance level, the amount of CPU, memory, disk, and bandwidth each is assigned, which lets you scale up your capacity simply by tweaking values through the UI. We can now deploy our Rails/Java hosted offering for new customers in about 20 minutes on our grid. AppLogic has automatic failover so that if anything goes wrong, it redeploys your application to a new node in your grid and restarts it. It's not as cheap as EC2, but much more powerful. True, 3Tera won't help with your application directly, but most of the hard bits are handled.

RightScale is another company that combines curation along with load balancing, scaling, failover, and system management.

What differentiates Heroku is their web based IDE that allows you to focus solely on the application and ignore the details. Though now that they have a command line based interface as well, it's not as clear how they will differentiate themselves from other offerings.

The hosting model has a possible downside if you want to do something other than straight web hosting. Let's say you want your system to insert commercials into podcasts. That sort of large scale batch logic doesn't cleanly fit into the hosting model. A separate service accessed via something like a REST interface needs to be created. Possibly double the work. Mosso suffers from this same concern. But maybe leaving the web front end to Heroku is exactly what you want to do. That would leave you to concentrate on the back end service without worrying about the web tier. That's a good approach too.
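
A hedged sketch of what that split might look like from the Heroku-hosted front end's side, using plain Net::HTTP; the batch service host, path, and payload are all invented for illustration.

```ruby
# The web tier only enqueues work; the heavy batch job runs on a separate service.
require 'net/http'
require 'uri'
require 'json'

def enqueue_commercial_insertion(podcast_id)
  uri = URI.parse('http://batch.example.com/jobs')   # hypothetical back-end service
  request = Net::HTTP::Post.new(uri.path, 'Content-Type' => 'application/json')
  request.body = { 'podcast_id' => podcast_id, 'task' => 'insert_commercials' }.to_json
  Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
end
```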

Heroku is just getting started so everything isn't in place yet. They've been working on how to scale their own infrastructure. Next is working on scaling user applications beyond starting and stopping mongrels based on load. They aren't doing any vertical scaling of the database yet. They plan on memcaching reads, implementing read-only slaves via Slony, and using the automatic partitioning features built into Postgres 8.3. The idea is to start a little smaller with them now and grow as they grow. By the time you need to scale bigger they should have the infrastructure in place.
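
For the memcached-reads part of that plan, the usual read-through pattern looks something like this sketch with the classic memcache-client gem; the key scheme and the find_post_in_db helper are invented, and this is not Heroku's implementation.

```ruby
# Read-through caching: serve from memcached, fall back to the database on a miss.
require 'memcache'

CACHE = MemCache.new('127.0.0.1:11211')

def cached_post(post_id)
  key  = "post:#{post_id}"
  post = CACHE.get(key)
  if post.nil?
    post = find_post_in_db(post_id)   # hypothetical helper that hits Postgres
    CACHE.set(key, post, 300)         # keep the cached copy for five minutes
  end
  post
end
```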

One concern is that pricing isn't nailed down yet, but my gut says it will be fair. It's not clear how you will transfer an existing database over, especially from a non-Postgres database. And if you use the web IDE, I wonder how you will handle normal project stuff like continuous integration, upgrades, branching, release tracking, and bug tracking? Certainly a lot of work to do and a lot of details to work out, but I am sure it's nothing they can't handle.

Related Articles

  • Heroku Rails Podcast
  • Heroku Open Source Plugins etc

Which key-value pair database should be used?

My table has 2 columns. Column1 is an id; Column2 contains information given by users about the item in Column1. A user can give 3 types of information about an item. I separate the opinion of a single user with commas, and the opinions of different users with a semicolon. Example: 23-34,us,56;78,in,78. I need to calculate the opinions of all users very fast. My idea is to have an index on the key so the search would be very fast. Currently I am using MySQL. My problem is that the maximum column size is below my requirement. If any overflow occurs I make a new row with the same id and insert the data into the new row. Practically I would have around a maximum of 5-10 rows for each id. I am looking for a database that removes the need for this application code. I just learned about key-value databases, which is exactly what I need, but one that doesn't put constraints on column size (I mean, is much better than an RDBMS in that respect). This application is not in production.
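
A key-value store with native list values fits this pattern. As a hedged sketch (not a recommendation of a specific product), here is how the same data could be modeled with Redis lists via the redis-rb gem; the "item:<id>" key scheme is invented for illustration.

```ruby
# One opinion per list entry: no column-size limit, no overflow rows to stitch back.
require 'redis'

redis = Redis.new

redis.rpush('item:23', '34,us,56')   # first user's opinion about item 23
redis.rpush('item:23', '78,in,78')   # second user's opinion about item 23

# Fetch every opinion for item 23 in a single call.
redis.lrange('item:23', 0, -1).each { |opinion| puts opinion }
```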



Gear6 Web Cache - the hardware solution for working with Memcached

The Gear6 Web Cache hybrid DRAM-flash memory architecture allows for 5-10 times more memcache memory per unit of rack space than DRAM-only configurations, and cuts memory costs by 50%. Other software enhancements include a slab allocator that is more efficient than traditional memcache implementations due to its fine-grained bucket sizing. Gear6 Web Cache also supports object sizes greater than 1 megabyte and manages evictions based on the cost of replacing objects, depending on the size and frequency of object access. It intelligently places cache instances across DRAM and flash, taking into account their different characteristics, while at the same time monitoring their health and detecting and de-allocating faulty or failing memory.

Gear6 Web Cache is a Memcached protocol compliant solution that scales and accelerates web applications, reduces memory footprint, enhances availability and implements comprehensive Memcached management features. Designed to work with all popular memcache clients, Gear6 Web Cache integrates seamlessly into existing deployments and immediately provides a scalable, high density caching solution for your web application environment.

Some of the web services using Gear6 include a wiki answers site, an online video service, and a social network.

Read more about Gear6 hardware and customer case studies on the Gear6 website


Thread Pool Engine in MS CLR 4 and the Work-Stealing Scheduling Algorithm

I just saw this article on HFadeel's blog that talks about parallelism in .NET Framework 4, how the thread pool works, and the most famous scheduling algorithm: the work-stealing algorithm. It comes with a presentation so you can see it in action.



What CDN would you recommend?

Update 10: The Value of CDNs by Mike Axelrod of Google. Google implements a distributed content cache from within large ISPs. This allows them to serve content from the edge of the network and save bandwidth on the ISPs' backbones.

Update 9: Just Jump: Start using Clouds and CDNs. Bob Buffone gives a really nice and practical tutorial on how to use CloudFront as your CDN.

Update 8: Akamai's Services Become Affordable for Anyone! Blazing Web Site Performance by Distribution Cloud. Distribution Cloud starts at $150 per month for access to Akamai, the best content distribution network in the world and the leader among content distribution networks.

Update 7: Where Amazon's Data Centers Are Located, Expanding the Cloud: Amazon CloudFront, and Why Amazon's CDN Offering Is No Threat To Akamai, Limelight or CDN Pricing. Amazon has launched their CDN with "low latency, high data transfer speeds, and no commitments." The perfect relationship for many. The majority of the locations are in North America, but some are in Europe and Asia.

Update 6: Amazon Launching New Content Delivery Network: No Threat To Major CDNs, Yet. All the "Amazon will kill all other CDNs" talk is a bit overblown. As usual Dan Rayburn sets us straight: the offering won't support streaming, live broadcasting, or many of the other products and services that video content owners need... the real story here is that Amazon is going to offer a high performance method of distributing content with low latency and high data transfer rates.

Update 5: When It Comes To Content Delivery Networks, What Is The "Edge"? Dan Rayburn is on edge about the misuse of the term edge: the closest location to the user does not guarantee quality, often content is not delivered from the closest location, and all content is not replicated at every "edge" location. Lots of other essential information.

Update 4: David Cancel runs a great test to see if you should be Using Amazon S3 as a CDN?. Conclusion: "CacheFly performed the best but only slightly better than EdgeCast. The S3 option was the worst with the Nginx/DIY option performing just over 100 ms faster." Also take a look at Part 2 - Cacheability?

Update 3: Mr. Rayburn takes A Detailed Look At Akamai's Application Delivery Product. They create a "bi-nodal overlay network" where users and servers are always within 5 to 10 milliseconds of each other. Your data center hosted app can't compete. The problem is that people (that is, me) can understand the data center model. I don't yet understand how applications as a CDN will work.

Update 2: Dan Rayburn starts an interesting series of articles with Highlights Of My Day In Cambridge With Akamai. Akamai is moving strongly into the application distribution business. That would make an interesting cloud alternative.

Update: Streamingmedia links to new CDN DF Splash that specializes in instant-on TV-quality video streaming.

A question was raised on the forum asking for a CDN recommendation. As usual there are no definitive answers, but here are three useful articles that may help your deliberations.

  • First, Tony Chang shows how to drive down response times using edge acceleration strategies.
  • Then Pingdom gives a nice overview and introduction to CDNs.
  • And last but not least, Dan Rayburn gives a master class in how much you should pay for your CDN, what you should be getting for your money, and how to find the right provider for your needs. Lots and lots of good stuff to learn, even if you didn't roll out of bed this morning pondering the deeper mysteries of content delivery networks and the Canadian dollar.

Edge Acceleration Strategies: Akamai by Tony Chang

The edge network is the "network physically closest to the end user and the 'origin' is where the application(s) is hosted." Tony talks about how you use CDNs to manage the user experience through meeting millisecond+ level SLAs using edge acceleration services. He does this in an interesting way. He follows a request through its life cycle and shows how to turn your caterpillar into a butterfly at each stage:
  • An edge DNS means a name server closest to the end user will serve the DNS request.
  • Static content is easily cached on the edge (see the response header sketch after this list).
  • Dynamic content is moving to the edge using what Akamai calls Web Application Accelerators.
  • And something I've never heard of is to use your CDN to improve routing performance by up to 33%. The service bypasses BGP using its own more optimized route tables to decrease latency.
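
As a concrete illustration of the static-content point above, here is a minimal, hypothetical Rack middleware an origin application might use to mark its assets as cacheable by downstream edge nodes; the path prefix and max-age are invented.

```ruby
# Mark responses under /assets/ as cacheable by shared (edge) caches.
class EdgeCacheHeaders
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    if env['PATH_INFO'].start_with?('/assets/')
      # "public" lets shared caches such as CDN edge nodes store the response;
      # max-age says how long they may serve it without revalidating.
      headers['Cache-Control'] = 'public, max-age=86400'
    end
    [status, headers, body]
  end
end
```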

Pingdom's A look at Content Delivery Networks, or “how to serve lots of content really fast”

CDNs are the hidden powerhouse of the internet. The unsung mitochondria powering bits forward. Cost, convenience and performance are the reasons people turn to CDNs. A CDN does what you can't: it puts lots of servers in lots of different places. Panther Express, for example, puts 800 servers in 22 different geographical locations. Since CDNs sell delivery capacity, capacity planning is one of their big challenges. And Pingdom would like you to recognize the importance of monitoring for detecting and quickly reacting to problems :-) The future of CDNs lies in larger caches for HD video, better locality, and more integration with hosting providers.

Video on Content Delivery Network Pricing, Costs for Outsourced Video Delivery by Dan Rayburn

Also CDN Pricing Data: Average Cost Per GB Declines In Q4 Due To Startups. It's evident Dan really knows his stuff. His articles and presentations are highly educational for anyone interested in the complex and confusing CDN world. Dan sees hundreds of real-life customer-CDN vendor contracts a year and he reports on real prices averaged over all the contracts he has seen. One of the hardest things as a consumer is knowing what a good price is for your basket of goods and Dan gives you the edge, so to speak. What I learned:
  • You decide who is a CDN. There's no central agency setting a standard. Dan's minimal definition is a service delivering live video in the US and Europe.
  • CDN market has gone from 10 to 30 vendors. VCs are pumping hundreds of millions into the space.
  • CDN providers provide a wide variety of services: application caching, static caching, streaming video, progressive video, etc. Dan concentrates only on video delivery.
  • You can't say vendor A is better than vendor B. It depends on your needs.
  • When comparing vendors you need to do an "apples to apples" comparison. He really likes that phrase. You can't compare vendors, only like products between vendors.
  • Video serving is complex because there are few standards in the market. There are multiple platforms, multiple encoding standards, etc.
  • Customers don't buy on price alone. Delivery of bits over a network is a commodity. Buy on SLA, customer service, product, format, support, geographic reach, and performance.
  • There appears to be no way to compare vendors on the performance of their network. There are too many variables in play to make an accurate comparison. He's quite adamant about this. Performance could mean: SLA, customer service, upload content, buffering, etc. There is no way to measure network performance across networks. Static image performance is very different than streaming performance. People all over the globe are accessing your content, so what is the "performance" of that?
  • A trend this year is demand for P2P pricing and services.
  • To price your video delivery you need to answer 4 questions: 1) How many hours? 2) How many users? 3) How long will they watch? 4) What encoding and what bit rate? (A back-of-the-envelope sketch follows this list.)
  • Price varies on product bundle. Vendors need to specialize so they can move themselves out of the commodity market. If you pay 8 cents a gig for delivered video, your price will be different once you add application and static caching.
  • Contracts are at 12 months. Maybe 2 years when bundling services.
  • P2P is not necessarily cheaper so compare. Pricing is very confusing.
  • You can sometimes get a lower price by using the vendor's player.
  • Flash streaming is more expensive because of licensing fees. The number varies because each vendor cuts their own licensing deals. Could be 20% more, or it could be double, depends on volume.
  • When signing a vendor, think about whether you need global reach or whether regional reach is sufficient. Use a regional service provider if you need a CDN just once in a while. It's a matter of picking based on your needs. You can often get a less expensive deal and a quarterly commit versus a monthly commit.
  • Storage costs have really fallen. High of $10/gig and low of 10 cents per gig.
  • Most CDNs will pull from your origin storage and cache, which reduces your storage cost.
  • CDNs don't want to get paid with promises of ad sharing.
  • Pick a CDN vendor that will take the time to educate you. They should ask you about your business first, they shouldn't talk about themselves first. He mentions this point a few times and it makes a lot of sense.
  • Consider a dual vendor strategy where you pick one vendor for video and another for application.
  • Quality in the industry is very high. People rarely complain about the network. Customers want better support and reporting. Poor reporting is the #1 complaint. Run away if a vendor wants to charge for reporting.

Lots of good stuff.
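
To make the four pricing questions concrete, here is a hedged back-of-the-envelope sketch; every input number is invented purely for illustration, with the per-gigabyte price taken from the "8 cents a gig" figure mentioned above.

```ruby
# Rough monthly video delivery cost from the four pricing questions.
users_per_month = 10_000
minutes_watched = 20          # average viewing time per user
bitrate_kbps    = 700         # encoding bit rate
price_per_gb    = 0.08        # dollars per delivered GB (illustrative)

gb_per_viewer = bitrate_kbps * minutes_watched * 60 / 8.0 / 1_000_000
total_gb      = gb_per_viewer * users_per_month
puts "Delivered: #{total_gb.round} GB, cost: $#{(total_gb * price_per_gb).round}"
# => Delivered: 1050 GB, cost: $84 for these made-up inputs
```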

Related Articles

  • Highscalability CDN Tag Cloud
  • Edge Acceleration Strategies: Akamai by Tony Chang
  • A look at Content Delivery Networks, or “how to serve lots of content really fast”
  • Content Delivery Network Pricing, Costs for Outsourced Video Delivery
  • CDN Pricing Data: Average Cost Per GB Declines In Q4 Due To Startups
  • A Taxonomy and Survey of Content Delivery Networks
  • Content Delivery Networks (CDN) Research Directory
