Company earnings are outstripping forecasts, consumer confidence is returning, and city bonuses are back. What does this mean for business? Growth! After recent years of cost cutting in IT budgets, there is now a sudden fear induced by increased demand. Pre-existing trouble spots in IT infrastructures that have lain dormant will suddenly be exposed. Monthly reporting and real-time analytics will suffer as data grows. IT departments across the land will be crying out "The engine canna take no more, Captain!" What can be done?
When you are on the bleeding edge of scale like Facebook is, you run into some interesting problems. As of 2008, Facebook had over 800 memcached servers supplying over 28 terabytes of cache. With numbers that staggering, it's a fair bet they've seen their share of Dr. House-worthy memcached problems.
Jeff Rothschild, Vice President of Technology at Facebook, describes one such problem they've dubbed the Multiget Hole.
You fall into the multiget hole when your memcached servers are CPU bound. Adding more memcached servers seems like the right way to add capacity so more requests can be served, but against all logic adding servers doesn't help you serve more requests. This puts you in a hole that adding more servers can't dig you out of. What's the treatment?
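The mechanism is worth spelling out. A multiget for K keys fans out roughly one network request to every server that owns at least one of those keys, and memcached CPU cost is dominated by per-request overhead rather than per-key work. Here's a minimal sketch, with assumed numbers and not Facebook's code, showing why requests per server (and so CPU load) stays flat as you add servers:

```python
# A toy model of the multiget hole. Assumption: with keys hashed
# evenly and keys_per_multiget >= num_servers, each multiget touches
# every server once, and per-request overhead dominates CPU cost.

def requests_per_server(keys_per_multiget, num_servers, multigets):
    # Approximation: a multiget for K keys touches min(K, N) servers,
    # one request each.
    servers_touched = min(keys_per_multiget, num_servers)
    return multigets * servers_touched / num_servers

for n in (10, 20, 40):
    load = requests_per_server(keys_per_multiget=100,
                               num_servers=n,
                               multigets=10_000)
    print(f"{n} servers -> {load:.0f} requests/server")
```

Every row prints the same per-server request count: doubling the server count halves the keys per request but doubles the requests per multiget, so the CPU-bound servers are no better off.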
Caching/data grids are going through an evolution similar to that of databases. As with databases, we started by using caching as a service embedded in the application. Now we are in the phase where we need to share the data between multiple applications, or, in cases where we don't want to share the data, to share the resources for managing it while keeping a high degree of isolation.
These sorts of requirements become much more common with SOA or SaaS-based applications. As we approach the next generation of middleware and datacenters, it becomes clear that we cannot move to the next wave of virtualization and cloud computing without a strong security and isolation solution built into all layers of our applications and middleware.
Disk-oriented approaches to online storage are becoming increasingly problematic: they do not scale gracefully to meet the needs of large-scale Web applications, and improvements in disk capacity have far outstripped improvements in access latency and bandwidth. This paper argues for a new approach to datacenter storage called RAMCloud, where information is kept entirely in DRAM and large-scale systems are created by aggregating the main memories of thousands of commodity servers. We believe that RAMClouds can provide durable and available storage with 100-1000x the throughput of disk-based systems and 100-1000x lower access latency. The combination of low latency and large scale will enable a new breed of data-intensive applications.
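To get an intuition for where the 100-1000x latency claim comes from, here's some back-of-envelope arithmetic. The numbers are my own typical-for-the-era assumptions, not figures from the paper:

```python
# Rough arithmetic behind the RAMCloud pitch (assumed numbers).

disk_seek_s = 0.010          # ~10 ms for a random disk access
dram_over_net_s = 0.000010   # ~10 us for a DRAM read across the datacenter network
print("latency ratio ~", int(disk_seek_s / dram_over_net_s), "x")

# Aggregation: thousands of commodity servers add up fast.
servers = 2000
dram_per_server_gb = 32
print("aggregate DRAM ~", servers * dram_per_server_gb / 1024, "TB")
```

A 10 ms seek against a 10 µs network DRAM read is a 1000x gap, and two thousand modest machines already aggregate tens of terabytes of memory.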
The essence of my work is coming into daily contact with innovative technologies. A recent example came at the request of a partner company that wanted to answer the question: which of these tools will best solve my virtualized datacenter headache? After initial analysis, all the products could be classified as tools that troubleshoot VM sprawl, but there was no universally accepted term for them. The most descriptive term I found was Virtual Resource Manager (VRM), from DynamicOps. As I delved deeper into their workings, the distinction between VRMs and Private Clouds became blurred. What are the differences?
Drupal 7 is getting a scalability makeover. Karoly Negyesi, Drupal Core Developer and Public Development Team Lead, explains the process in this video: Drupal 7 APIs, scalability mindset. Karoly states the general theme of the changes as: you give up some control and you get back scalability. An interesting comment on the politics of scalability?
Makeover may not be quite the right word, though. A makeover implies a cosmetic change, looking better by changing the surface. Drupal's changes will go deeper than that, right to Drupal's core. It's a genuine change that will hopefully allow one of the Internet's most venerable Content Management Systems (CMSs) to compete with a constant stream of younger, sexier models.
Drupal is based on an older LAMP-stack approach in which PHP modules are scooped up and merged together each time a request is made to Drupal. Drupal's most intriguing idea is how it is built, expanded, and changed by weaving a single system out of individual components called modules. Built-in modules include comments, RSS, contact forms, forums, and Clean URLs. Add-on modules include things like CSE, which adds Google's Custom Search Engine, and modules that add AdSense, CAPTCHA, and Sitemaps. Drupal establishes AOP-style extension points that allow modules to work remarkably well together, creating a site that feels like one single site even though it has been constructed from dozens of modules hunted and gathered from all over the digital world.
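Drupal's real extension points are PHP hooks, but the idea translates to a few lines of any language. Here's a minimal Python sketch of the pattern, with hypothetical module names, showing how modules register against named hooks and core weaves them together by invoking every implementation:

```python
# A tiny hook registry in the spirit of Drupal's extension points.
registry = {}  # hook name -> list of implementations

def implements(hook):
    # Decorator: a module declares it implements a named hook.
    def register(fn):
        registry.setdefault(hook, []).append(fn)
        return fn
    return register

def invoke_all(hook, *args):
    # Core calls this at each extension point; every registered
    # module gets a chance to contribute.
    return [fn(*args) for fn in registry.get(hook, [])]

# Two hypothetical modules contributing to the same page.
@implements("page_alter")
def comments_module(page):
    return page + " [comments]"

@implements("page_alter")
def adsense_module(page):
    return page + " [adsense]"

print(invoke_all("page_alter", "article body"))
```

Core never knows which modules exist; it only fires hooks. That's what lets dozens of independently written modules feel like one site.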
The problem is that the PHP code can directly access the database and directly render to the UI; there is little required layering. Part of Drupal's amazing configurability and extensibility has been how easy it is for everything to work together by changing the database. But when there's no layering it's almost impossible to optimize the system. If you have 20 different modules, each can make its own call to the database; that's 20 separate calls when what we really want is one. And because of the direct SQL access, when the number of writes increases there's no systematic way to distribute the writes across multiple servers. So as Drupal sites grow in the number of modules and the number of users, both performance and scalability tank.
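A schematic sketch of the difference, with made-up table and function names rather than actual Drupal code:

```python
# Unlayered: every module issues its own query per page view.
def comments_query(nid):
    return f"SELECT * FROM comment WHERE nid = {nid}"

def taxonomy_query(nid):
    return f"SELECT * FROM term_node WHERE nid = {nid}"

modules = [comments_query, taxonomy_query]  # imagine 20 of these
unlayered = [m(42) for m in modules]
print(len(unlayered), "database round trips without a data layer")

# Layered: modules declare what they need; the data layer issues
# one batched call (schematic SQL, for illustration only).
def load_node(nid, needs):
    return f"SELECT * FROM node, {', '.join(sorted(needs))} WHERE node.nid = {nid}"

print("1 round trip with a layer:", load_node(42, {"comment", "term_node"}))
```

The point isn't the SQL; it's that a layer is the only place where batching, caching, or routing writes to different servers can be done systematically.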
The younger models architect their systems differently. Sites like Google, Amazon, and Facebook are written in terms of an API and a framework, a service-based approach. Using a service-based approach, the web tier can be programmed in terms of services that are themselves scalable, so the entire system is scalable. When the API is skipped there are no leverage points that can be made to scale. It becomes a big ball of mud.
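A hedged sketch of what "programmed in terms of services" buys you. All names here are hypothetical, not any real site's API; the point is that the web tier only sees the interface, so the implementation behind it can be swapped from a single box to a sharded fleet without touching callers:

```python
from typing import Protocol

class UserService(Protocol):
    def get_profile(self, user_id: int) -> dict: ...

class LocalUserService:
    """Single-box implementation, fine for development."""
    def __init__(self):
        self._users = {1: {"name": "alice"}}
    def get_profile(self, user_id):
        return self._users[user_id]

class ShardedUserService:
    """Scaled-out implementation: same API, many backends."""
    def __init__(self, shards):
        self._shards = shards  # list of UserService-like backends
    def get_profile(self, user_id):
        return self._shards[user_id % len(self._shards)].get_profile(user_id)

def render_page(svc: UserService, user_id: int) -> str:
    # The web tier codes against the API; it scales if the service scales.
    return f"Hello, {svc.get_profile(user_id)['name']}"

print(render_page(LocalUserService(), 1))
print(render_page(ShardedUserService([LocalUserService()]), 1))
```

The API is the leverage point: without it, every caller bakes in assumptions about where the data lives, and there is nothing left to optimize behind.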
More layering and more APIs is exactly the direction Drupal is taking. Exactly how is Drupal changing?
Online Social Networks (OSNs) face serious scalability challenges due to their rapid growth and popularity. To address this issue we present a novel approach to scaling up OSNs called One Hop Replication (OHR). Our system combines partitioning and replication in a middleware to transparently scale up a centralized OSN design, thereby sparing the OSN application the costly transition to a fully distributed system to meet its scalability needs. OHR exploits some of the structural characteristics of social networks: 1) most of the information is one hop away, and 2) the topology of the network of connections among people displays a strong community structure. We evaluate our system and its potential benefits and overheads using data from real OSNs: Twitter and Orkut. We show that OHR has the potential to provide out-of-the-box transparent scalability while keeping the replication overhead costs in check.
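To make the one-hop idea concrete, here's a rough sketch of the semantics as I read the abstract, not the paper's actual implementation: each user's master copy lives on a home partition, and replicas of their one-hop neighbors are kept on that same partition, so reading a user plus their friends never leaves one server.

```python
NUM_PARTITIONS = 2
friends = {
    "alice": ["bob", "carol"],
    "bob":   ["alice"],
    "carol": ["alice", "dave"],
    "dave":  ["carol"],
}

def home(user):
    return sum(map(ord, user)) % NUM_PARTITIONS  # toy, deterministic hash

partitions = {p: set() for p in range(NUM_PARTITIONS)}
for user, neighbors in friends.items():
    partitions[home(user)].add(user)        # master copy
    for n in neighbors:
        partitions[home(user)].add(n)       # one-hop replica

for p in partitions:
    print("partition", p, "->", sorted(partitions[p]))
```

The trade-off the paper's "overheads" refer to is visible here: local reads come at the cost of extra storage and write fan-out to keep the replicas fresh, which the community structure of social graphs keeps manageable.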
Real-time social graphs (connectivity between people, places, and things). That's why scaling Facebook is hard, says Jeff Rothschild, Vice President of Technology at Facebook. Social networking sites like Facebook, Digg, and Twitter are simply harder to scale than traditional websites. Why is that? Why would social networking sites be any more difficult to scale than traditional websites? Let's find out.
Traditional websites are easier to scale than social networking sites for two reasons:
Jeff Rothschild, Vice President of Technology at Facebook, gave a great presentation at UC San Diego on our favorite subject: "High Performance at Massive Scale – Lessons learned at Facebook". The abstract for the talk is:
Facebook has grown into one of the largest sites on the Internet, today serving over 200 billion pages per month. The nature of social data makes engineering a site for this level of scale a particularly challenging proposition. In this presentation, I will discuss the aspects of social data that present challenges for scalability and will describe the core architectural components and design principles that Facebook has used to address these challenges. In addition, I will discuss emerging technologies that offer new opportunities for building cost-effective high performance web architectures.
There's a lot that's interesting about this talk that we'll get into later, but I thought you might want a head start on learning how Facebook handles 30K+ machines, 300 million active users, 20 billion photos, and 25TB per day of logging data.