Monday
Feb 17, 2014

How the AOL.com Architecture Evolved to 99.999% Availability, 8 Million Visitors Per Day, and 200,000 Requests Per Second

This is a guest post by Dave Hagler, Systems Architect at AOL.

The AOL homepages receive more than 8 million visitors per day.  That’s more daily viewers than Good Morning America or the Today Show on television.  Over a billion page views are served each month.  AOL.com has been a major internet destination since 1996, and still has a strong following of loyal users.

The architecture for AOL.com is in its 5th generation.  It has essentially been rebuilt from scratch 5 times over two decades.  The current architecture was designed 6 years ago.  Pieces have been upgraded and new components have been added along the way, but the overall design remains largely intact.  The code, tools, and development and deployment processes have been highly tuned through 6 years of continual improvement, making the AOL.com architecture battle tested and very stable.

The engineering team is made up of developers, testers, and operations and totals around 25 people.  The majority are in Dulles, Virginia with a smaller team in Dublin, Ireland.

In general the technologies in use are Java, JavaServer Pages, Tomcat, Apache, CentOS 5, Git, Jenkins, Selenium, and jQuery.  There are some other technologies which are used outside that stack, but these are the main components.

Design Principles

Click to read more ...

Friday
Feb 14, 2014

Stuff The Internet Says On Scalability For February 14th, 2014

Hey, it's HighScalability time:

  • 5 billion: Number of phone records NSA collects per day; Facebook: 1.23 billion users, 201.6 billion friend connections, 400 billion shared photos, and 7.8 trillion messages sent since the start of 2012.
  • Quotable Quotes:
    • @ShrikanthSS: people repeatedly underestimate the cost of busy waits
    • @mcclure111: Learning today java.net.URL.equals is a blocking operation that hits the network shook me badly. I don't know if I can trust the world now.
    • @hui_kenneth: @randybias: “3 ways 2 be market leader - be 1st, be best, or be cheapest. #AWS was all 3. Now #googlecloud may be best & is the cheapest.”
    • @thijs: The nice thing about Paper is that we can point out to clients that it took 18 experienced designers and developers two years to build.
    • @neil_conway: My guess is that the split between Spanner and F1 is a great example of Conway's Law.
  • How Facebook built the real-time posts search feature of Graph search. It's a big problem: one billion new posts added every day, the posts index contains more than one trillion total posts, comprising hundreds of terabytes of data. 

  • Chartbeat Engineering shares some of their experiences in two excellent articles: Part 1, Part 2. Lessons: DNS is not a great means of load balancing traffic; modifying sysctl values from their defaults can be important for reliability; graphing metrics is your friend; through TCP tuning and AWS Elastic Load Balancer they decreased response time by 98.5% and cut their front-end server footprint by 20%; enabling cross-zone load balancing got their request count distribution extremely well balanced; they plan to move from the m1.large instance type to the c3.large, which is almost 50% cheaper and gives more compute units, yielding slightly better response times.

  • Creating a resilient organization is a little like getting an allergy shot: you have to take in a little of what ails you to boost your immune system. That's the idea behind DiRT, Google's Disaster Recovery Testing event. Weathering the Unexpected tells the story of how far Google goes to improve its corporate immune system with disaster scenarios. Disasters can range from a walk-through of a backup restore to a company-wide zombie attack simulation. More here and here.

  • 37signals shows the power of focus by shedding all their products except Basecamp and even renaming themselves to be just Basecamp. A company can grow wild unless pruned and shaped to let in the maximum amount of sunlight, growing the most and ripest fruit. While a hard prune is common in the orchard, it's not so common in an organization. A very brave move.

  • When I suggested this I was laughed at. So there! Patch Panels in the Sky: A Case for Free-Space Optics in Data Centers: We explore the vision of an all-wireless inter-rack datacenter fabric.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge...

Click to read more ...

Thursday
Feb 13, 2014

Snabb Switch - Skip the OS and Get 40 million Requests Per Second in Lua

Snabb Switch - a toolkit for solving novel problems in networking. If you are building a new packet-processing network appliance then you can use Snabb Switch to get the job done more quickly.

Here's a great impassioned overview from erichocean:

Or, you could just avoid the OS altogether: https://github.com/SnabbCo/snabbswitch

Our current engineering target is 1 million writes/sec and > 10 million reads/sec on top of an architecture similar to that, on a single box, to our fully transactional, MVCC database (writes do not block reads, and vice versa) that runs in the same process (a la SQLite), which we've also merged with our application code and our caching tier, so we're down to—literally—a single process for what would have been at least three separate tiers in a traditional setup.

The result is that we had to move to measuring request latency in microseconds exclusively. The architecture (without additional application-specific processing) supports a wire-to-wire messaging speed of 26 nanoseconds, or approx. 40 million requests per second. And that's written in Lua!

To put that in perspective, that kind of performance is about 1/3 of what you'd need to be able to do to handle Facebook's messaging load (on average, obviously, Facebook bursts higher than the average at times...).

Point being, the OS is just plain out-of-date for how to solve heavy data plane problems efficiently. The disparity between what the OS can do and what the hardware is capable of delivering is off by a few orders of magnitude right now. It's downright ridiculous how much performance we're giving up for supposed "convenience" today.

Wednesday
Feb 12, 2014

Paper: Network Stack Specialization for Performance 

In the scalability-is-specialization department, here is an interesting paper presented at HotNets '13 on high performance networking: Network Stack Specialization for Performance.

The idea is that generalizing a service so it fits in the kernel comes at a high performance cost, so move TCP into user space.  The result is a web server with ~3.5x the throughput of Nginx "while experiencing low CPU utilization, linear scaling on multicore systems, and saturating current NIC hardware."

Here's a good description of the paper published on Layer 9:

Click to read more ...

Monday
Feb 10, 2014

13 Simple Tricks for Scaling Python and Django with Apache from HackerEarth

HackerEarth is a coding skill practice and testing service that in a series of well written articles describes the trials and tribulations of building their site and how they overcame them: Scaling Python/Django application with Apache and mod_wsgi, Programming challenges, uptime, and mistakes in 2013, Post-mortem: The big outage on January 25, 2014, The Robust Realtime Server, 100,000 strong - CodeFactory server, Scaling database with Django and HAProxy, Continuous Deployment System, HackerEarth Technology Stack.

What characterizes these articles and makes them especially helpful is a drive for improvement and an openness towards reporting what didn't work and how they figured out what would work.

As they say, mistakes happen when you are building a complex product with a team of just 3-4 engineers, but investing in infrastructure allowed them to take more breaks and roam the streets of Bangalore while their servers happily serve thousands of requests every minute, all while growing to a 50,000-strong user base with ease.

Here's a gloss on how they did it:

Click to read more ...

Friday
Feb 7, 2014

Stuff The Internet Says On Scalability For February 7th, 2014

Hey, it's HighScalability time:


  • 5 billion requests per day: Heroku serves 60,000 requests per second; 500 Petabytes: Backblaze's New Data Center; 25,000 simultaneous connections on a Percona Server

  • How algorithms help determine the shape of our world. First, we encode normative rules of an idealized world in algorithms. Second, those algorithms help enforce those expectations by nudging humans into acting accordingly. A fun example is the story of Ed Bolian's Record-Breaking Drive. Ed raced from New York to L.A. at speeds of up to 158 mph, "breaking countless laws – and the previous record, by more than two hours." His approach is one that any nerd would love. He had three radar detectors, two laser jammers, two nav systems, a CB radio, a scanner, two iPhones and two iPads running applications like Waze, lookouts in the back seat scanning for cops, and someone scouting ahead. Awesome! As for the moral of the story, they were going so fast Ed said that "AmEx froze my credit card. They didn't think I could've traveled from one station to another as fast as I did." A human didn't act as expected so the human was denied access to the System. And that System is what will mediate all human interactions going forward. When we think of our AI mediated future, be aware that it will destroy as many degrees of freedom as it creates.

  • Not what you want to have happen when you've spent $4.5 million on a Super Bowl ad. Maserati’s Ghibli SuperBowl Ad Crashes Maserati Website. They fell back to a YouTube video, which is a good strategy. Still, nice looking car. 

  • AWS cost savings alert: new prices for the M3 instances are now more cost-effective than the M1 instances. Take a look at rebalancing your instance portfolio.

  • Urs Hölzle with a little Google nostalgia: Back in 2000, the main ads database was on a single machine, f41. The ads group was five engineers back then, so we took turns carrying a pager. I had to abort a dinner date one night (it was Outback Steakhouse in Campbell) to come back to the Googleplex because f41 was wedged; Jeff Dean chips in with the story of his first visit to a datacenter

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge...

Click to read more ...

Wednesday
Feb 5, 2014

Little’s Law, Scalability and Fault Tolerance: The OS is your bottleneck. What can you do?

This is a guest repost by Ron Pressler, the founder and CEO of Parallel Universe, a Y Combinator company building advanced middleware for real-time applications.

Little’s Law helps us determine the maximum request rate a server can handle. When we apply it, we find that the dominating factor limiting a server’s capacity is not the hardware but the OS. Should we buy more hardware if software is the problem? If not, how can we remove that software limitation in a way that does not make the code much harder to write and understand?
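To make the arithmetic concrete (the numbers below are illustrative assumptions, not figures from the post): Little’s Law says L = λW, where L is the number of requests in the server concurrently, λ is the request rate, and W is the average time a request spends in the server. The maximum sustainable rate is therefore λ = L / W, so if the OS caps concurrency at a few thousand threads, that cap, not the hardware, sets the ceiling. A minimal sketch:

// A minimal sketch of Little's Law applied to a thread-per-request server.
// Numbers are illustrative assumptions, not figures from the post.
public class LittlesLawSketch {
    public static void main(String[] args) {
        double concurrentRequests = 5_000;   // L: capped by how many OS threads we can run
        double avgLatencySeconds  = 0.2;     // W: average time a request spends in the server
        double maxRequestRate = concurrentRequests / avgLatencySeconds;  // lambda = L / W
        System.out.printf("Max sustainable rate: %.0f requests/sec%n", maxRequestRate);
        // Prints 25000 -- the thread limit, not CPU or network, is the bottleneck.
    }
}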

Many modern web applications are composed of multiple (often many) HTTP services (this is often called a micro-service architecture). This architecture has many advantages in terms of code reuse and maintainability, scalability and fault tolerance. In this post I’d like to examine one particular bottleneck in the approach, which hinders scalability as well as fault tolerance, and various ways to deal with it (I am using the term “scalability” very loosely in this post to refer to software’s ability to extract the most performance out of the available resources). We will begin with a trivial example, analyze its problems, and explore solutions offered by various languages, frameworks and libraries.

Our Little Service

Let’s suppose we have an HTTP service accessed directly by the client (say, web browser or mobile app), which calls various other HTTP services to complete its task. This is how such code might look in Java:
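As a rough sketch of the kind of service being described (the class name and downstream URLs are hypothetical, and this is not the post's actual code), a thread-per-request HTTP service that blocks on each downstream call might look like this:

// Hypothetical sketch: a blocking HTTP service that calls two downstream HTTP
// services to answer each request. Every in-flight request holds an OS thread
// for the full duration of both downstream calls.
import com.sun.net.httpserver.HttpServer;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.util.concurrent.Executors;

public class OurLittleService {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // One thread per concurrent request: the resource Little's Law says we exhaust first.
        server.setExecutor(Executors.newFixedThreadPool(200));
        server.createContext("/", exchange -> {
            // Both calls block this handler thread until the downstream services respond.
            String user  = get("http://user-service.internal/api/user");    // hypothetical URL
            String offer = get("http://offer-service.internal/api/offers"); // hypothetical URL
            byte[] body = (user + offer).getBytes("UTF-8");
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
    }

    // Plain blocking HTTP GET; the calling thread waits for the full response.
    private static String get(String url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            StringBuilder sb = new StringBuilder();
            for (String line; (line = in.readLine()) != null; ) sb.append(line);
            return sb.toString();
        }
    }
}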

Tuesday
Feb 4, 2014

Sponsored Post: Logentries, Booking, Apple, MongoDB, BlueStripe, AiScaler, Aerospike, LogicMonitor, AppDynamics, ManageEngine, Site24x7  

Who's Hiring?

  • Apple is hiring for multiple positions. Imagine what you could do here. At Apple, great ideas have a way of becoming great products, services, and customer experiences very quickly.
    • Senior Server Side Engineer. The Emerging Technology team is looking for a highly motivated, detail-oriented, energetic individual with experience in a variety of big data technologies.  You will be part of a fast growing, cohesive team with many exciting responsibilities related to Big Data, including developing scalable, robust systems that will gather, process, and store large amounts of data, and defining/developing Big Data technologies for Apple internal and customer facing applications. Please apply here.
    • Senior Engineer: Emerging Technology. Apple’s Emerging Technology group is looking for a senior engineer passionate about exploring emerging technologies to create paradigm shifting cloud based solutions. Please apply here
    • Sr Software Engineer. The Emerging Technology team is looking for a highly motivated, detail-oriented, energetic individual with experience in a variety of big data technologies. You will be part of a fast growing, cohesive team with many exciting responsibilities related to Big Data. Please apply here.
    • C++ Senior Developer and Architect- Maps. The Maps Team is looking for a senior developer and architect to support and grow some of the core backend services that support Apple Map's Front End Services. Please apply here.  

  • We need awesome people @ Booking.com - We want YOU! Come design next generation interfaces, solve critical scalability problems, and hack on one of the largest Perl codebases. Apply: http://www.booking.com/jobs.en-us.html

  • UI Engineer. AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data. AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (All-Levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.

Fun and Informative Events

  • Aerospike Webinar: “Getting the Most out of Your Flash/SSDs”. Tune in to Aerospike's latest webinar, “Getting the Most Out of your Flash/SSDs” at 10am PST Tuesday, Feb. 18 to learn how to select, test and prepare your drives for maximum database performance. Register now. 

Cool Products and Services

  • Log management made easy with Logentries. Billions of log events analyzed every day to unlock insights from the log data that matters to you. Simply powerful search, tagging, alerts, live tail and more for all of your log data. Automated AWS log collection and analytics, including CloudWatch events.

  • LogicMonitor is the cloud-based IT performance monitoring solution that enables companies to easily and cost-effectively monitor their entire IT infrastructure stack – storage, servers, networks, applications, virtualization, and websites – from the cloud. No firewall changes needed - start monitoring in only 15 minutes utilizing customized dashboards, trending graphs & alerting.

  • MongoDB Backup Free Usage Tier Announced. We're pleased to introduce the free usage tier to MongoDB Management Service (MMS). MMS Backup provides point-in-time recovery for replica sets and consistent snapshots for sharded systems with minimal performance impact. Start backing up today at mms.mongodb.com.

  • BlueStripe FactFinder Express is the ultimate tool for server monitoring and solving performance problems. Monitor URL response times and see if the problem is the application, a back-end call, a disk, or OS resources.

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Cloud deployable. Free instant trial, no sign-up required.  http://aiscaler.com/

  • ManageEngine Applications Manager: Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com: Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Click to read more ...

Monday
Feb 3, 2014

How Google Backs Up the Internet Along With Exabytes of Other Data

Raymond Blum leads a team of Site Reliability Engineers charged with keeping Google's data secret and keeping it safe. Of course Google would never say how much data this actually is, but from comments it seems that it is not yet a yottabyte, but is many exabytes in size. GMail alone is approaching low exabytes of data.

Mr. Blum, in the video How Google Backs Up the Internet, explained that common backup strategies don’t work for Google for a very googly-sounding reason: typically they scale effort with capacity. If backing up twice as much data requires twice as much stuff to do it, where stuff is time, energy, space, etc., it won’t work; it doesn’t scale.  You have to find efficiencies so that capacity can scale faster than the effort needed to support that capacity. A different plan is needed when making the jump from backing up one exabyte to backing up two exabytes. And the talk is largely about how Google makes that happen.

Some major themes of the talk:

  • No data loss, ever. Even the infamous GMail outage did not lose data, but the story is more complicated than just a lot of tape backup. Data was retrieved from across the stack, which requires engineering at every level, including the human.

  • Backups are useless. It’s the restore you care about. It’s a restore system not a backup system. Backups are a tax you pay for the luxury of a restore. Shift work to backups and make them as complicated as needed to make restores so simple a cat could do it.

  • You can’t scale linearly. You can’t have 100 times as much data require 100 times the people or machine resources. Look for force multipliers. Automation is the major way of improving utilization and efficiency.

  • Redundancy in everything. Google stuff fails all the time. It’s crap. The same way cells in our body die. Google doesn’t dream that things don’t die. It plans for it.

  • Diversity in everything. If you are worried about site locality put data in multiple sites. If you are worried about user error have levels of isolation from user interaction. If you want protection from a software bug put it on different software. Store stuff on different vendor gear to reduce large vendor bug effects.

  • Take humans out of the loop. How many copies of an email are kept by GMail? It’s not something a human should care about. Some parameters are configured by GMail and the system takes care of it. This is a constant theme. High level policies are set and systems make it so. Only bother a human if something outside the norm occurs.

  • Prove it. If you don’t try it, it doesn’t work. Backups and restores are continually tested to verify they work.

There’s a lot to learn here for any organization, big or small. Mr. Blum’s talk is entertaining, informative, and well worth watching. He does really seem to love the challenge of his job.

Here’s my gloss on this very interesting talk where we learn many secrets from inside the beast:

Click to read more ...

Friday
Jan 31, 2014

Stuff The Internet Says On Scalability For January 31st, 2014

Hey, it's HighScalability time:


Largest battle ever on Eve Online. 2,000 players. $200K in damage. Awesome pics.

 

  • teaspoon of soil: hosts up to a billion bacteria spread among a million species.

  • Quotable Quotes: 
    • Vivek Prakash: The problem of scaling always takes a toll on you.
    • @jcsalterego: See This One Weird Trick Hypervisors Don't Want You To Know

  • Upgrades are the great killer of software systems. Do you really want a pill that would supply materials with instructions for nanobots to form new neurons and place them near existing cells to be replaced, so you have a new brain within six months? Scary as hell. But there's a nanoapp for that.

  • Ted Nelson has a fascinating series of Computers for Cynics vidcasts on YouTube. I'd only really known of Mr. Nelson from his writings on hypertext, but he has a broad and penetrating insight into the early days of the computer industry. He's not really cynical, but I've always had a hard time differentiating realism from cynicism. Suffice it to say he thinks there have been many wrong paths chosen by our industry and not everything is as told by various industry hagiographies. You may like: The Nightmare of Files and Directories, The Database Mess, The Dance of Apple and Microsoft, and The Real Story of the World Wide Web.

  • From small beginnings. Where it all started: "the internet" in 1969: His idea for the project was the "spirit of community" and he was interested in "having computers help people communicate with other people" (Licklider and Robert Taylor), as opposed to using the computer to communicate for us.... By the end of 1969, ARPANET was able to connect four locations: UCLA, UC Santa Barbara, SRI, and Utah.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge...

Click to read more ...