Super Bowl Advertisers Ready for the Traffic? Nope... It's Lights Out.

Advertising for the Super Bowl is bigger than the game for many viewers. So you gotta figure advertisers are ready for the traffic bursts generated by their expensive ads? Not exactly...

Yottaa reports an amazing 13 advertiser websites crashed during the Super Bowl. Coke was interactively au courant, asking viewers to vote for the ending of a commercial, but its load times stretched to 62 seconds. SodaStream, Calvin Klein, Axe, Got Milk?, The Walking Dead, many movie sites, and many car sites were all flagged with delay of game penalties.

Lots of time, money, and creative energy are spent lovingly perfecting every detail of these commercials. It won't surprise any programmer that the same can't usually be said of the follow-through on the backend.

So what can you do? Yottaa has some good tips and Michael Hamrah has a wonderful post on dealing with the Super Bowl Burst Problem:

Yottaa's tips:

  • Reduce the number of assets and asset weight to create smaller, more lightweight pages with faster page load times
  • Use website performance monitoring to stay on top of any issues your website may be having 24x7
  • Enlist a CDN to better reach your users geographically across the world and offload the majority of your traffic
  • Perform live load testing of your expected traffic, as it is the only way to gauge your actual performance under heavy load.
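That last tip is the one most teams skip. As a rough illustration, here's a minimal load-test sketch in Python (stdlib only; the URL and request counts are placeholders you'd point at a staging copy of your site, never production):

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def load_test(url, total_requests=100, concurrency=10):
    """Fire `total_requests` GETs at `url` with `concurrency` parallel
    workers and return (p50, p95, worst) latencies in seconds."""
    def fetch(_):
        start = time.perf_counter()
        with urlopen(url) as resp:
            resp.read()  # drain the body so transfer time is counted
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(fetch, range(total_requests)))

    p50 = latencies[len(latencies) // 2]
    p95 = latencies[int(len(latencies) * 0.95) - 1]
    return p50, p95, latencies[-1]

# e.g. load_test("https://staging.example.com/", 1000, 50)  # hypothetical target
```

Real tools (JMeter, Gatling, or a load-testing service) do far more, but even a toy like this catches the "62-second page" class of surprise before game day.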

Michael Hamrah took a great angle on tackling traffic bursts in his article How to Handle a Super Bowl Size Spike in Web Traffic:

To handle more requests there are three things you can do: produce (render) content faster, deliver (download) content faster and add more servers to handle more connections. Each of these solutions has a limit. Designing for these limits is architecting for scale.
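Those three levers multiply, which a back-of-the-envelope sketch makes concrete (all the numbers below are hypothetical):

```python
def peak_capacity(servers, connections_per_server, render_seconds):
    """Rough requests-per-second ceiling: each connection completes one
    request every `render_seconds`, so throughput scales with all three
    levers -- more servers, more connections, or faster rendering."""
    return servers * connections_per_server / render_seconds

# Halving render time buys the same headroom as doubling the server count:
assert peak_capacity(10, 100, 0.2) == peak_capacity(20, 100, 0.4)
```

That's why caching (which collapses render time toward zero) is usually the cheapest lever to pull first.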

Michael makes a bunch of trophy-winning suggestions towards these goals:

  • Make Assumptions. Most traffic will be anonymous which means you can increase cache times and create specific rendering pipelines that avoid costly dynamic calls.
  • Understand HTTP. Learn how to use caching headers to your advantage.
  • Try Varnish and ESI. Cache and deliver preloaded dynamic content for faster response times. Edge Side Includes allow you to mix static and dynamic content together.
  • Use a CDN and Multiple Data Centers. Fan out work across the Internet without bogging down your own servers.
  • Use Auto Scaling Groups or Alerting. Ramp up servers as load increases.
  • Compress and Serialize Data Across the Wire. Compressed content reduces network I/O and increases cache efficiency.
  • Shut Down Features. Keep your entire system alive by shutting down less important features during the burst.
  • Non-Blocking I/O. Queue requests and scale the number of processors in response to queue sizes.
  • Think at Scale (very good):
When dealing with a high-load environment nothing can be off the table. What works for a few thousand users will grow out of control for a few million. Even small issues will become exponentially problematic.

Scaling isn’t just about the tools to deal with load. It’s about the decisions you make on how your application behaves. The most important thing is determining page freshness for users. The decisions for an up-to-the-second experience for every user are a lot different than an up-to-the-minute experience for anonymous users. When dealing with millions of concurrent requests one will involve a lot of engineering complexity and the other can be solved quickly.
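The "Make Assumptions" and "Understand HTTP" points boil down to sending different caching headers to anonymous and logged-in users, so a CDN or Varnish can absorb the anonymous flood. A minimal WSGI sketch of that split (the `session` cookie name and the TTLs are made up for illustration):

```python
def app(environ, start_response):
    """Anonymous visitors get a response the CDN/Varnish can share;
    logged-in users (spotted via a hypothetical 'session' cookie)
    bypass the shared cache entirely."""
    if "session=" in environ.get("HTTP_COOKIE", ""):
        cache = "private, no-store"                   # per-user, never shared
    else:
        cache = "public, max-age=300, s-maxage=3600"  # 5 min in browsers, 1 h at the edge
    start_response("200 OK", [("Content-Type", "text/html; charset=utf-8"),
                              ("Cache-Control", cache)])
    return [b"<h1>Game day landing page</h1>"]
```

During a burst, that one `s-maxage` header means the edge answers most requests and your origin only renders the page once an hour.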
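The non-blocking I/O suggestion pairs naturally with auto scaling: queue incoming work, watch the queue depth, and add processors when the backlog grows. A toy version of that scaling rule (both thresholds are invented tuning knobs, not recommendations):

```python
def target_workers(queue_depth, depth_per_worker=100, max_workers=16):
    """Pick a worker count from the request backlog: one worker per
    `depth_per_worker` queued requests, capped at `max_workers`.
    Real systems would also smooth this over time to avoid flapping."""
    needed = -(-queue_depth // depth_per_worker)  # ceiling division
    return max(1, min(max_workers, needed))
```

The same shape of rule drives cloud auto scaling groups, just with queue depth swapped for CPU or request-latency metrics.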

Read the original post as each of these topics is treated in much more detail. Really good stuff.