Strategy: Serve Pre-generated Static Files Instead Of Dynamic Pages

Pre-generating static files is an oldie but a goodie, and as Thomas Brox Røst says, it's probably an underused strategy today. At one time this was the dominant technique for structuring a web site. Then the age of dynamic web sites arrived and we spent all our time worrying about how to make the database faster and add more caching to recover the speed we had lost in the transition from static to dynamic.

Static files have the advantage of being very fast to serve. Read from disk and display. Simple and fast. Especially when caching proxies are used. The issues are: how do you bulk generate the initial files, how do you serve the files, and how do you keep the changed files up to date? This is the process Thomas covers in his excellent article "Serving static files with Django and AWS - going fast on a budget", where he explains how he converted 600K previously dynamic pages to static pages for his site Eventseer.net, a service for tracking academic events.

Eventseer.net was experiencing performance problems as search engines crawled their 600K dynamic pages. As a solution you could imagine scaling up: adding more servers, adding sharding, and so on, all somewhat complicated approaches. Their solution was to convert the dynamic pages to static pages in order to keep search engines from killing the site. As an added bonus, non-logged-in users experienced a much faster site and were more likely to sign up for the service.

The article does a good job explaining what they did, so I won't regurgitate it all here, but I will cover the highlights and comment on some additional potential features and alternate implementations...

They estimated it would take 7 days on a single server to generate the initial 600K pages. Ouch. So what they did was use EC2 for what it's good for: spin up a lot of boxes to process data. Their data is backed up on S3, so the EC2 instances could read the data from S3, generate the static pages, and write them to their deployment area. It took 5 hours, 25 EC2 instances, and a meager $12.50 to perform the initial bulk conversion. Pretty slick.
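
To make the mechanics concrete, here's a minimal sketch of what one of those generation workers might look like. It assumes boto3, made-up bucket names, and a stand-in render_event_page() in place of their actual Django templates:

```python
# Minimal bulk-generation worker sketch: read event data from the S3 backup,
# render a static HTML page, and write the result to the deployment bucket.
# Bucket names, key layout, and render_event_page() are illustrative placeholders.
import json
import boto3

s3 = boto3.client("s3")
DATA_BUCKET = "eventseer-backup"   # hypothetical backup bucket
SITE_BUCKET = "eventseer-static"   # hypothetical deployment bucket

def render_event_page(event):
    """Stand-in for rendering a single event page with Django templates."""
    return f"<html><body><h1>{event['title']}</h1></body></html>"

def generate_page(event_key):
    # Read the event record from the S3 backup rather than hitting the database.
    obj = s3.get_object(Bucket=DATA_BUCKET, Key=event_key)
    event = json.loads(obj["Body"].read())
    html = render_event_page(event)
    # Write the finished static page to the deployment area.
    s3.put_object(
        Bucket=SITE_BUCKET,
        Key=f"events/{event['id']}.html",
        Body=html.encode("utf-8"),
        ContentType="text/html",
    )

if __name__ == "__main__":
    # Each of the 25 EC2 instances would be handed its own slice of keys to churn through.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=DATA_BUCKET, Prefix="events/"):
        for item in page.get("Contents", []):
            generate_page(item["Key"])
```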

The next trick is figuring out how to regenerate static pages when changes occur. When a new event is added to their system hundreds of pages could be impacted, which would require the affected static pages to be regenerated. Since it's not important to update pages immediately, they queued updates for processing later. An excellent technique. A local queue of changes was maintained and replicated to an AWS SQS queue. The local queue is used in case SQS is down.
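
A sketch of what the enqueue side might look like, assuming boto3 for SQS and a small SQLite table standing in for their local queue (the queue URL and message format are made up for illustration):

```python
# Record the affected pages locally first, then replicate the work item to SQS.
# If the SQS send fails, the row stays marked unsent for a later retry.
import json
import sqlite3
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/page-updates"  # placeholder

local = sqlite3.connect("pending_updates.db")
local.execute("CREATE TABLE IF NOT EXISTS pending (body TEXT, sent INTEGER DEFAULT 0)")

def queue_page_updates(page_ids):
    body = json.dumps({"pages": page_ids})
    # Always record locally first so nothing is lost if SQS is unreachable.
    local.execute("INSERT INTO pending (body) VALUES (?)", (body,))
    local.commit()
    try:
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=body)
        local.execute("UPDATE pending SET sent = 1 WHERE body = ?", (body,))
        local.commit()
    except Exception:
        # Leave the row unsent; a periodic retry job can replicate it to SQS later.
        pass
```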

Twice a day EC2 instances are started to regenerate pages. Instances read work requests from SQS, access data from S3, regenerate the pages, and shut down when the SQS queue is empty. In addition they use AWS for all their background processing jobs.
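
The worker side might look something like this: drain the queue, regenerate each affected page, and stop when nothing is left. Again, the queue URL is a placeholder and regenerate_page() stands in for the real rendering code:

```python
# Twice-daily regeneration worker sketch: pull batches of work items from SQS,
# regenerate the listed pages, and exit once the queue is empty.
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/page-updates"  # placeholder

def regenerate_page(page_id):
    """Hypothetical helper: re-render one page from the S3 data (see the earlier sketch)."""
    ...

def drain_queue():
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
        )
        messages = resp.get("Messages", [])
        if not messages:
            break  # queue is empty; the instance can shut itself down
        for msg in messages:
            work = json.loads(msg["Body"])
            for page_id in work["pages"]:
                regenerate_page(page_id)
            # Delete the message only after its pages have been regenerated.
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

if __name__ == "__main__":
    drain_queue()
```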

Comments

I like their approach a lot. It's a very pragmatic solution and rock solid in operation. For very little money they offloaded the database by moving work to AWS. If they grow to millions of users (knock on wood) nothing much will have to change in their architecture. The same process will still work and it still won't cost very much. Far better than trying to add machines locally to handle the load or moving to a more complicated architecture.

Using the backups on S3 as a source for the pages rather than hitting the database is inspired. Your data is backed up and the database is protected. Nice.

Using batched asynchronous work queues rather than synchronously loading the web servers and the database for each change is a good strategy too.

As I was reading I originally thought you could optimize the system so that a page only needed to be generated once. Maybe by analyzing the events or some other magic. Then it hit me that this was old-style thinking. Don't be fancy. Just keep regenerating each page as needed. If a page is regenerated a thousand times versus only once, who cares? There's plenty of cheap CPU available.

The local queue of changes still bothers me a little because it adds a complication to the system. The local queue and the AWS SQS queue must be kept synced. I understand that missing a change would be a problem: the dependent pages would not be regenerated and nobody would ever know, since a page would only be regenerated the next time an event happened to impact it. If pages are regenerated frequently this isn't a serious problem, but seldom-touched pages may never be regenerated again.

Personally I would drop the local queue. SQS goes down infrequently. When it does go down I would record that fact and regenerate all the pages when SQS comes back up. This is a simpler and more robust architecture, assuming SQS is mostly reliable.
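
A rough sketch of that simpler approach, with the flag file, queue URL, and trigger_full_regeneration() all assumed for illustration:

```python
# Send straight to SQS; if a send fails, just note that an outage happened and
# schedule a full regeneration once SQS is reachable again.
import json
import pathlib
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/page-updates"  # placeholder
SQS_OUTAGE_FLAG = pathlib.Path("/var/run/sqs_outage")  # placeholder flag file

def queue_page_updates(page_ids):
    try:
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps({"pages": page_ids}))
    except Exception:
        # Don't try to remember exactly what was lost; just note that something was.
        SQS_OUTAGE_FLAG.touch()

def trigger_full_regeneration():
    """Hypothetical: enqueue every page for regeneration (or rerun the bulk job)."""
    ...

def recover_if_needed():
    # Run periodically; once SQS answers again, regenerate everything and clear the flag.
    if not SQS_OUTAGE_FLAG.exists():
        return
    try:
        sqs.get_queue_attributes(QueueUrl=QUEUE_URL, AttributeNames=["QueueArn"])
    except Exception:
        return  # SQS is still down; try again later
    trigger_full_regeneration()
    SQS_OUTAGE_FLAG.unlink()
```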

Another feature I have implemented in similar situations is to set up a rolling page regeneration schedule where a subset of pages is periodically regenerated, even if no event was detected that would cause a page to be regenerated. This protects against any event drops that may cause data to become undetectably stale. Over a few days, weeks, or whatever, every page is regenerated. It's a relatively cheap way to make the system resilient to failures.
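
A minimal sketch of such a schedule, assuming a regenerate_page() helper like the one in the earlier queue-worker sketch:

```python
# Rolling regeneration: each daily run refreshes a fixed slice of all pages, so
# every page is regenerated at least once per cycle even if its change events
# were lost somewhere along the way.
import datetime

def regenerate_page(page_id):
    """Hypothetical helper: re-render one page (see the queue-worker sketch)."""
    ...

def rolling_regeneration(all_page_ids, cycle_days=14):
    # Today's slot picks which slice of the page set gets refreshed on this run.
    slot = datetime.date.today().toordinal() % cycle_days
    for i, page_id in enumerate(sorted(all_page_ids)):
        if i % cycle_days == slot:
            regenerate_page(page_id)
```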