Moshe Kaplan of RockeTier shows the life cycle of an affiliate marketing system that starts off as a cub handling one million events per day and ends up a lion handling 200 million or even one billion events per day. The resulting system uses ten commodity servers at a cost of $35,000.
Mr. Kaplan's paper is especially interesting because it documents a system architecture evolution we may see a lot more of in the future: database centric --> cache centric --> memory grid.
As scaling and performance requirements for complicated operations increase, keeping the entire system in memory starts to make a great deal of sense. Why use a cache at all? Why shouldn't your system be all in memory from the start?
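The three stages of that evolution can be sketched in a few lines. This is a minimal illustration, not Kaplan's actual implementation; the class names (`Database`, `CacheAside`, `MemoryGrid`) are hypothetical stand-ins for a disk-backed store, a cache-aside layer, and an all-in-memory working set.

```python
class Database:
    """Stage 1 stand-in: a disk-backed store. Counts reads to show load."""
    def __init__(self, rows):
        self.rows = dict(rows)
        self.reads = 0

    def get(self, key):
        self.reads += 1  # every lookup is a (slow) database hit
        return self.rows[key]


class CacheAside:
    """Stage 2: check the cache first, fall back to the database on a miss."""
    def __init__(self, db):
        self.db = db
        self.cache = {}

    def get(self, key):
        if key not in self.cache:
            self.cache[key] = self.db.get(key)  # miss: one database read
        return self.cache[key]


class MemoryGrid:
    """Stage 3: bulk-load the whole working set; reads never touch the database."""
    def __init__(self, db):
        self.store = {k: db.get(k) for k in db.rows}  # one-time load

    def get(self, key):
        return self.store[key]


if __name__ == "__main__":
    db = Database({"user:1": "alice", "user:2": "bob"})
    cached = CacheAside(db)
    for _ in range(1000):
        cached.get("user:1")  # only the first call reaches the database
    print(db.reads)  # prints 1
```

The point of stage 3 is that once the working set fits in RAM, the cache layer is redundant: there are no misses to absorb, so the cache-aside indirection buys nothing.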
General Approach to Evolving the System to Scale
One Million Events Per Day System
2.5 Million Events Per Day System
20 Million Events Per Day System
200 Million Events Per Day System
At this point the architecture supports near-linear scaling, and it's projected to scale easily to a billion events per day.