Friday, May 2, 2008

Friends for Sale Architecture - A 300 Million Page View/Month Facebook RoR App

Update: In Does Django really scale better than Rails?, Jake argues that apps like FFS shouldn't need so much hardware to scale.

In a short three months Friends for Sale (think Hot-or-Not with a market economy) grew to become a top 10 Facebook application handling 200 gorgeous requests per second and a stunning 300 million page views a month. They did all this using Ruby on Rails, two part time developers, a cluster of a dozen machines, and a fairly standard architecture. How did Friends for Sale scale to sell all those beautiful people? And how much do you think your friends are worth on the open market? 

Site: http://www.facebook.com/apps/application.php?id=7019261521

Information Sources

  • Siqi Chen and Alexander Le, co-creators of Friends for Sale, answering my standard questionnaire.
  • Virality on Facebook

    The Platform

  • Ruby on Rails
  • CentOS 5 (64 bit)
  • Capistrano - update and restart application servers.
  • Memcached
  • MySQL
  • Nginx
  • Starling - distributed queue server
  • Softlayer - hosting service
  • Pingdom - for website monitoring
  • LVM - logical volume manager
  • Dr. Nic's Magic Multi-Connections Gem - splits database reads and writes across servers

    The Stats

  • 10th most popular application on Facebook.
  • Nearly 600,000 active users.
  • Half a million unique visitors a day and growing fast.
  • 300 million page views a month.
  • 300% monthly growth rate, but that is plateauing.
  • 2.1 million unique visitors in the past month
  • 200 requests per second.
  • 5TB of bandwidth per month.
  • 2 part time (now full time), and 1 remote DBA contractor.

  • 4 DB servers, 6 application servers, 1 staging server, and 1 front end server.
    - 6 application servers, each with 4 cores and 8 GB of RAM.
    - Each application server runs 16 mongrels, for a total of 96 mongrels.
    - A 4 GB memcache instance on each application server.
    - 2 database servers, each with 4 cores, 32 GB of RAM, and 4x 15K SCSI disks in RAID 10, in a master-slave setup.

    Getting to Know You

  • What is your system for?

    Our system is designed for our Facebook application, Friends for Sale.
    It's basically Hot-or-Not with a market economy. At the time of this
    writing it's the 10th most popular application on Facebook.

    Their Facebook description reads: Buy and sell your friends as pets! You can make your pets poke, send gifts, or just show off for you.
    Make money as a shrewd pets investor or as a hot commodity! Friends for Sale is the bees knees!


  • Why did you decide to build this system?

    We designed this as more of an experiment to see if we understood virality concepts and metrics on Facebook. I guess we do. =)

  • What particular design/architecture/implementation challenges does your system have?

    As a Facebook application, every request is dynamic so no page caching is possible. Also, it is a very interactive, write heavy application so scaling the database was a challenge.

  • What did you do to meet these challenges?

    We memcached extensively early on - every page reload results in 0 SQL calls. We mostly use Rails' fragment caching with custom expiration logic.
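    The pattern described here - cache everything with no TTL and expire keys by hand when a write invalidates them - can be sketched like this. This is a toy in-memory stand-in, not their actual code; a real deployment would put a memcached client where the Hash is.

    ```ruby
    # Minimal read-through cache with manual (key-based) expiration,
    # mimicking "no TTL, expire by hand" against an in-memory store.
    class FragmentCache
      def initialize
        @store = {} # stand-in for memcached; entries never time out
      end

      # Fetch a cached fragment, computing and storing it on a miss.
      def fetch(key)
        @store.key?(key) ? @store[key] : (@store[key] = yield)
      end

      # Custom expiration: delete exactly the keys a write invalidates.
      def expire(key)
        @store.delete(key)
      end
    end

    cache = FragmentCache.new
    cache.fetch("user/42/profile") { "<div>rendered profile</div>" } # miss: renders
    cache.fetch("user/42/profile") { raise "should not re-render" }  # hit: no render
    cache.expire("user/42/profile") # e.g. after the user edits their profile
    ```

    The payoff of manual expiration over TTLs is that a page is never stale and never re-rendered without cause, at the cost of having to know every key each write touches.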

  • How big is your system?

    We had more than half a million unique visitors yesterday and growing fast. We're on track to do more than 300 million page views this month.

  • What is your in/out bandwidth usage?

    We used around 3 terabytes of bandwidth last month. This month should be at least 5TB or so. This number is just for a few icons and XHTML/CSS.

  • How many documents do you serve? How many images? How much data?

    We don't really have unique documents ... we do have around 10 million user profiles though.

    The only images we store are a few static image icons.

  • How fast are you growing?

    We went from around 3M page views per day a month ago to more than 10M page views a day. A month before that we were doing 1M page views per day. So that's around a 300% monthly growth rate but that is plateauing. On a request per second basis, we get around 200 requests per second.

  • What is your ratio of free to paying users?

    It's all free.

  • What is your user churn?

    It's around 1% per day, with a growth rate of 3% or so per day in terms of installed users.

  • How many accounts have been active in the past month?

    We had roughly 2.1 million unique visitors in the past month according to Google.

  • What is the architecture of your system?

    It's a relatively standard Rails cluster. We have a dedicated front end proxy balancer / static web server running nginx, which proxies directly to six 4-core, 8 GB application servers. Each application server runs 16 mongrels for a total of 96 mongrels. The front end load balancer proxies directly to the mongrel ports. In addition, we run a 4 GB memcache instance on each application server, along with a local starling distributed queue server and miscellaneous background processes.
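    A front end along these lines might look roughly like the following nginx fragment. The server names, ports, and paths are invented for illustration; the post doesn't show their actual config.

    ```nginx
    # Hypothetical sketch: nginx as static server + proxy balancer,
    # pointed straight at the mongrel ports on the app servers.
    upstream mongrels {
        server 10.0.0.1:8000;
        server 10.0.0.1:8001;
        # ... one entry per mongrel across the six app servers
    }

    server {
        listen 80;
        root /var/www/app/public;   # static assets served directly

        location / {
            proxy_set_header Host $http_host;
            proxy_pass http://mongrels;
        }
    }
    ```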

    We use god to monitor our processes.

    On the DB layer, we have 2 32 GB, 4 core servers with 4x 15K SCSI RAID 10 disks in a master-slave setup. We use Dr. Nic's magic multi-connections gem in production to split reads and writes between the boxes.

    We are adding more slaves right now so we can distribute the read load better and have better redundancy and backup policies. We also get help from Percona (the mysqlperformanceblog guys) for remote DBA work.
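    The gem's actual interface isn't shown in the post; as a rough illustration of the read/write-splitting idea, a router can send anything that isn't a SELECT to the master and rotate reads across the slaves. The class and connection names here are hypothetical.

    ```ruby
    # Hypothetical read/write splitter (not the gem's real API):
    # writes go to the master, reads round-robin across slaves.
    class ConnectionRouter
      def initialize(master, slaves)
        @master, @slaves, @i = master, slaves, 0
      end

      def connection_for(sql)
        return @master if sql !~ /\A\s*select/i # writes & DDL hit the master
        @i = (@i + 1) % @slaves.size            # round-robin the slaves
        @slaves[@i]
      end
    end

    router = ConnectionRouter.new(:master, [:slave1, :slave2])
    router.connection_for("UPDATE users SET price = 100") # => :master
    router.connection_for("SELECT * FROM users")          # => a slave
    ```

    The catch with any such scheme is replication lag: a read issued right after a write may hit a slave that hasn't caught up yet, which is one reason session data often stays on the master.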

    We're hosted on Softlayer - they're a fantastic host. The only problem was that their hardware load balancing server doesn't really work very well ... we had lots of problems with hanging connections and latency. Switching to a dedicated box running just nginx fixed everything.

  • How is your system architected to scale?

    It really isn't. On the application layer we are shared-nothing so it's pretty trivial. On the database side we're still with a monolithic master and we're trying to push off sharding for as long as we can. We're still vertically scaled on the database side and I think we can get away with it for quite some time.

  • What do you do that is unique and different that people could best learn from?

    The three things that are unique:

    1. Neither of the two developers involved had previous experience in large scale Rails deployment.
    2. Our growth trajectory is relatively rare in the history of Rails deployments.
    3. We had very little opportunity for static page caching - each request hits the full Rails stack.

  • What lessons have you learned? Why have you succeeded? What do you wish you would have done differently? What wouldn't you change?

    We learned that a good host, good hardware, and a good DBA are very important. We used to be hosted on Railsmachine, which to be fair is an excellent shared hosting company, and they did go out of their way to support us. In the end though, we were barely responsive for a good month due to hardware problems, and it only took two hours to get up and running on Softlayer without a hitch. Choose a good host if you plan on scaling, because migrating isn't fun.

    The most important thing we learned is that your scalability problems are pretty much always, always, always the database. Check it first, and if you don't find anything, check again. Then check again. Without exception, every performance problem we had could be traced to the database server, the database configuration, the query, or the use and non-use of indices.

    We definitely should have gotten onto a better host earlier in the game so we would have stayed up.

    We definitely wouldn't change our choice of framework - Rails was invaluable for rapid application development, and I think we've pretty much proven that two guys without a lot of scaling experience can scale a Rails app up. The whole 'but does Rails scale?' discussion sounds like a bunch of masturbation - the point is moot.

  • How is your team setup?

    We have two Rails developers, inclusive of me. We very recently retained the services of a remote DBA for help on the database end.

  • How many people do you have?

    On the technical side, 2 part time (now full time), and 1 remote DBA contractor.

  • Where are they located?

    The full time employees are located in the SOMA area of San Francisco.

  • Who performs what roles?

    The two developers serve as co-founders. I (Siqi) was responsible for front end design and development early on, but since I had some experience with deployment I also ended up handling network operations and deployment as well. My co-founder Alex is responsible for the bulk of the Rails code - basically all the application logic is from him. Now I find myself doing more deep back end network operations tasks like MySQL optimization and replication - it's hard to find time to get back to the front end, which is what I love. But it's been a real fun learning experience so I've been eating up all I can from this.

  • Do you have a particular management philosophy?

    Yes - basically find the smartest people you can, give them the best deal possible, and get out of their way. The best managers GET OUT OF THE WAY, so I try to run the company as much as I can with that in mind. I think I usually fail at it.

  • If you have a distributed team how do you make that work?

    We'd have to have some really good communication tools in the cloud - somebody would have to be a Basecamp nazi. I think remote work / outsourcing is really difficult - I prefer to stay away from it for core development. For something like MySQL DBA or even sysadmin work, it might make more sense.

  • What do you use?

    We use Rails with a bunch of plugins, most notably cache-fu from Chris Wanstrath and magic multi connections from Dr. Nic. I use VIM as the editor with the rails.vim plugin.

  • Which languages do you use to develop your system?

    Ruby / Rails

  • How many servers do you have?

    We now have 12 servers in the cluster.

  • How are they allocated?

    4 DB servers, 6 application servers, 1 staging server, and 1 front end server.

  • How are they provisioned?

    We order them from Softlayer - there's a less than 4 hour turn around for most boxes, which is awesome.

  • What operating systems do you use?

    CentOS 5 (64 bit)

  • Which web server do you use?

    nginx

  • Which database do you use?

    MySQL 5.1

  • Do you use a reverse proxy?

    We just use nginx's built in proxy balancer.

  • How is your system deployed in data centers?

    We use a dedicated hosting service, Softlayer.

  • What is your storage strategy?

    We use NAS for backups but internal SCSI drives for our production boxes.

  • How much capacity do you have?

    Across all of our boxes we probably have around ... 5 TB of storage or
    thereabouts.

  • How do you grow capacity?

    Ad-hoc. We haven't done a proper capacity planning study, to our detriment.

  • Do you use a storage service?

    Nope.

  • Do you use storage virtualization?

    Nope.

  • How do you handle session management?

    Right now we just persist it to the database - it would be fairly easy to use memcache directly for this purpose though.

  • How is your database architected? Master/slave? Shard? Other?

    Master/slave right now. We're moving towards a Master/Multi-slave with a read only load balancing proxy to the slave cluster.

  • How do you handle load balancing?

    We do it in software via nginx.

  • Which web framework/AJAX Library do you use?

    Rails.

  • Which real-time messaging frame works do you use?

    None.

  • Which distributed job management system do you use?

    Starling

  • How do you handle ad serving?

    We run network ads. We also weight our various ad networks by eCPM on our application layer.
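    Weighting ad networks by eCPM at the application layer could be sketched as a weighted random pick. The network names and eCPM numbers below are made up; the post doesn't describe their actual implementation.

    ```ruby
    # Hypothetical weighted pick: each network's chance of serving
    # the next impression is proportional to its observed eCPM.
    ECPMS = { "network_a" => 0.42, "network_b" => 0.31, "network_c" => 0.12 }

    def pick_network(ecpms, r = rand)
      total = ecpms.values.inject(0.0) { |s, v| s + v }
      acc = 0.0
      ecpms.each do |name, ecpm|
        acc += ecpm / total
        return name if r < acc
      end
      ecpms.keys.last # guard against floating-point rounding
    end

    pick_network(ECPMS) # higher-eCPM networks win proportionally more often
    ```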

  • Do you have a standard API to your website?

    Nope.

  • How many people are in your team?

    2 developers.

  • What skill sets does your team possess?

    Me: Front end design, development, limited Rails. Obviously, recently proficient in MySQL optimization and large scale Rails deployment.
    Alex: application logic development, front end design, general software engineering.

  • What is your development environment?

    Alex develops on OSX while I develop on Ubuntu. We use SVN for version control. I use VIM for editing and Alex uses TextMate.

  • What is your development process?

    On the logic layer, it's very test driven - we test extensively. On the application layer, it's all about quick iterations and testing.

  • What is your object and content caching strategy?

    We cache both in memcache with no TTL, and we just manually expire.

  • What is your client side caching strategy?

    None.

    How do you manage your system?

  • How do you check global availability and simulate end-user performance?

    We use Pingdom for external website monitoring - they're really good.

  • How do you health check your server and networks?

    Right now we're just relying on our external monitoring and Softlayer's ping monitoring. We're investigating FiveRuns for monitoring as a possible solution to server monitoring.

  • How do you graph network and server statistics and trends?

    We don't.

  • How do you test your system?

    We deploy to staging and run some sanity tests, then we do a deploy to all application servers.

  • How do you analyze performance?

    We trace back every SQL query in development to make sure we're not doing any unnecessary calls or model instantiations. Other than that, we haven't done any real benchmarking.

  • How do you handle security?

    Carefully.

  • How do you decide what features to add/keep?

    User feedback and critical thinking. We are big believers in simplicity so we are pretty careful to consider before we add any major features.

  • How do you implement web analytics?

    We use a home grown metrics tracking system for virality optimization,
    and we also use Google Analytics.

  • Do you do A/B testing?

    Yes, from time to time we will tweak aspects of our design to optimize for virality.

    How is your data center setup?

  • Which firewall product do you use?
  • Which DNS service do you use?
  • Which routers do you use?
  • Which switches do you use?
  • Which email system do you use?
  • How do you handle spam?
  • How do you handle virus checking of email and uploads?

    Don't know, to all of the above.

  • How do you backup and restore your system?

    We use LVM to do incrementals on a weekly and daily basis.

  • How are software and hardware upgrades rolled out?

    Right now they are done manually, except for new Rails application deployments. We use capistrano to update and restart our application servers.

  • How do you handle major changes in database schemas on upgrades?

    We usually migrate on a slave first and then just switch masters.

  • What is your fault tolerance and business continuity plan?

    Not very good.

  • Do you have a separate operations team managing your website?

    Oh we wish.

  • Do you use a content delivery network? If so, which one and what for?

    Nope

  • What is your revenue model?

    CPM - more page views more money. We also have incentivized direct offers through our virtual currency.

  • How do you market your product?

    Word of mouth - the social graph. We just leverage viral design tactics to grow.

  • Do you use any particularly cool technologies or algorithms?

    I think Ruby is pretty particularly cool. But no, not really - we're not doing rocket science, we're just trying to get people laid.

  • Do you store images in your database?

    No, that wouldn't be very smart.

  • How much up front design should you do?

    Hm. I'd say none if you haven't scaled up anything before, and a lot if you have. It's hard to know what's actually going to be the problem until you've actually been through it and seen what real load problems look like. Once you've done that, you have enough domain knowledge to do some actual meaningful up front design on your next go around.

  • Has anything surprised you, either for the good or the bad?

    How unreliable vendor hardware can be, and how different support can be from host to host. The number one most important thing you will need is a scaled up dedicated host who can support your needs. We use Softlayer and we can't recommend them highly enough.

    On the other hand, it's surprising how far just a master-multislave setup can take you on commodity hardware. You can easily do a Billion page views per month on this setup.

  • How does your system evolve to meet new scaling challenges?

    It doesn't really - we just fix bottlenecks as they come and as we see them coming.

  • Who do you admire?

    Brad Fitzpatrick for inventing memcache, and anyone who has successfully horizontally scaled anything.

  • How are you thinking of changing your architecture in the future?

    We will have to start sharding by users soon as we hit database size and write limits.

    Their Thoughts on Facebook Virality

  • Facebook models the social graph in digital form as accurately and completely as possible.
  • The social graph is more important than features.
  • Facebook enables rapid social distribution of new applications through the social graph.
  • Your application idea should be: social, engaging, and universal.
  • The social aspect makes it viral.
  • Engaging makes it monetizable.
  • Universal gives it potential.
  • Friends for Sale is social because you are buying and selling your social graph.
  • It's engaging because it's a twist on an idea, low pressure, flirty, and a bit cynical.
  • It's universal because everyone is vain, has a price, and wants to flirt with hot people.
  • Every touch point in the application is a potential for recruiting new users.
  • Every user converts 1.4 other users which is the basis for exponential growth.
  • For every new user track the number of invites, notifications, minifeed items, profile clicks, and other channels.
  • For every channel track the percent clicked, converted, uninstalls.
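    The 1.4 conversions-per-user figure above is a viral coefficient (K-factor); a quick sketch shows why any K above 1 compounds into exponential growth while a K below 1 fizzles out. The seed size and generation count are illustrative.

    ```ruby
    # Compound the K-factor: each new cohort of users recruits
    # k users apiece, so cohort sizes grow geometrically while k > 1.
    def total_users(seed, k, generations)
      cohort = seed.to_f
      total  = seed.to_f
      generations.times do
        cohort *= k # next cohort: the recruits of the current one
        total  += cohort
      end
      total
    end

    total_users(1000, 1.4, 10)  # => ~98,700 users from a 1,000-user seed
    total_users(1000, 0.9, 10)  # sub-1 K-factors flatten out instead
    ```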

    Lessons Learned

  • Scaling from the start is a requirement on Facebook. They went to 1 million pages/day in 4 weeks.
  • Ruby on Rails can scale.
  • Anything scales on the right architecture. Focus on architecture and operations.
  • You need a good DBA, good host, and good well configured hardware.
  • With caching and the heavy duty servers available today, you can go a long time without adopting more complicated database architectures.
  • The social graph is real. It's truly staggering the number of accessible users on Facebook with the right well implemented viral application.
  • Most performance problems are in the database. Look to the database server, the database configuration, the query, or the use and non-use of indexes.
  • People still use Vi!

    I'd really like to thank Siqi for taking the time to answer all my questions and provide this fascinating look into their system. It's amazing what you've done in so little time. Excellent job and thanks again.
    Reader Comments (52)

    It pretty much funds itself. 10MM pages a day, times some reasonable CPM. You can kind of do the math.

    November 29, 1990 | Unregistered CommenterSiqi Chen

    Very interesting. Thanks for the interview and thanks for the insightful answers!

    November 29, 1990 | Unregistered CommenterRahul

    Um you guys probably weren't at the meeting when it was decided that "Rails doesn't scale". I'll forgive you this once, but don't let me catch you scaling Rails again.

    November 29, 1990 | Unregistered CommenterDr Nic

    How many lines of code? Test/code ratio? Coverage stats? What plugins?

    November 29, 1990 | Unregistered CommenterPhil

    My roommate is addicted to Friends for Sale.. he's always on it, scouting out hot girls with cheap pictures because "he'll make a lot of money off them". I think it's a lot of work for $1000 of virtual cash, but hey, I wasn't that into Hot or Not either. Great job for just being two people.

    November 29, 1990 | Unregistered CommenterCollegeDude

    Thanks for the fascinating interview! Good luck scaling even further!

    November 29, 1990 | Unregistered CommenterAnonymous

    "I'd love to know how they funded this project. Since they're not charging for anything, how can they afford "4 DB servers, 6 application servers, 1 staging server, and 1 front end server"? How can they afford "3 terabytes of data" a month?"

    Their setup costs less than a Honda Civic..

    -- Greg

    November 29, 1990 | Unregistered CommenterGreg

    may i point out the notice on homepage? "Are we down again? Call our ghetto monitoring system at 714-423-2748!"

    November 29, 1990 | Unregistered CommenterAnonymous

    Nice info and comments!

    November 29, 1990 | Unregistered CommenterPerformance guy

    yes, thanks for the insights, guys

    November 29, 1990 | Unregistered CommenterEsdee

    96 mongrels means they just handle 96 simultaneous users. Using 6 quad-core 8Gb machines this number is simply ridiculous!
    Any share-nothing system scales. 1 user-per-box scales, but c'mon! do the math!
    Indeed "Rails doesn't scale"!

    November 29, 1990 | Unregistered CommenterLucindo

    Lucindo:

    You are a little confused about how Mongrels work in Rails. 96 mongrels means that we can handle, on average:

    96 * (1 second)/(avg request time in seconds) requests, which is about 400 requests per second, which is probably a couple thousand 'simultaneous users'.
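    The arithmetic in this comment is just concurrency divided by average request time (Little's law); with the ~240 ms average request time cited later in the thread, it works out as:

    ```ruby
    # Capacity math: 96 single-threaded mongrels, each tied up for
    # ~240 ms per request, sustain concurrency / latency requests/sec.
    mongrels = 96
    avg_request_time = 0.24                  # seconds, per the thread
    throughput = mongrels / avg_request_time # => 400.0 requests/sec
    ```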

    People who say Rails doesn't scale - can't scale.

    November 29, 1990 | Unregistered CommenterSiqi Chen

    Siqi Chen:

    I think you are a little confused with the difference between simultaneous users and requests per second. Rails isn't thread-safe, this means each process (mongrel, fastcgi, or whatever you use to serve a rails request) will handle one user at time.
    If you take 240ms to complete a request, with 96 process you can handle 400 requests per second on average. But you really can only have 96 simultaneous requests.

    November 29, 1990 | Unregistered CommenterLucindo

    Indeed, memcached scales wonderfully. It's a shame that rails is very limited due to its nature, otherwise we would have another good tool for high load applications. To use such huge servers for that low amount of connections is really a shame (4GB for memcached + 4GB for system/rails for 16 simultaneous sessions).

    Keep up the good work and let us know when you manage to go beyond the 16 users barriers. I'm very interested to check what rails are doing to get better.

    November 29, 1990 | Unregistered Commentertuna

    Lucindo:

    No, I'm quite clear on the difference. How are you defining 'simultaneous users'? That's a very different metric than the 'simultaneous requests' term you just threw out.

    You acknowledge that we can handle 400 requests per second. In practice, an actual user sends out a request maybe once every 10-30 seconds, so 400 requests a second translates to thousands of simultaneous, actual users.

    November 29, 1990 | Unregistered CommenterSiqi Chen

    Siqi Chen:

    Simply as this: if 99 users send a request to the system at the very same time, one will have to wait. If you prefer to call this "simultaneous requests", ok. A system with these hardware handling at most 98 simultaneous requests is simply ridiculous.
    But I'm pretty sure the problem is Rails itself, not the application.

    November 29, 1990 | Unregistered CommenterLucindo

    Lucindo, first of all your claims about thread safety are way too ridiculous. Obviously, you have never heard of Buzzwords like Copy-on-write, Event Based Web server, Multi-core ( write them down, you might be able to use them in your next trollfest article ;-) ).

    If you want to waste 50 years to earn 10 M$ and buy an F1 car to commute to work, may the god bless you.

    November 29, 1990 | Unregistered CommenterPratik

    oy, rails troll ahoy. pratik needs to learn how to read.

    November 29, 1990 | Unregistered Commentertuna

    Pratik:

    I've never used Rails or Mongrel, but from what I've read above it seems you're advocating an "Event Based Web server" that uses a thread-per-request strategy instead of a multiplexed I/O strategy one, which sounds pretty inconsistent. This means the OS is handling the events, NOT the blocking I/O server in question. In case you're backed by a good threads implementation (NPTL, whatever) then your cost will be the additional stack-per-client. Obviously, you have never heard of Buzzwords like ACE, Grizzly, Erlang, Stackless and even Unify ;-)

    November 29, 1990 | Unregistered CommenterRicardo Herrmann

    Ricardo,

    Clearly you're not qualified enough to speak about Rails as you have never used it :) Now, if you have a tiny bit of idea about Ruby, you'd know that Ruby uses green threads and not OS level threads.

    But hey, sometimes knowing the buzzword is just not enough. For example, you mention Erlang without knowing a jackshit about light weight concurrency! Please google it.

    November 29, 1990 | Unregistered CommenterPratik

    Pratik,

    Please don't say you think Erlang is based on green threads in the same sense as Ruby. Threads are allowed to share state, Erlang's "green processes" are not (but, hey, copy-on-write, as you mentioned, is a useful buzzword). Google for Joe Armstrong's thesis.

    Why is it that average railers feel omniscient ? I've got to try it someday ... and as a bonus become "qualified", because I'm just one more sinner that used Perl's Maypole before DHH took inspiration from it and thus am not surprised by RoR. Btw, your strong belief that "I know jackshit" just confirms what I've said.

    PS: In case you didn't realize, I'm just mirroring your attitude, you probably know who Joe Armstrong and Doug Schmidt are. Let's all be friends now.

    November 29, 1990 | Unregistered CommenterRicardo Herrmann

    "In case you're backed by a good threads implementation (NPTL, whatever) then your cost will be the additional stack-per-client. Obviously, you have never heard of Buzzwords like .....ERLANG....... ;-)"

    NPTL and Erlang's processes

    More than this far from each other ;-)

    It's not about feeling omniscient, it's about being so sick and tired of non-rails people talking about scaling without never having used it themselves. Lame, innit ?

    November 29, 1990 | Unregistered CommenterPratik

    Patrik:

    Don't feel personally offended by what I said. I'm just making a point about rails scalability, but first lets define some concepts[1]:

    - scalability: is a desirable property of a system, a network, or a process, which indicates its ability to either handle growing amounts of work in a graceful manner, or to be readily enlarged.
    - scale up: To scale vertically means to add resources to a single node in a system.
    - scale out: To scale horizontally means to add more nodes to a system.

    On a real live system you have to worry about the two means of scalability: scale out and scale up. And why is that?
    Take a system that handle one request per node each time and scales out easily. When the load on your system grows the only thing you can do is scale out, adding more nodes. Soon you'll see that is impracticable, you need to be able to scale up also.

    The problem with rails is: it is a non thread-safe behemoth. I may be wrong, but what you think the system described on this post runs only 16 mongrels per node? And 16 mongrels means 16 simultaneous requests. This is the only point I'm trying to make.

    And I'm not considering development time and all other Rails claims, or if it is good or any other Rails merit. As any tool, Rails isn't suitable for all jobs.

    If Rails was thread-safe you could run just one mongrel and use the ruby green threads (maybe hundreds of it), and even the event driven mongrel.

    REFS:

    [1] http://en.wikipedia.org/wiki/Scalability

    November 29, 1990 | Unregistered CommenterLucindo

    Lucindo,

    Yeah, 16 mongrels == 16 concurrent requests. But *practically* speaking, overall throughput is of much more importance than that.

    Also, ruby uses green threads which suffer from IO blocking. Due to that, thread never give true concurrency and you need to run multiple processes in order to exhaust available cpu/memory.

    November 29, 1990 | Unregistered CommenterPratik

    Love the application, my friends and I are starting careers in Investment Banking and Trading, so we find it a great way to relax and unwind. Our trades are mostly within a select circle of mutual friends. A suggestion (if not already in progress) would be a "friends online now" feature on Friends for Sale.

    Keep up the good work

    November 29, 1990 | Unregistered CommenterAnonymous
