Monday, August 13, 2012

Ask HighScalability: Facing scaling issues with news feeds on Redis. Any advice?

We just released a social section to our iOS app several days ago and we are already facing scaling issues with the users' news feeds.

We're basically using a fan-out-on-write (push) model for the users' news feeds (posts of people and topics they follow), and we're using Redis for this (backend is Rails on Heroku). However, our current 60,000 news feeds have ballooned our Redis store to almost 1 GB in just a few days (it's growing way too fast for our budget). Currently we're storing the entire news feed for the user (post ID, post text, author, icon URL, etc.), and we cap the entries at 300 per feed.
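For reference, a fan-out-on-write feed like the one described above can be sketched in a few lines of Ruby. This is a minimal, self-contained illustration: a plain Hash of Arrays stands in for Redis lists, so with the real redis gem the two operations in `push` would be `LPUSH` and `LTRIM` on a `feed:<user_id>` key. The class and method names are illustrative, not an existing API.

```ruby
require "json"

FEED_CAP = 300

# In-memory stand-in for Redis lists, newest entry first.
class FeedStore
  def initialize
    @feeds = Hash.new { |h, k| h[k] = [] }
  end

  # Equivalent of LPUSH + LTRIM: prepend the post, then cap the list.
  def push(user_id, post)
    feed = @feeds["feed:#{user_id}"]
    feed.unshift(JSON.generate(post))
    feed.slice!(FEED_CAP..-1)
  end

  def feed(user_id, count = 20)
    @feeds["feed:#{user_id}"].first(count).map { |s| JSON.parse(s) }
  end
end

# Fan-out-on-write: when an author posts, copy the entry into
# every follower's feed list.
def fan_out(store, follower_ids, post)
  follower_ids.each { |fid| store.push(fid, post) }
end
```

Note that each follower's feed holds a full copy of the post, which is exactly why memory grows with followers × feed length when whole posts are stored per feed.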

I'm wondering if we should just store the post IDs of each user feed in Redis and then store the rest of the post information somewhere else? Would love some feedback here. In this case, our iOS app would make an API call to our Rails app to retrieve a user's news feed. The Rails app would retrieve the news feed list (just post IDs) from Redis, and then it would need another query to get the rest of the info for each post. Should we query our Postgres DB directly? But that would be a lot of calls to our DB. Should we create another Redis store (so at least it's in memory) where we keep all of the posts from our DB and query it for the post information? Or should we drop Redis and go with MongoDB or Cassandra so we can have higher storage limits?
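The normalized layout being asked about here can be sketched as follows. Feeds hold only post IDs; each post body is stored once in a shared lookup and fetched in a single batched read (the equivalent of `HMGET` on a Redis hash) rather than one database query per post. Plain Hashes again stand in for Redis structures, so this is an assumption-laden sketch, not production code.

```ruby
require "json"

# Feeds store only IDs; post bodies live once in a shared table.
class NormalizedStore
  def initialize
    @feeds = Hash.new { |h, k| h[k] = [] }  # "feed:<uid>" => [post_id, ...]
    @posts = {}                             # post_id => serialized post
  end

  def write_post(post)
    @posts[post["id"]] = JSON.generate(post)
  end

  def push_id(user_id, post_id)
    @feeds["feed:#{user_id}"].unshift(post_id)
  end

  # One lookup for the ID list, one batched fetch for the bodies
  # (HMGET equivalent) -- no per-post database round trips.
  def read_feed(user_id, count = 20)
    ids = @feeds["feed:#{user_id}"].first(count)
    ids.map { |id| JSON.parse(@posts[id]) }
  end
end
```

With this split, each feed entry shrinks from a full post (text, author, icon URL) to a single ID, and a post pushed to N followers is stored once instead of N times.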

Thanks for your help in advance.

Reader Comments (28)

Redis is powerful when all the data fits in memory, but once it overflows you have to evict something, so use Redis as a cache, not as the store. The store should be a relational database or another NoSQL store. Compared with memcached, Redis is useful for its list/set/hash data structures, and it can do replication for high availability and back up to disk for fast recovery.

August 15, 2012 | Unregistered Commenterleverly

On one of my projects, we've had good results using MongoDB Replica Sets (we will probably be sharding the replica sets soon) for this type of thing, keeping just the IDs in Redis as well as the "hot" data (the last few days, in this case) in Redis, a la a reverse-proxy mechanism of sorts. It's a bit more involved than that, but that recipe has worked quite well. In one case we are processing various types of feeds with custom (simple) node.js stream processors.

This is a very effective recipe that lets us get the best out of MongoDB, Redis, and our search databases.
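The hot/cold split described in this comment amounts to a read-through cache, which can be sketched roughly like so. Plain Hashes stand in for Redis (hot, last few days) and MongoDB (the full store); the class name and structure are illustrative assumptions, not real client APIs.

```ruby
# Read-through cache over a hot store (Redis-like) and a cold,
# durable store (MongoDB-like), both faked here as plain Hashes.
class HotColdReader
  def initialize(hot, cold)
    @hot  = hot    # post_id => post, recent entries only
    @cold = cold   # post_id => post, everything
  end

  # Serve from the hot cache; on a miss, fall back to the durable
  # store and warm the cache on the way out.
  def fetch(post_id)
    @hot[post_id] ||= @cold[post_id]
  end
end
```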

Cheers,
Kent Langley
ProductionScale

August 19, 2012 | Unregistered CommenterKent Langley

Use disk and just concatenate news items to the end of a user's feed file on the filesystem.

August 21, 2012 | Unregistered Commenterhans
