Thursday, Oct 01, 2009

Private Data Cloud: 'Do It Yourself' with Eucalyptus 

Private Clouds provide many of the benefits of the Public Cloud, namely elastic scalability, faster time-to-market and reduced OpEx, all within the Enterprise's own perimeter and in compliance with its governance. Leading commercial Private Cloud products come from VMware, Univa UD, and Unisys. Open source solutions include products like Globus Nimbus, Enomaly Elastic Computing Platform, RESERVOIR, and Eucalyptus.

Read more at: http://bigdatamatters.com/bigdatamatters/2009/09/private-cloud-eucalyptus.html

Thursday, Oct 01, 2009

Moving Beyond End-to-End Path Information to Optimize CDN Performance

You go through the expense of installing CDN nodes all over the globe to make sure users always have a node close by, and then you notice something curious and infuriating: clients still experience poor latencies. What's up with that? What do you do to find the problem? If you are Google, you build a tool (WhyHigh) to figure out what's up. This paper is about that tool and the unexpected problem of high latencies on CDNs. The main problems they found: inefficient routing to nearby nodes and packet queuing. But more useful is the architecture of WhyHigh and how it goes about identifying bottlenecks. And even more useful is the general belief in creating sophisticated tools to understand and improve your service. That's what professionals do. From the abstract:
Replicating content across a geographically distributed set of servers and redirecting clients to the closest server in terms of latency has emerged as a common paradigm for improving client performance. In this paper, we analyze latencies measured from servers in Google’s content distribution network (CDN) to clients all across the Internet to study the effectiveness of latency-based server selection. Our main result is that redirecting every client to the server with least latency does not suffice to optimize client latencies. First, even though most clients are served by a geographically nearby CDN node, a sizeable fraction of clients experience latencies several tens of milliseconds higher than other clients in the same region. Second, we find that queueing delays often override the benefits of a client interacting with a nearby server.
To help the administrators of Google’s CDN cope with these problems, we have built a system called WhyHigh. First, WhyHigh measures client latencies across all nodes in the CDN and correlates measurements to identify the prefixes affected by inflated latencies. Second, since clients in several thousand prefixes have poor latencies, WhyHigh prioritizes problems based on the impact that solving them would have, e.g., by identifying either an AS path common to several inflated prefixes or a CDN node where path inflation is widespread. Finally, WhyHigh diagnoses the causes for inflated latencies using active measurements such as traceroutes and pings, in combination with datasets such as BGP paths and flow records. Typical causes discovered include lack of peering, routing misconfigurations, and side-effects of traffic engineering. We have used WhyHigh to diagnose several instances of inflated latencies, and our efforts over the course of a year have significantly helped improve the performance offered to clients by Google’s CDN.
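The prefix-level correlation at the heart of WhyHigh is easy to illustrate. Below is a minimal Python sketch, not Google's implementation: the /24 granularity, the 50 ms threshold, and the sample data are all assumptions, and the real system compares against much richer baselines and datasets.

```python
from collections import defaultdict
from statistics import median

# (cdn_node, client_prefix, rtt_ms) samples. In a real system these would come
# from server-side RTT logs; the values here are made up for illustration.
samples = [
    ("node-sfo", "203.0.113.0/24", 25),
    ("node-sfo", "203.0.113.0/24", 30),
    ("node-sfo", "198.51.100.0/24", 140),
    ("node-sfo", "198.51.100.0/24", 155),
    ("node-sfo", "192.0.2.0/24", 28),
]

def inflated_prefixes(samples, threshold_ms=50):
    """Group RTTs by (node, prefix) and flag prefixes whose median latency
    sits well above the node-wide median -- the 'inflated' prefixes."""
    by_prefix = defaultdict(list)
    by_node = defaultdict(list)
    for node, prefix, rtt in samples:
        by_prefix[(node, prefix)].append(rtt)
        by_node[node].append(rtt)

    flagged = []
    for (node, prefix), rtts in by_prefix.items():
        inflation = median(rtts) - median(by_node[node])
        if inflation > threshold_ms:
            flagged.append((node, prefix, inflation))
    # Worst offenders first, mirroring WhyHigh's impact-based prioritization.
    return sorted(flagged, key=lambda item: -item[2])

print(inflated_prefixes(samples))
# e.g. [('node-sfo', '198.51.100.0/24', 117.5)]
```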

Related Articles

  • Product: Akamai
    Tuesday, Sep 22, 2009

    How Ravelry Scales to 10 Million Requests Using Rails

    Tim Bray has a wonderful interview with Casey Forbes, creator of Ravelry, a Ruby on Rails site supporting a 400,000+ strong community of dedicated knitters and crocheters.

    Casey and his small team have done great things with Ravelry. It is a very focused site that provides a lot of value for users. And users absolutely adore the site. That's obvious from their enthusiastic comments and rocket fast adoption of Ravelry.

    Ten years ago a site like Ravelry would have been a multi-million dollar operation. Today Casey is the sole engineer for Ravelry, and running it takes only a few people. He was able to code it in 4 months working nights and weekends. Take a look below at all the technologies used to make Ravelry and you'll see how it is constructed almost completely from free, off-the-shelf software that Casey has stitched together into a complete system. There's an amazing amount of leverage in today's ecosystem when you combine all the quality tools, languages, storage, bandwidth and hosting options.

    Now Casey and several employees make a living from Ravelry. Isn't that the dream of any small business? How might you go about doing the same thing?

    Site: http://www.ravelry.com

    Statistics

  • 10 million requests a day hit Rails (AJAX + RSS + API)
  • 3.6 million pageviews per day
  • 430,000 registered users. 70,000 active each day. 900 new sign ups per day.
  • 2.3 million knitting/crochet projects, 50,000 new forum posts each day, 19 million forum posts, 13 million private messages, 8 million photos (the majority are hosted by Flickr).
  • Started on a small VPS and demand exploded from the start.
  • Monetization: advertisers + merchandise store + pattern sales

    Platform

  • Ruby on Rails (1.8.6, Ruby GC patches)
  • Percona build of MySQL
  • Gentoo Linux
  • Servers: Silicon Mechanics (owned, not leased)
  • Hosting: Colocation with Hosted Solutions
  • Bandwidth: Cogent (very cheap)
  • Capistrano for deployment.
  • Nginx is much faster and less memory hungry than Apache.
  • Xen for virtualization
  • HAproxy for load balancing.
  • Munin for monitoring.
  • Tokyo Cabinet/Tyrant for large object caching
  • Nagios for alerts
  • HopToad for exception notifications.
  • NewRelic for tuning
  • Syslog-ng for log aggregation
  • S3 for storage
  • Cloudfront as a CDN
  • Sphinx for the search engine
  • Memcached for small object caching

    Architecture

  • 7 Servers (Gentoo Linux). Virtualization (Xen) creates 13 virtual servers.
  •  Front end uses Nginx and HAproxy. The request flow: nginx -> haproxy -> (load balanced) -> apache + mod_passenger. Nginx is first so it can provide functions like serving static files and redirects before passing a request to HAproxy for load balancing. Apache is probably used because it is more configurable than Nginx.
  •  One small backup server.
  • One small utility server for non-critical processes and staging.
  • Two 32 GB RAM servers for the master database, slave database, and Sphinx search engine.
  • Three application servers running 6 Apache Passenger/Ruby instances, each capped at a pool size of 20. In total: 6 quad-core processors and 40 GB of RAM. There's RAM to spare.
  • 5 terabytes of storage on Amazon S3. Cloudfront is used as a CDN.
  • Tokyo Cabinet/Tyrant is used instead of memcached in some places for caching larger objects, specifically markdown text that has been converted to HTML (a rough sketch follows this list).
  • HAproxy and Capistrano are used for rolling deploys of new versions of the site without affecting performance/traffic.
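    As a rough sketch of the larger-object caching mentioned in the list above, here is a hypothetical Python example. It assumes Tokyo Tyrant is running with its memcached-compatible protocol on port 1978 and leans on the python-memcached and markdown packages; Ravelry's actual Rails code is not described in the interview.

```python
import hashlib

import markdown   # pip install markdown
import memcache   # pip install python-memcached

# Tokyo Tyrant can expose a memcached-compatible protocol, so a plain
# memcached client can talk to it; the host and port here are assumptions.
tyrant = memcache.Client(["127.0.0.1:1978"])

def render_post(markdown_text):
    """Return the HTML for a forum post, caching the rendered result in
    Tyrant so the (potentially large) conversion is done only once."""
    key = "post-html-" + hashlib.sha1(markdown_text.encode("utf-8")).hexdigest()
    html = tyrant.get(key)
    if html is None:
        html = markdown.markdown(markdown_text)
        tyrant.set(key, html)
    return html

print(render_post("**Cast on** 120 stitches"))
```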

    Lessons Learned

  • Let your users create the site for you. Iterate and evolve. Start with something that works, get people in it, and build it together. Have a slow beta. Invite new people on slowly. Talk to the users about what they want every single day. Let your users help build your site. The result will be more reassuring, comforting, intuitive, and effective.
  • Let your users fund you. Ravelry was funded in part by users who donated $71K. That's a gift, not stock. Don't give up equity in your company. It took 6 months of working full time, plus bandwidth/server costs, before they started making a profit, and this money helped bridge that gap. The key is having a product users feel passionate about and being the kind of people users feel good about supporting. That requires love and authenticity.
  • Become the farmer's market of your niche. Find an underserved niche. Be anti-mass market. You don't always have to create something for the millions. The millions will likely yawn. Create something and do a good job for a smaller, passionate group and that passion will transfer over to you.
  • Success is not about scale, it’s about sustainable execution. This lovely quote is from Jeff Putz.
  • The database is always the problem. Nearly all of the scaling/tuning/performance related work is database related. For example, MySQL schema changes on large tables are painful if you don’t want any downtime. One of the arguments for schemaless databases.
  • Keep it fun. Casey switched to Ruby on Rails because he was looking to make programming fun again. That reenchantment helped make the site possible.
  • Invent new things that delight your users. Go for magic. Users like that. This is one of Costco's principles too. This link, for example, describes some very innovative approaches to forum management.
  • Ruby rocks. It's a fun language and allowed them to develop quickly and release the site twice a day during beta.
  • Capture more profit using low margin services. Ravelry has their own merchandise store, wholesale accounts, printers, and fulfillment company. This allows them to keep all their costs lower so their profits aren't going to third-party services like CafePress.
  • Going from one server to many servers is the hardest transition to make. Everything changes and becomes more difficult. Have this transition in mind when you are planning your architecture.
  • You can do a lot with a little in today's ecosystem. It doesn't take many people or much money anymore to build a complex site like Ravelry. Take a look at all the different programs Ravelry uses to build their site and how few people are needed to run it.

    Some people complain that there aren't a lot of nitty gritty details about how Ravelry works. I think it should be illuminating that a site of this size doesn't need to have a lavish description of arcane scaling strategies. It can now be built from off-the-shelf parts smartly put together. And that's pretty cool.

    Related Articles

  • Ravelry gets funding from its own community.
  • Apache/Passenger vs Nginx/Mongrel by Matt Darby
  • The Ravelry Blog (note the number of comments on posts).
  • Podcast - Episode 4: Y Ravelry (featuring Jess & Casey)
  • Beta testing and beyond
  • Hacker News Thread - I included the reasoning from a user named Brett for why the HTTP request path is "Nginx out front passing requests to HAProxy and THEN to Apache + mod_rails."
    Sunday, Sep 20, 2009

    PaxosLease: Diskless Paxos for Leases

    PaxosLease is a distributed algorithm for lease negotiation. It is based on Paxos, but does not require disk writes or clock synchrony. PaxosLease is used for master lease negotiation in the open-source Keyspace replicated key-value store.

    Saturday, Sep 19, 2009

    Space Based Programming in .NET

    Space-based architectures are an alternative to the traditional n-tier model for enterprise applications. Instead of vertical tier partitioning, space-based applications are partitioned horizontally into self-sufficient units. This leads to almost linear scalability of stateful, high-performance applications.
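    To make "self-sufficient units" concrete, here is a toy Python sketch of my own (the talk itself uses .NET with Oracle Coherence and GigaSpaces): each processing unit owns both the state and the logic for the keys routed to it, so requests never have to cross partitions.

```python
class ProcessingUnit:
    """A self-sufficient unit: it holds its own slice of state plus the
    business logic that operates on it, so no separate data tier is consulted."""
    def __init__(self):
        self.accounts = {}   # in-memory state local to this unit

    def deposit(self, account_id, amount):
        balance = self.accounts.get(account_id, 0) + amount
        self.accounts[account_id] = balance
        return balance

class Space:
    """Routes each request to the unit that owns the key. Because units share
    nothing, adding units scales capacity almost linearly."""
    def __init__(self, n_units):
        self.units = [ProcessingUnit() for _ in range(n_units)]

    def unit_for(self, account_id):
        return self.units[hash(account_id) % len(self.units)]

space = Space(n_units=4)
print(space.unit_for("alice").deposit("alice", 100))   # 100
print(space.unit_for("alice").deposit("alice", 50))    # 150 -- same unit every time
```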

    This is a recording of a talk I did last month where I introduce space based programming and demonstrate how that works in practice on the .NET platform using Oracle Coherence and GigaSpaces.

    Thursday, Sep 17, 2009

    Infinispan narrows the gap between open source and commercial data caches 

    Recently I attended a lecture presented by Manik Surtani, JBoss Cache & Infinispan project lead. The goal of the talk was to provide a technical overview of both products and outline Infinispan's road-map. Infinispan is the successor to the open-source JBoss Cache. JBoss Cache was originally targeted at simple web page caching, and Infinispan builds on this to take it into the Cloud paradigm.

    Why did I attend? Well, over the past few years I have worked on projects that have used commercial distributed caching (aka data grid) technologies such as GemFire, GigaSpaces XAP or Oracle Coherence. These projects required more functionality than is currently provided by open-source solutions such as memcached or EHCache. Looking at the road-map for Infinispan, I was struck by its ambition – will it provide the functionality that I need?

    Read more at: http://bigdatamatters.com/bigdatamatters/2009/09/infinispan-vs-gigaspaces.html

    Thursday, Sep 17, 2009

    Hot Links for 2009-9-17 

  • Save 25% on Hadoop Conference Tickets
    Apache Hadoop is a hot technology getting traction all over the enterprise and in the Web 2.0 world. Now, there's going to be a conference dedicated to learning more about Hadoop. It'll be Friday, October 2 at the Roosevelt Hotel in New York City.

    Hadoop World, as it's being called, will be the first Hadoop event on the east coast. Morning sessions feature talks by Amazon, Cloudera, Facebook, IBM, and Yahoo! Then it breaks out into three tracks: applications, development / administration, and extensions / ecosystems. In addition to the conference itself, there will also be 3 days of training prior to the event for those looking to go deeper. Beyond the general session speakers, presenters include Hadoop project creator Doug Cutting, as well as experts on large-scale data from Intel, Rackspace, SoftLayer, eHarmony, Supermicro, Impetus, Booz Allen Hamilton, Vertica, About.com, and other companies.

    Readers get a 25% discount if you register by Sept. 21: http://hadoop-world-nyc.eventbrite.com/?discount=hadoopworld_promotion_highscalability.

  • Essential storage tradeoff: Simple Reads vs. Simple Writes by Stephan Schmidt. Data in denormalized chunks is easy to read and complex to write (a toy example follows this list).
  • Kickfire's approach to parallelism by Daniel Abadi. Kickfire uses column-oriented storage and execution to address I/O bottlenecks and an FPGA-based data-flow architecture to address processing and memory bottlenecks.
  • "Just in Time" Decompression in Analytic Databases by Michael Stonebraker. A DBMS that is optimized for compression through and through--especially with a query executor that features just in time decompression will not just reduce IO and storage overhead, but also offer better query performance with lower CPU resource utilization.
  • Reverse Proxy Performance – Varnish vs. Squid (Part 2) by Bryan Migliorisi. My results show that in raw cache hit performance, Varnish puts Squid to shame.
  • Building Scalable Databases: Denormalization, the NoSQL Movement and Digg by Dare Obasanjo. As a Web developer it's always a good idea to know what the current practices are in the industry even if they seem a bit too crazy to adopt…yet.
  • How To Make Life Suck Less (While Making Scalable Systems) by Bradford Stephens. Scalable doesn’t imply cheap or easy. Just cheaper and easier.
  • Some perspective on the DIY storage server mentioned at StorageMojo, by Joerg Moellenkamp. It's about making decisions. Application and hardware have to be seen as one. When your application is capable of overcoming the limitations and problems of such ultra-cheap storage...
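    As a toy illustration of Stephan Schmidt's reads-vs-writes tradeoff above (my example, not from his post): denormalizing the author's name into every post makes reads a single lookup, but renaming that author becomes a fan-out write.

```python
# Normalized: renaming the author is one write, but every read needs a lookup
# into the users table (a "join").
users = {1: {"name": "alice"}}
posts_normalized = [{"author_id": 1, "text": "hi"}, {"author_id": 1, "text": "bye"}]

def read_normalized(post):
    return users[post["author_id"]]["name"], post["text"]

# Denormalized: each post carries a copy of the author's name, so reads are a
# single fetch -- but a rename becomes a fan-out write over every chunk.
posts_denormalized = [{"author": "alice", "text": "hi"}, {"author": "alice", "text": "bye"}]

def rename_denormalized(old_name, new_name):
    for post in posts_denormalized:          # complex write: touches every copy
        if post["author"] == old_name:
            post["author"] = new_name

print(read_normalized(posts_normalized[0]))  # ('alice', 'hi')
rename_denormalized("alice", "alicia")
print(posts_denormalized[0]["author"])       # 'alicia'
```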
    Wednesday, Sep 16, 2009

    The VeriScale Architecture - Elasticity and efficiency for private clouds

    The modern datacenter is evolving into the network-centric datacenter model, which is applied to both public and private cloud computing. In this model, networking, platform, storage, and software infrastructure are provided as services that scale up or down on demand. The network-centric model allows the datacenter to be viewed as a collection of automatically deployed and managed application services that utilize underlying virtualized services. Providing sufficient elasticity and scalability for the rapidly growing needs of the datacenter requires these collections of automatically-managed services to scale efficiently and with essentially no limits, letting services adapt easily to changing requirements and workloads.

    Sun's VeriScale architecture provides the architectural platform that can deliver these capabilities. Sun Microsystems has been developing open and modular infrastructure architectures for more than a decade. The features of these architectures, such as elasticity, are seen in current private and public cloud computing architectures, while the non-functional requirements, such as high availability and security, have always been a high priority for Sun. The VeriScale architecture leverages experience and knowledge from many Sun customer engagements and provides an excellent foundation for cloud computing. The VeriScale architecture can be implemented as an overlay, creating a virtual infrastructure on a public cloud, or it can be used to implement a private cloud.

    Read more at: http://wikis.sun.com/display/BluePrints/The+VeriScale+Architecture+-+Elasticity+and+Efficiency+for+Private+Clouds

    Wednesday, Sep 16, 2009

    Paper: A practical scalable distributed B-tree

    We've seen a lot of NoSQL action lately built around distributed hash tables. Btrees are getting jealous. Btrees, once the king of the database world, want their throne back. Paul Buchheit surfaced a paper: A practical scalable distributed B-tree by Marcos K. Aguilera and Wojciech Golab, that might help spark a revolution.

    From the Abstract:

    We propose a new algorithm for a practical, fault tolerant, and scalable B-tree distributed over a set of servers. Our algorithm supports practical features not present in prior work: transactions that allow atomic execution of multiple operations over multiple B-trees, online migration of B-tree nodes between servers, and dynamic addition and removal of servers. Moreover, our algorithm is conceptually simple: we use transactions to manipulate B-tree nodes so that clients need not use complicated concurrency and locking protocols used in prior work. To execute these transactions quickly, we rely on three techniques: (1) We use optimistic concurrency control, so that B-tree nodes are not locked during transaction execution, only during commit. This well-known technique works well because B-trees have little contention on update. (2) We replicate inner nodes at clients. These replicas are lazy, and hence lightweight, and they are very helpful to reduce client-server communication while traversing the B-tree. (3) We replicate version numbers of inner nodes across servers, so that clients can validate their transactions efficiently, without creating bottlenecks at the root node and other upper levels in the tree.

    Distributed hash tables are scalable because records are easily distributed across a cluster, which gives the golden ability to perform many writes in parallel. The problem is that keyed access is very limited.

    A lot of the time you want to iterate through records or search records in sorted order. Sorted could mean timestamp order or last-name order, for example.

    Access to data in sorted order is what btrees are for. But we simply haven't seen distributed btree systems develop. Instead, you would have to use some sort of map-reduce mechanism to efficiently scan all the records or you would have to maintain the information in some other way.
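    A small Python sketch of the difference (my illustration, not from the paper): a hash table answers point lookups but has no useful ordering, while keeping keys sorted, which is what a B-tree maintains incrementally, makes range scans cheap.

```python
import bisect

records = {"2009-09-13": "post A", "2009-09-16": "post B", "2009-09-17": "post C"}

# Hash-style access: excellent for point lookups and parallel writes...
print(records["2009-09-16"])                   # one key at a time

# ...but answering "everything between these timestamps" needs sorted keys,
# which is exactly the structure a B-tree keeps up to date on every insert.
sorted_keys = sorted(records)

def range_scan(start, end):
    lo = bisect.bisect_left(sorted_keys, start)
    hi = bisect.bisect_right(sorted_keys, end)
    return [(k, records[k]) for k in sorted_keys[lo:hi]]

print(range_scan("2009-09-14", "2009-09-17"))  # posts in timestamp order
```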

    This paper points the way to do some really cool things at a system level:

  • It's distributed so it can scale dynamically in size and handle writes in parallel.
  • It supports adding and dropping servers dynamically, which is an essential requirement for architectures based on elastic cloud infrastructures.
  • Data can be migrated to other nodes, which is essential for maintenance.
  • Multiple records can be involved in transactions, which is essential for the complex data manipulations that happen in real systems. This is accomplished via a version number mechanism that looks something like MVCC.
  • Optimistic concurrency, that is, the ability to change data without explicit locking, makes the job for programmers a lot easier (a toy sketch follows this list).
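    Here is a minimal sketch of the version-number flavor of optimistic concurrency (my illustration, far simpler than the paper's multi-node B-tree transactions): read a value together with its version, do the work without holding any lock, and commit only if the version is still the one you read.

```python
class VersionedStore:
    """Toy optimistic concurrency control: writers never take a lock; a commit
    succeeds only if the record's version is unchanged since it was read."""
    def __init__(self):
        self.data = {}                          # key -> (value, version)

    def read(self, key):
        return self.data.get(key, (None, 0))

    def commit(self, key, new_value, expected_version):
        _, current_version = self.data.get(key, (None, 0))
        if current_version != expected_version:
            return False                        # conflict: someone committed first
        self.data[key] = (new_value, current_version + 1)
        return True

store = VersionedStore()

def increment(key):
    while True:                                 # retry loop instead of a lock
        value, version = store.read(key)
        if store.commit(key, (value or 0) + 1, version):
            return

increment("counter")
increment("counter")
print(store.read("counter"))   # (2, 2) -> value 2 at version 2
```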

    These are the kind of features needed for systems in the field. Hopefully we'll start seeing more systems offering richer access structures while still maintaining scalability.
    Sunday, Sep 13, 2009

    How does Berkeley DB fare against other key-value databases?

    I want to know how Berkeley DB compares against other key-value solutions. I read on the net that Google uses it for their Enterprise Sign-on feature. Does anyone have any experience using Berkeley DB? Backward compatibility is poor in Berkeley DB, but that is fine for me. How easy is it to scale using Berkeley DB?