Wednesday
Sep 16, 2009

Paper: A practical scalable distributed B-tree

We've seen a lot of NoSQL action lately built around distributed hash tables. B-trees are getting jealous. B-trees, once the kings of the database world, want their throne back. Paul Buchheit surfaced a paper, A practical scalable distributed B-tree by Marcos K. Aguilera and Wojciech Golab, that might help spark a revolution.

From the Abstract:

We propose a new algorithm for a practical, fault tolerant, and scalable B-tree distributed over a set of servers. Our algorithm supports practical features not present in prior work: transactions that allow atomic execution of multiple operations over multiple B-trees, online migration of B-tree nodes between servers, and dynamic addition and removal of servers. Moreover, our algorithm is conceptually simple: we use transactions to manipulate B-tree nodes so that clients need not use complicated concurrency and locking protocols used in prior work. To execute these transactions quickly, we rely on three techniques: (1) We use optimistic concurrency control, so that B-tree nodes are not locked during transaction execution, only during commit. This well-known technique works well because B-trees have little contention on update. (2) We replicate inner nodes at clients. These replicas are lazy, and hence lightweight, and they are very helpful to reduce client-server communication while traversing the B-tree. (3) We replicate version numbers of inner nodes across servers, so that clients can validate their transactions efficiently, without creating bottlenecks at the root node and other upper levels in the tree.
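A minimal sketch of technique (1), optimistic concurrency control with version validation, may make the idea concrete. The names and structure here are illustrative, not the authors' actual implementation: reads record the version of each node they touch, and commit succeeds only if those versions are unchanged.

```python
# Sketch of optimistic concurrency control over versioned B-tree nodes.
# Illustrative only: class and method names are not from the paper.

class Node:
    def __init__(self, keys):
        self.keys = keys
        self.version = 0  # bumped on every committed change

class Store:
    """Stand-in for one B-tree server holding nodes by id."""
    def __init__(self):
        self.nodes = {}

    def begin(self):
        return Txn(self)

class Txn:
    def __init__(self, store):
        self.store = store
        self.read_versions = {}   # node_id -> version observed at read time
        self.writes = {}          # node_id -> new keys, buffered until commit

    def read(self, node_id):
        node = self.store.nodes[node_id]
        self.read_versions[node_id] = node.version  # no lock taken
        return list(node.keys)

    def write(self, node_id, keys):
        self.writes[node_id] = keys

    def commit(self):
        # Validate: every node we read must still have the version we saw.
        for node_id, v in self.read_versions.items():
            if self.store.nodes[node_id].version != v:
                return False  # conflict: the caller retries the transaction
        # Validation passed: apply buffered writes and bump versions.
        for node_id, keys in self.writes.items():
            node = self.store.nodes[node_id]
            node.keys = keys
            node.version += 1
        return True
```

Because nothing is locked during execution, two transactions can read the same node concurrently; the later committer simply fails validation and retries, which is cheap when contention is low, as the abstract argues it is for B-tree updates.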

Distributed hash tables are scalable because records are easily distributed across a cluster, which gives the golden ability to perform many writes in parallel. The problem is that access is limited to single keys.

A lot of the time you want to iterate through records or search records in sorted order. Sorted could mean timestamp order or last-name order, for example.

Access to data in sorted order is what B-trees are for. But we simply haven't seen distributed B-tree systems develop. Instead, you would have to use some sort of map-reduce mechanism to efficiently scan all the records, or you would have to maintain the information in some other way.

This paper points the way to do some really cool things at a system level:

  • It's distributed so it can scale dynamically in size and handle writes in parallel.
  • It supports adding and dropping servers dynamically, which is an essential requirement for architectures based on elastic cloud infrastructures.
  • Data can be migrated to other nodes, which is essential for maintenance.
  • Multiple records can be involved in transactions, which is essential for the complex data manipulations that happen in real systems. This is accomplished via a version number mechanism that looks something like MVCC.
  • Optimistic concurrency, that is, the ability to change data without explicit locking, makes the job for programmers a lot easier.

    These are the kind of features needed for systems in the field. Hopefully we'll start seeing more systems offering richer access structures while still maintaining scalability.

    Sunday
    Sep 13, 2009

    How does Berkeley DB fare against other key-value databases?

    I want to know how Berkeley DB compares against other key-value solutions. I read on the net that Google uses it for their Enterprise Sign-On feature. Does anyone have experience using Berkeley DB? Backward compatibility is poor in Berkeley DB, but that is fine for me. How easy is it to scale using Berkeley DB?

    Saturday
    Sep 12, 2009

    How Google Taught Me to Cache and Cash-In

    A user named Apathy, in a discussion of how Reddit scales some of its features, shares advice he learned while working at Google and other major companies.

    To be fair, I [Apathy] was working at Google at the time, and every job I held between 1995 and 2005 involved at least one of the largest websites on the planet. I didn't come up with any of these ideas, just watched other smart people I worked with who knew what they were doing and found (or wrote) tools that did the same things. But the theme is always the same:

    1. Cache everything you can and store the rest in some sort of database (not necessarily relational and not necessarily centralized).
    2. Cache everything that doesn't change rapidly. Most of the time you don't have to hit the database for anything other than checking whether the user's new message count has transitioned from 0 to (1 or more).
    3. Cache everything--templates, user message status, the front page components--and hit the database once a minute or so to update the front page, forums, etc. This was sufficient to handle a site with a million hits a day on a couple of servers. The site was sold for $100K.
    4. Cache the users' subreddits. Blow out the cache on update.
    5. Cache the top links per subreddit. Blow out cache on update.
    6. Combine the previous two steps to generate a menu from cached blocks.
    7. Cache the last links. Blow out the cache on each outlink click.
    8. Cache the user's friends. Append 3 characters to their name.
    9. Cache the user's karma. Blow out on up/down vote.
    10. Filter via conditional formatting, CSS, and an ajax update.
    11. Decouple selection/ranking algorithm(s) from display.
    12. Use Google or something like Xapian or Lucene for search.
    13. Cache "for as long as memcached will stay up." That depends on how many instances you're running, what else is running, how stable the Python memcached hooks are, etc.
    14. The golden rule of website engineering is that you don't try to enforce partial ordering simultaneously with your updates.
    15. When running a search engine operate the crawler separately from the indexer.
    16. Ranking scores are used as necessary from the index, usually cached for popular queries.
    17. Re-rank popular subreddits or the front page once a minute. Tabulate votes and pump them through the ranker.
    18. Cache the top 100 per subreddit. Then cache numbers 100-200 when someone bothers to visit the 5th page of a subreddit, etc.
    19. For less-popular subreddits, you cache the results until an update comes in.
    20. With enough horsepower and common sense, almost any volume of data can be managed, just not in realtime.
    21. Never ever mix your reads and writes if you can help it.
    22. Merge all the normalized rankings and cache the output every minute or so. This avoids thousands of queries per second just for personalization.
    23. It's a lot cheaper to merge cached lists than build them from scratch. This delays the crushing read/write bottleneck at the database. But you have to write the code.
    24. Layering caches is a classic strategy for milking your servers as much as possible. First look for an exact match. If that's not found, look for the components and build an exact match. If one or more of the components aren't found, regenerate those from the DB (now it's cached!) and proceed. Never hit the database unless you have to.
    25. The majority of traffic on almost all websites comes from the default, un-logged-in front page or from random forum/comment/result pages. Make sure those are cached as much as possible.
    26. You (almost) always have to hit the database on writes. The key is to avoid hitting it for reads until you're forced to do so.
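    The "cache it, blow out the cache on update" pattern that recurs throughout the list can be sketched in a few lines. Here a plain dict stands in for memcached, and db_fetch_top_links is a hypothetical placeholder for the real database query:

```python
# Sketch of the cache-aside / invalidate-on-update pattern from the list
# above. A dict stands in for memcached; db_fetch_top_links is a
# hypothetical placeholder for an expensive database query.

cache = {}

def db_fetch_top_links(subreddit):
    # placeholder for the expensive database query
    return ["link-a", "link-b"]

def get_top_links(subreddit):
    key = "top:" + subreddit
    if key not in cache:               # cache miss: hit the DB once...
        cache[key] = db_fetch_top_links(subreddit)
    return cache[key]                  # ...then serve from memory

def on_vote(subreddit):
    # an update "blows out" the cached entry; the next read
    # regenerates it from the database and re-caches it
    cache.pop("top:" + subreddit, None)
```

    The same shape applies to subreddit lists, karma, and front-page blocks: reads populate the cache, writes only invalidate, and regeneration happens lazily on the next read.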
    Friday
    Sep 11, 2009

    The interactive cloud

    How many times have you been called in the middle of the night by your operations guys telling you that your application is throwing odd red alerts? How many times did you find out that when those issues happen you don't have enough information to analyze the incident? Have you tried increasing the log level, only to find that your problem became even worse: now your application throws out tons of information on a continuous basis, most of which is complete garbage...

    The current separation between the way we implement our applications and the way we manage them leads to many of these ridiculous situations. The cloud makes those things even worse.

    In this post I suggest an alternative approach. Why don't we run our application the way we run our business? I refer to this approach as the "interactive cloud", where our application behaves just like our project team and operations behaves just like our managers. As in our business, our application would need to take more responsibility for the way it runs and take corrective actions, such as balancing its own resources and reassigning tasks to the available resources in case of failure. It would need to involve its manager only when it runs out of resources, and to provide reports in a way that makes sense to our managers.

    The first part of this post describes the general concept behind this model, and the second part provides technical background, including code snippets based on our experience at GigaSpaces.

    Thursday
    Sep 10, 2009

    When optimizing - don't forget the Java Virtual Machine (JVM) 

    Recently, I was working on a project that was coming to a close. It was related to optimizing a database using a Java based in-memory cache to reduce the load. The application had to process up to a million objects per day and was characterized by its heavy use of memory and the high number of read, write and update operations. These operations were found to be the most costly, which meant that optimization efforts were concentrated here.

    The project had already achieved impressive performance increases, but one question remained unanswered - would changing the JVM increase performance?


    Read more at: http://bigdatamatters.com/bigdatamatters/2009/08/jvm-performance.html

    Thursday
    Sep 10, 2009

    The technology behind Tornado, FriendFeed's web server

    Today, we are open sourcing the non-blocking web server and the tools that power FriendFeed under the name Tornado Web Server. We are really excited to open source this project as a part of Facebook's open source initiative, and we hope it will be useful to others building real-time web services.

    You can download Tornado at tornadoweb.org.

    Read more on Bret Taylor's blog (co-founder of FriendFeed)

    Thursday
    Sep 10, 2009

    Building Scalable Databases: Denormalization, the NoSQL Movement and Digg

    Database normalization is a technique for designing relational database schemas that ensures that the data is optimal for ad-hoc querying and that modifications such as deletion or insertion of data do not lead to data inconsistency. Database denormalization is the process of optimizing your database for reads by creating redundant data. A consequence of denormalization is that insertions or deletions could cause data inconsistency if not uniformly applied to all redundant copies of the data within the database.

    Read more on Carnage4life blog...
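    The trade-off described above can be sketched in a few lines. The schema and names here are illustrative, not Digg's actual design: a comment count is stored redundantly on the story so reads skip an aggregate query, and every write must keep the copies in sync or they drift into exactly the inconsistency the post warns about.

```python
# Toy illustration of denormalization. `comments` is the normalized source
# of truth; `comment_count` on the story is the redundant copy that makes
# reads cheap. Names and schema are illustrative.

stories = {1: {"title": "Digg article", "comment_count": 0}}
comments = []  # normalized: one row per comment

def add_comment(story_id, text):
    comments.append({"story_id": story_id, "text": text})
    # the write path must update the redundant copy too,
    # or the denormalized count silently drifts
    stories[story_id]["comment_count"] += 1

def front_page():
    # the read path never scans `comments`
    return [(s["title"], s["comment_count"]) for s in stories.values()]
```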

    Thursday
    Sep 10, 2009

    How to handle so many socket connections

    In my application, we receive requests from many clients through sockets. A client can connect to this socket and send data, and the connection has to be maintained for indefinite hours. Clients are continuously sending data, and many clients are connected to the server simultaneously. I am using Java to build the application, which listens on a port and processes the data. How can I scale this socket overhead? Is there any product which helps in maintaining sockets?
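    One common answer is event-driven I/O multiplexing: a single thread watches all the sockets and handles whichever are ready, instead of dedicating a thread per connection. In Java this is the java.nio Selector API, or a framework like Netty built on it. A minimal sketch of the idea, in Python for brevity, with illustrative names:

```python
# Sketch of handling many long-lived connections with one event-loop
# thread via readiness notification (the Java equivalent is java.nio
# Selector, or frameworks such as Netty). Illustrative, not production code.

import selectors
import socket

sel = selectors.DefaultSelector()

def process(data: bytes) -> bytes:
    # placeholder for the per-message processing the question describes
    return data.upper()

def accept(server):
    conn, _addr = server.accept()
    conn.setblocking(False)            # never block the loop on one client
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    data = conn.recv(4096)
    if data:
        conn.sendall(process(data))
    else:                              # empty read: client closed the connection
        sel.unregister(conn)
        conn.close()

def serve(port=9000):
    server = socket.socket()
    server.bind(("", port))
    server.listen(1024)
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)
    while True:                        # single thread, many sockets
        for key, _mask in sel.select():
            key.data(key.fileobj)      # dispatch to accept() or handle()

if __name__ == "__main__":
    serve()
```

    The key design point is that blocking per-connection threads are replaced by one loop that only touches sockets the OS reports as ready, so tens of thousands of mostly-idle connections cost little.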

    Wednesday
    Sep 9, 2009

    GridwiseTech revolutionizes data management

    GridwiseTech has developed AdHoc, an advanced framework for sharing geographically distributed data and compute resources. It simplifies resource management and makes cooperation secure and effective.
    The premise of AdHoc is to enable each member of an associated institution to control access to his or her resources without an IT administrator's help, with a high security level assured for any exposed data or applications.
    It takes 3 easy steps to establish cooperation within AdHoc: create a virtual organization, add resources and share them. The application can be implemented within any organization to exchange data and resources or between institutions to join forces for more efficient results.
    AdHoc was initially created for a consortium of hospitals and institutions to share medical data sets. As a technical partner in that project, GridwiseTech implemented the Security Framework to provide access to that data and designed a graphical tool to facilitate the administration of the entire system.

    Every participant agreed to grant access to its resources to other partners in the project. Analysis of more patients’ records meant bigger samples and, potentially, better research. As most of these data are subject to a strict privacy policy, they could only be accessible for specific research purposes within defined time periods. In each case, patients’ identity remained anonymous and they provided consent to use their data for experiments. AdHoc enabled easy dynamic access rights management and, at the same time, prevented unauthorized access to sensitive information.
    “Advanced international scientific consortia need to set up ad-hoc collaborations. For this reason, we used the concept of Virtual Organizations, introduced by international Grid projects. However, to create such a VO and grant people access to different resources, a lot of administrative effort is needed, including admins' time and paperwork. GridwiseTech's AdHoc software is the first application I know of with truly dynamic Virtual Organizations, where users themselves are responsible for their resources and can share them easily in real time without involving an administrator,” said Andrea De Luca, Clinician and Researcher at the Institute of Clinical Infectious Diseases, Catholic University of Rome, Italy.
    In this critical domain, the GridwiseTech software system proved to be versatile. Its combination of security and simplicity makes it a unique tool for rapid collaborations and modern e-Science.
    Read more at www.gridwisetech.com/adhoc

    Acknowledgments
    -AdHoc is based on open-source components such as Shibboleth from Internet2.
    -AdHoc was used within the ViroLab project, an EU-funded research initiative in the scope of the 6th Framework Programme. ViroLab's main objective is to develop a “Virtual Laboratory” for medical experts, enabling clinical studies, medical knowledge discovery, and decision support for HIV drug resistance.

    Monday
    Sep 7, 2009

    Product: Infinispan - Open Source Data Grid

    Infinispan is a highly scalable, open source licensed data grid platform in the style of GigaSpaces and Oracle Coherence.

    From their website:

    The purpose of Infinispan is to expose a data structure that is highly concurrent, designed ground-up to make the most of modern multi-processor/multi-core architectures while at the same time providing distributed cache capabilities. At its core Infinispan exposes a JSR-107 (JCACHE) compatible Cache interface (which in turn extends java.util.Map). It is also optionally backed by a peer-to-peer network architecture to distribute state efficiently around a data grid.

    Offering high availability via making replicas of state across a network as well as optionally persisting state to configurable cache stores, Infinispan offers enterprise features such as efficient eviction algorithms to control memory usage as well as JTA compatibility.

    In addition to the peer-to-peer architecture of Infinispan, on the roadmap is the ability to run farms of Infinispan instances as servers and connecting to them using a plethora of clients - both written in Java as well as other popular platforms.

    A few observations:

  • Open source is an important consideration, depending on your business model. As you scale out, your costs don't go up. The downside is you'll likely put in more programming effort to implement capabilities the commercial products have already solved.
  • It's from the makers of JBoss Cache, so it's likely to have a solid implementation, even this early in its development cycle. The API looks very well thought out.
  • Java only. Plan is to add more bindings in the future.
  • Distributed hash table only. Commercial products have very advanced features like distributed query processing which can make all the difference during implementation. We'll see how the product expands from its caching roots into a full fledged data manipulation platform.
  • MVCC and a STM-like approach provide lock- and synchronization-free data structures. This means dust off all those non-blocking algorithms you've never used before. It will be very interesting to see how this approach performs under real-life loads programmed by real-life programmers not used to such techniques.
  • Data is made safe using a configurable degree of redundancy. State is distributed across a cluster. And it's peer-to-peer, there's no central server.
  • API based (put and get operations). XML, bytecode manipulation and JVM hooks aren't used.
  • Future plans call for adding a compute-grid for map-reduce style operations.
  • Distributed transactions across multiple objects are supported. It also offers eviction strategies to ensure individual nodes do not run out of memory and passivation/overflow to disk. Warm-starts using preloads are also supported.

    It's exciting to have an open source grid alternative. It will be interesting to see how Infinispan develops in quality and its feature set. Making a mission critical system of this type is no simple task.

    I don't necessarily see Infinispan as just a competitor for obvious players like GigaSpaces and Coherence; it may play even more strongly in the NoSQL space. For people looking for a reliable, highly performant, scalable, transaction-aware hash storage system, Infinispan may look even more attractive than a lot of the disk-based systems.

    Related Articles

  • Video Interview with Manik Surtani, Founder & Project Lead at JBoss Cache, Infinispan Data Grid
  • Infinispan Interview by Mark Little on InfoQ.
  • Are Cloud Based Memory Architectures the Next Big Thing?
  • Infinispan - data grids meets open source on TheServerSide.com
  • Technical FAQs
  • Anti-RDBMS: A list of distributed key-value stores
  • Infinispan Wiki
  • Distribution instead of Buddy Replication