Sunday
Jul 12, 2009

SPHiveDB: A mixture of the Key/Value Store and the Relational Database.

The Key/Value Store is becoming more and more popular. When we use a Key/Value Store to store objects, we need to serialize/deserialize the objects into a binary buffer. There are many ways to serialize/deserialize objects. One possible way is to use the Relational Database: every value we store in the Key/Value Store is a SQLite instance, so we can use the power of the Relational Database to manipulate the value. SQL is very powerful for processing query requests.

SPHiveDB = TokyoCabinet + SQLite
http://code.google.com/p/sphivedb/

SPHiveDB is a server for the SQLite database. It uses JSON-RPC over HTTP to expose a network interface for working with SQLite databases. It supports combining multiple SQLite databases into one file (through Tokyo Cabinet), and it also supports the use of multiple files.
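
To make that concrete, here is a minimal sketch of what a JSON-RPC call to such a server could look like in Python. The endpoint URL, method name, and parameter names below are illustrative guesses, not SPHiveDB's actual API.

    import json
    import urllib.request

    # Hypothetical JSON-RPC request: run a SQL statement against the SQLite
    # database selected by a key. Method and parameter names are illustrative
    # only, not SPHiveDB's real interface.
    request = {
        "method": "execute",
        "params": [{
            "dbfile": 0,                       # which combined file to use
            "user": "alice",                   # key selecting the SQLite instance
            "sql": "SELECT * FROM addrbook;",  # plain SQL, as in any SQLite db
        }],
        "id": 1,
    }

    req = urllib.request.Request(
        "http://localhost:8080/sphivedb",      # assumed server address
        data=json.dumps(request).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))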

Thursday
Jul 09, 2009

No to SQL? Anti-database movement gains steam – My Take

In this post I wrote my view on the anti-SQL database movement and where the alternative approaches fit in:

- SQL databases are not going away anytime soon.
- The current "one size fits all" database thinking was and is wrong.
- There is definitely a place for more specialized data management solutions alongside traditional SQL databases.

In addition to the options that were mentioned in the original article, I pointed out the in-memory alternative and how it fits into the puzzle. I used a real-life scenario: a scalable social-network-based eCommerce site, where I outlined how the in-memory approach was the only option that could scale and meet their application performance and response time requirements.

Wednesday
Jul 08, 2009

Server Components - How to Choose and Build the Perfect Server

There are a lot of questions about server components and how to build the perfect server while keeping power consumption in mind. Today I will discuss server components and how we can choose better ones, considering power consumption, efficiency, performance, and price.

Key points:

  • What kinds of components does a server need?
  • Green computing and server components
  • How much power does a server consume? (a rough estimate sketch follows this list)
  • Choosing the right components:
    • Processor
    • Hard Disk Drive
    • Memory
    • Operating system
  • Build a server, or buy one?
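
As a rough illustration of the power question above, here is a back-of-the-envelope sketch in Python. Every wattage, the load factor, and the electricity price are assumed example numbers, not measurements of any particular server.

    # Back-of-the-envelope server power estimate. Every number here is an
    # assumed example value, not a measurement of any specific hardware.
    components_watts = {
        "cpu": 95,        # typical TDP of a mid-range server processor
        "memory": 3 * 4,  # roughly 4 W per DIMM, 3 DIMMs
        "disks": 2 * 8,   # roughly 8 W per spinning disk, 2 disks
        "motherboard_and_fans": 40,
    }

    peak_watts = sum(components_watts.values())
    psu_efficiency = 0.85   # assumed power supply efficiency
    average_load = 0.6      # assumed average utilization
    price_per_kwh = 0.10    # assumed electricity price in USD

    wall_watts = peak_watts * average_load / psu_efficiency
    yearly_kwh = wall_watts * 24 * 365 / 1000
    print(f"~{wall_watts:.0f} W at the wall, ~{yearly_kwh:.0f} kWh/year, "
          f"~${yearly_kwh * price_per_kwh:.0f}/year")
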
Wednesday
Jul 08, 2009

Art of Parallelism presentation

This presentation is about parallel computing, and it covers the following topics:

  • What is parallelism?

  • Why now?

  • How it’s works?

  • What is the current options

  • Parallel Runtime Library. (for more information go there)
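
Purely as a toy illustration of the "how does it work?" question, here is a minimal data-parallelism sketch using Python's standard multiprocessing module; it is not part of the presentation and not the Parallel Runtime Library itself.

    from multiprocessing import Pool

    def expensive(n: int) -> int:
        """Stand-in for CPU-heavy work on one item."""
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        inputs = [200_000] * 8

        # Sequential: one item after another on a single core.
        sequential = [expensive(n) for n in inputs]

        # Parallel: the same items split across worker processes.
        with Pool(processes=4) as pool:
            parallel = pool.map(expensive, inputs)

        assert sequential == parallel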

Note: All of my presentations are open source, so feel free to copy them, use them, and redistribute them.
Download

Thursday
Jul 02, 2009

It Must Be Crap on Relational Databases Week

It's hard to be a relational database lately. After years of faithful service, everywhere you look the world is turning against you:

  • Recently at the NoSQL conference 150 revolutionaries met with their new anti-RDBMS arms suppliers. And you know what happens when revolutionaries are motivated, educated, funded, and well armed.
  • The revolution has gone mainstream when Computerworld writes No to SQL? Anti-database movement gains steam. It's not just whispers anymore, it's everywhere.
  • And perennial revolutionary Michael Stonebraker runs from blog to blog shouting The End of a DBMS Era (Might be Upon Us). Relational vendors are selling legacy software, they are 50x slower than the alternatives, and that cannot stand.
  • The Greek Chorus on Hacker News sings of anger and lies.

    Certainly some say stick with the past. It's your fault, you aren't doing it right, give us another chance and all will be as it ever was. Some smirk saying this is nothing but a return to a more ancient time when IBM was King.

    But it's in the air. It's in the code. A revolution is coming. To what? That is what is not yet clear.

    See also:
  • NoSQL? by Curt Monash.
  • CouchDB says BigTable clones are too complex.
  • Yahoo! Developer Network Blog. Very nice summary of the different talks.
Thursday
Jul 02, 2009

Product: HBase

    Update 3: Presentation from the NoSQL Conference: slides, video.
Update 2: Jim Wilson helps with Understanding HBase and BigTable by explaining them from a "conceptual standpoint."
    Update: InfoQ interview: HBase Leads Discuss Hadoop, BigTable and Distributed Databases. "MapReduce (both Google's and Hadoop's) is ideal for processing huge amounts of data with sizes that would not fit in a traditional database. Neither is appropriate for transaction/single request processing."

HBase is the open source answer to BigTable, Google's highly scalable distributed database. It is built on top of Hadoop, which implements functionality similar to Google's GFS and MapReduce systems.

    Both Google's GFS and Hadoop's HDFS provide a mechanism to reliably store large amounts of data. However, there is not really a mechanism for organizing the data and accessing only the parts that are of interest to a particular application.

BigTable (and HBase) provide a means for organizing and efficiently accessing these large data sets.
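
Purely as a conceptual sketch (not HBase's actual client API), a BigTable-style table can be pictured as a sparse map from row key to column family:qualifier to timestamped versions:

    import time
    from collections import defaultdict

    # Conceptual model only: a BigTable/HBase-style table as a nested, sparse
    # map of row key -> "family:qualifier" -> {timestamp: value}. Real HBase
    # keeps this sorted and distributed across servers; this is just the shape.
    table = defaultdict(lambda: defaultdict(dict))

    def put(row, column, value, ts=None):
        """Store one timestamped version of a cell."""
        table[row][column][time.time() if ts is None else ts] = value

    def get(row, column):
        """Return the newest version of a cell, or None if it was never set."""
        versions = table[row][column]
        return versions[max(versions)] if versions else None

    put("com.example/index.html", "contents:html", "<html>...</html>")
    put("com.example/index.html", "anchor:other.com", "Example link")
    print(get("com.example/index.html", "contents:html"))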

HBase is still not ready for production, but it's a glimpse into the power that will soon be available to your average website builder.

    Google is of course still way ahead of the game. They have huge core competencies in data center roll out and they will continually improve their stack.

    It will be interesting to see how these sorts of tools along with Software as a Service can be leveraged to create the next generation of systems.

Thursday
Jul 02, 2009

    Hypertable is a New BigTable Clone that Runs on HDFS or KFS

    Update 3: Presentation from the NoSQL conference: slides, video 1, video 2.

    Update 2: The folks at Hypertable would like you to know that Hypertable is now officially sponsored by Baidu, China’s Leading Search Engine. As a sponsor of Hypertable, Baidu has committed an industrious team of engineers, numerous servers, and support resources to improve the quality and development of the open source technology.

    Update: InfoQ interview on Hypertable Lead Discusses Hadoop and Distributed Databases. Hypertable differs from HBase in that it is a higher performance implementation of Bigtable.

Skrentablog gives the heads-up on Hypertable, Zvents' open-source BigTable clone. It's written in C++ and can run on top of either HDFS or KFS. Performance looks encouraging: 28 million rows of data inserted at a per-node write rate of 7 MB/sec.

Thursday
Jul 02, 2009

    Product: Facebook's Cassandra - A Massive Distributed Store

    Update 2: Presentation from the NoSQL conference: slides, video.
Update: Why you won't be building your killer app on a distributed hash table by Jonathan Ellis. Why I think Cassandra is the most promising of the open-source distributed databases: you get a relatively rich data model and a distribution model that supports efficient range queries. These are not things that can be grafted on top of a simpler DHT foundation, so Cassandra will be useful for a wider variety of applications.

James Hamilton has published a thorough summary of Facebook's Cassandra, another scalable key-value store for your perusal. It's open source and is described as a "BigTable data model running on a Dynamo-like infrastructure." Cassandra is used at Facebook for an email search system containing 25 TB of data and over 100 million mailboxes.
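
To illustrate the range-query point from the update above, here is a toy Python sketch (not Cassandra's partitioner code): order-preserving placement keeps a key range contiguous, while plain hash placement scatters it across nodes.

    import bisect
    import hashlib

    keys = [f"user:{i:04d}" for i in range(1000)]

    # Order-preserving placement: keys stay sorted, so a range scan is a
    # contiguous slice owned by only a few adjacent nodes.
    ordered = sorted(keys)
    lo = bisect.bisect_left(ordered, "user:0100")
    hi = bisect.bisect_right(ordered, "user:0199")
    range_scan = ordered[lo:hi]   # 100 adjacent keys

    # Hash placement (plain DHT style): the same logical range is scattered,
    # so answering it means consulting most or all of the nodes.
    def node_for(key, nodes=4):
        return int(hashlib.md5(key.encode()).hexdigest(), 16) % nodes

    scattered = {node_for(k) for k in range_scan}
    print(len(range_scan), "keys in range, spread over", len(scattered), "nodes when hashed")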

  • Google Code for Cassandra - A Structured Storage System on a P2P Network
  • SIGMOD 2008 Presentation.
  • Video Presentation at Facebook
  • Facebook Engineering Blog for Cassandra
  • Anti-RDBMS: A list of distributed key-value stores
  • Facebook Cassandra Architecture and Design by James Hamilton
Thursday
Jul 02, 2009

    Product: Project Voldemort - A Distributed Database

    Update: Presentation from the NoSQL conference: slides, video 1, video 2.

Project Voldemort is an open source implementation of the basic parts of Dynamo, Amazon's highly available key-value storage system. LinkedIn is using it in their production environment for "certain high-scalability storage problems where simple functional partitioning is not sufficient."

    From their website:

  • Data is automatically replicated over multiple servers.
  • Data is automatically partitioned so each server contains only a subset of the total data
  • Server failure is handled transparently
  • Pluggable serialization is supported to allow rich keys and values including lists and tuples with named fields, as well as to integrate with common serialization frameworks like Protocol Buffers, Thrift, and Java Serialization
  • Data items are versioned to maximize data integrity in failure scenarios without compromising availability of the system (see the versioning sketch after this list)
  • Each node is independent of other nodes with no central point of failure or coordination
  • Good single node performance: you can expect 10-20k operations per second depending on the machines, the network, and the replication factor
  • Support for pluggable data placement strategies to support things like distribution across data centers that are geographically far apart.
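
The versioning bullet above is essentially the Dynamo vector-clock idea. Here is a toy sketch of how conflicting versions can be detected; this is an illustration only, not Voldemort's implementation.

    # Toy version vectors (vector clocks) for spotting conflicting writes,
    # in the spirit of Dynamo-style versioning; not Voldemort's code.
    def increment(clock, node):
        clock = dict(clock)
        clock[node] = clock.get(node, 0) + 1
        return clock

    def descends(a, b):
        """True if version a includes everything in version b."""
        return all(a.get(node, 0) >= count for node, count in b.items())

    v1 = increment({}, "server_A")             # write handled by server A
    v2 = increment(v1, "server_A")             # later write, same server: supersedes v1
    v3 = increment(v1, "server_B")             # concurrent write on server B

    print(descends(v2, v1))                    # True: v2 replaces v1
    print(descends(v2, v3), descends(v3, v2))  # False, False: conflict, needs repair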

    They also have a nice design page going over some of their architectural choices: key-value store only, no complex queries or joins; consistent hashing is used to assign data to nodes; JSON is used for schema definition; versioning and read-repair for distributed consistency; a strict layered architecture with put, get, and delete as the interface between layers.
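
As a minimal sketch of the consistent-hashing choice mentioned above (illustrative only, not Voldemort's code), keys and nodes are hashed onto the same ring and each key belongs to the first node clockwise from its position:

    import bisect
    import hashlib

    def ring_position(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    class ConsistentHashRing:
        """Toy consistent-hash ring with virtual nodes; illustrative only."""

        def __init__(self, nodes, vnodes=100):
            self._ring = sorted(
                (ring_position(f"{node}#{i}"), node)
                for node in nodes for i in range(vnodes)
            )
            self._positions = [pos for pos, _ in self._ring]

        def node_for(self, key: str) -> str:
            # First virtual node clockwise from the key's position (wrapping around).
            idx = bisect.bisect(self._positions, ring_position(key)) % len(self._ring)
            return self._ring[idx][1]

    ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
    print(ring.node_for("user:1234"))   # the same key always maps to the same node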

    Just a hint when naming a project: don't name it after one of the most popular key words in muggledom. The only way someone will find your genius via search is with a dark spell. As I am a Good Witch I couldn't find much on Voldemort in the real world. But the idea is great and is very much in line with current thinking on scalable database design. Worth a look.

    Related Articles

  • The CouchDB Project
Wednesday
Jul 01, 2009

    Podcast about Facebook's Cassandra Project and the New Wave of Distributed Databases

    In this podcast, we interview Jonathan Ellis about how Facebook's open sourced Cassandra Project took lessons learned from Amazon's Dynamo and Google's BigTable to tackle the difficult problem of building a highly scalable, always available, distributed data store.