Improving performance and scalability with DDD

Distributed systems are not typically a place where domain-driven design (DDD) is applied. Distributed processing projects often start with an overall architecture vision and an idea of the processing model that basically drives everything else, including the object design, if one exists at all. Elaborate object designs are thought of as something that just gets in the way of distribution and performance, so the idea of spending time applying DDD principles gets rejected in favour of raw throughput and processing power. In my experience, however, some of the more advanced DDD concepts can significantly improve the performance, scalability and throughput of distributed systems when applied correctly.

This article is a summary of the presentation titled "DDD in a distributed world" from the DDD Exchange 09 in London.


Building a data cycle at LinkedIn with Hadoop and Project Voldemort

Update: Building Voldemort read-only stores with Hadoop.

A write-up on what LinkedIn is doing to integrate large offline Hadoop data-processing jobs with a fast, distributed online key-value storage system, Project Voldemort.


GemFire 6.0: New innovations in data management

GemStone has unveiled GemFire 6.0, the culmination of several years of development and of continuously solving some of the hardest data management problems in the world. With this release GemFire touts some of the latest innovative features in data management.

In this release:

- GemFire introduces a resource manager that continuously monitors cache instances and protects them from running out of memory. It can trigger rebalancing to migrate data to less loaded nodes, and it allows the number of nodes hosting data to be increased or decreased dynamically for linear scalability, without impeding ongoing operations (no contention points).

- GemFire provides explicit control over when rebalancing can be triggered, on what class of data and even allows the administrator to simulate a "rebalance" operation to quantify the benefits before actually doing it.

- With built-in instrumentation that captures throughput and latency metrics, GemFire now enables applications to sense changing performance patterns, proactively provision extra resources and trigger rebalancing. The end result is predictable data access throughput and latency without the need to overprovision capacity.

- We continue down the path of making the product more resilient than ever before - dealing with complex membership issues when operating in large clusters, and allowing memory-consumption thresholds to be set on any server JVM, which significantly reduces the probability of "stop the world" garbage collection cycles.

- Advanced Data Partitioning: Applications are no longer restricted by the memory available across the cluster to manage partitioned data. Applications can pool available memory as well as disk and stripe the data across memory and disk throughout the cluster. When the data fabric is configured as a cache, partitioned data can be expired or evicted so that only the most frequently used data is managed.

- Data-aware application behavior routing: Several extensions have been added to the GemFire data-aware function execution service - a simple grid programming model that lets applications execute behavior synchronously or asynchronously on the data nodes. Applications invoke functions with hints about the data they depend on, and the service parallelizes execution of the function on all the grid nodes where that data is managed. Applications can now define relationships between different classes of data to colocate all related data sets, and application functions routed to the data nodes can execute complex queries on in-process data. These and other features of the function execution service offer linear scalability for compute- and data-intensive applications: simply add more nodes when demand spikes to rebalance data and behavior and increase the overall throughput of your application. (A short sketch of invoking such a data-aware function follows this list.)

- API additions for C++ and C#: support for continuous querying, client-side connection pooling, dynamic load balancing, and the ability to invoke server-side functions.

- Cost-based query optimization: A new compact index to conserve memory utilization and an enhanced query processor design with cost-based optimization have been introduced as part of this release.

- Developer productivity tools: Quickly developing and testing a clustered application can be daunting. Developers need the ability to browse distributed data with ad-hoc queries, apply corrections, and monitor resource utilization and performance metrics. A new graphical data browser permits browsing and editing data across the entire cluster, executing ad-hoc queries and even creating real-time table views that are continuously kept up to date through continuous queries. The GemFire Monitor tool (GFMon) also has several enhancements that make it much more developer friendly.
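
To make the data-aware routing concrete, here is a rough sketch of a function that runs on the data nodes together with its invocation. The class and method names follow the function execution API of Apache Geode, the open-source descendant of GemFire, so details may differ from the 6.0 release; OrderTotalFunction, the orders region and the customer keys are hypothetical.

    import java.util.List;
    import java.util.Set;

    import org.apache.geode.cache.Region;
    import org.apache.geode.cache.execute.Function;
    import org.apache.geode.cache.execute.FunctionContext;
    import org.apache.geode.cache.execute.FunctionService;
    import org.apache.geode.cache.execute.RegionFunctionContext;
    import org.apache.geode.cache.execute.ResultCollector;
    import org.apache.geode.cache.partition.PartitionRegionHelper;

    // Hypothetical function: executes on the data nodes, next to the partitioned data it needs.
    public class OrderTotalFunction implements Function {

      @Override
      public boolean hasResult() {
        return true; // the caller collects one partial result per node
      }

      @Override
      public void execute(FunctionContext context) {
        RegionFunctionContext rc = (RegionFunctionContext) context;
        // Only locally hosted data that matches the caller's filter is visible here.
        Region<String, Double> localOrders = PartitionRegionHelper.getLocalDataForContext(rc);
        double total = 0;
        for (Double amount : localOrders.values()) {
          total += amount;
        }
        context.getResultSender().lastResult(total); // this node's partial total
      }

      @Override
      public String getId() {
        return "order-total";
      }

      // Caller side: hinting the keys the function depends on routes execution
      // only to the members that actually host those keys.
      public static List<?> totalsFor(Region<String, Double> orders, Set<String> customerKeys) {
        ResultCollector<?, ?> collector = FunctionService.onRegion(orders)
            .withFilter(customerKeys)
            .execute(new OrderTotalFunction());
        return (List<?>) collector.getResult(); // one partial total per data node
      }
    }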

For more information on GemFire, view our newly rewritten technical white paper at:


Large-scale Graph Computing at Google

To continue the graph theme, Google has got into the act and released information on Pregel. Pregel does not appear to be a new type of potato chip. Pregel is instead a scalable infrastructure for mining a wide range of graphs. In Pregel, programs are expressed as a sequence of iterations. In each iteration, a vertex can, independently of other vertices, receive messages sent to it in the previous iteration, send messages to other vertices, modify its own state and that of its outgoing edges, and mutate the graph's topology.

Currently, Pregel scales to billions of vertices and edges, but this limit will keep expanding. Pregel's applicability is harder to quantify, but so far we haven't come across a type of graph or a practical graph computing problem which is not solvable with Pregel. It computes over large graphs much faster than alternatives, and the application programming interface is easy to use. Implementing PageRank, for example, takes only about 15 lines of code. Developers of dozens of Pregel applications within Google have found that "thinking like a vertex," which is the essence of programming in Pregel, is intuitive.
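
Pregel itself has not been released, so purely for flavor, here is a hypothetical vertex-centric PageRank written in a Pregel-like style in Java. Every name below (the Vertex base class, superstep(), sendMessageToAllNeighbors(), voteToHalt() and so on) is invented for illustration and is not Google's actual API; it only mirrors the iteration model described above.

    // Hypothetical, minimal Pregel-style vertex API, for illustration only.
    abstract class Vertex<V, M> {
      protected abstract long superstep();                      // current iteration number
      protected abstract long numVertices();                    // vertices in the whole graph
      protected abstract int numOutEdges();                     // out-degree of this vertex
      protected abstract V getValue();
      protected abstract void setValue(V value);
      protected abstract void sendMessageToAllNeighbors(M message);
      protected abstract void voteToHalt();                     // halt until new messages arrive
      public abstract void compute(Iterable<M> messages);
    }

    // "Thinking like a vertex": PageRank as a per-vertex compute step.
    // Assumes each vertex value is initialized to 1/numVertices() before superstep 0.
    public class PageRankVertex extends Vertex<Double, Double> {

      @Override
      public void compute(Iterable<Double> messages) {
        if (superstep() >= 1) {
          // Sum the rank contributions sent by in-neighbors in the previous iteration.
          double sum = 0;
          for (double contribution : messages) {
            sum += contribution;
          }
          setValue(0.15 / numVertices() + 0.85 * sum);
        }
        if (superstep() < 30) {
          // Spread this vertex's rank evenly across its outgoing edges.
          sendMessageToAllNeighbors(getValue() / numOutEdges());
        } else {
          voteToHalt(); // the computation ends once every vertex has voted to halt
        }
      }
    }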

Pregel does not appear to be publicly available, so it's not clear what the purpose of the announcement could be. Maybe it will be a new gmail extension :-)


Starting small with growth in mind

Hello all,

I'm working on a web site that might totally flop, or it might explode to become the next facebook/flickr/digg/etc. Since I really don't know how popular the site will be, I don't want to spend a ton of money on hardware/hosting right away, but I want to be able to scale easily if it does grow rapidly. With this in mind, what would be the best approach to launching the site?


First meeting of the Online Gaming High Scalability SIG

The first meeting of the Online Gaming High Scalability SIG will be on the 9th of July 2009 in central London, starting at 10 AM and finishing around 5 PM.

The main topic of this meeting will be the potential for using cloud and grid technologies in online gaming systems. In addition to experience reports from the community, we have invited some of the leading cloud experts in the UK to discuss the benefits (such as resource elasticity) and the challenges (such as storage and security) that companies from other industries have experienced. There will be a track for IT managers focused on business opportunities and issues, and a track for architects and developers focused more on implementation issues.

The event is free, but up-front registration is required for capacity planning, so if you are planning to attend, please let us know in advance by completing the registration form on this page.

To propose a talk or for programme enquiries, contact meetings [at] gamingscalability [dot] org.

Note: The event is planned to finish around 5 PM so that people can make their way to Victoria in time for CloudCamp London. CloudCamp is a meeting of the cloud computing community with short talks; it is also free, but you will have to register for it separately.



Kngine 'Knowledge Engine' Milestone 2

Kngine is a knowledge web search engine designed to provide meaningful search results, such as semantic information about keywords/concepts, answers to the user's questions, relations discovered between keywords/concepts, and links between different kinds of data, such as movies, subtitles, photos, prices at stores, user reviews, and influenced stories.


Kngine's long-term goal is to make all of humanity's systematic knowledge and experience accessible to everyone. I aim to collect and organize all objective data and make it possible and easy to access. Our goal is to build, on the advances of web search engines, the semantic web and data representation technologies, a new form of web search engine that will unleash a revolution of new possibilities.

Kngine tries to combine the power of web search engines with the power of semantic search and data representation to provide meaningful search results that meet user needs.


Kngine started as a research project in October 2008. Over time, I have succeeded in collecting, representing, and indexing a lot of humanity's systematic knowledge, but it is just the start. As of now, Kngine contains 500+ million pieces of data covering 4,000+ domains. Kngine's knowledge base and capabilities already span a great number of domains, such as:

  • 60,000+ Companies
  • 700,000+ Movies
  • 750,000+ People
  • 400,000+ Locations
  • 115,000+ Books
  • About 5,000,000 concepts


Kngine, as it exists today, is just the beginning. I have both short- and long-term plans to dramatically expand all aspects of Kngine: improving quality, broadening and deepening our data, and more.

I have just released Kngine Milestone 2 (our first public release). Soon, a preview of a section called 'Labs' will be presented, which will include a set of new research projects and technologies for accessing the knowledge.

Milestone 2

Milestone 2 is the first public release. It includes some useful features that help users reach what they want directly, such as:

  • Smart information
  • Answers to your questions
  • Linked data and direct data views

For more information about Milestone 2, go there.

Check this out:

Hash-Table, The Dark Knight, When did the Toronto-Dominion Bank Tower open, profession of Alexander the Great, director of the movie Up, I'm Alive lyrics


Neo4j - a Graph Database that Kicks Buttox

Update: Social networks in the database: using a graph database. A nice post on representing, traversing, and performing other common social network operations using a graph database.

If you are Digg or LinkedIn, you can build your own speedy graph database to represent your complex social network relationships. For those of more modest means, Neo4j, a graph database, is a good alternative.

A graph is a collection of nodes (things) and edges (relationships) that connect pairs of nodes. Slap properties (key-value pairs) on nodes and relationships and you have a surprisingly powerful way to represent most anything you can think of. In a graph database "relationships are first-class citizens. They connect two nodes and both nodes and relationships can hold an arbitrary amount of key-value pairs. So you can look at a graph database as a key-value store, with full support for relationships."
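
Here is a minimal sketch of that model using Neo4j's embedded Java API (1.x-era class names such as EmbeddedGraphDatabase; details vary by version, and the store path, property names and KNOWS relationship type are made up):

    import org.neo4j.graphdb.GraphDatabaseService;
    import org.neo4j.graphdb.Node;
    import org.neo4j.graphdb.Relationship;
    import org.neo4j.graphdb.RelationshipType;
    import org.neo4j.graphdb.Transaction;
    import org.neo4j.kernel.EmbeddedGraphDatabase;

    public class GraphModelSketch {

      // Relationships are typed; KNOWS is our own domain-specific type.
      enum RelTypes implements RelationshipType { KNOWS }

      public static void main(String[] args) {
        GraphDatabaseService graphDb = new EmbeddedGraphDatabase("var/graphdb");
        Transaction tx = graphDb.beginTx();
        try {
          // Nodes are the "things"; properties are arbitrary key-value pairs.
          Node alice = graphDb.createNode();
          alice.setProperty("name", "Alice");
          Node bob = graphDb.createNode();
          bob.setProperty("name", "Bob");

          // Relationships are first-class citizens and can carry properties too.
          Relationship knows = alice.createRelationshipTo(bob, RelTypes.KNOWS);
          knows.setProperty("since", 2009);

          tx.success();
        } finally {
          tx.finish(); // commits on success, otherwise rolls back
        }
        graphDb.shutdown();
      }
    }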

For examples of what a graph looks like, take a look at the Graph Image Gallery.

Here's a good summary by Emil Eifrem, founder of Neo4j, making the case for why graph databases rule:

Most applications today handle data that is deeply associative, i.e. structured as graphs (networks). The most obvious example of this is social networking sites, but even tagging systems, content management systems and wikis deal with inherently hierarchical or graph-shaped data.

This turns out to be a problem because it’s difficult to deal with recursive data structures in traditional relational databases. In essence, each traversal along a link in a graph is a join, and joins are known to be very expensive. Furthermore, with user-driven content, it is difficult to pre-conceive the exact schema of the data that will be handled. Unfortunately, the relational model requires upfront schemas and makes it difficult to fit this more dynamic and ad-hoc data.

A graph database uses nodes, relationships between nodes and key-value properties instead of tables to represent information. This model is typically substantially faster for associative data sets and uses a schema-less, bottoms-up model that is ideal for capturing ad-hoc and rapidly changing data.

So relational databases can't handle complex relationships. Graph systems are opaque, unmaintainable, and inflexible. OO databases lose flexibility by combining logic and data. Key-value stores require the programmer to maintain all relationships. There, everybody sucks :-)

Neo4j's Key Characteristics

  • Dual license: open source and commercial.
  • Well suited for many web use cases such as tagging, metadata annotations, social networks, wikis and other network-shaped or hierarchical data sets.
  • An intuitive graph-oriented model for data representation. Instead of static and rigid tables, rows and columns, you work with a flexible graph network consisting of nodes, relationships and properties.
  • Decent documentation, active and responsive email list, a few releases, good buzz. All a good sign for something that has a chance to last a while.
  • Has bindings for a number of languages: Python, Jython, Ruby, and Clojure. No binding for .NET yet; the recommendation is to access it through a REST interface.
  • Disk-based, native storage manager completely optimized for storing graph structures for maximum performance and scalability. SSD ready.
  • Massive scalability. Neo4j can handle graphs of several billion nodes/relationships/properties on a single machine.
  • Frequently outperforms relational backends by >1000x for many increasingly important use cases.
  • Powerful traversal framework for high-speed traversals in the node space (see the traversal sketch after this list).
  • Small footprint. Neo4j is a single <500k jar with one dependency (the Java Transaction API).
  • Simple and convenient object-oriented API.
  • Retrieving children is trivial in a graph database.
  • No need to flatten and serialize an object graph as graphs are native to a graph database.
  • Fully transactional like a real database. Supports JTA/JTS, XA, 2PC, Tx recovery, deadlock detection, etc.
  • Current implementation is built to handle large graphs that don't fit in memory with durability. It's not a cache, it's a fully persistent transactional store.
  • No events or triggers. Planned in a future release.
  • No sharding. A suggestion for how one might shard is here.
  • Some common graph calculations are missing. For example, finding a common friend for a set of users in a social network.
  • Separates data and logic with a more "natural" representation than tables. This makes it easy to use Neo4j as the storage tier for OO code while keeping behaviour and state separate.
  • Neo4j traverses depths of 1000 levels and beyond at millisecond speed. That's many orders of magnitude faster than relational systems.
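
As a rough illustration of the traversal framework mentioned above, this sketch uses the 1.x-era Traverser API to walk outgoing KNOWS relationships from a start node (friends, friends of friends, and so on); later Neo4j releases replaced this API with the TraversalDescription framework, and the KNOWS type and name property are made up:

    import org.neo4j.graphdb.Direction;
    import org.neo4j.graphdb.Node;
    import org.neo4j.graphdb.RelationshipType;
    import org.neo4j.graphdb.ReturnableEvaluator;
    import org.neo4j.graphdb.StopEvaluator;
    import org.neo4j.graphdb.Traverser;

    public class FriendTraversal {

      // Prints everybody reachable from the start node over outgoing KNOWS relationships.
      public static void printReachable(Node start, RelationshipType knows) {
        Traverser reachable = start.traverse(
            Traverser.Order.BREADTH_FIRST,          // visit closest friends first
            StopEvaluator.END_OF_GRAPH,             // keep going until there is nothing left
            ReturnableEvaluator.ALL_BUT_START_NODE, // return every node except the start
            knows, Direction.OUTGOING);
        for (Node friend : reachable) {
          System.out.println(friend.getProperty("name", "<unnamed>"));
        }
      }
    }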

Neo4j vs Hadoop

This post makes an illuminating comparison between Neo4j and Hadoop:

In principle, Hadoop and other key-value stores are mostly concerned with relatively flat data structures. That is, they are extremely fast and scalable regarding retrieval of simple objects, like values, documents or even objects.

However, if you want to do deeper traversal of e.g. a graph, you will have to retrieve the nodes for every traversal step (very fast) and then match them yourself in some manner (e.g. in Java or so) - slow.

Neo4j, in contrast, is built around the concept of "deep" data structures. This gives you almost unlimited flexibility regarding the layout of your data and domain object graph, and very fast deep traversals (hops over several nodes), since they are handled natively by the Neo4j engine down to the storage layer rather than in your client code. The drawback is that for huge data amounts (>1 billion nodes) the clustering and partitioning of the graph becomes non-trivial, which is one of the areas we are working on.

Then of course there are differences in the transaction models, consistency and others, but I hope this gives you a very short philosophical answer :)

It would have never occurred to me to compare the two, but the comparison shows why we need multiple complementary views of data. Hadoop scales the data grid and the compute grid and is more flexible in how data are queried and combined. Neo4j has far lower latencies for complex navigation problems. It's not a zero-sum game.
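
To make the quoted point concrete, here is a toy friend-of-friend lookup over a plain in-memory map standing in for a key-value store. Every hop is another round of lookups plus client-side matching, which is exactly the work a graph database pushes down into its native storage engine; the store layout and names are made up for illustration.

    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    public class KeyValueTraversal {

      // key = person, value = that person's direct friends
      public static Set<String> friendsOfFriends(Map<String, List<String>> store, String person) {
        Set<String> result = new HashSet<>();
        for (String friend : store.getOrDefault(person, List.of())) {     // hop 1: one lookup
          for (String fof : store.getOrDefault(friend, List.of())) {      // hop 2: one lookup per friend
            if (!fof.equals(person)) {
              result.add(fof);                                            // matching happens client-side
            }
          }
        }
        return result; // every additional hop means yet another round of lookups
      }
    }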

Related Articles

  • Neo4j -- or why graph dbs kick ass
  • The current database debate and graph databases by Anders Nawroth
  • On Building a Stupidly Fast Graph Database by Scott Wheeler and the Hacker News Thread
  • Network Model from wikipedia
  • Databases as a service: FathomDB
  • Using Neo4J to load and query OWL ontologies by Sujit Pal
  • Graph Databases and the Future of Large-Scale Knowledge Management by Marko A. Rodriguez
  • Memo To The Semantic Web: Drop “Semantic” And Become The “Graph Web” by Hank Williams
  • Is the Relational Database Doomed? by Tony Bain
  • Neo Database Introduction
  • Neo4j Email List
  • flare Data Visualization for the Web
  • Giant Global Graph by Tim Berners-Lee
  • Tim Berners-Lee -- Linked Data at TED
  • Drop ACID and Think About Data by Bob Ippolito
  • Analyzing and adapting graph algorithms for large persistent graphs by Patrik Larsson

Yahoo! Distribution of Hadoop

Many people in the Apache Hadoop community have asked Yahoo! to publish the version of Apache Hadoop they test and deploy across their large Hadoop clusters. As a service to the Hadoop community, Yahoo is releasing the Yahoo! Distribution of Hadoop -- a source code distribution that is based entirely on code found in the Apache Hadoop project.

This source distribution includes code patches that they have added to improve the stability and performance of their clusters. In all cases, these patches have already been contributed back to Apache, but they may not yet be available in an Apache release of Hadoop.

Read more and get the Hadoop distribution from Yahoo


Hive - A Petabyte Scale Data Warehouse using Hadoop

This post about using Hive and Hadoop for analytics comes straight from Facebook engineers.

Scalable analysis on large data sets has been core to the functions of a number of teams at Facebook - both engineering and non-engineering. Apart from ad hoc analysis and business intelligence applications used by analysts across the company, a number of Facebook products are also based on analytics.

These products range from simple reporting applications like Insights for the Facebook Ad Network, to more advanced kinds such as Facebook's Lexicon product.

As a result, a flexible infrastructure that caters to the needs of these diverse applications and users, and that also scales up in a cost-effective manner with the ever-increasing amount of data being generated on Facebook, is critical. Hive and Hadoop are the technologies that we have used to address these requirements at Facebook.

Read the rest of the article on Engineering @ Facebook's Notes page