In the never-ending quest to do something useful with equally never-ending streams of data, GraphLab: A New Framework For Parallel Machine Learning aims to go beyond low-level programming, MapReduce, and dataflow languages with a new parallel framework for machine learning (ML) that exploits the sparse structure and common computational patterns of ML algorithms. GraphLab enables ML experts to easily design and implement efficient, scalable parallel algorithms by composing problem-specific computation, data dependencies, and scheduling. The paper's main contributions include:
- A graph-based data model which simultaneously represents data and computational dependencies.
- A set of concurrent access models which provide a range of sequential-consistency guarantees.
- A sophisticated modular scheduling mechanism.
- An aggregation framework to manage global state.
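To make the model concrete, here is a minimal sketch of the graph-based data model, a vertex update function, and a scheduler in Python. All names (`Graph`, `run`, `average_update`) are hypothetical, invented for illustration; GraphLab itself is a C++ framework, and it executes updates in parallel under a chosen consistency model rather than sequentially as shown here.

```python
from collections import deque

class Graph:
    """Data graph sketch: vertices carry data; edges define dependencies.
    (Hypothetical API, not GraphLab's actual C++ interface.)"""
    def __init__(self):
        self.vertex_data = {}   # vertex id -> mutable data
        self.edges = {}         # vertex id -> list of neighbor ids

    def add_vertex(self, v, data):
        self.vertex_data[v] = data
        self.edges.setdefault(v, [])

    def add_edge(self, u, v):
        self.edges[u].append(v)
        self.edges[v].append(u)

def run(graph, update, initial):
    """Scheduler sketch: repeatedly pop a scheduled vertex and apply the
    update function until no work remains. An update function returns the
    set of vertices it wants rescheduled, which is how asynchronous
    iterative algorithms converge without fixed global rounds."""
    queue = deque(initial)
    scheduled = set(initial)
    while queue:
        v = queue.popleft()
        scheduled.discard(v)
        for w in update(graph, v):
            if w not in scheduled:
                scheduled.add(w)
                queue.append(w)

def average_update(graph, v, tol=1e-3):
    """Example update: damped averaging toward the neighbor mean.
    Reschedules neighbors only if this vertex changed meaningfully."""
    nbrs = graph.edges[v]
    if not nbrs:
        return []
    new = 0.5 * graph.vertex_data[v] \
        + 0.5 * sum(graph.vertex_data[w] for w in nbrs) / len(nbrs)
    changed = abs(new - graph.vertex_data[v]) > tol
    graph.vertex_data[v] = new
    return nbrs if changed else []
```

The key design point the sketch tries to capture is that scheduling is data-driven: computation flows along the sparse edge structure instead of touching every vertex in every round, which is what MapReduce-style barriers would force.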
From the abstract:
Designing and implementing efficient, provably correct parallel machine learning (ML) algorithms is challenging. Existing high-level parallel abstractions like MapReduce are insufficiently expressive while low-level tools like MPI and Pthreads leave ML experts repeatedly solving the same design challenges. By targeting common patterns in ML, we developed GraphLab, which improves upon abstractions like MapReduce by compactly expressing asynchronous iterative algorithms with sparse computational dependencies while ensuring data consistency and achieving a high degree of parallel performance. We demonstrate the expressiveness of the GraphLab framework by designing and implementing parallel versions of belief propagation, Gibbs sampling, Co-EM, Lasso and Compressed Sensing. We show that using GraphLab we can achieve excellent parallel performance on large scale real-world problems.
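The aggregation framework mentioned in the contributions (GraphLab's "sync" mechanism) maintains global state alongside the vertex-level updates. A hedged sketch of the idea, with all names (`sync`, `fold`, `merge`, `apply`) assumed for illustration rather than taken from the real API: partial results are folded over partitions of the vertices, merged, and finalized, which is what lets a global quantity like an average or a convergence residual be computed in parallel.

```python
from functools import reduce

def sync(vertex_data, fold, merge, apply_fn, zero, chunks=4):
    """Sketch of a fold/merge/apply aggregation over vertex data.
    Each partition is folded independently (in GraphLab these folds run
    in parallel, interleaved with the update schedule), the partial
    results are merged, and a finalizer produces the global value."""
    items = list(vertex_data.values())
    size = max(1, (len(items) + chunks - 1) // chunks)
    partials = [reduce(fold, items[i:i + size], zero)
                for i in range(0, len(items), size)]
    return apply_fn(reduce(merge, partials, zero), len(items))

# Example global aggregate: the mean of all vertex values, the kind of
# statistic an algorithm might use to monitor convergence.
mean = sync(
    {"a": 1.0, "b": 2.0, "c": 3.0, "d": 6.0},
    fold=lambda acc, x: acc + x,
    merge=lambda a, b: a + b,
    apply_fn=lambda total, n: total / n if n else 0.0,
    zero=0.0,
)
```

Requiring the fold and merge operators to be associative is what makes the partition-then-merge decomposition safe to parallelize.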
Related Articles:
- Paper: The Declarative Imperative: Experiences and Conjectures in Distributed Logic
- Paper: Propagation Networks: A Flexible and Expressive Substrate for Computation
- How will memristors change everything?
- Parallel Information Retrieval and Other Search Engine Goodness
- Running Large Graph Algorithms - Evaluation of Current State-of-the-Art and Lessons Learned
- Paper: High Performance Scalable Data Stores
- Big Data on Grids or on Clouds?
- Paper: The Case for RAMClouds: Scalable High-Performance Storage