« Hot Scalability Links For Aug 20, 2010 | Main | Scaling an AWS infrastructure - Tools and Patterns »

Misco: A MapReduce Framework for Mobile Systems - Start of the Ambient Cloud?

Misco: A MapReduce Framework for Mobile Systems is a very exciting paper to me because it's really one of the first explorations of some of the ideas in Building Super Scalable Systems: Blade Runner Meets Autonomic Computing in the Ambient Cloud. What they are trying to do is efficiently distribute work across a set of cellphones using the now familiar MapReduce interface. Usually we think of MapReduce as working across large data center hosted clusters. Here, the cluster nodes are cellphones not contained in any data center, but compute nodes potentially distributed everywhere.

I talked with Adam Dou, one of the paper's authors, and he said they don't see cellphone clusters replacing dedicated computer clusters, primarily because of the power required for both network communication and the map-reduce computations. Large multi-terabyte jobs aren't in the cards...yet. Adam estimates that, computationally, cellphones perform similarly to desktops of ten years ago. Instead, they want to focus on the unique characteristics of mobile devices--camera, microphone, GPS and other directly collectable data--so the data can be processed where it is collected.

MapReduce was selected as the programming interface because it is familiar to programmers, it transparently supports programming multiple devices, and can be implemented--especially using Python---in such a way that programmers are freed from all the underlying details like concurrency, data distribution, and code management. A very smart move. 
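To make that concrete, here is a minimal single-node sketch of the kind of interface MapReduce gives the programmer. This is not Misco's actual API, just an illustration of how the map and reduce functions the user writes can hide the plumbing:

```python
from collections import defaultdict

def run_mapreduce(map_fn, reduce_fn, inputs):
    """Toy single-node MapReduce driver: the user writes only map_fn and
    reduce_fn; splitting, shuffling, and grouping stay hidden in here."""
    # Map phase: each input split independently yields (key, value) pairs.
    intermediate = defaultdict(list)
    for split in inputs:
        for key, value in map_fn(split):
            intermediate[key].append(value)   # shuffle: group values by key
    # Reduce phase: one reduce call per distinct key.
    return {key: reduce_fn(key, values) for key, values in intermediate.items()}

# Word count, the canonical example: these same two small functions are all
# a programmer would write, whether the splits run locally or on many phones.
def wc_map(text):
    return [(word, 1) for word in text.split()]

def wc_reduce(word, counts):
    return sum(counts)

print(run_mapreduce(wc_map, wc_reduce, ["a b a", "b b"]))  # {'a': 2, 'b': 3}
```

In a real framework the driver would also handle retries and data movement, but the programmer-facing contract stays this small.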

It's interesting to contrast the economics of the ambient cloud with the economics of the data center cloud. The goal of a data center cloud is 100 percent utilization. Use every possible CPU cycle or money is being wasted on unused equipment. In an ambient cloud the idea is more parasitic: deploy to more resources yet leave the primary function of the device unaffected. It's a different perspective that may lead to different architectures. MapReduce scales linearly with the number of devices. Since there are far more phones than computers, using more of these less capable devices can still increase overall performance.

Misco first began as an intern project in the summer of 2008 at Nokia Research Center in Palo Alto. A quick introduction to Misco from the abstract:

The proliferation of increasingly powerful, ubiquitous mobile devices has created a new and powerful sensing and computational environment. Software development and application deployment in such distributed mobile settings is especially challenging due to issues of failures, concurrency, and lack of easy programming models. We present a framework which provides a powerful software abstraction that hides many of such complexities from the application developer. We design and implement a mobile MapReduce framework targeted at any device which supports Python and network connectivity. We have implemented our system on a testbed of Nokia N95 8GB smartphones and demonstrated the feasibility and performance of our approach.

An overview of the architecture is depicted by this excellent diagram:

  1. MasterServer - keeps track of user applications; maintains input, intermediary and results data associated with applications; tracks worker progress; assigns tasks to workers. Communication between the server and worker is via HTTP. A server consists of:
    1. Application Repository component - keeps track of application input and output data.
    2. HTTP Server component - acts as the main communication channel between the server and workers. It also displays application status via a UI. 
    3. UDP Server - listens for incoming worker logs which are stored in Worker Logs.
  2. WorkerNode - performs map and reduce operations and returns the results to the server. Can be run on any computing device that supports Python. A worker consists of:
    1. Requester component - interacts with the server to request tasks; uploads and downloads data; triggers local task execution; performs upgrades.
    2. Repository component - stores the input data, modules, and results for each task.
    3. Logger component - maintains local process times and progress and uploads them to the server along with the results when a task completes.
  3. Polling - workers poll servers for work. Push notification is not generally supported, so polling is more reliable, though it can be difficult to tune the polling interval correctly.
  4. General Process:
    1. A job is created with the server and the server schedules the job. A job consists of the deadline, input data, module, size of the map input pieces, and number of reduce partitions.
    2. The server splits the input data into M input files.
    3. Workers make work requests to the server.
    4. Workers submit the results to the server.
    5. Once all the map results have been received, a similar process occurs for the reduce phase.
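The general process above can be sketched with a stub server and a polling worker. All the names here are mine, not Misco's, and the HTTP polling is reduced to a plain method call:

```python
import queue

class StubMasterServer:
    """Minimal stand-in for the MasterServer: splits a job's input into
    roughly M pieces and hands out one piece per poll (steps 2 and 3)."""
    def __init__(self, data, num_splits):
        chunk = max(1, len(data) // num_splits)
        self.tasks = queue.Queue()
        for i in range(0, len(data), chunk):
            self.tasks.put(data[i:i + chunk])
        self.results = []

    def poll(self):
        """Workers poll for work; HTTP in Misco, a method call here."""
        try:
            return self.tasks.get_nowait()
        except queue.Empty:
            return None          # no work available on this poll

    def submit(self, result):
        """Workers submit their results back to the server (step 4)."""
        self.results.append(result)

def worker_loop(server, map_fn):
    """A worker keeps polling until the server has no more map tasks."""
    while (task := server.poll()) is not None:
        server.submit(map_fn(task))

server = StubMasterServer("the quick brown fox", num_splits=4)
worker_loop(server, str.upper)   # pretend uppercasing is the map task
print("".join(server.results))   # THE QUICK BROWN FOX
```

A real deployment would run many worker loops on many phones against one server, with the polling interval tuned to balance latency against radio wake-ups.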

Seems like a very clean and logical architecture. 

Experiment Setup

The test is run on 30 Nokia N95 8GB smartphones, which have dual ARM 11 CPUs at 332 MHz, 90 MB of main memory, and 8GB of local storage. The server was a Pentium 4 with a 2GHz CPU and 640 MB of memory. Phones were connected to a router using 802.11g, and the router connected to the server over a 100 Mbit connection.

They implemented a WordSearch application to search for keywords in a large text file, the classic MapReduce example problem. The goal is to interactively search web logs to find places of interest within a city. Input files ranged from 10KB to 1MB.
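The paper doesn't give WordSearch's code, but a plausible mapper/reducer pair for it, plus a hand-rolled shuffle to glue them together, might look like this (the keyword set and data are made up for illustration):

```python
from collections import defaultdict

KEYWORDS = {"restaurant", "museum"}   # hypothetical places-of-interest terms

def search_map(text_split):
    """Map: scan one input split, emit a (keyword, matching line) pair for
    every line that mentions a keyword."""
    pairs = []
    for line in text_split.splitlines():
        for kw in KEYWORDS:
            if kw in line.lower():
                pairs.append((kw, line))
    return pairs

def search_reduce(keyword, lines):
    """Reduce: gather every line that mentioned this keyword."""
    return sorted(lines)

# Two input splits, as the server would produce in step 2 of the process.
splits = [
    "Great Museum on 5th Ave",
    "cheap restaurant downtown\nanother MUSEUM visit",
]

# Shuffle intermediate pairs by key, then reduce each key.
groups = defaultdict(list)
for split in splits:
    for key, value in search_map(split):
        groups[key].append(value)
results = {key: search_reduce(key, values) for key, values in groups.items()}
```

Each split can be searched independently, which is exactly what makes the job easy to spread across phones.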


  • The Misco library took 800KB of memory, or 1% of the available 90MB.
  • A redundancy scheme was used to handle failures; as failures increased, the end-to-end times increased exponentially.
  • Memory usage varies by the type of application. If the application uses 15MB of data, then the total used will be about 16MB.
  • Processing tasks used 0.7 watts.
  • Network access used 1.6 watts. Cell networks use more energy than WiFi.
  • The majority of processing time is spent performing computations; only a small fraction is spent on overhead. 

The dominance of power usage by network transfers over local computation may dictate where optimization efforts go in the future. As an example of this dynamic, Adam points to the battery runtimes for the Kindle with and without a network connection, and with or without 3G.

Adam suggests a lot of energy may be saved by compressing the data so that there is much less data to send. Another approach is to pre-process the data on the phone. Say you are performing an image recognition task on pictures that are already on the phone. It's possible to extract vector features from the pictures, which is a much more compact representation, with the end result being far less data transferred.
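As a rough illustration of the compression idea, assuming repetitive log-style data (which compresses extremely well), standard zlib is enough to shrink the bytes the radio has to send:

```python
import zlib

# Made-up, highly repetitive web-log data of the kind a phone might collect.
log = ("GET /poi?lat=37.77&lon=-122.41 HTTP/1.1\n" * 500).encode()

compressed = zlib.compress(log, 6)
ratio = len(compressed) / len(log)
# Since the radio dominates energy use (1.6 watts vs. 0.7 for compute),
# spending a little CPU here to send far fewer bytes is a good trade.
print(f"{len(log)} -> {len(compressed)} bytes ({ratio:.1%})")
```

Real sensor data won't compress this dramatically, but the trade of cheap CPU cycles for expensive radio time is the point.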

Batching data would use less network resources, but that must be balanced against responsiveness. How would using the cell network versus WiFi impact performance or power usage? For devices that support push more efficiently, would power usage go down significantly? Python is a very practical deployment language, but is the typical 20% overhead of a dynamic language the best approach for a cellphone? Lots of things to think about.

Obviously still early days, but it's exciting to see the beginning of this kind of research. Adam promises a lot more in the future.


Reader Comments (7)

Hi Todd,
the Tinkerpop crew developed LinkedProcess, which makes exactly this scenario possible, but based on XMPP, with general purpose protocol extensions to communicate with and instantiate script execution environments on any XMPP-capable system, including mobile devices (there is an Android implementation of LinkedProcess doing map-reduce with Groovy script snippets). I mentioned some potential use cases of XMPP and LinkedProcess in this blog post.
Thought it might be interesting in this thread.

Take care,


August 18, 2010 | Unregistered CommenterPeter Neubauer

This kind of approach is doomed. Volunteer computing is not cost-efficient (be it desktop or mobile). The No. 1 cost in data centers is energy, which accounts for more than 50% of total costs even with extremely power efficient setups (and the share of energy keeps increasing every year). Using cell phones for CPU is abysmal: transforming electricity down to 5V wastes more than 50% of the energy with a typical household device, and then charging/discharging the battery also wastes a lot of energy (probably more than 50%, although I don't have the number on hand). Since a cell phone's overall consumption is extremely low, this is irrelevant to the user, but it will still remain an extremely costly source of processing power.

August 18, 2010 | Unregistered CommenterJoannes Vermorel

Joannes, I would agree with you from a global optimization perspective, but what we are talking about is the accumulation of millions of local actions, which is the opposite of the data center POV. Each of those local decisions will not be greatly impacted by the power usage. I would charge my cellphone anyway. When we get power scavenging cellphones, solar charging, and more power efficient chips, the perceived cost to each individual cellphone user will be even less.

For example: Bye-Bye Batteries: Radio Waves as a Low-Power Source - http://www.nytimes.com/2010/07/18/business/18novel.html?_r=1

August 18, 2010 | Registered CommenterTodd Hoff

One thing that I have wondered about MapReduce is what happens when some of the worker nodes take longer than others. How does the master node know when to stop waiting for them, etc.?

In fact, I wonder if the non-deterministic behavior that we see in Google's universal search comes from MapReduce.

August 24, 2010 | Unregistered CommenterAmrinder Arora

This is poor, low-quality work. The paper says its goal is to provide a powerful programming abstraction (a MapReduce framework) to allow software development without the need to deal with the underlying problems of distributed computing. OK, but for facilitating the development of what applications exactly? For what applications on the smartphone does the MapReduce framework make sense? Why does it make sense to do a computation-intensive task on the phones, and why can't that task be done at the back-end servers? The paper does not have any answers to these questions. It seems like the paper's attitude is: we are doing this because we can. But you don't use a tank to kill a fly, and in fact, it is very difficult to kill a fly with a tank. First you study the problem, and then you build the suitable tool to fix it. You don't build a tool and then search for a problem to fix with it.

The paper tries to provide two applications in a pretense to show the work is relevant. The first is an assisted living application for monitoring the activity of elderly people at home. The smartphone collects data to observe whether the elderly person is suffering from an emergency medical condition, such as falling to the ground. OK, so why do we need MapReduce to run on the smartphones for this? Why don't we do this at the back-end if it is computationally intensive? Is the extra latency and cost overhead of finding several other smartphones and transferring the data to other resource-poor smartphones (let's ignore privacy issues) justified by anything at all? (I can think of only one reason: a post-apocalyptic Terminator environment where there is a ban on server computing, so that everything must be done on the phones!)

Actually, this first application is not so computationally intensive after all. A laptop or even a single smartphone (the very same one that is collecting the accelerometer readings) can perform this calculation easily. The second so-called application is even worse; I am not going to talk about it. It is an evacuation application for emergency situations in public buildings. Why is MapReduce over smartphones a good idea for these applications? Beats me.

And, in the related work, it is mentioned that reference 12 has already "implemented a mapreduce system on cellphones and showed that cellphones already provide a significant source of computational power." So, if this is done before (for whatever reason is beyond me), then what is the contribution of the current paper?

The paper suggests that the smartphones use a poll-the-master approach to see if any task is available to work on. They say this is better than an interrupt-based approach driven by the master, because interrupt-based is hard to implement. I do not buy this either. Poll-based won't scale. It is better for the master to decide which tasks it wants to execute on which workers. The master should find/recruit the best workers for the task; it should not be the other way around. The paper does not provide any evidence on why it is difficult to implement the interrupt-based approach.

November 21, 2010 | Unregistered CommenterS. Stern

Awesome work here! I was just thinking on a whim, what if we took MapReduce and threw it on a smartphone, because how absurd does that sound? And you did it. Neato.

April 12, 2013 | Unregistered Commentermichael2hands
