Sunday
May 31, 2009

Parallel Programming for real-world

Multicore computers shift the burden of software performance from chip designers and architects to software developers.

What is parallel computing? What is the difference between multi-threading, concurrency, and parallelism? What is the difference between task and data parallelism? And how can we use them?

A fundamental article on parallel programming...

Friday
May 29, 2009

Is Eucalyptus ready to be your private cloud?


Update: Eucalyptus Goes Commercial with $5.5M Funding Round. This removes my objection that it's an academic project only. Go team go!

Rich Wolski, professor of Computer Science at the University of California, Santa Barbara, gave a spirited talk on Eucalyptus to a large group of very interested cloudsters at the Eucalyptus Cloud Meetup. If Rich could teach computer science at every school, the state of the computer science industry would be stratospheric. Rich is dynamic, smart, passionate, and visionary. It's that vision that prompted him to create Eucalyptus in the first place. Rich and his group are experts in grid and distributed computing, with a long and glorious history in that space. When he saw cloud computing on the rise he decided the best way to explore it was to implement what everyone accepted as a real cloud: Amazon's API. In a remarkably short time they implemented Eucalyptus and have been improving it and tracking Amazon's changes ever since.

The question I had going into the meetup was: should Eucalyptus be used to make an organization's private cloud? The short answer is no. Wait wait, it's now yes, see the update at the beginning of the article.

The project is of high quality, the people are of the highest quality, but in the end Eucalyptus is a research project from a university. As an academic project Eucalyptus is subject to changes in funding and the research interests of the team. When funding sources dry up so does the project. If the team finds another research area more interesting, or if they get tired of chasing a continuous stream of new Amazon features, or no new grad students sign on, which will happen in a few years, then the project goes dark.

Fears over continuity have at least two solutions: community support and commercial support. Eucalyptus could become a community supported open source project. This is unlikely to happen though, as it conflicts with the research intent of Eucalyptus. The Eucalyptus team plans to control the core for research purposes and encourage external development of add-on services like SQS. Eucalyptus won't go commercial, as university projects must steer clear of commercial pretensions. Amazon is "no comment" on Eucalyptus, so it's not clear what they would think of commercial development should it occur.

Taken together these concerns imply Eucalyptus is not a good base for an enterprise quality private cloud, which they readily admit. It's not enterprise ready, Rich repeats. It's not that the quality isn't there; it is and will be. And some will certainly base their private cloud on Eucalyptus, but when making a decision of this type you have to be sure your cloud infrastructure will be around for the long haul. With Eucalyptus that is not necessarily the case. Eucalyptus is still a good choice for its original research purpose, as a cheap staging platform for Amazon, or as a base for temporary clouds, but as your rock solid private cloud infrastructure of the future Eucalyptus isn't the answer.

The long answer is a little more nuanced and interesting.

The primary purpose for Eucalyptus is research. It was never meant to be our little untethered private Amazon cloud. But if it works, why not?

Eucalyptus is Not a Full Implementation of the Amazon Stack

Eucalyptus implements most of EC2 and a little of S3. They hope to get community support for the rest. That of course makes Eucalyptus far less interesting as a development platform. But if your use for Eucalyptus is as an instant provisioning framework you are still in the game. Their emulation of EC2 is so good RightScale was able to operate on top of Eucalyptus. Impressive.

But even in the EC2 arena I have to wonder how long they'll track Amazon development. If you are a researcher, implementing every new Amazon feature is going to get mighty old after a while. It will be time to move on, and if you are dependent on Eucalyptus you are in trouble. Sure, you can move to Amazon, but what about that $1 million data center buildout?

If you are developing software not tied to the Amazon service stack, then Eucalyptus would work great.

As an Amazon developer I would want my code to work without too much trouble in both environments. Certainly you can mock the different services for testing or create a service layer to hide different implementations, but that's not ideal and makes Eucalyptus as an Amazon proxy less attractive.

One of the uses for Eucalyptus is to make Amazon development cheaper and easier by testing code locally without having to deploy into Amazon all the time. Given the size of images, the bandwidth and storage costs add up after a while, so this could make Eucalyptus a valuable part of the development process.
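
For that testing workflow, here is a minimal sketch of what the endpoint switch might look like with the boto library; the hostname, port, path, and credentials are placeholders rather than values from the talk, and a real Eucalyptus front end publishes its own connection details.

import boto
from boto.ec2.regioninfo import RegionInfo

# Point boto's EC2 client at a hypothetical local Eucalyptus front end
# instead of Amazon; only the connection setup changes.
euca_region = RegionInfo(name="eucalyptus", endpoint="euca-frontend.example.internal")
ec2 = boto.connect_ec2(
    aws_access_key_id="YOUR-EUCA-ACCESS-KEY",      # placeholder credentials
    aws_secret_access_key="YOUR-EUCA-SECRET-KEY",
    is_secure=False,
    region=euca_region,
    port=8773,                    # commonly used Eucalyptus port; check your install
    path="/services/Eucalyptus",
)

# The application code stays the same as it would be against Amazon itself.
for image in ec2.get_all_images():
    print(image.id)

The point is that the rest of the code doesn't know or care whether it is talking to Amazon or the local cloud, which is what makes Eucalyptus attractive as a cheap staging and test environment.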

Eucalyptus is Not as Scalable as Amazon

No kidding. Amazon has an army of sysadmins, network engineers, and programmers to make their system work at such ginormous scales. Eucalyptus was built on smarts, grit and pizza. It will never scale as well as Amazon, but Eucalyptus is scalable to 256 nodes right now. Which is not bad.

Rich thinks that with some work they already know about, it could scale to 5,000 nodes. Not exactly Amazon scale, but good enough for many data center dreams.

One big limit Eucalyptus has is the self-imposed requirement to work well in any environment. It's just a tarball you can install on top of any network. They rightly felt this was necessary for adoption. Telling potential customers they need to set up a special network before they can test your software tends to slow down adoption. By making Eucalyptus work as an overlay they soothed a lot of early adopter pain.

But by giving up control of the machines, the OS, the disk, and the network they limited how scalable they can be. There's more to scalability than just software. Amazon has total control and that gives them power. Eucalyptus plans to make more invasive and more scalable options available in the future.

Lacks Some Private Cloud Features

Organizations interested in a private cloud are often interested in:

  • Control
  • Privacy and Security
  • Utility Chargeback System
  • Instant Provisioning Framework
  • Multi-tenancy
  • Temporary Infrastructure for Proof of Concept for "Real" Provisioning
  • Cloud Management Infrastructure

    Eucalyptus satisfies many of these needs, but a couple are left wanting:
  • The Utility Chargeback System allows companies to bill back departments for the resources they use and is a great way to get around a rigid provisioning process while still providing accountability back to the budgeting process. Eucalyptus won't do this for you (see the sketch after this list).
  • A first class Cloud Management Infrastructure is not part of Eucalyptus because it's not part of Amazon's API. Amazon doesn't expose their internal management process. Eucalyptus is adding some higher level management tools, but they'll be pretty basic.

    These features may or may not be important to you.
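
To make the chargeback item above concrete, here is a minimal sketch of the kind of accounting such a system layers on top of the cloud's usage data; the departments, rate card, and usage records are invented for illustration.

# Toy chargeback calculation: bill each department for the instance-hours it
# consumed, using a made-up rate card. A real system would harvest the usage
# records from the cloud's own instance accounting.
RATES = {              # dollars per instance-hour (example values)
    "m1.small": 0.10,
    "m1.large": 0.40,
}

usage = [
    {"dept": "marketing", "type": "m1.small", "hours": 720},
    {"dept": "analytics", "type": "m1.large", "hours": 300},
    {"dept": "analytics", "type": "m1.small", "hours": 100},
]

def chargeback(usage, rates):
    bills = {}
    for record in usage:
        cost = rates[record["type"]] * record["hours"]
        bills[record["dept"]] = bills.get(record["dept"], 0.0) + cost
    return bills

for dept, total in sorted(chargeback(usage, RATES).items()):
    print("%s: $%.2f" % (dept, total))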

    Clouds vs Grids

    Endless pixels have been killed defining clouds, grids, and how they are different enough that there's really a whole new market to sell into. Rich actually makes a convincing argument that grids and clouds are different and do require a completely different infrastructure. The differences:

    Cloud

  • Full private cluster is provisioned
  • Individual user can only get a tiny fraction of the total resource pool
  • No support for cloud federation except through the client interface
  • Opaque with respect to resources

    Grid

  • Built so that individual users can get most, if not all of the resources in a single request
  • Middleware approach takes federation as a first principle
  • Resources are exposed, often as bare metal

    Related Articles

  • Get Off of My Cloud by M. Jagger and K. Richards.
  • Rich Wolski's Home Page
  • Enomaly
  • Nimbus
Thursday
May 28, 2009

    Scaling PostgreSQL using CUDA

Combining GPU power with PostgreSQL

PostgreSQL is one of the world's leading Open Source databases and it provides enormous flexibility as well as extensibility. One of the key features of PostgreSQL is that users can define their own procedures and functions in basically any known programming language. With functions it is possible to write basically any server side code easily. Now, all this extensibility is basically not new. What does it all have to do with scaling, then? Well, imagine a world where the data in your database and enormous computing power are tightly integrated. Imagine a world where data inside your database has direct access to hundreds of FPUs. Welcome to the world of CUDA, NVIDIA's way of making the power of graphics cards available to normal, high-performance applications.

When it comes to complex computations, databases might very well turn out to be a bottleneck. Depending on your application it might easily happen that adding more CPU power does not improve the overall performance of your system. The reason for that is simply that bringing data from your database to the units which actually do the computations is way too slow (maybe because of remote calls and so on). Especially when data is flowing over a network, copying a lot of data might be limited by network latency or simply bandwidth. What if this bottleneck could be avoided?

CUDA is C / C++

Basically a CUDA program is simply a C program with some small extensions. The CUDA subsystem transforms your CUDA program to normal C code which can then be compiled and linked nicely with existing code. This also means that CUDA code can basically be used inside a PostgreSQL stored procedure easily. The advantages of this mechanism are obvious:

  • GPUs can do matrix and FPU related operations hundreds of times faster than any CPU
  • the GPU is used inside the database and thus no data has to be transported over slow lines
  • basically any NVIDIA graphics card can be used
  • you get enormous computing power for virtually zero cost
  • you can even build functional indexes on top of CUDA stored procedures
  • not so many boxes are needed because one box is way faster

How to make it work?

The goal for this simplistic example is to generate a set of random numbers on the CPU, copy it to the GPU, and make the code callable from PostgreSQL.
Here is the function to generate random numbers and copy them to the GPU:

/* implement random generator and copy to CUDA */
nn_precision* generate_random_numbers(int number_of_values)
{
    nn_precision *cuda_float_p;

    /* allocate host memory and CUDA memory */
    nn_precision *host_p = (nn_precision *) pg_palloc(sizeof(nn_precision) * number_of_values);
    CUDATOOLS_SAFE_CALL(cudaMalloc((void **) &cuda_float_p,
                                   sizeof(nn_precision) * number_of_values));

    /* create random numbers */
    for (int i = 0; i < number_of_values; i++)
    {
        host_p[i] = (nn_precision) drand48();
    }

    /* copy data to CUDA and return pointer to CUDA structure */
    CUDATOOLS_SAFE_CALL(cudaMemcpy(cuda_float_p, host_p,
                                   sizeof(nn_precision) * number_of_values,
                                   cudaMemcpyHostToDevice));
    return cuda_float_p;
}

Now we can go and call this function from a PostgreSQL stored procedure:

/* import postgres internal stuff */
#include "postgres.h"
#include "fmgr.h"
#include "funcapi.h"
#include "utils/memutils.h"
#include "utils/elog.h"

#include "cuda_tools.h"

PG_MODULE_MAGIC;

/* prototypes to silence compiler */
extern Datum test_random(PG_FUNCTION_ARGS);

/* define function to allocate N random values (0 - 1.0) and put them into the CUDA device */
PG_FUNCTION_INFO_V1(test_random);
Datum test_random(PG_FUNCTION_ARGS)
{
    int number = PG_GETARG_INT32(0);
    nn_precision *p = generate_random_numbers(number);
    cuda_free_array(p);
    PG_RETURN_VOID();
}

This code can then be compiled nicely just like any other PostgreSQL C extension. The test_random function can be called just like this:

SELECT test_random(1000);

Of course this is just a brief introduction to see how things can be done in practice. A more realistic application will need more thinking and can be integrated into the database even more closely.

More information: Professional CUDA programming, Professional PostgreSQL services, the official PostgreSQL website, the official CUDA site.

    Click to read more ...

Wednesday
May 27, 2009

    The Future of the Parallelism and its Challenges

Research and education in parallel computing technologies are more important than ever. Here I present a perspective on the past contributions, current status, and future direction of parallelism technologies. While machine power will grow impressively, increased parallelism, rather than clock rate, will be the driving force in computing for the foreseeable future. This ongoing shift toward parallel architectural paradigms is one of the greatest challenges for the microprocessor and software industries. In 2005, Justin Rattner, chief technology officer of Intel Corporation, said ‘We are at the cusp of a transition to multicore, multithreaded architectures, and we still have not demonstrated the ease of programming the move will require…’

    Key points:

    • A Little history
    • Parallelism Challenges
    • Under the hood, Parallelism Challenges
      • Synchronization problems
      • CAS problems
    • The future of the parallelism

    Click to read more ...

Tuesday
May 26, 2009

    Database Optimize patterns

Most websites and enterprise applications rely on the database backing them to store the application and customer data, so at some point the database could become the main performance and scalability bottleneck for your system. I'm here today to cure this! Key points:
    • Database supporters and resisters:
      • Database supporters: MySQL, SQL Server, and PostgreSQL
      • Database resisters: HBase, MongoDB, Redis, and others
    • Database Optimizing pattern:
      • What to store into the Database?
      • Field data types
      • The primary key and the indexes
      • Data retrieval, stored procedures, and ad-hoc queries
      • Caching

    Click to read more ...

Monday
May 25, 2009

Non-sequential unique identifier strategy question

(Please bear with me, I'm a new, passionate, confident and terrified programmer :D )

Background: I'm pre-launch and 1 year into the development of my application. My target is to be able to eventually handle millions of registered users with 5-10% of them concurrent. Up to this point I've used auto-increment to assign unique identifiers to rows. I am now considering switching to a non-sequential strategy. Oh, I'm using the LAMP configuration.

My reasons for avoiding auto-increment:
1. It complicates replication when scaling horizontally. The risk of collision is significant (when running multiple masters). Note: I've read the other entries in this forum that relate to ID generation and there have been some great suggestions -- including a strategy that uses auto-increment in a way that avoids this pitfall... That said, I'm still nervous about it.
2. Potential bottleneck when retrieving/assigning IDs -- IDs are assigned at the database.

My reasons for being nervous about non-sequential IDs:
1. To guarantee uniqueness, the IDs are going to be much larger -- potentially affecting performance significantly.

My new strategy (I haven't started to implement this... I'm waiting for someone smarter than me to steer me in the right direction):
1. Generate a guaranteed-unique ID by concatenating the user id (1-9 digits) and the UNIX timestamp (10 digits).
2. Convert the resulting 11-19 digit number to base 36. The resulting string will be alphanumeric and 6-10 characters long. This is, of course, much shorter (at least with regard to characters) than a standard GUID hash.
3. Pass the new identifier to a column in the database that is type CHAR() set to binary.

My questions:
1. Is this a valid strategy? Is my logic sound or flawed? Should I go back to being a graphic designer?
2. What is the potential hit to performance?
3. Is an 11-19 digit number (base 10) actually any larger (in terms of bytes) than its base-36 equivalent?

I appreciate your insights... and High Scalability for supplying this resource!
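
For what it's worth, here is a minimal sketch of the strategy described above; the user id and timestamp are made-up example values, and this only illustrates the encoding, not whether it is a good idea.

import time

BASE36 = "0123456789abcdefghijklmnopqrstuvwxyz"

def to_base36(n):
    """Encode a non-negative integer in base 36."""
    out = ""
    while n:
        n, r = divmod(n, 36)
        out = BASE36[r] + out
    return out or "0"

def make_id(user_id, ts=None):
    """Concatenate the user id (1-9 digits) with the UNIX timestamp (10 digits),
    then shorten the resulting 11-19 digit number with base 36.
    Note: two IDs generated for the same user within the same second collide."""
    if ts is None:
        ts = int(time.time())
    return to_base36(int("%d%d" % (user_id, ts)))

# Hypothetical example: user 12345 at timestamp 1243900800
print(make_id(12345, 1243900800))   # a 6-10 character alphanumeric string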

    Click to read more ...

Friday
May 22, 2009

    Distributed content system with bandwidth balancing

I am looking for a way to distribute files over servers in different physical locations. My main concern is that I have bandwidth limitations in each location, and I wish to spread the bandwidth load evenly. At the moment I just have 1:1 copies of the files on all servers, and have the application pick a random server to serve the file as a temp fix... It's a small video streaming service. I want to spoonfeed the stream to the client with a max bandwidth output, and support seek. At present I use PHP to limit the network stream, and read the file at a given offset sent as a GET parameter from the player for seek. It's pseudo streaming, but it works. I have been looking at MogileFS, which would solve the storage part. With MogileFS I can make use of my current PHP solution as it supports lighttpd and Apache (with mod_rewrite or similar). However I don't see how I can apply MogileFS to check for bandwidth % usage. Any recommendations for how I can solve this?
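
One possible direction, sketched in Python for brevity even though the setup above is PHP: weight the random server pick by each location's spare bandwidth instead of choosing uniformly. The server names and numbers are invented, and where the live usage figures come from (SNMP counters, server-side accounting, etc.) is left open.

import random

# Hypothetical per-location bandwidth caps and current usage, in Mbit/s.
servers = {
    "node-a": {"cap": 100.0, "used": 80.0},
    "node-b": {"cap": 100.0, "used": 20.0},
    "node-c": {"cap": 50.0,  "used": 10.0},
}

def pick_server(servers):
    """Pick a server with probability proportional to its spare bandwidth."""
    spare = {name: max(s["cap"] - s["used"], 0.0) for name, s in servers.items()}
    total = sum(spare.values())
    if total <= 0:
        return random.choice(list(servers))   # everything is saturated; fall back to uniform
    r = random.uniform(0, total)
    for name, free in spare.items():
        r -= free
        if r <= 0:
            return name
    return name                               # float rounding fallback

print(pick_server(servers))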

    Click to read more ...

Wednesday
May 20, 2009

    Paper: Flux: An Adaptive Partitioning Operator for Continuous Query Systems

    At the core of the new real-time web, which is really really old, are continuous queries. I like how this paper proposed to handle dynamic demand and dynamic resource availability by making the underlying system adaptable, which seems like a very cloudy kind of thing to do. Abstract:

The long-running nature of continuous queries poses new scalability challenges for dataflow processing. CQ systems execute pipelined dataflows that may be shared across multiple queries. The scalability of these dataflows is limited by their constituent, stateful operators – e.g. windowed joins or grouping operators. To scale such operators, a natural solution is to partition them across a shared-nothing platform. But in the CQ context, traditional, static techniques for partitioned parallelism can exhibit detrimental imbalances as workload and runtime conditions evolve. Long-running CQ dataflows must continue to function robustly in the face of these imbalances. To address this challenge, we introduce a dataflow operator called Flux that encapsulates adaptive state partitioning and dataflow routing. Flux is placed between producer-consumer stages in a dataflow pipeline to repartition stateful operators while the pipeline is still executing. We present the Flux architecture, along with repartitioning policies that can be used for CQ operators under shifting processing and memory loads. We show that the Flux mechanism and these policies can provide several factors improvement in throughput and orders of magnitude improvement in average latency over the static case.
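
As a generic illustration (not the Flux operator itself), here is a toy sketch of routing through a movable partition map: tuples hash to logical partitions, and a partition can be reassigned from an overloaded node to a lighter one while routing continues, which is the basic shape of the adaptive repartitioning the paper describes.

import hashlib

class PartitionRouter:
    """Toy router: keys hash to one of num_partitions logical partitions, and
    the partition-to-node map can be changed at runtime to rebalance load."""

    def __init__(self, num_partitions, nodes):
        self.num_partitions = num_partitions
        # initial round-robin assignment of partitions to nodes
        self.partition_map = {p: nodes[p % len(nodes)] for p in range(num_partitions)}

    def route(self, key):
        digest = hashlib.md5(str(key).encode("utf-8")).hexdigest()
        partition = int(digest, 16) % self.num_partitions
        return partition, self.partition_map[partition]

    def move_partition(self, partition, new_node):
        # A real system would migrate the partition's operator state to
        # new_node before flipping the map entry.
        self.partition_map[partition] = new_node

router = PartitionRouter(num_partitions=8, nodes=["node-1", "node-2"])
print(router.route("user-42"))
router.move_partition(3, "node-3")    # rebalance a hot partition onto a new node
print(router.route("user-42"))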

    Click to read more ...

Tuesday
May 19, 2009

    Scaling Memcached: 500,000+ Operations/Second with a Single-Socket UltraSPARC T2

A software-based distributed caching system such as memcached is an important piece of today's largest Internet sites that support millions of concurrent users and deliver user-friendly response times. The distributed nature of the memcached design transforms 1000s of servers into one large caching pool with gigabytes of memory per node. This blog entry explores single-instance memcached scalability for a few usage patterns. The table below shows out-of-the-box (no custom OS rewrites or network tuning required) performance with 10G networking hardware and one single-socket UltraSPARC T2-based server with 8 cores and 8 threads per core (64 threads on a chip):

Object Size / Ops/Sec / Bandwidth
100 bytes / 530,000 / 1.2 Gb/s
2048 bytes / 370,000 / 6.9 Gb/s
4096 bytes / 255,000 / 9.2 Gb/s

Check out the link for more details!
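
As a rough illustration of how numbers like these get measured, here is a single-client loop using the python-memcached client; the host, object size, and iteration count are arbitrary, and the published results above come from a far more serious multi-client test harness.

import time
import memcache

mc = memcache.Client(["127.0.0.1:11211"])   # local memcached instance (assumed)
payload = "x" * 100                         # ~100 byte objects, like the first row above
n = 100000

start = time.time()
for i in range(n):
    key = "bench:%d" % (i % 1000)
    mc.set(key, payload)
    mc.get(key)
elapsed = time.time() - start
print("%.0f ops/sec from a single client" % (2 * n / elapsed))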

    Click to read more ...

Sunday
May 17, 2009

    Scaling Django Web Apps by Mike Malone

Film buffs will recognize Django as a classic 1966 spaghetti western that spawned hundreds of imitators. Web heads will certainly first think of Django as the classic Python based web framework that has also spawned hundreds of imitators and has become the gold standard framework for the web. Mike Malone, who worked on Pownce, a blogging tool now owned by Six Apart, tells in this very informative EuroDjangoCon presentation how Pownce scaled using Django in the real world. I was surprised to learn how large Pownce was: hundreds of requests/sec, thousands of DB operations/sec, millions of user relationships, millions of notes, and terabytes of static data. Django has a lot of functionality in the box to help you scale, but if you want to scale large it turns out Django has some limitations and Mike tells you what these are and also provides some code to get around them. Mike's talk, although Django specific, will really help anyone creating applications on the web. There's a lot of useful Django specific advice and a lot of general good design ideas as well. The topics covered in the talk are:

  • Django uses a shared nothing architecture. * The database is responsible for scaling state. * Application servers are horizontally scalable because they are stateless.
  • Scalability vs Performance. Performance is not the same as scalability: a scalable system doesn't need to change when the size of the problem changes.
  • Types of scalability: * Vertical - buy bigger hardware * Horizontal - the ability to increase a system's capacity by adding more processing units (servers)
  • Cache to remove load from the database server.
  • Built-in Django Caching: Per-site caching, per-view cache, template fragment cache - not so effective on heavily personalized pages
  • Low-level Cache API is used to cache at any level of granularity.
  • Pownce cached individual objects and lists of object IDs.
  • The hard part of caching is invalidation. How do you know when a value changes such that the cache should be updated so readers see valid values? * Invalidate when a model is saved or deleted. * Invalidate on post_save, not pre_save. * This leaves a small race condition, so: ** Instead of deleting, set the cache key to None for a short period of time ** Instead of using set to cache objects, use add, which fails if there's already something stored for the key (see the sketch after this list).
  • Pownce ran memcached on their web servers * Their servers were not CPU bound, they were IO and memory bound, so they compressed objects before caching.
  • Work is spread between multiple application servers using a load balancer.
  • Best way to reduce load on your app servers: don’t use them to do hard stuff.
  • Pownce used software load balancing * Hardware load balancers are expensive ($35K) and you need two for redundancy. * Software load balancers are cheap and easy. * Some options: Perlbal, Pound, HAProxy, Varnish, Nginx * Chose a single Perlbal server. This was a Single Point of Failure but they didn't have the money for hardware. Liked Perlbal's reproxying feature.
  • Used a ghetto queuing solution (MySQL + cron) to process work asynchronously in the background.
  • At scale their system needed to have high availability and be partitionable. * The RDBMS’s consistency requirements get in our way * Most sharding / federation schemes are kludges that trade consistency * There are many non relational databases (CouchDB, Cassandra, Tokyo Cabinet) but they aren't easy to use with Django.
  • Rules for denormalization: * Start with a normalized database * Selectively denormalize things as they become bottlenecks * Denormalized counts, copied fields, etc. can be updated in signal handlers
  • Joins are evil and Django makes it really easy to do joins.
  • Database Read Performance * Since your typical web app is at least 80% reads, adding MySQL master-slave replication can solve a lot of problems. * Django doesn't support multiple database connections, but there's a library, linked to at the end of this post, to help. * A big problem is slave lag. When you write to the primary it takes time for the state to be transferred to the read slaves, so readers may see an old value.
  • Database Write Performance * Federate. Split tables across different servers. Not well supported by Django. * Vertical Partitioning: split tables that aren't joined across database servers. * Horizontal Partitioning: split a single table across databases (e.g., the user table). The problem is that autoincrement no longer works, and Django uses autoincrement for primary keys.
  • Monitoring - You can't improve what you don't measure * Products: Ganglia and Munin
  • Measure * Server load, CPU usage, I/O * Database QPS * Memcache QPS, hit rate, evictions * Queue lengths * Anything else interesting
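
Here is a minimal sketch of the invalidation pattern from the caching bullets above, using Django's cache API and signals; the Note model, key format, and timeouts are invented for illustration.

from django.core.cache import cache
from django.db.models.signals import post_save, post_delete

from myapp.models import Note   # hypothetical model

def note_key(note_id):
    return "note:%d" % note_id

def get_note(note_id):
    key = note_key(note_id)
    note = cache.get(key)
    if note is None:
        note = Note.objects.get(pk=note_id)
        # add() does nothing if the key is already present, so a reader racing
        # with a writer can't stomp a fresher (or deliberately pinned) value.
        cache.add(key, note, 300)
    return note

def invalidate_note(sender, instance, **kwargs):
    # Invalidate after the save (post_save, not pre_save). Instead of deleting,
    # park None in the key for a few seconds so readers in the race window
    # don't re-cache stale data.
    cache.set(note_key(instance.pk), None, 5)

post_save.connect(invalidate_note, sender=Note)
post_delete.connect(invalidate_note, sender=Note)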

    Related Articles

  • Interview with Leah Culver: The Making of Pownce
  • Django Caching Code
  • Django Multidb Code
  • EuroDjangoCon Presentations

    Click to read more ...