Tuesday
Dec 16, 2008

Facebook is Hiring

I thought with the job situation these days that people might be interested in some open jobs at Facebook. Here's what's available:


Facebook is hiring! We are looking for a Systems Engineer/Architect and Site Reliability Engineer. I have attached the job descriptions below. If you are interested, please contact Michelle Bostock mbostock-at-facebook.com. Thanks and Happy Holidays!

Systems Architect, Palo Alto, CA

Description: Facebook is seeking a seasoned Systems Architect to join the Operations team. The position is full-time, is based in our main office in downtown Palo Alto, and will report to the Manager of Systems Operations.

Responsibilities:
  • Analyze application flow and infrastructure design to improve performance and scalability of the site
  • Collaborate on design of services infrastructure, from servers to networking
  • Monitor, analyze, and make recommendations as appropriate to improve site stability and availability
  • Evaluate hardware and software technologies to improve site efficiency and performance
  • Troubleshoot and solve issues with hardware, applications, and network components
  • Lead team efforts from design to implementation, prioritizing tasks and resources while interacting with Engineering and Operations
  • Document current and future configuration processes and policies
  • Participate in 24x7 on-call support

Requirements:
  • B.S. in Computer Science or equivalent experience
  • 4+ years of experience in Operations with large web farms
  • Extensive knowledge of web architecture and technologies, including Linux, Apache, MySQL, PHP, TCP/IP, security, HTTP, LDAP and MTAs
  • Strong background/interest in application and infrastructure design
  • Scripting and programming skills
  • Excellent verbal and written communication skills
Site Reliability Engineer, Palo Alto, CA

Description: Facebook is seeking talented operations engineers to join the Site Reliability Engineering team. The ideal candidate will have strong communication skills, a passion for tinkering with Linux, and an almost insane fondness for fast-paced, seat-of-your-pants troubleshooting and crisis management. The position is full-time and is based in our main office in downtown Palo Alto. This position reports to the Manager of Site Reliability Engineering.

Responsibilities:
  • Monitor the stability and performance of the website
  • Remotely troubleshoot and diagnose hardware problems
  • Debug issues with Linux software, applications and network
  • Resolve technical challenges encountered in LAMP technologies
  • Develop and maintain monitoring tools and automation systems
  • Predict and respond to utilization variances across multiple datacenters
  • Identify and triage all outage related events
  • Facilitate communication, coordinate escalation, and work with subject matter experts to implement critical fixes
  • Automate and streamline processes
  • Track issues and run reports

Requirements:
  • 2-3+ years of Linux support/sysadmin experience in an Internet operations environment
  • BA/BS in Computer Science or a related field, or equivalent experience
  • Working knowledge of Linux, Cisco, TCP/IP, Apache and MySQL
  • Experience working with network management systems and monitoring tools, such as Nagios, Ganglia and Cacti
  • Competency in Shell, PHP, Perl or Python; C is a plus
  • Solid understanding of web services architecture and commonly employed technologies
  • A sense of urgency in responding to and resolving critical issues that relate to the performance of the site and/or core infrastructure
  • Excellent verbal and written communication skills
  • Participation in a shifted coverage schedule, including working nights and on-call rotations


Sunday
Dec 14, 2008

Scaling MySQL on a 256-way T5440 server using Solaris ZFS and Java 1.7

How do you scale MySQL on a 32-core system with 256 threads? Diagonal scalability in a box. An impressive benchmark that achieved more than 79,000 SQL queries per second on a single 4 RU server! Is this real? If so, what is the role of good old horizontal scalability? The goals of the benchmark:

  1. Reach a high throughput of SQL queries on a 256-way Sun SPARC Enterprise T5440
  2. Do it 21st century style, i.e. with MySQL and ZFS, not 20th century style, i.e. with OraSybInf... and VxFS
  3. Do it with minimal tuning, i.e. as close to out-of-the-box as possible


Saturday
Dec 13, 2008

Strategy: Facebook Tweaks to Handle 6 Times as Many Memcached Requests

Our latest strategy is taken from a great post by Paul Saab of Facebook, detailing how, with the changes Facebook has made to memcached, they have:

...been able to scale memcached to handle 200,000 UDP requests per second with an average latency of 173 microseconds. The total throughput achieved is 300,000 UDP requests/s, but the latency at that request rate is too high to be useful in our system. This is an amazing increase from 50,000 UDP requests/s using the stock version of Linux and memcached.

To scale, Facebook has hundreds of thousands of TCP connections open to their memcached processes. First, this is still amazing. Not so long ago you could never have done this. Optimizing connection use was always a priority because the OS simply couldn't handle large numbers of connections, large numbers of threads, or large numbers of CPUs. Getting to this point is a big accomplishment. Still, at that scale new problems crop up that have to be solved.

Some of the problems Facebook faced and fixed:

  • Per connection consumption of resources. What works well at a low number of inputs can totally kill a system as inputs grow. Memcached uses a per-connection buffer, which adds up to a lot of memory that could be used to store data. Nothing wrong with this design choice, but Facebook made changes to use a per-thread shared connection buffer and reclaimed gigabytes of RAM on each server.
  • Kernel lock contention. Facebook discovered under load there was lock contention when transmitting through a single UDP socket from multiple threads. Sockets are data structures too and they are subject to the usual lock contention issues. Facebook got around this issue by maintaining separate reply sockets in different threads so they would not contend with the receive sockets. They found another bottleneck in Linux’s “netdevice” layer that sits in-between IP and device drivers. They changed the dequeue algorithm to batch dequeues so more work was done when they had the CPU.
  • Application lock contention. Nothing brings out lock issues like moving to more cores. Facebook found that when they moved to 8-core machines a global lock protecting stats collection consumed 20-30% of CPU. In applications that require little processing per request, as memcached does, this is not unexpected, but doing real work with your CPU is a better idea. So they collected stats on a per-thread basis and then calculated a global view on demand (a small sketch of this pattern follows the list).
  • Interrupt floods and starvation. With so much traffic directed at a single server, the hardware can flood the CPU(s) with interrupts and keep the CPUs from doing "real" work. To get around this problem Facebook implemented some complicated strategies to load balance IO across all the cores. As I am less clever, I might try more network cards with a TCP Offload Engine.
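To make the stats fix above a little more concrete, here is a minimal Java sketch of the per-thread counter pattern. This is illustrative only, not Facebook's C code; the class name, slot scheme, and thread bound are assumptions.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch: each worker thread increments its own counter, so the
// hot path never takes a shared lock; a global total is summed only on demand.
class PerThreadStats {
    private static final int MAX_THREADS = 64;          // assumed worker count
    private final AtomicLong[] slots = new AtomicLong[MAX_THREADS];

    PerThreadStats() {
        for (int i = 0; i < MAX_THREADS; i++) slots[i] = new AtomicLong();
    }

    // Called on the hot path by the thread that owns this slot; no contention.
    void increment(int threadSlot) {
        slots[threadSlot].incrementAndGet();
    }

    // Called rarely, e.g. when a "stats" command arrives.
    long total() {
        long sum = 0;
        for (AtomicLong slot : slots) sum += slot.get();
        return sum;
    }
}
```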

When you read Paul's article, keep in mind the incredible number of man hours that went into profiling the system, not just their application but the entire software and hardware stack. Then add in the research, planning, and trying different solutions to see if anything changed for the better. It's a lot of work. Notice that using a nifty new parallel language or moving to a cloud wouldn't have made a bit of difference. It's complete mastery of their system that made the difference.

    A summary of potential strategies:
  • Profile everything. Problems are always specific. The understanding of the problem must be specific. The fix must be specific.
  • Burn profiling into your regression tests. Detect when and where performance tanks as a regular part of your build.
  • Use resources in proportion to what grows slowest. This requires multiplexing, but at least your resource usage is more predictable and bounded.
  • Batch work. When you have the CPU, do all the work you possibly can within your quantum, or the whole system grinds to a halt in processing overhead (a minimal batching sketch follows this list).
  • Do work and maintain resources per task. Otherwise locking for shared resources takes more and more time when there's less and less time to do the work that needs to be done.
  • Change algorithms. Sometimes you simply need to do things differently. Tweaking will only get you so far.
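As a purely illustrative example of the batching strategy, here is a small Java sketch of a worker that drains a queue in batches rather than paying queue and lock overhead once per item, in the spirit of the batched dequeue Facebook added to the netdevice layer. The class name and batch size are made up.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch: block for the first item, then grab whatever else is
// already waiting and process the whole batch in one pass.
class BatchingWorker implements Runnable {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private static final int MAX_BATCH = 256;   // assumed batch limit

    public void submit(String item) { queue.add(item); }

    public void run() {
        List<String> batch = new ArrayList<>(MAX_BATCH);
        while (!Thread.currentThread().isInterrupted()) {
            try {
                batch.add(queue.take());                 // wait for the first item
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            queue.drainTo(batch, MAX_BATCH - 1);         // then batch up the rest
            for (String item : batch) {
                process(item);
            }
            batch.clear();
        }
    }

    private void process(String item) { /* application work goes here */ }
}
```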

    You can find their changes on github, the hub that says "git."
Tuesday
Dec 09, 2008

    Rules of Thumb in Data Engineering

This is an interesting and still relevant research paper by Jim Gray and Prashant Shenoy of Microsoft Research that examines rules of thumb for the design of data storage systems. It looks at storage, processing, and networking costs, ratios, and trends, with a particular focus on performance and price/performance. Jim Gray has an updated presentation on this interesting topic: Long Term Storage Trends and You. Robin Harris has a great post that reflects on the Rules of Thumb whitepaper on his StorageMojo blog: Architecting the Internet Data Center - Parts I-IV.


    Saturday
Dec 06, 2008

    Paper: Real-world Concurrency

    An excellent article by Bryan Cantrill and Jeff Bonwick on how to write multi-threaded code. With more processors and no magic bullet solution for how to use them, knowing how to write multiprocessor code that doesn't screw up your system is still a valuable skill. Some topics:

  • Know your cold paths from your hot paths.
  • Intuition is frequently wrong—be data intensive.
  • Know when—and when not—to break up a lock.
  • Be wary of readers/writer locks.
  • Consider per-CPU locking.
  • Know when to broadcast—and when to signal.
  • Learn to debug postmortem.
  • Design your systems to be composable.
  • Don't use a semaphore where a mutex would suffice.
  • Consider memory retiring to implement per-chain hash-table locks.
  • Be aware of false sharing.
  • Consider using nonblocking synchronization routines to monitor contention.
  • When reacquiring locks, consider using generation counts to detect state change.
  • Use wait- and lock-free structures only if you absolutely must.
  • Prepare for the thrill of victory—and the agony of defeat.

While I don't agree that code using locks can be made composable, this article covers a lot of very useful nitty-gritty details that will up your expert rating a couple of points.


Friday
Dec 05, 2008

    Scalability Perspectives #4: Kevin Kelly – One Machine

    Scalability Perspectives is a series of posts that highlights the ideas that will shape the next decade of IT architecture. Each post is dedicated to a thought leader of the information age and his vision of the future. Be warned though – the journey into the minds and perspectives of these people requires an open mind. Warning #2: this post is wild.

    Kevin Kelly

Kevin Kelly is Senior Maverick at Wired magazine. He helped launch Wired in 1993, and served as its Executive Editor until January 1999. He co-founded the ongoing Hackers' Conference, and was involved with the launch of the WELL, a pioneering online service started in 1985. He authored the best-selling New Rules for the New Economy and the classic book on decentralized emergent systems, Out of Control.

    One Machine

There is only one time in the history of each planet when its inhabitants first wire up its innumerable parts to make one large Machine. Later that Machine may run faster, but there is only one time when it is born. You and I are alive at this moment. Is this global web of computers, servers and trunk lines a mere mechanical circuit, a very large tool, or does it reach a threshold where something, well, different happens?

Kevin Kelly's hypothesis is this: The rapidly increasing sum of all computational devices in the world connected online, including wirelessly, forms a superorganism of computation with its own emergent behaviors. I define the One Machine as the emerging superorganism of computers. It is a megasupercomputer composed of billions of sub computers. The sub computers can compute individually on their own, and from most perspectives these units are distinct complete pieces of gear. But there is an emerging smartness in their collective that is smarter than any individual computer. We could say learning (or smartness) occurs at the level of the superorganism.

    The Next 6500 Days of the Web

    Kevin Kelly recently gave a short talk on the upcoming Web 10.0 at the Web 2.0 Summit in San Francisco. It is like an update to his previous TED talk on Predicting the next 5000 days of the web. He makes us realize that the Web is only around 6500 days old and argues that the next 6500 days will be something entirely different.

    Dimensions of the One Machine

Kevin Kelly's post on his blog The Technium back from 2007 shows us the dimensions of the One Machine: The next stage in human technological evolution is a single thinking/web/computer that is planetary in dimensions. This planetary computer will be the largest, most complex and most dependable machine we have ever built. It will also be the platform that most business and culture will run on. Today it contains approximately 1.2 billion personal computers, 2.7 billion cell phones, 1.3 billion land phones, 27 million data servers, and 80 million wireless PDAs. The processor chips of all these parts are feeding the computation of the internet/web/telecommunications system.

A very rough estimate of the computing power of this Machine is that it contains a billion times a billion, or one quintillion (10^18), transistors. There are about 100 billion neurons in the human brain. Today the Machine has 5 orders of magnitude more transistors than you have neurons in your head. And the Machine, unlike your brain, is doubling in power every couple of years at the minimum.

If the Machine has 100 quadrillion transistors, how fast is it running? If we include spam, there are 196 billion emails sent every day. That's 2.2 million per second, or 2 megahertz. Every year 1 trillion text messages are sent. That works out to 31,000 per second, or 31 kilohertz. Each day 14 billion instant messages are sent, at 162 kilohertz. The number of searches runs at 14 kilohertz. Links are clicked at the rate of 520,000 per second, or .5 megahertz.

There are 20 billion visible, searchable web pages and another 900 billion dark, unsearchable, or deep web pages. The average number of links found on each searchable web page is 62. Assuming the same count for dynamic pages, that means there are 55 trillion links in the full web. We could think of each link as a synapse: a potential connection waiting to be made. There are roughly between 100 billion and 100 trillion synapses in the human brain, which puts the Machine in the same neighborhood as our brains. We could start by saying the Machine currently has 1 HB (Human Brain) equivalent. That measure might hold up for a decade or so, but after it gets to 100 HB, or 10,000 HB, it begins to feel like using inches to measure galactic space.

Check out Kevin Kelly's blog for the conclusions and more (wild?) ideas. How do you see the future of the Web?
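For readers who want to check Kelly's back-of-the-envelope rates, here is the arithmetic as a small, purely illustrative Java snippet; all inputs are the figures quoted above.

```java
// Rough recomputation of the rates quoted above, using Kelly's own inputs.
public class OneMachineRates {
    public static void main(String[] args) {
        double secPerDay = 24 * 3600.0;
        System.out.printf("emails/sec  ~ %.2g%n", 196e9 / secPerDay);          // ~2.3e6  (~2 MHz)
        System.out.printf("texts/sec   ~ %.2g%n", 1e12 / (365 * secPerDay));   // ~3.2e4  (~31 kHz)
        System.out.printf("IMs/sec     ~ %.2g%n", 14e9 / secPerDay);           // ~1.6e5  (~162 kHz)
        System.out.printf("total links ~ %.2g%n", (20e9 + 900e9) * 62);        // ~5.7e13 (~55 trillion)
    }
}
```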


    Friday
Dec 05, 2008

    Sprinkle - Provisioning Tool to Build Remote Servers

    At 37 Signals Joshua Sierles describes how 37 Signals uses Sprinkle to configure their servers within EC2. Sprinkle defines a domain specific meta-language for describing and processing the installation of software. You can find an interesting discussion of Sprinkle's creation story by the creator himself, Marcus Crafter, in Sprinkle Some Powder!. Marcus divides provisioning tools into two categories:

  • Task Based - the tool issues a list of commands to run on the remote system, either remotely via a network connection or smart client.
  • Policy/state Based - the tool determines what needs to be run on the remote system by examining its current and final state.

Sprinkle combines both models in a chocolate-in-my-peanut-butter approach, using normal Ruby code as the DSL (domain specific language) to declaratively describe remote system configurations. 37 Signals likes the use of Ruby as the DSL because it makes learning a separate syntax unnecessary. I've successfully done similar things in Perl. You already have a scripting language, why layer another one on top? One reason not to is that you've now tied configuration and execution together so that only one tool can control the process, but the leverage is so high with this approach it's hard to ignore. There are all the usual bits about defining packages, dependencies, installation logic, pre and post actions, etc. The format is compact and clear because that's how Ruby is, and the operations are task specific so there's no fluff. Capistrano is used to communicate with remote systems, though that is pluggable. 37 Signals uses the EC2 security group as a way to specify the role an instance should take on when it boots. A configuration script that can handle all roles is shipped with a nearly complete functional base image. Sprinkle then configures the system the rest of the way based on the passed-in role. Joshua says they like this approach better than Puppet because it doesn't rely on a centralized configuration server or "pushing large sets of commands over SSH manually." There's always more than one way to do "it", and Sprinkle carves out an interesting niche in the provisioning space. The 37 Signals approach doesn't scale to a large organization with many different flavors of servers, but for a specific set of tightly cooperating servers it's a very simple, clean, and robust way of doing business.

Related Articles: Product: Puppet the Automated Administration System.


Wednesday
Dec 03, 2008

    Java World Interview on Scalability and Other Java Scalability Secrets

OK, this interview is with me on Java scalability issues. I sound like a bigger idiot than I would like, but I suppose it could have been worse. The Java World folks were very nice and did a good job, so there's no blame on them :-) The interview went in an interesting direction, but there's more I'd like to add, and I will do so here. Two major rules regarding Java and scalability have popped out at me:

  • Java – It's the platform, stupid. Java the language isn't the big win. The big win is the ecosystem building up around the JVM: the libraries and toolsets.
  • Java – It's the community, stupid. A lot of creativity is being expended on leveraging the Java platform to meet scalability challenges. The amazing community that has built up around Java is pushing Java to the next level in almost every direction imaginable.

The fecundity of the Java ecosystem can most readily be seen in the efforts to tame our multi-core future. There's a multi-core crisis going on, in case you haven't heard. It's all you'll hear mentioned in the halls of the Pentagon. The CPU wizards have maxed out on clock speed and the only way we can scale is by adding more cores. And we don't know how to do that. At 100 cores common ways of doing things break down. Locks don't scale. Cache contention for shared memory slows us down. Bandwidth on the bus is limited. The TLB for managing more memory is in short supply. And we need more high speed network cards to handle faster CPUs. And that's just at the hardware level. It's worse for programmers. Locking is just a nightmare. I didn't believe that at first. Early in my career I worked on several multi-core systems. I thought everything was cool. Be careful and it will all work out. But have a group work on the code and it all goes to hell. People add functions and locks and hold them for too long. Problems like deadlock, priority inversion, and high latency all kill a system (a small lock-ordering sketch follows below). What can we do? As we'll see, Java, and more importantly the JVM, have become a platform for many interesting technologies and scalability patterns. Let's take a look at a few.
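To make the deadlock point a little more concrete, here is a small, purely illustrative Java sketch of one classic discipline for avoiding it: always acquire locks in a fixed global order. The Account example is invented, not from the interview.

```java
// Illustrative only: a two-lock transfer that cannot deadlock with itself,
// because every thread acquires the account locks in the same (id) order.
class Account {
    final long id;
    private long balance;

    Account(long id, long balance) { this.id = id; this.balance = balance; }

    static void transfer(Account from, Account to, long amount) {
        Account first  = from.id < to.id ? from : to;   // fixed global lock order
        Account second = from.id < to.id ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.balance -= amount;
                to.balance += amount;
            }
        }
    }
}
```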

    How Java affects both Performance and Scalability

    Peter Williams in this very informative blog post discusses how Java affects both performance and scalability. The main points are:
  • Java does not scale any better, or worse, than any other general purpose language.
  • How easily you can increase the number of requests/sec the system can handle derives from the architecture of the system (sharding, caching, avoiding the database, etc.); a tiny shard-routing sketch appears after this list.
  • Java's culture encourages practices that scale only to medium-sized systems because it encourages techniques that do not scale well: multi-threading, shared state, vertical scaling, large monolithic components, and multiple tiers with lots of layers.
  • Java is about performance, and performance is the reason to use Java; that good performance is also why it scales pretty well out of the box.
  • Scaling with Java requires going your own way and bucking the culture to implement more scalable practices.

Alain Penders makes some good points in the comments:
  • Java is scalable not because it's better suited for building scalable systems, but because how to build scalable systems with it is well understood and the components you need to do so are widely available. Not only are those components widely available, they are available from multiple vendors — some free, some commercial — so you're never locked in, an aspect that's extremely valuable when producing commercial software.
  • Ruby & Rails applications can be built to scale well, but the techniques are not as well understood as they are for Java.

Ron takes an opposing view and says Java is less scalable:
  • During garbage collection all threads are blocked and the garbage collection time can expand to minutes. These huge latencies effectively limit memory which limits scalability.
  • Increased garbage collection latencies make Java less useful for applications that use heartbeats, make real-time trades, etc.
  • Real Time Java API extensions were developed as a way of addressing garbage collection problems.

Greg Frank weighs in with some excellent points:
  • Java language itself has nothing to do with scalability.
  • Ron's comments about garbage collection don't apply to post-1.4 JVMs.
  • Old-school J2EE is dead.
  • The monolithic server is being replaced with advanced patterns based on new technologies like jgroups, ehcache and terracotta.
  • The true force pushing scalability is the java community itself. Java is merely a convenient and commonly spoken language that allows for the formation of a global community.
  • We should stop debating language internals and start debating sophisticated design patterns that promote scalability.
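As a purely illustrative footnote to Peter Williams' point above that scalability comes from the architecture (sharding, caching, and so on) rather than from the language, here is a tiny Java sketch of shard routing; the class and method names are invented.

```java
// Illustrative only: route a user's data to one of N database shards by id.
class ShardRouter {
    private final int shardCount;

    ShardRouter(int shardCount) {
        if (shardCount <= 0) throw new IllegalArgumentException("shardCount must be positive");
        this.shardCount = shardCount;
    }

    // Deterministic mapping from user id to shard index, e.g. shard 7 of 16.
    int shardFor(long userId) {
        return (int) Math.floorMod(userId, (long) shardCount);
    }
}
```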

    The Top 10 Ways to Botch Enterprise Java Application Scalability and Reliability

    This is a wonderful presentation by Oracle’s Cameron Purdy. Here’s a PDF. Cameron was CEO of Tangosol before Oracle bought them out. Tangosol made Coherence, a distributed cache. Cameron is a long time prolific contributor to the Java community. His presentation is a must see. He’s both entertaining and technically excellent. The main points he makes in the presentation are:
  • 1. Avoid proprietary features/Believe product claims.
  • 2. Assume the network works
  • 3. Use big JVM machine heaps
  • 4. Use a one-size-fits-all architecture
  • 5. Assume disaster-recovery can be added when it becomes necessary
  • 6. Abuse Abstractions/Avoid abstractions
  • 7. Introduce a single point of bottleneck/Introduce a single point of failure
  • 8. Abuse the database/Avoid the database
  • 9. Assume you are smarter than the infrastructure/Follow the rules blindly
  • 10. Optimize performance assuming that it will translate to scalability/Ignore the potential impact of performance on scalability (and vice-versa). If these seem backwards, remember these are botching strategies. You'll want to see the presentation to see each point fleshed out in more detail.

    Azul

Azul is a Java Compute Appliance and is the ultimate scale-up play for Java. It kind of does what Google App Engine does at the framework level, but does it to the JVM at the hardware level. Current standard practice is to deploy Java applications across a cluster of commodity servers. Azul does the opposite. It goes big. The most recent release can contain up to 864 processor cores and 768 GB of memory. That's big. Azul transparently runs unmodified Java applications on their specialized hardware platform, which allows even the most mild mannered of Java apps to scale. Their hardware-assisted garbage collector dramatically reduces application pauses and gives access to hundreds of gigabytes of RAM. Some very impressive performance improvements are possible. In one case study, Breakthrough Scalability of an Application Constrained to an x86 Server, an application was given access to 384 cores and 128 GB of memory on an Azul compute appliance. The result was a 45x improvement in scalability. Scalability was increased along a number of dimensions (quoted from their article):
  • Thread count: Initially the application was artificially limiting the number of threads that were allowed to execute. With few threads, the application started at 2,200 OPS on Azul (below the 14,000 native score), but by simply increasing the thread pool size and expanding the heap, throughput jumped to 60,000 OPS (a trivial pool-sizing sketch follows this list).
  • Memory locks: When many threads access the same piece of memory, traditional systems force developers to serialize the threads to make sure two threads do not simultaneously change the same memory location (similar to how airlines make sure the same seat is not assigned to two different people.) Azul appliances can detect such collisions in real time and assure correct execution. The removal of two such "hot locks" and a further increase in the number of threads and heap size achieved a new peak of 115,000 OPS.
  • Logging: Slow applications can afford to maintain logs of unneeded events or even multiple copies of the same information. As throughput increases, the sheer volume of such logs makes them difficult to analyze, and it becomes more practical to log only what is needed. Reducing the amount of logging and addressing a related single-threaded bottleneck raised the peak to 350,000 OPS.
  • More locks: With the above steps taken, a new lock was exposed as a barrier to scalability. Once that was resolved the compute appliance was able to deliver 630,000 OPS – a 45x improvement over the original native performance!

If Azul is so cool, why aren't all applications being run on Azul? Buying a completely proprietary hardware platform is too big a risk without a gigantic throbbing pain point. I would like to see Azul open their own cloud so we could get in with low cost and risk.
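The first dimension above, thread count, often comes down to not hard-coding a small pool size. Here is a minimal, purely illustrative Java sketch; the system property name is made up.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Size the worker pool from configuration instead of a hard-coded constant,
// so it can grow when far more cores are available to the JVM.
public class WorkerPool {
    public static ExecutorService create() {
        int defaultSize = Runtime.getRuntime().availableProcessors() * 2;
        int size = Integer.getInteger("app.worker.threads", defaultSize);
        return Executors.newFixedThreadPool(size);
    }
}
```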

    X10

    X10 is not just an inexpensive home automation control system. X10 is also:
    A type-safe, modern, parallel, distributed object-oriented language intended to be very easily accessible to Java(TM) programmers. It is targeted to future low-end and high-end systems with nodes that are built out of multi-core SMP chips with non-uniform memory hierarchies, and interconnected in scalable cluster configurations. A member of the Partitioned Global Address Space (PGAS) family of languages, X10 highlights the explicit reification of locality in the form of places; lightweight activities embodied in async, future, foreach, and ateach constructs; constructs for termination detection (finish) and phased computation (clocks); the use of lock-free synchronization (atomic blocks); and the manipulation of global arrays and data structures. An Eclipse-based Integrated Development Environment (IDE) has been developed at IBM for X10 to help further increase programmer productivity by providing state-of-the-art functionality for viewing, editing, navigating, executing, and manipulating X10 programs.
    X10 is built on top of Java. X10 adds:
  • Value types, nullable types
  • Array language: multi-dimensional arrays, aggregate operations
  • New concurrency features: activities (async, future), atomic blocks, clocks
  • Distribution: places, distributed arrays

X10 does not have:
  • Dynamic class loading
  • Java's concurrency features: the thread library, volatile, synchronized, wait, notify

X10 restricts:
  • Class variables and static initialization

The result is hopefully a language that can be scaled across a cluster of multi-core processors yet still has the familiar Java syntax and is developed using familiar Java development tools like Eclipse.

Clojure, JRuby, Jython

    More of the "it’s the platform stupid." We have many different and interesting languages being built on the JVM platform.

JRuby

JRuby is a 100% pure-Java implementation of the Ruby programming language.

Jython

Jython is a 100% pure-Java implementation of the Python programming language.

    Clojure

I first came upon Clojure while researching software transactional memory (STM) as a solution to the problem of how to make massively parallel programs easy to write. STM is a concurrency control mechanism, analogous to database transactions, for controlling access to shared memory in concurrent computing. It functions as an alternative to lock-based synchronization. It's supposed to make writing parallel programs easier. The idea is you can do away with all those nasty locks that cause so many problems. Some have found STM's performance very disappointing for larger scale applications. And it may ultimately fail simply because not everything inside a transaction boundary is memory. Programs routinely call out to other services and peripherals. How can STM work in real world environments? STM may not turn out to be the savior of the multi-core world, but Clojure explores some very new Java territory:
    Clojure is a dynamic programming language that targets the Java Virtual Machine. It is designed to be a general-purpose language, combining the approachability and interactive development of a scripting language with an efficient and robust infrastructure for multithreaded programming. Clojure is a compiled language - it compiles directly to JVM bytecode, yet remains completely dynamic. Every feature supported by Clojure is supported at runtime. Clojure provides easy access to the Java frameworks, with optional type hints and type inference, to ensure that calls to Java can avoid reflection. Clojure is a dialect of Lisp, and shares with Lisp the code-as-data philosophy and a powerful macro system. Clojure is predominantly a functional programming language, and features a rich set of immutable, persistent data structures. When mutable state is needed, Clojure offers a software transactional memory system and reactive Agent system that ensure clean, correct, multithreaded designs.
Clojure doesn't fit my aging mental model. The message-passing actor model of Erlang is more my style. Interestingly, the difference between Erlang and Clojure is quite purposeful. Clojure wants to be efficient while operating in the same process rather than taking a message passing hit for every operation. Clojure requires specifying an agent as the receiver of a message, where I prefer a more publish-subscribe approach in which message senders and consumers are independent. Clojure's use of Java threads makes latency difficult to control. And I'm not sure an S-expression based language can ever become popular. But these are relatively minor issues compared to the task of making Java safe for parallelism. Java does OOP well enough, but sucks at concurrency. Clojure is a nice middle ground that may be able to make concurrency-oriented programming by real humans in Java a reality.
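Clojure's STM is best shown in Clojure itself, but the underlying "read, compute, retry on conflict" shape can be hinted at in plain Java. The sketch below is only an analog over a single reference, not Clojure's refs/dosync API, and the Ref class name is invented.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.UnaryOperator;

// Illustrative analog of optimistic, lock-free updates: read the current value,
// apply a pure function, and retry if another thread changed it in the meantime.
final class Ref<T> {
    private final AtomicReference<T> cell;

    Ref(T initial) { cell = new AtomicReference<>(initial); }

    T get() { return cell.get(); }

    // Apply a side-effect-free function; on conflict, loop and try again.
    T update(UnaryOperator<T> f) {
        while (true) {
            T current = cell.get();
            T next = f.apply(current);
            if (cell.compareAndSet(current, next)) return next;
        }
    }
}
```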

    GigaSpaces, GridGain, Terracotta

GigaSpaces, GridGain, and Terracotta take Java objects and fairly transparently turn them into highly distributed in-memory grids with amazing out-of-the-box functionality. If RAM is the new disk, these products make it very easy to hop onto the next generation architecture. Again, their ability to do so much behind the scenes is based on the power of the JVM. Try doing any of this with C++. Can't happen. Only in the Java world will you see so much cutting edge innovation.

    Open Services Gateway Initiative (OSGi)

One of the major problems with using Java is that it's a pain to deploy a new release across many machines in a data center. Deploying patches and upgrades requires bringing containers down. There also isn't a good way to architect WAR files. Creating one WAR file with multiple services doesn't work for developers. Creating N WAR files with one service each doesn't scale for containers. And how do you run multiple versions of the same service in the same container? OSGi is a solution that should make dynamic and highly available deployment of Java web services a reality. OSGi is a dynamic module system for Java: class loading done right. OSGi defines an architecture for developing and deploying modular applications and libraries by creating a microkernel-style architecture. There's a core set of modules that make up a basic platform, and new functionality is dynamically layered in with plugins. Using OSGi, these plugins are isolated, secured, and controlled separately from the rest of the code. The unit of deployment is an OSGi bundle, which is simply a JAR file with an OSGi manifest. This approach allows loosely-coupled application modules to be developed by a team of developers. Everything is kept in sync using version numbers and module dependency ranges. If you've ever worked with Linux this should sound familiar. It's basically how packages are installed on Linux.
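For readers who haven't seen an OSGi bundle, a minimal one is just a JAR whose manifest carries headers such as Bundle-SymbolicName, Bundle-Version, Bundle-Activator, and Import-Package. The hypothetical activator below is only a sketch of the lifecycle hooks the framework calls when a bundle is started or stopped; the package and class names are made up.

```java
package com.example.osgi; // hypothetical bundle package

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

// Minimal sketch of an OSGi bundle activator: the framework calls start/stop
// when the bundle is installed, updated, or removed, which is what enables
// hot deployment without restarting the whole container.
public class GreeterActivator implements BundleActivator {
    public void start(BundleContext context) {
        System.out.println("Greeter bundle started");
        // A real bundle would typically register a service here, e.g.
        // context.registerService(Greeter.class.getName(), new GreeterImpl(), null);
        // (Greeter and GreeterImpl are hypothetical classes.)
    }

    public void stop(BundleContext context) {
        System.out.println("Greeter bundle stopped");
    }
}
```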

    Many Companies are Successfully Using Java

Many companies use Java on their websites; they just don't use the full stack. Java is the ultimate service implementation language. There's a trend, exemplified by Amazon, toward developing separate services that are composed together to produce pages. Put up a cluster of applications, load balance between them, and you are set. This is a big move for internal architecture. Web services now have external APIs. Those same APIs can be used internally to build your site. Java is great for larger web sites that need to start thinking in terms of services.
  • Fotolog. Fotolog is the poster boy for Java scalability. They migrated from PHP to a new, Java-based architecture that, in addition to giving greater flexibility and reuse for future code, allows for a faster response time while halving the number of servers.
  • Amazon. Amazon uses a service-based architecture. They are not stuck with one particular approach. In some places they use JBoss/Java, but they use only servlets, not the rest of the J2EE stack.
  • eBay. Use what you like and toss what you don't need. eBay didn't feel compelled to use the full-blown J2EE stack. They liked Java and servlets, so that's all they used. You don't have to buy into any framework completely. Just use what works for you.
  • Mailinator. Handles over 1.2 billion emails a year on one rickety old server. The web application, the email server, and all email storage run in one JVM. Java doesn't have to be slow.
  • Tailrank. They use Java, MySQL and Linux for their cluster. Java is a great language for writing crawlers. The library support is pretty solid (though it seems like Java 7 is going to be killer when they add closures).
  • Flickr. They use Java for their node service, as an FTP daemon, and for several other services.
  • Linkedin. LinkedIn’s architecture has evolved to scale up to 22 million users. LinkedIn is 99% Pure Java. They use a service oriented architecture, java, jetty, eh-cache, and spring. Clients post messages via asynchronous Java Communications API using JMS. The WebApp doesn’t do everything itself anymore: they split parts of its business logic into Services. Each Service has its own domain-specific database (i.e., vertical partitioning).
  • Amazon's Dynamo. In Dynamo, each storage node has three main software components: request coordination, membership and failure detection, and a local persistence engine. All these components are implemented in Java.
  • GoogleTalk. Google’s IM system implemented in Java.
  • FeedBurner. A news feed management system written in Java.

We've covered a lot of ground. We've seen how the excesses of old J2EE scalability failures can be routed around with ease using a number of different scalability patterns. We've seen some really innovative and amazing products available to Java developers. And we've seen a lot of successful websites use Java. Ah, after all that hard work it's time for another cup of java. Peet's coffee is my favorite.

    Related Articles

  • Beautiful concurrency by Simon Peyton Jones, Microsoft Research, Cambridge.
  • Concurrency Shysters by Bryan Cantrill.
  • Transactions are tomorrow's loads and stores by Nir Shavit .
  • Software transactional memory: why is it only a research toy? by too many people to list.
  • Real-world Concurrency by Bryan Cantrill and Jeff Bonwick.
  • Cantrill and Bonwick get all concurrent-y up in there... by Keith Adams
  • Fortress - Fortress is a new programming language designed for high-performance computing (HPC) with high programmability.
  • Azul's Cliff Click Jr.'s Blog. The folks at Azul have some very interesting blogs. Check them out.
  • We Don't Know How To Program by Cliff Click Jr.
  • JavaOne: Cameron Purdy & 'The Top 10 Ways to Botch Enterprise Java Scalability and Reliability' by Ben Teese
  • Why most large-scale Web sites are not written in Java by Nati Shalom of GigaSpaces. There are also a number of very good blogs written by the GigaSpaces folks.
  • Raising Web Service Updates Efficiency with Dynamic Technologies by Valery Abu-Eid
  • Building LinkedIn's Next Generation Architecture with OSGi by Yan Pujante
  • Clojure could be to Concurrency-Oriented Programming what Java was to OOP by Bill Clementson.


Monday
Dec 01, 2008

    Deploying MySQL Database in Solaris Cluster Environments

The MySQL™ database, an open source database, delivers high performance and reliability while keeping costs low by eliminating licensing fees. The Solaris™ Cluster product is an integrated hardware and software environment that can be used to create highly-available data services. This article explains how to deploy the MySQL database in a Solaris Cluster environment. The article addresses the following topics:

  • "Advantages of Deploying MySQL Database with Solaris Cluster" discusses the benefits provided by a Solaris Cluster deployment of the MySQL database.
  • "Overview of Solaris Cluster" provides a high-level description of the hardware and software components of Solaris Cluster.
  • "Installation and Configuration" explains the procedure for deploying the MySQL database on a Solaris Cluster.

The article assumes that readers have a basic understanding of Solaris Cluster and MySQL database installation and administration.


    Monday
Dec 01, 2008

    Web Consolidation on the Sun Fire T1000 using Solaris Containers  

Reducing the costs of IT infrastructure and improving the manageability and efficiency of web services pose significant challenges for many organizations in today's economic climate. Recent studies describe the challenges IT managers face administering the proliferation of x86-based servers used to run web services applications. Those reports reveal that using large numbers of x86-based systems can increase space and power consumption, as well as cost and asset management overhead. In addition, many of these x86-based systems run a mixture of operating system and application software, leading to increased management complexity and potential security concerns.

Faced with these challenges, many organizations are attracted by the idea of consolidating web and application services from multiple x86-based servers onto a smaller number of high-performance servers. This approach strives to simplify management, improve performance, and increase the efficiency of delivering web services. The combined capabilities of the Sun Fire T1000 server and Solaris Containers technology in particular offer significant promise as a web-tier consolidation platform. The Sun Fire T1000 server offers high aggregate throughput performance in a small, power-efficient footprint. Solaris Containers provide a complete, isolated, and secure runtime environment for applications, enabling multiple web servers to run safely and efficiently on the same platform.

This paper explores the configuration and testing of the Sun Fire T1000 server as a web-tier consolidation platform. It discusses methodologies used to consolidate multiple web servers onto a single Sun Fire T1000 server, and explains the steps used to configure the Solaris Containers. In addition, to determine the effectiveness of this approach, testing was performed to evaluate the consolidated Sun Fire T1000 system against a baseline configuration of current Xeon servers, a popular choice as a web server platform.
