
Stuff The Internet Says On Scalability For June 5th, 2015

Hey, it's HighScalability time:

Stunning Multi-Wavelength Image Of The Solar Atmosphere.
  • 4x: amount spent by Facebook users
  • Quotable Quotes:
    • Facebook: Facebook's average data set for CF has 100 billion ratings, more than a billion users, and millions of items. In comparison, the well-known Netflix Prize recommender competition featured a large-scale industrial data set with 100 million ratings, 480,000 users, and 17,770 movies (items).
    • @BenedictEvans: The number of photos shared on social networks this year will probably be closer to 2 trillion than to 1 trillion.
    • @jeremysliew: For every 10 photos shared on @Snapchat, 5 are shared on @Facebook and 1 on @Instagram. 8,696 photos/sec on Snapchat.
    • @RubenVerborgh: “Don’t ask for an API, ask for data access. Tim Berners-Lee called for open data, not open services.” —@pietercolpaert #SemDev2015 #ESWC2015
    • Craig Timberg: When they thought about security, they foresaw the need to protect the network against potential intruders or military threats, but they didn’t anticipate that the Internet’s own users would someday use the network to attack one another. 
    • Janet Abbate: They [ARPANET inventors] thought they were building a classroom, and it turned into a bank.
    • A.C. Hadfield: The power of accurate observation is often called cynicism by those who don’t possess it.
    • @plightbo: Every business is becoming a software business
    • @potsdamnhacker: Replaced Go service with an Erlang one. Already used hot-code reloading, fault tolerance, runtime inspectability to great effect. #hihaters
    • @alsargent: Given continuous deployment, average lifetime of a #Docker image @newrelic is 12hrs. Different ops pattern than VMs. #velocityconf
    • @abt_programming: "If you think good architecture is expensive, try bad architecture" - Brian Foote - and Joseph Yoder
    • @KlangianProverb: "I thought studying distributed systems would make me understand software better—it made me understand reality better."—Old Klangian Proverb
    • @rachelmetz: google's error rate for image recognition was 28 percent in 2008, now it's like 5 percent, quoc le says.

  • Fear or strength? Apple’s Tim Cook Delivers Blistering Speech On Encryption, Privacy. With Google Now on Tap Google is saying we've joyously crossed the freaky line and we unapologetically plan to leverage our huge lead in machine learning to the max. Apple knows they can't match this feature. Google knows this is a clear and distinct exploitable marketing idea, like a super thin MacBook Air slowly slipping out of a manila envelope.

  • How does Kubernetes compare to Mesos? cmcluck, who works at Google and was one of the founders of the project, explains: we looked really closely at Apache Mesos and liked a lot of what we saw, but there were a couple of things that stopped us just jumping on it. (1) it was written in C++ and the containers world was moving to Go -- we knew we planned to make a sustained and considerable investment in this and knew first hand that Go was more productive; (2) we wanted something incredibly simple to showcase the critical constructs (pods, labels, label selectors, replication controllers, etc.) and to build it directly with the community's support, and Mesos was pretty large and somewhat monolithic; (3) we needed what Joe Beda dubbed 'over-modularity' because we wanted a whole ecosystem to emerge; (4) we wanted the 'cluster environment' to be lightweight and something you could easily turn up or turn down, kinda like a VM; the systems integrators I knew who worked with Mesos felt that it was powerful but heavy and hard to set up (though I will note our friends at Mesosphere are helping to change this). So we figured we'd do something simple to create a first-class cluster environment for native app management, 'but this time done right' as Tim Hockin likes to say every day. < Also, CoreOS (YC S13) Raises $12M to Bring Kubernetes to the Enterprise.
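  The labels and label-selectors construct mentioned above is simple enough to sketch: a pod carries arbitrary key/value labels, and a controller selects every pod whose labels include all of the selector's pairs. A minimal Python sketch (the pod dicts are a stand-in for the real API objects):

```python
def select_pods(pods, selector):
    """A pod matches when its labels contain every key/value pair in the selector."""
    return [p for p in pods
            if all(p["labels"].get(k) == v for k, v in selector.items())]

pods = [
    {"name": "web-1", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "web-2", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "db-1",  "labels": {"app": "db",  "tier": "backend"}},
]
# A replication controller for the frontend selects pods by labels, not by name.
matched = select_pods(pods, {"app": "web", "tier": "frontend"})
print([p["name"] for p in matched])  # ['web-1', 'web-2']
```

  Selecting by labels instead of identity is what makes controllers loosely coupled to the pods they manage.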

  • If structure arises in the universe because electrons can't occupy the same space, why does structure arise in software?

  • The cost of tyranny is far lower than one might hope. How much would it cost for China to intercept connections and replace content flowing at 1.9-terabits/second? About $200K says Robert Graham in Scalability of the Great Cannon. Low? Probably. But for the price of a closet in the new San Francisco you can edit an entire people's perception of the Internet in real-time.

  • Forget VR, funding may be the most exciting change in gaming. Tech Trends Changing Gaming. Broken Age set crowdfunding records for the most backers and the fastest to a million dollars in funding. Instead of the blockbuster conservatism that goes along with the big budget movie funding model, game crowdfunding allows risks to be taken again. It removes the gatekeeper publishing/funding layer and gives power back to the creators.

  • It's not about monoliths or microservices. It's about autocatalytic sets. The Living Set: Contrary to Kauffman’s original argument that autocatalytic sets emerge as “giant connected components,” it turns out that autocatalytic sets can often be decomposed into smaller subsets, which themselves are autocatalytic. In fact, there often exists an entire hierarchy of smaller and smaller autocatalytic subsets. 

  • A Peek Behind the Curtains: Six Lessons from Building a Microservices-led Company: Invest in infrastructure; Start small; The "micro" is up to you; Divide and conquer; Benefit from people-oriented architecture; Consistency overcomes complexity.

  • Velocity Conference 2015 (Santa Clara, CA) videos are now available online.

  • If we are still finding new stuff in the human body how can we find all bugs in software? Missing link found between brain, immune system -- with major disease implications: In a stunning discovery that overturns decades of textbook teaching, researchers at the University of Virginia School of Medicine have determined that the brain is directly connected to the immune system by vessels previously thought not to exist. That such vessels could have escaped detection when the lymphatic system has been so thoroughly mapped throughout the body is surprising on its own, but the true significance of the discovery lies in the effects it could have on the study and treatment of neurological diseases ranging from autism to Alzheimer's disease to multiple sclerosis.

  • Did you know there's a Failure Knowledge Database? Its purpose is to "provide a means of communicating failure knowledge." Unfortunately it stops as of 2005. Clearly the world has been completely safe since then. The coolest meme is that of failure mandalas, which express the hierarchical relationships between failure components. Here's their entry on Apollo 13 and the Titanic.

  • Epic post of 64 Network DO’s and DON’Ts for Game Engine Developers. Part I: Client Side and Part IIa: Protocols and APIs, with more to come. Not just for game developers.

  • Papers from SIGMOD '15 - Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data - are available online.

  • How Global Network Latency Affects Your Mobile App: South Korea comes in as the fastest at 259 ms...American latency is slightly worse than Britain’s, with T-Mobile and Sprint running noticeably faster than Verizon and AT&T...Wi-Fi Brings More Potential Latency to the Table.

  • 2014 ACM Turing Award: Michael Stonebraker takes it a step further. He builds entire companies on the basis of the systems research he’s done. We’ve all been through the experience of having a great idea where “nobody will listen”.

  • Update on the Wix architecture. Scaling Wix to 60M Users - From Monolith to Microservices

  • Data Gravity? Network 'hubs' in the brain attract information, much like airport system: Research on large-scale brain networks by the University of Michigan Medical School reveals that "hubs" in the brain - highly connected regions that, like hubs of the airport system, tend to consistently attract information flow.

  • We have another competitor in the database market. The ambitious Spanner-inspired CockroachDB received over $6 million in funding. Congratulations and best of luck!

  • So who needs public markets anymore? Snapchat Raises Another $500 Million From Investors.

  • Finding and Solving Bottlenecks in Your System. First characterize your working set size; average transaction size; request size; update rate; consistency; locality; computation; latency. Hopefully the bottleneck is obvious. If not, change a constraint and see what happens. Reduce latency requirements by 10x. Halve the number of computers. Also, Shaping Big Data Through Constraints Analysis.

  • Extending your application to the edge with CDNs. Excellent talk about CDNs. Emphasis on the latency distribution of cache hits. Emphasis on purging cached content at the edge in under a second. Unfortunately there's no talk about the mechanism or the consistency model. The kind of functionality you can move to the edge doesn't seem to include business logic, just kind of bookkeeping stuff, like HTTP header manipulation, origin selection, caching rules, geo-IP rules, forcing SSL, serving stale content, and choosing different content for mobile/desktop.
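  That "bookkeeping stuff" is concrete enough to sketch. A hypothetical edge handler in Python (the request/response dicts and origin map are made up for illustration, not any CDN's real API):

```python
def edge_handle(request, cache, origins):
    """Sketch of bookkeeping-style edge logic: force SSL, pick a cache
    variant for mobile vs desktop, and do geo-IP origin selection.
    No business logic ever runs here."""
    if request["scheme"] == "http":  # forcing SSL: redirect at the edge
        return {"status": 301,
                "location": "https://" + request["host"] + request["path"]}
    # Mobile and desktop clients get separately cached variants.
    variant = "mobile" if "Mobile" in request.get("user_agent", "") else "desktop"
    key = (request["host"], request["path"], variant)
    if key in cache:  # served from the edge, no origin round trip
        hit = dict(cache[key])
        hit["x_cache"] = "HIT"
        return hit
    # Geo-IP origin selection: route to a nearby origin, else the default.
    origin = origins.get(request.get("country"), origins["default"])
    resp = {"status": 200, "origin": origin, "x_cache": "MISS"}
    cache[key] = resp
    return resp
```

  Everything here is routing and caching metadata, which is exactly why it can run at the edge without dragging application state along.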

  • Great short talk on Optical Fiber. 20,000 Leagues Inside the Optical Fiber. 150 years ago Alexander Graham Bell was struggling with analog modulation, low quality, short distance. Soon we will be at 255 Tb/s over 1 km across 50 channels.

  • Videos from Berlin Buzzwords 2015 are available online.

  • Maybe we ought to have Numerical Coprocessors? I was recently reading a blog post claiming that matrix multiplication (GEMM) is the most expensive operation in deep learning, taking up to 95% of the execution time. This got me thinking that maybe GPGPUs are simply not ideal for most applications. Maybe future CPUs should begin to include numerical coprocessors.
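  The 95% claim is easy to make plausible just by counting operations: a fully connected layer's matrix multiply costs about 2·M·N·K FLOPs, which swamps the elementwise bias/activation work around it. A back-of-the-envelope sketch (the layer sizes are made up):

```python
def gemm_flops(m, n, k):
    """Approximate FLOPs for an (m x k) @ (k x n) matrix multiply."""
    return 2 * m * n * k

def elementwise_flops(m, n):
    """Bias add plus activation: a couple of ops per output element."""
    return 2 * m * n

# Hypothetical batch of 128 through a 4096 -> 4096 fully connected layer.
batch, d_in, d_out = 128, 4096, 4096
gemm = gemm_flops(batch, d_out, d_in)
other = elementwise_flops(batch, d_out)
print(gemm / (gemm + other))  # GEMM's share of the layer's FLOPs: ~0.9998
```

  The ratio only grows with layer width, which is why dedicated matrix hardware keeps looking attractive.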

  • Not surprising. Airbus confirms software configuration error caused plane crash.

  • Everything You Ever Wanted to Know About Message Latency: L = M/R + D ... over multiple hops, L = M · Σᵢ (1/Rᵢ) + Σᵢ Dᵢ ... RTT = 2D + M/R ≈ 2D ... BD = R × D.
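  A quick numeric check of those formulas (message size M, link rate R, one-way propagation delay D; the numbers below are illustrative):

```python
def latency(m_bits, rate_bps, delay_s):
    """Single-link message latency: serialization time M/R plus propagation delay D."""
    return m_bits / rate_bps + delay_s

def rtt(m_bits, rate_bps, delay_s):
    """Request/response round trip: two propagation delays plus one serialization."""
    return 2 * delay_s + m_bits / rate_bps

# A 1 KB message on a 1 Gbps link with 10 ms one-way delay: the ~8 us of
# serialization time is noise next to the propagation delay, so RTT ~ 2D.
m, r, d = 8 * 1024, 1e9, 0.010
print(latency(m, r, d))
print(rtt(m, r, d))
print(r * d)  # bandwidth-delay product BD = R x D: bits in flight on the link
```

  At these sizes the link rate barely matters; cutting D (shorter paths, fewer hops) is what moves the needle.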

  • Researchers: SSDs struggle in virtual machines thanks to garbage collection: SSDs, with their high speed interfaces and ability to access data hundreds of times faster than a spinning disk, should be a perfect fit for this kind of workload. What researchers found, however, is that the garbage collection routines that run on modern SSDs actually make them a poor fit for these workloads.

  • Apple is using Mesos to run Siri. That's quite the customer win. Apple created their own PaaS layer on top of Mesos. It runs across thousands of nodes and hosts a hundred services. It helped make Siri more scalable and available, and reduced app latency.

  • Greg Ferro explores Why Firewalls Won’t Matter In A Few Years: Firewalls operating at 10G or more are not cost effective; Vertical scaling of performance costs more than the services are worth; At 100G, a firewall has less than 6.7 nanoseconds to “add value” before they impact service delivery; You can’t use firewalls to secure East/West data flows in the network; and a few more. There's a vigorous discussion in the comments.

  • The O’Reilly Data Show Podcast is worth a listen.

  • What is Immutable Infrastructure? When you deploy an update to your application, you should create new instances (servers and/or containers) and destroy the old ones, instead of trying to upgrade them in-place. Once your application is running, you don’t touch it! The benefits come in the form of repeatability, reduced management overhead, easier rollbacks, etc.
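  A toy sketch of that deploy-by-replacement loop (the instance model and the health check are stand-ins, not any real cloud API):

```python
def provision(image_id):
    """A toy 'instance': a dict recording its image and liveness."""
    return {"image": image_id, "alive": True}

def destroy(instance):
    instance["alive"] = False

def immutable_deploy(image_id, old_instances, health_ok=lambda inst: True):
    """Deploy by replacement: boot fresh instances from the new image,
    health-check them, then destroy the old fleet. Nothing is ever
    upgraded in place."""
    new_instances = [provision(image_id) for _ in old_instances]
    if not all(health_ok(inst) for inst in new_instances):
        for inst in new_instances:
            destroy(inst)  # discard the new fleet; the old one was never touched
        return old_instances
    for inst in old_instances:
        destroy(inst)
    return new_instances
```

  Rollback is the same routine pointed at the previous image, which is where the "easier rollbacks" benefit comes from.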

  • Facebook on Recommending items to more than a billion people: We finally came up with an approach that required us to extend Giraph framework with worker-to-worker messaging. Users are still presented as the vertices of the graph, but items are partitioned in #Workers disjoint parts, with each of these parts stored in global data of one of the workers. We put all workers in a circle, and rotate the items in clockwise direction after each superstep, by sending worker-to-worker messages containing items from each worker to the next worker in the line.
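  The rotation scheme is easy to simulate outside Giraph: put the item partitions in a ring, shift them one worker clockwise per superstep, and after W supersteps every worker has processed its users against every item partition without any worker ever holding the full item set. A toy Python simulation (the actual scoring work is elided):

```python
def rotate_supersteps(user_parts, item_parts):
    """Each worker holds one user partition permanently and one item
    partition per superstep; item partitions rotate around the ring."""
    w = len(user_parts)
    seen = [[] for _ in range(w)]  # item partitions each worker has processed
    items = list(item_parts)
    for step in range(w):
        for worker in range(w):
            # Worker scores its users against its current item partition.
            seen[worker].append(items[worker])
        # Worker-to-worker message: pass each item partition to the next
        # worker clockwise around the ring.
        items = [items[-1]] + items[:-1]
    return seen

seen = rotate_supersteps(user_parts=["u0", "u1", "u2"],
                         item_parts=["i0", "i1", "i2"])
# After 3 supersteps every worker has seen all 3 item partitions exactly once.
print(all(sorted(s) == ["i0", "i1", "i2"] for s in seen))  # True
```

  Per-worker memory stays at one item partition, which is what makes a billion-user, millions-of-items model fit a fixed cluster.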

  • Turing Lecture: The Computer Science of Concurrency: The Early Years: I don't know if concurrency is a science, but it is a field of computer science. What I call concurrency has gone by many names, including parallel computing, concurrent programming, and multiprogramming. I regard distributed computing to be part of the more general topic of concurrency. I also use the name algorithm for what were once usually called programs and were generally written in pseudo-code.

  • Twitter Heron: Stream Processing at Scale: Storm has long served as the main platform for real-time analytics at Twitter. However, as the scale of data being processed in real-time at Twitter has increased, along with an increase in the diversity and the number of use cases, many limitations of Storm have become apparent. We need a system that scales better, has better debug-ability, has better performance, and is easier to manage -- all while working in a shared cluster infrastructure. We considered various alternatives to meet these needs, and in the end concluded that we needed to build a new real-time stream data processing system. This paper presents the design and implementation of this new system, called Heron. Heron is now the de facto stream data processing engine inside Twitter, and in this paper we also share our experiences from running Heron in production. In this paper, we also provide empirical evidence demonstrating the efficiency and scalability of Heron.

  • Feral Concurrency Control: An Empirical Investigation of Modern Application Integrity: In this work, we examined the use of concurrency control mechanisms in a set of 67 open source Ruby on Rails applications and, to a less thorough extent, concurrency control support in a range of other web-oriented ORM frameworks. We found that, in contrast with traditional transaction processing, these applications overwhelmingly prefer to leverage application-level feral support for data integrity, typically in the form of declarative (sometimes user-defined) validation and association logic. Despite the popularity of these invariants, we find limited use of in-database support to correctly implement them, leading to a range of quantifiable inconsistencies for Rails’ built-in uniqueness and association validations. 
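  The inconsistencies the paper quantifies come largely from the classic check-then-insert race: an application-level uniqueness validation reads, sees no duplicate, then inserts, and two interleaved requests can both pass. A deterministic sketch of that interleaving (not Rails, just the shape of the bug):

```python
def feral_unique_insert(table, value, check_done):
    """Application-level uniqueness: check, then insert. The two steps
    are not atomic, so another request can slip in between them."""
    ok = value not in table  # the validation-style uniqueness check
    check_done()             # point where a concurrent request interleaves
    if ok:
        table.append(value)
    return ok

table = []
# Interleave two requests for the same email: both checks run before
# either insert, so both pass and the table ends up with a duplicate.
feral_unique_insert(table, "a@x.com",
                    check_done=lambda: feral_unique_insert(
                        table, "a@x.com", check_done=lambda: None))
print(table)  # ['a@x.com', 'a@x.com'] -- the invariant is violated
```

  A database uniqueness constraint closes the window because the check and the insert happen atomically inside the engine.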

  • Optimizing Optimistic Concurrency Control for Tree-Structured, Log-Structured Databases: The core of the system is a log roll-forward algorithm, called meld, that does optimistic concurrency control. Meld is inherently sequential and is therefore the main bottleneck. Our main algorithmic contributions are optimizations to meld that significantly increase transaction throughput. They use a pipelined design that parallelizes meld onto multiple threads. The slowest pipeline stage is much faster than the original meld algorithm, yielding a 3x improvement of system throughput over the original meld algorithm.

  • Parallel streaming signature EM-tree: A clustering algorithm for web scale applications: We introduce a scalable algorithm that clusters hundreds of millions of web pages into hundreds of thousands of clusters. It does this on a single mid-range machine using efficient algorithms and compressed document representations. It is applied to two web-scale crawls covering tens of terabytes. ClueWeb09 and ClueWeb12 contain 500 and 733 million web pages and were clustered into 500,000 to 700,000 clusters.

  • ADAPTON: Composable, Demand-Driven Incremental Computation: Many researchers have proposed programming languages that support incremental computation (IC), which allows programs to be efficiently re-executed after a small change to the input. However, existing implementations of such languages have two important drawbacks. First, recomputation is oblivious to specific demands on the program output; that is, if a program input changes, all dependencies will be recomputed, even if an observer no longer requires certain outputs. Second, programs are made incremental as a unit, with little or no support for reusing results outside of their original context, e.g., when reordered.

  • If you like Google Photos then see how the magic is done. Going deeper with convolutions: We propose a deep convolutional neural network architecture codenamed Inception, which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.
