The Joy of Deploying Apache Storm on Docker Swarm

This is a guest repost from Baqend Tech on deploying and redeploying an Apache Storm cluster on top of Docker Swarm instead of on VMs. It's an interesting topic because Wolfram Wingerath called the experience "a real joy", which is not a phrase you hear often in tech. Curious, I asked him what made using containers such a better experience than using VMs? Here's his reply:

Being pretty new to Docker and Docker Swarm, I'm sure there are many good and bad sides I am not aware of yet. From my point of view, however, the thing that makes deployment (and operation in general) on top of Docker way more fun than on VMs or even on bare metal is that Docker abstracts away heterogeneity and many other issues. Once you have Docker running, you can start something like a MongoDB or a Redis server with a single-line statement. If you have a Docker Swarm cluster, you can do the same, but Docker takes care of distributing the thing you just started to some server in your cluster. Docker even takes care of downloading the correct image in case you don't have it on your machine right now. You also don't have to fight as much with connectivity issues, because every machine can reach every other machine as long as they are in the same Docker network. As demonstrated in the tutorial, this even goes for distributed setups, as long as you have an _overlay_ network.
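For readers new to Docker, this is roughly what that single-line experience looks like against a Swarm cluster (the network and container names are illustrative, not taken from the tutorial):

```bash
# Create an overlay network that spans the Swarm, so containers on
# different hosts can reach each other by name (assumes the Swarm
# manager and its key-value store are already running)
docker network create --driver overlay stormnet

# Single-line service starts: Swarm schedules each container onto some
# node in the cluster and pulls the image there if it's missing
docker run -d --net stormnet --name mongodb mongo
docker run -d --net stormnet --name redis redis
```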


When I wrote the lines you quoted in your email, I had a situation in the back of my head that had occurred a few months back when I had to set up and operate an Apache Storm cluster with 16+ nodes. There were several issues, such as my inexperience with AWS (coming from OpenStack) and strange connectivity problems relating to Netty (used by Storm) and AWS hostname resolution, that had not occurred in my OpenStack setup and eventually cost us several days and several hundred bucks to fix. I really think that you can shield yourself from problems like that by using Docker, simply because your environment remains the same: Docker.

On to the tutorial...



Stuff The Internet Says On Scalability For April 22nd, 2016

Hey, it's HighScalability time:

A perfect 10. Really stuck that landing. Nadia Comaneci approves.


If you like this sort of Stuff then please consider offering your support on Patreon.
  • $1B: Supercell’s Clash Royale projected annual haul; 3x: Messenger and WhatsApp send more messages than SMS; 20%: of big companies pay zero corporate taxes; tens of TBs of RAM: Netflix's Container Runtime; 1 million: people use Facebook over Tor; $10 billion: Microsoft raining money in the cloud

  • Quotable Quotes:
    • @nehanarkhede: @LinkedIn's use of @apachekafka: 1.4 trillion msg/day, 1400 brokers. Powers database replication, change capture, etc.
    • @kenkeiter: Full-duplex on a *single antenna* -- this is huge. (single chip, too -- that's the other huge part, obviously)
    • John Langford: In the next few years, I expect machine learning to solve no important world issues.
    • Dan Rayburn: By My Estimate, Apple’s Internal CDN Now Delivers 75% Of Their Own Content
    • @BenedictEvans: If Google sees the device as dumb glass, Apple sees the cloud as dumb pipes & dumb storage. Both views could lead to weakness
    • @JordanRinke: We need less hackathons, more apprenticeships. Less bootcamps, more classes. Less rockstars, more mentors. Develop people instead of product
    • @alicegoldfuss: Nagios screaming / Data center ablaze? No / Cable was unplugged
    • Mark Bates: As I was working on the software part time, I was keen to minimise the [cognitive] scope required when making changes. A RoR monolith was the best choice in this case.
    • Google: Our tests have shown that AMP documents load an average of four times faster and use 10 times less data than the equivalent non-amp’ed result.
    • @stevesi: On an earnings call @sundarpichai says Google is going “from mobile-first to AI-first world", emphasizing AI and machine learning across services.
    • Rex Sorgatz: Unfortunately, the entire thesis of my story is that having the history of recorded music in your pocket dictates that you will develop tastes outside “the usual.”
    • Newzoo: Clash Royale has rocketed to such quick success because of its strong core gameplay elements combined with some serious pressure to spend real money to keep up with your friends
    • vgt: I'm going to plug Google Cloud's Preemptible VMs as a simpler alternative to Spot Instances: Preemptible VMs are sold at a fixed 70%-off discount, removing pricing volatility entirely.
    • @mfdii: "Cloud Native" is code words for "rewrite the entire f*cking app"
    • There are so many great Quotable Quotes this week they wouldn't all fit in the summary. Please see the full post to read them all.

  • Imperfection as a strategy. Why a Chip That’s Bad at Math Can Help Computers Tackle Harder Problems: In a simulated test using software that tracks objects such as cars in video, Singular’s approach [chips hardwired to be incapable of performing mathematical calculations correctly] was capable of processing frames almost 100 times faster than a conventional processor restricted to doing correct math, while using less than 2 percent as much power.

  • You have to fight magic with magic, super-villains with super-heroes, and algorithms with algorithms. How I Investigated Uber Surge Pricing in D.C. Also, Investigating the algorithms that govern our lives.

  • Mitchell Hashimoto in The Cloudcast #246 on some cloud trends. He's seeing a lot of interest in non-Amazon clouds right now. A lot of the interest in Azure is coming from more boring, successful companies, not hot Silicon Valley startups. It's not a clean market segmentation, but there are three flavors of cloud: Google Compute for the green-field startup crowd, Amazon for the enterprise, and Azure for the super-enterprise. One enterprise attractor for Azure is Azure Stack, an on-premises solution. Mitchell is seeing broad adoption of the cloud across industries you might not expect to be using the cloud. He's also seeing a transition to a multi-cloud strategy to create pricing leverage. The idea seems to be to rehearse and plan a move to another cloud without necessarily making it; when pricing negotiations come up, there's a lot of leverage in being able to say you can move to a completely different platform. The cloud is not so much a pay-as-you-go model for this use case; it's more about trying to lock in long-term cost savings. International companies are interested in price, but also in what features are available in different regions and when they become available.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...



How Twitter Handles 3,000 Images Per Second

Today Twitter creates and persists 3,000 images (200 GB) per second. Even better, in 2015 Twitter was able to save $6 million due to improved media storage policies.

It was not always so. Twitter in 2012 was primarily text based. A Hogwarts without all the cool moving pictures hanging on the wall. It’s now 2016 and Twitter has moved into a media-rich future. Twitter made the transition by developing a new Media Platform capable of supporting photos with previews, multi-photos, GIFs, Vines, and inline video.

Henna Kermani, a Software Development Engineer at Twitter, tells the story of the Media Platform in an interesting talk she gave at Mobile @Scale London: 3,000 images per second. The talk focuses primarily on the image pipeline, but she says most of the details apply to the other forms of media as well.

Some of the most interesting lessons from the talk:

  • Doing the simplest thing that can possibly work can really screw you. The simple method of uploading a tweet with an image as an all or nothing operation was a form of lock-in. It didn’t scale well, especially on poor networks, which made it difficult for Twitter to add new features.

  • Decouple. By decoupling media upload from tweeting, Twitter was able to independently optimize each pathway and gain a lot of operational flexibility.

  • Move handles not blobs. Don’t move big chunks of data through your system. It eats bandwidth and causes performance problems for every service that has to touch the data. Instead, store the data and refer to it with a handle.

  • Moving to segmented, resumable uploads resulted in big decreases in media upload failure rates (see the sketch after this list).

  • Experiment and research. Twitter found through research that a 20 day TTL (time to live) on image variants (thumbnails, small, large, etc) was a sweet spot, a good balance between storage and computation. Images had a low probability of being accessed after 20 days so they could be deleted, which saves nearly 4TB of data storage per day, almost halves the number of compute servers needed, and saves millions of dollars a year.

  • On demand. Old image variants could be deleted because they could be recreated on the fly rather than precomputed. Performing services on demand increases flexibility, lets you be a lot smarter about how tasks are performed, and gives you a central point of control.

  • Progressive JPEG is a real winner as a standard image format. It has great frontend and backend support and performs very well on slower networks.
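To make the decoupling, handle-passing, and segmented-upload ideas concrete, here is a minimal sketch. The client methods (`upload_init`, `append_segment`, `finalize`, `create_tweet`) and the `media_id` handle are hypothetical stand-ins, not Twitter's actual API:

```python
import os

SEGMENT_SIZE = 512 * 1024  # upload in 512 KB chunks: a network failure only costs one chunk

def upload_media(client, path):
    """Upload a file in resumable segments and return a small handle."""
    media_id = client.upload_init(total_bytes=os.path.getsize(path))
    with open(path, "rb") as f:
        index = 0
        while chunk := f.read(SEGMENT_SIZE):
            client.append_segment(media_id, index, chunk)  # on failure, retry just this segment
            index += 1
    client.finalize(media_id)
    return media_id

def post_tweet(client, text, media_path):
    # Upload is decoupled from tweeting, and downstream services pass
    # around the small media_id handle instead of the blob itself.
    media_id = upload_media(client, media_path)
    return client.create_tweet(text=text, media_ids=[media_id])
```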

Lots of good things happened on Twitter’s journey to a media rich future, let’s learn how they did it...

The Old Way - Twitter in 2012



Hadoop and Salesforce Integration: the Ultimate Successful Database Merger  

How can we transfer Salesforce data to Hadoop? It's a big challenge for everyday users. What are the different features of data transfer tools?



Stuff The Internet Says On Scalability For April 15th, 2016

Hey, it's HighScalability time:

What happens when Beyoncé meets eCommerce? Ring the alarm.


If you like this sort of Stuff then please consider offering your support on Patreon.
  • $14 billion: one day of purchases on Alibaba; 47 megawatts: Microsoft's new data center space for its MegaCloud; 50%: do not speak English on Facebook; 70-80%: of all Intel servers shipped will be deployed in large scale datacenters by 2025; 1024 TB: of storage for 3D imagery currently in Google Earth; $7: WeChat average revenue per user; 1 trillion: new trees; 

  • Quotable Quotes:
    • @PicardTips: Picard management tip: Know your audience. Display strength to Klingons, logic to Vulcans, and opportunity to Ferengi.
    • Mark Burgess: Microservices cannot be a panacea. What we see clearly from cities is that they can be semantically valuable, but they can be economically expensive, scaling with superlinear cost. 
    • ethanpil: I'm crying. Remember when messaging was built on open platforms and standards like XMPP and IRC? The golden year(s?) when Google Talk worked with AIM and anyone could choose whatever client they preferred?
    • @acmurthy: @raghurwi from @Microsoft talking about scaling Hadoop YARN to 100K+ clusters. Yes, 100,000 
    • @ryanbigg: Took a Rails view rendering time from ~300ms to 50ms. Rewrote it in Elixir: it’s now 6-7ms. #MyElixirStatus
    • Dmitriy Samovskiy: In the past, our [Operations] primary purpose in life was to build and babysit production. Today operations teams focus on scale.
    • @Agandrau: Sir Tim Berners-Lee thinks that if we can predict what the internet will look like in 20 years, then we are not creative enough. #www2016
    • @EconBizFin: Apple and Tesla are today’s most talked-about companies, and the most vertically integrated
    • Kevin Fishner: Nomad was able to schedule one million containers across 5,000 hosts in Google Cloud in under five minutes.
    • David Rosenthal: The Web we have is a huge success disaster. Whatever replaces it will be at least as big a success disaster. Lets not have the causes of the disaster be things we knew about all along.
    • Kurt Marko: The days of homogeneous server farms with racks and racks of largely identical systems are over.
    • Jonathan Eisen: This is humbling, we know virtually nothing right now about the biology of most of the tree of life.
    • @adrianco: Google has a global network IP model (more convenient), AWS regional (more resilient). Choices...
    • @jason_kint: Stupid scary stats in this. Ad tech responsible for 70% of server calls and 50% of your mobile data plan.
    • apy: I found myself agreeing with many of Pike’s statements but then not understanding how he wound up at Go. 
    • @TomBirtwhistle: The head of Apple Music claims YouTube accounts for 40% of music consumption yet only 4% of online industry revenue 
    • @0x604: Immutable Laws of Software: Anyone slower than you is over-engineering, anyone faster is introducing technical debt
    • surrealvortex: I'm currently using flame graphs at work. If your application hasn't been profiled recently, you'll usually get lots of improvement for very little effort. Some 15 minutes of work improved CPU usage of my team's biggest fleet by ~40%. Considering we scaled up to 1500 c3.4xlarge hosts at peak in NA alone on that fleet, those 15 minutes kinda made my month :)
    • @cleverdevil: Did you know that Virtual Machines spin up in the opposite direction in the southern hemisphere? Little known fact.
    • ksec: Yes, and I think Intel is not certain to win, just much more likely. The Power9 here is targeting a 2H 2017 release, which is actually up against Intel's Skylake/Kabylake Xeon Purley platform in a similar timeframe.
    • @jon_moore: Platforms make promises; constraints are the contracts that allow platforms to do their jobs. #oreillysacon
    • @CBILIEN: Scaling data platforms: compute and storage have to be scaled independently #HS16Dublin

  • A morning reverie. Chopped for programmers. Call it Crashed. You have three rounds with four competitors. Each round is an hour. The competitors must create a certain kind of program, say a game, or a productivity tool, anything really, using a basket of three selected technologies, say Google Cloud and Twilio. Plus the programmer can choose to use any other technologies from the pantry that is the Internet. The program can take any form the programmer chooses. It could be a web app, iOS or Android app, an Alexa skill, a Slack bot, anything; it's up to the creativity of the programmer. The program is judged by an esteemed panel based on creativity, quality, and how well the basket technologies are highlighted. When a programmer loses a round they have been Crashed. The winner becomes the Crashed Champion. Sound fun?

  • Jeff Dean, when talking about deep learning at Google, makes it clear a big part of their secret sauce is being able to train neural nets at scale using their bespoke distributed infrastructure. Now Google has released TensorFlow with distributed computing support (a minimal sketch of the API follows this list). It's not clear if this is the same infrastructure Google uses internally, but it seems to work: using the distributed trainer, we trained the Inception network to 78% accuracy in less than 65 hours using 100 GPUs. Also, the TensorFlow Playground is a cool way to visualize what's going on inside.

  • Christopher Meiklejohn with an interesting history of the Remote Procedure Call. It started way back in 1974 with RFC 674, “Procedure Call Protocol Documents, Version 2”. RFC 674 attempts to define a general way to share resources across all 70 nodes of the Internet.
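For flavor, this is roughly what wiring up the distributed runtime looked like in the TensorFlow of that era (the hostnames are placeholders; treat it as a sketch of the `tf.train` API, not a tested setup):

```python
import tensorflow as tf

# Describe the cluster: parameter servers hold the weights,
# workers compute gradients against them
cluster = tf.train.ClusterSpec({
    "ps":     ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
})

# Each process starts one server identifying its own job and task
server = tf.train.Server(cluster, job_name="worker", task_index=0)

# replica_device_setter places variables on the parameter servers
# and the compute ops on the local worker automatically
with tf.device(tf.train.replica_device_setter(cluster=cluster)):
    weights = tf.Variable(tf.zeros([784, 10]), name="weights")
    # ... build the rest of the model, then run it against server.target
```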

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...



10 Stack Benchmarking DOs and DON'Ts

An interesting question came up on the mechanical-sympathy list about how best to benchmark a stack of different queue (aeron/agrona, jctools, dpdk, pony) and transport (aeron, dpdk, seastar) options.

Who better to answer than Gil Tene, CTO and co-founder of Azul Systems? Here's his usual insightful and helpful response:

If you are looking at the set of "stacks" (all of which are queues/transports), I would strongly encourage you to avoid repeating the mistakes of testing methodologies that focus entirely on max achievable throughput and then report some (usually bogus) latency stats at those max throughput modes.

The TechEmpower numbers are a classic example of this in play, and while they do provide some basis for comparing a small aspect of behavior (what I call the "how fast can this thing drive off a cliff" comparison, or "pedal to the metal" testing), those results are not very useful for comparing load-carrying capacities for anything that actually needs to maintain some form of responsiveness SLA or latency spectrum requirements.
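The alternative is to measure latency at a series of fixed, planned throughputs below the maximum. A bare-bones sketch of that idea (mine, not Gil's code; note that latency is measured from each request's intended start time, which avoids the coordinated omission trap):

```python
import time

def measure_at_fixed_rate(do_request, rate_per_sec, duration_sec):
    """Drive requests on a planned schedule and measure latency from the
    intended send time: if a stall delays later requests, their recorded
    latencies grow, as they should."""
    interval = 1.0 / rate_per_sec
    start = time.perf_counter()
    latencies = []
    for i in range(int(rate_per_sec * duration_sec)):
        intended = start + i * interval
        while time.perf_counter() < intended:
            pass  # busy-wait to hold the planned schedule
        do_request()
        latencies.append(time.perf_counter() - intended)
    return sorted(latencies)

# Sweep several throughputs and report the tail at each, instead of
# quoting one latency number at the max achievable rate.
for rate in (1_000, 5_000, 10_000):
    lat = measure_at_fixed_rate(lambda: None, rate, duration_sec=5)
    print(rate, "req/s  p99.9 =", lat[int(len(lat) * 0.999)])
```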

Rules of thumb I'd start with (some simple DOs and DON'Ts):

Click to read more ...


Sponsored Post: TechSummit, Netflix, Aerospike, TrueSight Pulse, Redis Labs, InMemory.Net, VividCortex, MemSQL, Scalyr, AiScaler, AppDynamics, ManageEngine, Site24x7

Who's Hiring?

  • Software Engineer (DevOps). You are one of those rare engineers who loves to tinker with distributed systems at high scale. You know how to build these from scratch, and how to take a system that has reached a scalability limit and break through that barrier to new heights. You are a hands-on doer, a code doctor, who loves to get something done the right way. You love designing clean APIs, data models, code structures and system architectures, but retain the humility to learn from others who see things differently. Apply to AppDynamics

  • Software Engineer (C++). You will be responsible for building everything from proof-of-concepts and usability prototypes to deployment-quality code. You should have 1+ years of experience developing C++ libraries and APIs, and be comfortable with daily code submissions, delivering projects in short time frames, multi-tasking, handling interrupts, and collaborating with team members. Apply to AppDynamics

Fun and Informative Events

  • Discover the secrets of scalability in IT. The cream of the Amsterdam and Berlin tech scene are coming together during TechSummit, hosted by LeaseWeb for a great day of tech talk. Find out how to build systems that will cope with constant change and create agile, successful businesses. Speakers from SoundCloud, Fugue, Google, Docker and other leading tech companies will share tips, techniques and the latest trends in a day of interactive presentations. But hurry. Tickets are limited and going fast! No wonder, since they are only €25 including lunch and beer.

  • In today’s enterprise, new applications are being invented in droves. To cultivate this momentum, organizations must provide a fast, reliable environment that enables scalability, empowers innovation and reduces complexity. In a webinar on April 26 entitled “Contain Yourself: Development Just Got Easier”, veteran analyst Dr. Robin Bloor will discuss using containers for application and services development. He’ll be briefed by Alvin Richards of Aerospike (the flash-optimized, high-performance NoSQL database) who will showcase how Docker can simplify building and deploying multi-node Aerospike applications. Sign up here to reserve your seat!

Cool Products and Services

  • TrueSight Pulse is SaaS IT performance monitoring with one-second resolution, visualization and alerting. Monitor on-prem, cloud, VMs and containers with custom dashboards and alert on any metric. Start your free trial with no code or credit card.

  • Turn chaotic logs and metrics into actionable data. Scalyr is a tool your entire team will love. Get visibility into your production issues without juggling multiple tools and tabs. Loved and used by teams at Codecademy, ReturnPath, and InsideSales. Learn more today or see why Scalyr is a great alternative to Splunk.

  • InMemory.Net provides a .Net-native in-memory database for analysing large amounts of data. It runs natively on .Net, and provides native .Net, COM & ODBC APIs for integration. It also has an easy-to-use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex measures your database servers’ work (queries), not just global counters. If you’re not monitoring query performance at a deep level, you’re missing opportunities to boost availability, turbocharge performance, ship better code faster, and ultimately delight more customers. VividCortex is a next-generation SaaS platform that helps you find and eliminate database performance problems at scale.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost-effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30-day trial here.

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required.

  • ManageEngine Applications Manager: Monitor physical, virtual and Cloud Applications.

  • Site24x7: Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below...



The Gig Economy Breaks Social Security

With the tax deadline looming in the US and the future of the gig economy as the engine of scaling startup workforces under fire, there's an important point to consider: In the gig economy the entire social contract is kaput. Here's why.

Everyone who works in the US pays into the Social Security system. The whole idea of Social Security is young people pay in and old people take out.

When you are an employee Social Security taxes are taken directly out of your paycheck. You don't even have to think about it.

When you work in the gig economy you get a 1099-MISC at the end of the year. A 1099 reports payments made by the hiring company during the year; it's sent both to the worker and to the IRS.

It's up to the worker to report that income on their tax return as self-employment income, which is subject to a self-employment tax of 15.3% (Social Security plus Medicare). Most gig workers probably won't declare this income because a lot of them don't even know they are supposed to. My wife, Linda Coleman, a respected Enrolled Agent, says from people she has talked to, a lot of gig workers haven't even heard of self-employment tax. And there's only an ever-decreasing budget for the IRS to try to enforce all the rules.

And even if a gig worker does know about the tax, they might ask themselves: why should I pay 15.3% on my income when I'm making so little money and the company is capturing almost all the benefit?
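Back-of-the-envelope math makes the stakes concrete (simplified; the 92.35% factor is the standard adjustment to net self-employment earnings before the 15.3% rate applies):

```python
gig_income = 20_000.00              # net 1099 income for the year
taxable_base = gig_income * 0.9235  # SE tax applies to 92.35% of net earnings
se_tax = taxable_base * 0.153       # 12.4% Social Security + 2.9% Medicare
print(round(se_tax, 2))             # 2825.91 -- owed before any income tax
```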

The problem: if gig workers aren't contributing how is Social Security supposed to work? Gig workers simply won't have Social Security when they retire.

The way Social Security works is that all your wages and self-employment income are tracked by the Social Security Administration. If you aren't contributing then you aren't earning credits towards your account. And if you aren't earning credits you won't get much in the way of benefits. Social Security works like a big checking account. The amount you can take out is based on how much you put in (or close enough). If you aren't putting money in, you can't take it out later.

The whole big picture is not being communicated well to the public. Who is benefiting? It's not the worker. It's not the government. It's not even the shareholders because no dividends are being paid.

In the gig economy the entire social contract is kaput.


Stuff The Internet Says On Scalability For April 8th, 2016

Hey, it's HighScalability time:

Time for a little drone envy. Sea Hunter, 132 foot autonomous surface vessel.


If you like this sort of Stuff then please consider offering your support on Patreon.
  • 12,000: base pairs in the largest biological circuit ever built; 3x: places GitHub data is now stored; 3.5x: Slack's daily user growth this year; 56 million: events/sec processed through BigTable; 100 billion: requests per day served by Google App Engine

  • Quotable Quotes:
    • Horst724: #PanamaPapers is the biggest secret data leak in history. It involves 2.6 TB of data, a total of 11.5 million documents that have been leaked by an anonymous insider.
    • Amazon cloud has 1 million users and is near $10 billion in annual sales: Today, AWS offers more than 70 services for compute, storage, databases, analytics, mobile, Internet of Things, and enterprise applications. We also offer 33 Availability Zones across 12 geographic regions worldwide, with another five regions and 11 Availability Zones.
    • @CodeWisdom: "Give someone a program, you frustrate them for a day; teach them how to program, you frustrate them for a lifetime." - David Leinweber
    • @peterseibel: OH: it is amazing how many people reach for some complex distributed system when really all they need is a PC with 256 gigs of RAM in it.
    • @dschobel: once you realize that 1TB of ram costs ~$10k it changes your calculus for going distributed. I mean hopefully it does :)
    • @channingwalton: “The grid takes 8 hours so I’ll run it on my dev box, it’ll take 20 mins”, OH’d at a large bank
    • @noahsussman: First attempt at showing that CPU usage statistics of Web servers exhibit a 1/f spectral density. #devops #testing
    • @BenedictEvans: Tech spirals: Open/closed Client/server Search/curation Messaging/apps Document/service Bundle/unbundle Special/general purpose FB/Myspace
    • @Carnage4Life: Insider states Nest falling apart from constant death marches, no new products and missed revenue numbers. 😢
    • Catherine (Cat): Right then and there, I said loudly, confidently, and clearly so everyone in the room could hear me, “I DON’T UNDERSTAND.”
    • @mathiasverraes: Reductionist models of Complex Adaptive Systems are usually more appealing & seductive than acknowledging complexity.
    • @grayj_: Elixir vs. Go: the initial learning curve with Go is amazing, the number of things I miss when not using Elixir is amazing.
    • Broad Institute: Our DNA sequencers produce more than 20 Terabytes (TB) of genomic data per day, and they run 365 days a year.
    • Storage Mojo: The IOPS illusion has replaced the capacity illusion as the major impediment to understanding today’s I/O requirements. Latency, not IOPS, is the gating factor in storage performance today.
    • Ario Gilbert: This move [bricking Revolv] by Google opens up an entire host of concerns about other Google hardware.
    • Erik Darling: Every time I think of a place where someone could stick a scalar function into some SQL, it ends up killing parallelism. Now it’s just sad.
    • The Codist: I chose programmer because it was easier. Today I now realize how wrong I was despite all the great stuff I’ve been able to work on and ship over the past 20 years. Going towards the CTO/CIO/VP Engineering route, which was fairly new back then, would have been a much better plan.
    • dforrestwilson1: rotating green units in every 9-12 months and veteran units out is a recipe for failure. They have to relearn everything and rebuild any trust you build up with local leaders. The closest we got to that was 2 year deployments of reservists, which was effective in Iraq.
    • @drew_firment: #serverless … no servers = joy / testing = pain / logging = different / 3rd party APIs = be smart / scaling = dream

  • A good way to put it. The Man [Nvidia] Selling Shovels in the Machine-Learning Gold Rush. Nvidia has produced a $2 billion chip to accelerate artificial intelligence. It's called the Tesla P100, and if you put $1,000 down now you can buy it in a few years. Not really. Going from graphics processing to AI processing seems like an excellent product extension into the next big thing. Size matters in neural networks; the P100 could support networks that are 30x larger because it's a beast: 15 billion transistors, roughly three times as many as Nvidia's previous chips. An artificial neural network powered by the new chip could learn from incoming data 12 times as fast as was possible using Nvidia's previous best chip.

  • Couldn’t agree more. Why I love ugly, messy interfaces. Long live UIs that actually let you do something.

  • Is cloud or on-premise cheaper? It turns on whether you optimize your architecture to work in sympathy with the cloud. The Broad Institute: the cost of running the Genome Analysis Toolkit (GATK) best practices pipeline on a 30X-coverage whole genome was roughly the same as the cost of our on-premise infrastructure. Over a period of a few months, however, we developed techniques that allowed us to really reduce costs: We learned how to parallelize the computationally intensive steps like aligning DNA sequences against a reference genome. We also optimized for GCP's infrastructure to lower costs by using features such as Preemptible VMs. After doing these optimizations, our production whole genome pipeline was about 20% of the cost of where we started, saving our researchers millions of dollars, all while reducing processing turnaround time eight-fold.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...



How to Remove Duplicates in a Large Dataset Reducing Memory Requirements by 99%

This is a guest repost by Suresh Kondamudi from CleverTap.

Dealing with large datasets is often daunting. With limited computing resources, particularly memory, it can be challenging to perform even basic tasks like counting distinct elements, membership checks, filtering duplicate elements, finding minimum, maximum, or top-n elements, or set operations like union, intersection, and similarity.

Probabilistic Data Structures to the Rescue

Probabilistic data structures can come in pretty handy in these cases, in that they dramatically reduce memory requirements, while still providing acceptable accuracy. Moreover, you get time efficiencies, as lookups (and adds) rely on multiple independent hash functions, which can be parallelized. We use structures like Bloom filters, MinHash, Count-min sketch, and HyperLogLog extensively to solve a variety of problems. One fairly straightforward example is presented below.

The Problem

We at CleverTap manage mobile push notifications for our customers, and one of the things we need to guard against is sending multiple notifications to the same user for the same campaign. Push notifications are routed to individual devices/users based on push notification tokens generated by the mobile platforms. Because of their size (anywhere from 32b to 4kb), it’s non-performant for us to index push tokens or use them as the primary user key.

On certain mobile platforms, when a user uninstalls and subsequently re-installs the same app, we lose our primary user key and create a new user profile for that device. Typically, in that case, the mobile platform will generate a new push notification token for that user on the reinstall. However, that is not always guaranteed. So, in a small number of cases we can end up with multiple user records in our system having the same push notification token.

As a result, to prevent sending multiple notifications to the same user for the same campaign, we need to filter for a relatively small number of duplicate push tokens from a total dataset that runs from hundreds of millions to billions of records. To give you a sense of proportion, the memory required to filter just 100 million push tokens is 100M * 256 bytes ≈ 25 GB!
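That 25 GB figure is what makes a Bloom filter attractive here. The standard sizing formulas (mine, not from the post) show how a ~99% memory reduction falls out:

```python
import math

n = 100_000_000  # push tokens to test for duplicates
p = 0.01         # acceptable false-positive rate

# Optimal bit-array size and number of hash functions for a Bloom filter
m = -n * math.log(p) / (math.log(2) ** 2)  # ~958 million bits
k = (m / n) * math.log(2)                  # ~7 hash functions

print(f"{m / 8 / 1024**3:.2f} GB with k = {k:.0f}")  # ~0.11 GB vs 25 GB raw
```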

The Solution – Bloom filter
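The full post walks through CleverTap's actual setup; as a stand-in, here is a minimal self-contained sketch of the mechanics, deriving the k bit positions from one digest in the double-hashing style:

```python
import hashlib

class BloomFilter:
    def __init__(self, size_bits, num_hashes):
        self.m = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8 + 1)

    def _positions(self, item):
        # Split one SHA-256 digest into two 64-bit values and combine
        # them to simulate k independent hash functions
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big")
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# Deduplication pass: a miss is guaranteed to be a first sighting;
# a hit is only a *possible* duplicate (false-positive rate p)
seen = BloomFilter(size_bits=10_000_000, num_hashes=7)
for token in ["token-a", "token-b", "token-a"]:
    if token in seen:
        print("possible duplicate:", token)
    else:
        seen.add(token)
```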

