
Stuff The Internet Says On Scalability For August 25th, 2017

Hey, it's HighScalability time


View of the total solar eclipse from a hill top near Madras, Oregon, August 21, 2017. As totality approaches, dragons gorge on sun flesh; darkness cleaves the day; a chill chases away the heat; all becomes still. Contact made! Diamonds glitter; beads sparkle; shadow band snakes slither across pale dust; moon shadow races across the valley, devouring all in wonder. Inside a circle of standing stones, obsidian knives slash and stab. Sacrifices offered, dragons take flight. In awe we behold the returning of the light.


If you like this sort of Stuff then please support me on Patreon.


  • ~5: ethereum transactions per second; 29+M: Snapchat news viewers; 100K: largest Mastodon instance; 2x: Alibaba's cloud base growth; 1B: trees planted by a province in Pakistan; 90.07%: automated decoding of honey bee waggle dances; $86.4B: Worldwide Information Security Spending; 1200: db migrations from Mysql to Postgres; $7B: Netflix content spend (most not original); 13%: increased productivity from making vacation mandatory; 75%: US teens use iPhones; 30,000x: energy use for a Bitcoin transaction compared to Visa; ~1 trillion: observations processed for the Gaia mission; 50%: video share of North American internet traffic; $300 million: cost of cyberattack on world’s biggest container shipping company; 320 million: Freely Downloadable Pwned Passwords; 1700 B.C.: world’s oldest trigonometric table.

  • Quotable Quotes:
    • @matthew_d_green: I miss the days when Bitcoin was a cool technical innovation and not a weird religious movement.
    • Ruth Williams: Their new digital-to-biological converter (DBC) can, upon receipt of a DNA sequence, prepare appropriate oligos, carry out DNA synthesis, and then, as required, convert that DNA into a vaccine, or indeed into any RNA molecule or protein.
    • @trashcanlife: Hello, this is container 100406100098090 in Buffalo, United States. I am 38% full.
    • Zhang & Stutsman: Developing new systems and applications on RAMCloud, we have repeatedly run into the need to push computation into storage servers.
    • @kevinmontrose: Tomorrow the Sun will undergo routine maintenance in US region. Will be unavailable for select customers, others will have degraded service.
    • @bryanrbeal: We're officially in an era where every piece of HARDWARE you buy, is actually a service. There is no hardware any more.
    • @jessfraz: Literally throwing away two trash bags of container startups tee-shirts, sorry I just... there's too many
    • @postwait: "Scale" I don't think that word means what you think it means. Hint: it doesn't mean your arbitrary concept of "big."
    • Rod Squad: My friend, SocialBlade founder, Jason Urgo advised my 10-year-old son on how to start programming. Jason told us how he started dabbling with scripts and programs in kindergarten. He told me about the first game he programmed. He also listed some of the first applications he built. And he explained how he taught himself PHP to build the YouTube data compiler.
    • Kim Beaudin: Why do Java developers wear glasses? Because they can’t C#.
    • Christine Hall: Investing in private data centers isn’t as much of a priority for IT organizations as it was just several years back. That’s a takeaway from IT researcher Computer Economics’ annual IT Spending and Staffing Benchmarks report...According to the report, data centers now have the lowest priority for new spending among a list of five categories. Top priority is given to the development of business applications, a category in which 54 percent of respondents plan increased spending. However, only 9 percent have plans to increase data center spending, which the study attributes to increasing reliance on cloud infrastructure, cloud storage, and SaaS
    • morning paper: The core idea of a CGN is to gather all the information needed for a page load in a place that has a short RTT time, and then transfer it to the client in (ideally) one round trip. At a cost of about $1 per user, the authors show that it can reduce the median page load time across 100 popular web sites by up to 53%.
    • Nick Harley: It’s easy to shrug off problems with a ‘move fast and break things’ mentality. But we build software for our users, and sometimes forget they are real people.
    • alexkcd: Proof of work systems are, at the core, a race towards ever greater energy consumption. They're an environmental disaster waiting to happen. Surprised how little attention this gets. I would argue that the benefit of decentralization is not worth the price.
    • @EricNewcomer: Uber generates $1.75 billion in revenue on a $645 million loss
    • Preethi Kasireddy: In order to scale, the blockchain protocol must figure out a mechanism to limit the number of participating nodes needed to validate each transaction, without losing the network’s trust that each transaction is valid. 
    • HowDoIMathThough: A slide I personally find really interesting from anandtech's hot chips coverage - Intel has packaging technology that should allow multiple dies to be combined with extremely fast links extremely cheaply
    • Tim Bray: It may sound hack­neyed in 2017, but: Me, I be­lieve in pro­gress. I be­lieve in build­ing un­der­stand­ing cu­mu­la­tive­ly and striv­ing al­ways for Truth. Un­for­tu­nate­ly, there are places in the world, some quite near­by, where the en­e­mies of progress are strong. As Joel Mokyr teach­es, progress is not pre­des­tined to win; we have to fight for it and nev­er stop, or we can lose it; it’s hap­pened.
    • @rawkode: So fed up with watching micro-service talks where they say "More services == good" and don't even mention operational concerns or intg tests
    • @pcalcado: A little known fact is that approximately 47% of CPU usage across a typical Kubernetes cluster is invested translating between JSON and YAML
    • two2two: I asked my 16 year old nephew 6 months ago how he accesses the news. His answer: Snapchat. I followed that with anywhere else? His response was nope.
    • lima: Red Hat's OpenShift makes it [deploying applications?] a lot easier by providing all of the infrastructure around it (docker registry, docker build from Git, Ansible integration and so on). Best docs of all open source projects I've seen.
    • drdaeman: I'm really wary about using larger black boxes for critical parts. Just Linux kernel and Docker can bring enough headache, and K8s on top of this looks terrifying. Simplicity has value. GitHub can afford to deal with a lot of complexity, but a tiny startup probably can't. Or am I just unnecessarily scaring myself?
    • Stefan Majewsky: Across terminals, median latencies ranged between 5 and 45 milliseconds, with the 99.9th percentile going as high as 110 ms for some terminals. Now I can see that more than 100 milliseconds is going to be noticeable, but I was certainly left wondering: Can I really perceive a difference between 5 ms latency and 45 ms latency? Turns out that I can.
    • michaelt: Our current design [for van routing software] isn't well suited to adaption to a GPU, because it branches a lot and the memory accesses aren't strided evenly. So we couldn't just plug our current code into a java-to-cuda compiler; we'd need to change the design.
    • @GabeAul: It's official! We did the last migration this weekend, so all new Windows development is on Git! Congrats to the team who worked the w/end!
    • Iddo Bentov: What we’re seeing today is just a harbinger of problems to come should decentralized exchanges sweep over the cryptocurrency landscape. But since the problems that we’ve identified are exacerbated when higher value trades take place, we conjecture that such problems will ultimately limit the popularity of decentralized exchanges.
    • Steve Goldfeder: trackers can link real-world identities to Bitcoin addresses. To be clear, all of this leaked data is sitting in the logs of dozens of tracking companies, and the linkages can be done retroactively using past purchase data.
    • @jasongorman: Go read somebody else's code, *then* write more unit tests to catch any bugs you find. Code review doesn't scale.
    • David Rosenthal: Unless decentralized technologies specifically address the issue of how to avoid increasing returns to scale they will not, of themselves, fix this economic problem. Their increasing returns to scale will drive layering centralized businesses on top of decentralized infrastructure, replicating the problem we face now, just on different infrastructure.
    • @kevin2kelly: Bill Joy: I decided to spend my time trying to create the things we need as opposed to preventing what threatens us.
    • Ethan Zuckerman: decentralization is important because it allows a community to run under its own rules.
    • Tim Harford: to take advantage of electricity, factory owners had to think in a very different way. They could, of course, use an electric motor in the same way as they used steam engines. It would slot right into their old system. But you couldn't get these results simply by ripping out the steam engine and replacing it with an electric motor. You needed to change everything: the architecture and the production process. And because workers had more autonomy and flexibility, you even had to change the way they were recruited, trained and paid. Factory owners hesitated, for understandable reasons.
    • Dan Luu: We’ve looked at a variety of classic branch predictors and very briefly discussed a couple of newer predictors. Some of the classic predictors we discussed are still used in CPUs today, and if this were an hour long talk instead of a half-hour long talk, we could have discussed state-of-the-art predictors. I think that a lot of people have an idea that CPUs are mysterious and hard to understand, but I think that CPUs are actually easier to understand than software. I might be biased because I used to work on CPUs, but I think that this is not a result of my bias but something fundamental.
    • creshal: A current-gen 35W laptop CPU will be some 10 times faster[2] than a RasPi, have much faster storage available (SATA3 or NVMe versus… USB2), much faster I/O (GBit LAN and GBit Wifi versus… USB2), and a lot of other benefits. (Like an integrated screen and battery and keyboard and …) It also won't need external hardware to communicate with other cluster members – that 10-port ethernet switch will need power, too. One RasPi is relatively energy efficient; RasPi clusters… not so much.
    • howinator: we moved to k8s because we have quite a few low-usage services. Before k8s, each one of those services was getting its own EC2 instance. After k8s, we just have one set of machines which all the services use. If one service is getting more traffic, the resources for that service scale up, but we maintain a low baseline resource usage. In short, it's resulted in a measurable drop in our EC2 usage.
    • medius: If you are migrating to AWS RDS, I recommend AWS Data Migration service. I migrated my live database (~50GB) from Mysql to Postgres (both RDS) with zero downtime. I used AWS Schema Conversion Tool for initial PG schema. I customized the generated schema for my specific needs.
    • Jon Claerbout: interactive programs are slavery unless they include the ability to arrive in any previous state by means of a script
    • @bascule: ~5 transactions/second. @VitalikButerin: Congrats to ethereum community for 5 days of record-high transaction usage! (410061 ... 443356)
    • Sujith Ravi: Delegating the computation-intensive operations from device to the cloud is not a feasible strategy in many real-world scenarios due to connectivity issues (like when data cannot be sent to the server) or privacy reasons. In such scenarios, one solution is to take an existing trained neural network model and then apply compression techniques like quantization to reduce model size. The trainer model can be deployed anywhere a standard neural network is used. The simpler projection network model weights along with transform functions are extracted to create a lightweight model that is pushed to the device. This model is used directly on-device at inference time.
    • rothbardrand: BCH [Bitcoin Cash] is a hastily written hack job by a third rate team (I talked to some of them on twitter, they really don't understand a lot of what they are doing)... with a drastic difficulty retargeting algorithm. A bit of a pump combined with hash power manipulations led to this. This is all show to try and prop up the coin. Both the pump and the "profitability" of mining it. 98% of the blocks of this coin are mined by an unknown entity-- in other words, it's not decentralized. It's trivial for that entity to manipulate the difficulty retargeting mechanism in his favor. Stay away. This is not "bitcoin" in any sense.
    • John Allspaw: It’s only when there isn’t universal agreement about a decision (or even if a decision is necessary) that the how, who, and when a decision gets made becomes important to know. The idea of an architecture review is to expose the problem space and proposed departure ideas to dialogue in a broad enough way that confusion about them can be reduced as much as possible. Less confusion about the topic(s) can help reduce uncertainty and/or anxiety about a solution.
    • HBR: Even though the resilient superhero is usually perceived as better, there is a hidden dark side to it: it comes with the exact same traits that inhibit self-awareness and, in turn, the ability to maintain a realistic self-concept, which is pivotal for developing one’s career potential and leadership talent. 
    • Charles Allen: When a local disk fails, the solution is to kill that instance and let the HA built into your application recover on a new VM. When network disk fails or has a multi-instance brownout, you’re just stuck and have to failover to another failure domain, which is usually in another availability zone or in some cases another region! We know this because this kind of failure has caused production outages for us before in AWS. This trend towards network attached storage is one of the scariest industry trends for big data in the cloud where there will probably be more growing pains before it is resolved.

  • Pipeline Blindness: failing to find patterns in data because your data processing pipeline wasn't programmed to find them. This happened on the Kepler mission, a spacecraft tasked with finding exoplanets by staring at the same region of space for a very long time. Its data processing pipeline failed to find an exotic four-star planet because it wasn't looking for it. Who knew such a thing existed? Humans examining the data did notice the unexpected. Citizen Scientists Discover Four-Star Planet with NASA Kepler. This is why you want to make data public. Many eyeballs may not make software better, but those eyeballs are good at finding patterns.

  • Papers from HotCloud '17 are now available. Adrian Colyer reviews many of the papers, like Growing a protocol

  • You know when they say it's not about the money, it's really about the money? When they say it's unlimited, it's never unlimited. Verizon’s good unlimited data plan is now three bad unlimited plans. Verizon will start throttling all video down to 480p or 720p, even if you have their unlimited plan.

  • A three person team moved the New York Times crossword from AWS to GCP/App Engine while cutting infrastructure costs in half. Moving The New York Times Games Platform to Google App Engine: crossword has grown into a suite of mobile apps and a fully interactive website that has over 300,000 paid subscribers...To serve puzzle data to that many subscribers and to handle advanced features like syncing game progress across multiple devices, our backend systems were running on Amazon Web Services with a LAMP-like architecture...Due to the inelastic architecture of our AWS system, we needed to have the systems scaled up to handle our peak traffic at 10PM when the daily puzzle is published. The system is generally at that peak traffic for only a few minutes a day, so this setup was very costly...we decided to rebuild our systems using Go, Google App Engine, Datastore, BigQuery, PubSub and Container Engine...all games API traffic is flowing through App Engine and 90% of the traffic is served purely by App Engine services and GCP databases.

  • A lot of forking going on. Node.js has forked into Ayo. Bitcoin has split in two, so you can have double the cryptocurrency. Twitter forks Scala.

  • Microsoft unveils Project Brainwave for real-time AI: our cross-Microsoft team unveiled a new deep learning acceleration platform, codenamed Project Brainwave...leverages the massive FPGA infrastructure that Microsoft has been deploying over the past few years.  By attaching high-performance FPGAs directly to our datacenter network, we can serve DNNs as hardware microservices, where a DNN can be mapped to a pool of remote FPGAs and called by a server with no software in the loop.  This system architecture both reduces latency, since the CPU does not need to process incoming requests, and allows very high throughput, with the FPGA processing requests as fast as the network can stream them...Second, Project Brainwave uses a powerful “soft” DNN processing unit (or DPU), synthesized onto commercially available FPGAs...We showed Stratix 10 sustaining 39.5 Teraflops on this large GRU, running each request in under one millisecond.  At that level of performance, the Brainwave architecture sustains execution of over 130,000 compute operations per cycle.
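A quick sanity check on those two figures, which jointly imply the FPGA's clock rate (a back-of-the-envelope estimate, not a number from the article):

```python
# Back-of-the-envelope: if the FPGA sustains 39.5 teraflops while executing
# ~130,000 compute operations per cycle, the implied clock frequency is:
teraflops = 39.5e12          # sustained operations per second (quoted above)
ops_per_cycle = 130_000      # compute operations per cycle (quoted above)

implied_clock_hz = teraflops / ops_per_cycle
print(f"Implied clock: {implied_clock_hz / 1e6:.0f} MHz")  # ≈ 304 MHz
```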

  • We've taken this path with rule based systems, which at some point get so complex you have no idea what they'll do anymore. Until then, they're great. Serverless beyond Functions: One of the great advantages of serverless development is the possibility to “chain” multiple functions together and design event-driven architectures...In this way, you can decompose and distribute business logic in smaller components that follow the data flow of your application: if this happens, do that...Applications built in this way are easier to keep under control, because our human minds are much better at looking for cause-effect relationships than at understanding a complex workflow...Adding new features is also easier, because you don’t need to review all your code base to find the right spots to change; you can start by thinking: What would be the cause (trigger) of that? Which would be the effects (what to trigger next)?...Lambda functions should be designed to be stateless, and can use a persistence tier to read/write data...Using WebSockets and AWS IoT, web browsers can receive data from Lambda functions, when those functions publish something on a topic the browsers have subscribed to...This is possible using together “building blocks” such as, in this case, AWS Lambda, Amazon API Gateway, AWS IoT and Amazon DynamoDB, that provide high level functionalities, with built-in scalability and reliability, without the requirement to provision, scale, and manage any servers. This is the power of “serverless”.
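The "if this happens, do that" chain (Lambda → DynamoDB → IoT topic) can be sketched in a few lines. This is a minimal illustration, not the article's code; the table and topic names are invented, and the boto3 clients are injectable so the handler stays stateless and testable without AWS credentials:

```python
import json

def handler(event, context=None, dynamodb=None, iot=None):
    """Persist an incoming event, then publish a message that triggers the next step."""
    if dynamodb is None or iot is None:
        import boto3                               # only needed on real AWS
        dynamodb = dynamodb or boto3.client("dynamodb")
        iot = iot or boto3.client("iot-data")

    # Cause: an event arrives. Effect 1: write it to the persistence tier.
    dynamodb.put_item(
        TableName="game-state",                    # hypothetical table name
        Item={"id": {"S": event["id"]}, "payload": {"S": json.dumps(event)}},
    )
    # Effect 2: notify subscribed browsers (via an AWS IoT topic) of the change.
    iot.publish(topic=f"updates/{event['id']}", qos=0, payload=json.dumps(event))
    return {"statusCode": 200}
```

Because the clients are parameters, the same handler runs unchanged in Lambda (using the default boto3 clients) and in local tests (using stubs).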

  • I guess us older folks will have to find a new app now, the kids have found our secret fort in the back yard. Let’s face reality: US Teens engage with iMessage more than any other social platform: iMessage is where a lot of mobile usage is trending towards, particularly for Gen-Z...Majority of US based teens have iPhones, and it’s only trending upwards. Yes, really...75% of US teens use iPhone...iMessage IS a social platform for teens. It’s currently the center of their immediate, social universe...For teens, the core level of activity is sending a message...Mix the network effect of such an active demographic with the enhanced functionality iMessage will support over the next several years and you have a next generation, immersive messaging experience. Simple conversations can be enriched and effortlessly amplified with one tap by expressing yourself in the form of an animated GIF, a once boring group chat can now seamlessly start a game of Connect 4, Battleship, or 8-Ball pool, and of course a group of close family friends can decide to take their conversation live by tapping into a group video call right within their iMessage window. 

  • When the universe finally ends, a debugging session will likely find the cause was a memory leak. An Embarrassing Tale: Why my server could only handle 10 players: I scanned my code, line by line, looking for the bug (which I should have done at the very beginning). There it was...I have this loop that goes through each event and updates it. It’s called every 16 ms. After an event fulfills its duty, it’s supposed to be deleted. Keywords: “supposed to.”...I had memory piling up as well as an increasing amount of unnecessary for-loop passes. I inserted a line of code and voila!
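The bug class is easy to reproduce in a few lines; a minimal sketch (the Event class and tick functions are invented for illustration, not the author's actual code):

```python
# Minimal reproduction of the leak: finished events were never removed, so the
# loop that runs every 16 ms iterated (and retained) an ever-growing list.
class Event:
    def __init__(self, ticks_to_live):
        self.ticks_to_live = ticks_to_live

    def update(self):
        self.ticks_to_live -= 1

    @property
    def done(self):
        return self.ticks_to_live <= 0

def tick_leaky(events):
    """The buggy version: updates everything, deletes nothing."""
    for e in events:
        e.update()
    return events                 # finished events pile up forever

def tick_fixed(events):
    """The one-line fix: drop finished events after each pass."""
    for e in events:
        e.update()
    return [e for e in events if not e.done]
```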

  • Google and Walmart Partner With Eye on Amazon. Coalitions are aligning to do battle. Napoleon liked to mass his troops and take the central position between two opposing forces, he would strike first at one and then at the other before they could coalesce. 

  • Michael Stonebraker is back. How Hardware Drives The Shape Of Databases To Come: I don’t see these [3D XPoint or ReRAM] as being that disruptive because all of them are not fast enough to replace main memory and they are not cheap enough to replace disks, and they are not cheap enough to replace flash...I foresee databases running on two-level stores and three-level stores, but I doubt they will be able to manage four-level stores because it is just too complicated to do the software...The most interesting thing to me is that networking is getting faster at a pace that is higher than CPUs are getting beefier and memory is getting faster. Essentially all multi-node database systems have been designed under the premise that networking is the bottleneck. It turns out that no one can saturate 40 Gb/sec Ethernet...that means essentially that everybody gets to rethink their fundamental partitioning architecture, and I think this will be a big deal...My general suspicion is that networking advances will make it at least as beefy as the storage system, at which point database systems will not be network bound and there will be some other bottleneck. If you are doing data science, that bottleneck is going to be the CPU because you are doing a singular value decomposition, and that is a cubic operation relative to the number of cells that you look at. If you are doing conventional business intelligence, you are likely going to be storage bound, and if you are doing OLTP you are already in main memory anyway.

  • Another episode of: when you buy remotely controllable software, you don't own shite. Sonos says users must accept new privacy policy or devices may "cease to function": Sonos has confirmed that existing customers will not be given an option to opt out of its new privacy policy, leaving customers with sound systems that may eventually "cease to function".

  • Where in the World is Mobile Development?: Countries with lower GDP per capita visit substantially more Android than countries with high GDP per capita...Unlike Android, there’s no correlation, positive or negative, between iOS traffic and per-capita GDP. This suggests that the correlation between Android and GDP may not be driven entirely by the market share of the platform in each country...thing to note is that Android is more visited than iOS from literally everywhere.

  • Metamarkets with a great comparison of AWS vs GCP. Going Multi-Cloud with AWS and GCP: Lessons Learned at Scale. The point wasn't to declare a winner, they want to take advantage of both clouds: we believe that one day soon people will think of servers the same way they think of circuits. Our technology investments are aimed at making the connectivity of data to insight completely seamless. Some highlights: AWS has the best offering for local disk solutions of the two cloud vendors...the rate card for local SSD in GCP is much higher than similar storage in AWS...For GCP, networking per VM is both significantly higher than what is achieved in AWS, and more consistent...for the same amount of work done on GCP compared to AWS where the CPU is more consistent from VM to VM...From a flexibility standpoint, the GCP offering of custom machine types is something we use extensively...The GCP support has been a much more favorable interaction compared to our experience (at the support level we pay for) with AWS support...for most [GCP] use cases you have to assume in any particular zone that all of your instances are running on the same machine. We have had failures of underlying hardware or some part of networking simultaneously take out multiple instances before, meaning they shared the same hardware at some finer level than just availability zone...GCP has been a lot more forthcoming with what issues their services are experiencing...A unique feature for GCP is the ability to migrate your VMs to new hardware transparently...getting a handle on your cloud spend is a huge hassle in both AWS and GCP...The strategy for AWS is largely around instance reservations. 
With the recent addition of convertible reservations and instance size flexibility, it makes experimenting with more efficient instance configurations much easier...For GCP, the strategy seems to be headed toward committed use discounts and sustained usage discounts with a premium for specific extended compute or extended memory needs...Both cloud providers have excellent security features for data and we have never been concerned about security of the cloud providers themselves...In general AWS has a higher quantity of more mature features, but the features GCP is publishing tend to come with less vendor lock.

  • Making the most of JavaScript. JavaScript for extending low-latency in-memory key-value stores: Procedure invocation and interactions between JavaScript and the host database process are 11.4× and 72× faster than using native code and hardware based protections. Short and data-intensive procedures will benefit from JavaScript; V8 with asm.js is only 2-10% slower than native code, so compute-bound workloads are okay thanks to aggressive compiler optimization. V8 is harder on CPU branch prediction than native code; Leverage JavaScript Types and JIT; Minimize Data Movement; Exploit Semantics for Garbage Collection; Expose Database Abstractions; Fast Protection Domain Switch.

  • Jim Clark (Netscape, Silicon Graphics) tells Jason Calacanis in an interview that SGI pioneered GPU technology with their geometry engine. Bad management drove SGI's great 3D chip designers to leave SGI to found Nvidia. Jim Clark thinks making GPUs is the direction SGI should have taken, but didn't.

  • Cloud egress costs are still way too high, but Google is segmenting their cloud by networking performance. In the VIP section, behind red velvet ropes, your traffic will flow over Google's highspeed network. Down in the pit, your traffic flows over the internet. Being with the people will save you 24-33%. Bottle service is expensive. 

  • CRE life lessons: The practicalities of dark launching: In a dark launch, you take a copy of your incoming traffic and send it to the new service, then throw away the result. Dark launches are useful when you want to launch a new version of an existing service, but don’t want nasty surprises when you turn it on...Generally, a read-only service is fairly easy to dark-launch. A service with queries that mutate backend storage is far less easy...The easiest option is to disable the mutates for the dark-launch traffic, returning a dummy response after the mutate is prepared but before it’s sent...Alternatively, you might choose to send the mutation to a temporary duplicate of your existing storage...during [a storage] migration, you should always make sure that you can revert to the old storage system if something goes wrong with the new one...Make sure your backends are appropriately provisioned for 2x the current traffic...determine the largest percentage launch that is practical and plan accordingly, aiming to get the most representative selection of traffic in the dark launch. Within Google, we tend to launch a new service to Googlers first before making the service public...Because of the load impact of duplicate traffic, you should carefully consider how to use load shedding in this experiment.
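The read/mutate split described above can be sketched as a small shim in front of the two services. This is a hedged illustration of the pattern, not Google's implementation; all names are hypothetical:

```python
# Dark-launch shim: serve from the old backend, mirror a sampled fraction of
# traffic to the new one, and throw the new result away. Mutating requests are
# skipped on the dark path so no duplicate writes reach storage.
import random

def dark_launch(request, old_backend, new_backend, sample_rate=0.1, rng=random.random):
    """Return the old backend's response; optionally exercise the new backend."""
    live_response = old_backend(request)      # the user only ever sees this
    if rng() < sample_rate:
        try:
            if request.get("mutates"):
                pass                          # dark path: stop before the real write
            else:
                new_backend(request)          # observe latency/errors, discard result
        except Exception:
            pass                              # new-service failures must never leak
    return live_response
```

In a real launch the discarded responses would be logged and diffed against the live ones, and `sample_rate` ramped up as confidence grows.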

  • Immutable Deployment Challenges For DevOps: The gist of the call is that DevOps processes are moving faster and faster as teams embrace the create-destroy-repeat pattern of cloud automation. This pattern favors immutable images driven by cloudinit style bootstrapping. This changes our configuration management practice because configuration is front loaded. It also means that we destroy rather than patch. We both felt that this immutable pattern will become dominant over time.

  • Inserting spies into your enemy's camp has long been tradition among warring factions. What's easier than buying off those that are already there? When a company like LastPass is acquired, there may be more going on than appears. Facebook’s Onavo Gives Social-Media Firm Inside Peek at Rivals’ Users: Months before social-media company Snap Inc. publicly disclosed slowing user growth, rival Facebook Inc. already knew...Facebook’s early insight came thanks to its 2013 acquisition of Israeli mobile-analytics company Onavo...Onavo’s data comes from Onavo Protect, a free mobile app that bills itself as a way to “keep you and your data safe” by creating a virtual private network, a service used to encrypt internet traffic.

  • You know when selecting the type for an ID, one person always argues for using a short type to save space and another argues for a long type so the IDs will never run out? The short type says IDs never run out, what's the big deal? Well, they do run out. The Night the PostgreSQL IDs Ran Out
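The arithmetic is worth making concrete; a sketch assuming a Postgres `integer` (int4) key and a hypothetical insert rate:

```python
# Why "short" ID types run out: a Postgres `integer` primary key tops out at
# 2^31 - 1. At a steady insert rate you can compute the day the inserts fail.
INT4_MAX = 2**31 - 1          # 2,147,483,647
rows_per_day = 2_000_000      # hypothetical insert rate

days_until_exhaustion = INT4_MAX // rows_per_day
print(days_until_exhaustion)  # 1073 days -- under three years
```

A `bigint` (2^63 - 1) at the same rate lasts billions of years, which is why the long-type argument usually wins in hindsight.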

  • Science experiments generate a lot of data. The Gaia Mission maps 1 billion stars. Interview with Uwe Lammers: Since the beginning of the nominal mission in 2014 until end June 2017 the satellite has delivered about 47.5 TB of compressed raw data...The average raw daily data rate is about 40 GB. Data is transmitted from the satellite to the ground through a so-called phased-array antenna (PAA) at a rate of up to 8.5 Mbps...Raw data are essentially unprocessed digital measurements from the CCDs – perhaps comparable to data from the “raw mode” of digital consumer cameras...Here at the Science Operations Centre (SOC) near Madrid we chose InterSystems Caché RDBMS + NetApp hardware as our storage solution years ago and this continues to be a good solution...Data transfers are likewise a challenge. At the moment 1 Gbps connections (public Internet) between DPCE and the other 5 DPCs are sufficient; however, in the coming years we heavily rely on seeing bandwidths increase to 10 Gbps and beyond...One technology we are looking at is Apache Spark for big data processing. At the moment we are offering access to the catalogue only through a traditional RDBMS system which allows queries to be submitted in a special SQL dialect called ADQL (Astronomical Data Query Language). This DB system is not using InterSystems Caché but Postgres.
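A quick sanity check of the quoted figures, assuming roughly 1,277 days between the start of the nominal mission (early 2014) and end of June 2017:

```python
# 47.5 TB of raw data over ~3.5 years should be consistent with the quoted
# "about 40 GB per day" average downlink rate.
total_bytes = 47.5e12
mission_days = 1277            # approx. Jan 2014 .. Jun 2017 (an assumption)

gb_per_day = total_bytes / mission_days / 1e9
print(f"{gb_per_day:.0f} GB/day")   # ≈ 37 GB/day, consistent with ~40 GB/day
```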

  • How do you take one of those algorithms you learn in school—traveling salesperson—and make it work in real life? The software routing 260,000 grocery deliveries a week: The algorithm makes several million route calculations per second to identify the best delivery routes for our drivers...The algorithm works by rapidly attempting small moves and assessing the impact of each move upon the overall solution by evaluating a cost function. This allows us to calculate the optimal route for each van...The optimiser runs approximately 4 million moves per second...You can picture this optimising process as a series of peaks and valleys in an almost endless landscape, where a peak represents the highest cost possible and a valley represents an optimal solution. Your job is to explore your local area, and head towards the lowest nearby point. However, there are multiple peaks and valleys so you can’t be sure that once you are in a valley it is the lowest possible point...The program runs all day, every day, using rapid incremental movements so it can always be searching for more efficient solutions to the problem...The software platform is constantly running multiple instances of the optimiser simultaneously, each iteration focussing on a specific area for a certain day...Our vans are equipped with a range of IoT sensors that log relevant data during deliveries such as location, wheel speed, engine revs, braking, fuel consumption, and cornering speed...The secret ingredient to our routing success is the broad range of variables we take into account when calculating the cost function, including van capacities, weights, volumes, fuel consumption and even driver experience. Also, A story of fractals, discrete optimisation and tissue paper delivery - Redmart x DevFest.Asia. Also also, Data Science at Instacart: Making On-Demand Profitable
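The "rapid small moves, evaluate the cost function, head downhill" loop described above is classic local search. A toy 2-opt version on a delivery route, with Euclidean distance standing in for the real multi-variable cost function (van capacities, fuel, driver experience, etc.):

```python
# 2-opt local search: repeatedly reverse a segment of the route and keep the
# change whenever it lowers the cost function -- i.e., always move downhill.
import itertools, math

def cost(route, coords):
    """Total Euclidean length of the route (a stand-in for the real cost function)."""
    return sum(math.dist(coords[a], coords[b]) for a, b in zip(route, route[1:]))

def two_opt(route, coords):
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(1, len(route) - 1), 2):
            candidate = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
            if cost(candidate, coords) < cost(route, coords):   # downhill move
                route, improved = candidate, True
    return route
```

Because it only ever moves downhill, this sketch can get stuck in a local valley; the production systems described in the article run many instances continuously to keep exploring the landscape.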

  • #4 is the winner. 5 things about programming I learned with Go: 1. It is possible to have both dynamic-like syntax and static safety;  2. It’s better to compose than inherit; 3. Channels and goroutines are a powerful way to solve problems involving concurrency; 4. Don’t communicate by sharing memory, share memory by communicating; 5. There is nothing exceptional in exceptions. Good comment section: C++ can do all that, Erlang did all that before, Go doesn't make you do all that. 
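Point #4 is Go's channel idiom. A rough Python analogue passes messages over a thread-safe queue so the worker never mutates shared state directly (the worker and sentinel convention here are my sketch, not from the article):

```python
import queue
import threading

def worker(jobs, results):
    # Communicate results over a queue instead of writing to shared variables.
    while True:
        n = jobs.get()
        if n is None:          # sentinel value: no more work
            break
        results.put(n * n)

jobs, results = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(jobs, results))
t.start()
for n in range(5):
    jobs.put(n)
jobs.put(None)
t.join()
squares = sorted(results.get() for _ in range(5))
```

Because all coordination goes through the queues, there is no lock to forget and no shared structure to corrupt — which is the point of the Go proverb.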

  • Looks like great fun. Fab Academy Course Structure: At the Fab Academy, you will learn how to envision, prototype and document your ideas through many hours of hands-on experience with digital fabrication tools. We take a variety of code formats and turn them into physical objects.

  • Is there a business case for migrating 1200 databases from MySQL to Postgres for a 30% performance improvement? Only the shadow knows, but if you're making the change here's a very thorough description of how to make it happen. Migrating 1200 db from Mysql to Postgres: As of today we are running with Postgres, we got a 30% average speedup in our response times, and we have Postgres replication running across two servers, which gives us an alternative access path in case of failover. We implemented two new schemas with triggers and procedures written in plv8, whose performance allows us to keep historical information about our client changes, and many more things on the way, all running on Postgres.

  • How Samsung Will Improve 3D NAND Costs: First, four long steps are etched in the right-to-left direction of the figure.  This is probably done using a pullback etch series based on a single litho step, similar to the original approach, but two layers (rather than one) would be  etched at a time.  This is done on both the front of the feature and the back simultaneously, giving the pattern that “Up and Down” shape...In this way Samsung has reduced the die size of its 64-layer NAND while reducing process complexity and cost.  This approach is really the best of both worlds!

  • Ethereum's Biggest Hacking Problem Is Human Greed. Seems like a lot of heists. One has to wonder if "hacking" is simply cover for "earning" extraordinary returns.

  • There are many ways to version APIs. Stripe explains their approach. APIs as infrastructure: future-proofing Stripe with versioning: At Stripe, we implement versioning with rolling versions that are named with the date they’re released (for example, 2017-05-24)...although backwards-incompatible, each one contains a small set of changes that make incremental upgrades relatively easy so that integrations can stay current...The first time a user makes an API request, their account is automatically pinned to the most recent version available, and from then on, every API call they make is assigned that version implicitly...API resources are written so that the structure they describe is what we’d expect back from the current version of the API...Version changes are written so that they expect to be automatically applied backwards from the current API version and in order. Each version change assumes that although newer changes may exist in front of them, the data they receive will look the same as when they were originally written...Version change modules keep older API versions abstracted out of core code paths. Developers can largely avoid thinking about them while they’re building new products.
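The scheme can be pictured as a chain of small backward transformations: responses are rendered at the current version, then each version-change module downgrades the payload until it matches the date the account is pinned to. The change modules and fields below are invented for illustration; they are not Stripe's real changes:

```python
def drop_split_name(data):
    # Hypothetical 2017-05-24 change: older versions had a single "name" field.
    data = dict(data)
    data["name"] = f"{data.pop('first_name')} {data.pop('last_name')}"
    return data

def drop_enum_status(data):
    # Hypothetical 2017-02-14 change: older versions had a boolean "active".
    data = dict(data)
    data["active"] = data.pop("status") == "active"
    return data

VERSION_CHANGES = {"2017-05-24": drop_split_name, "2017-02-14": drop_enum_status}

def render(data, pinned_version):
    """Downgrade a current-version response to the account's pinned version."""
    for released in sorted(VERSION_CHANGES, reverse=True):  # newest change first
        if pinned_version < released:   # ISO dates compare correctly as strings
            data = VERSION_CHANGES[released](data)
    return data

current = {"first_name": "Ada", "last_name": "Lovelace", "status": "active"}
old = render(current, "2016-12-01")   # pinned before both changes shipped
```

Each module only has to understand the version immediately after it, which is what lets Stripe keep old versions out of core code paths.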

  • Correcting bugs is always tricky. Correction of a pathogenic gene mutation in human embryos: They properly corrected a genetic defect inherited from the male donor. The experiment worked, but not quite how the researchers expected: instead of replacing the faulty gene with the proposed replacement, the process duplicated the maternal gene.

  • Let's hope Google has some real OS people on the team and we don't get another compromise OS like iOS.  Why On Earth Is Google Building A New Operating System From Scratch? Fuchsia is an attempt to get the best of both worlds between Linux–which is still better at allowing apps and hardware to communicate through the operating system–and today’s embedded systems, such as FreeRTOS and ThreadX...Dediu has a different theory: A fresh operating system could be free of the intellectual property licensing issues that have hounded Google with Android

  • RF energy harvesting is useless. What's not useless are super capacitors, PV cells (a PV cell with physical dimensions compatible with the sensor can power the sensor indefinitely), energy optimisation through intelligent algorithms, and programming the sensor by task rather than by sample rate or transmission rate (a sensor tasked with monitoring office space identifies that the lights are off and therefore sampling rates can be drastically reduced; on weekends in an empty office, the rate of change of CO2 concentration drops, so the CO2 sensing rate can be reduced accordingly). LoRaWAN Energy Performance & Ambient Energy Harvesting: an RF energy harvester with 100% efficiency converting a -30 dBm RF signal to stored energy. -30 dBm is 1 microwatt. Over one year this equates to 32 Joules of energy (1×10⁻⁶ W * 3600 sec per hour * 24 hours per day * 365 days per year). However, at -30 dBm the harvester efficiency would be closer to 5%, or 1.6 J PER YEAR. Increase the signal to -20 dBm and 20% harvester efficiency and the energy available over a whole year rises to 63 Joules. Compare this to the 33,000 Joules available from an AA sized 3.6V Lithium battery. Also consider that your mobile phone thinks that -80 dBm (a million times weaker than -20 dBm) is a good signal.
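The arithmetic holds up. A restatement of it in code (the efficiency figures are the article's; the helper function is mine):

```python
def joules_per_year(dbm, efficiency):
    """Energy banked per year by an RF harvester at a given received power."""
    watts = 10 ** (dbm / 10) / 1000          # dBm → watts
    return watts * efficiency * 3600 * 24 * 365

ideal = joules_per_year(-30, 1.00)      # ≈ 31.5 J/yr: the "100% efficient" case
realistic = joules_per_year(-30, 0.05)  # ≈ 1.6 J/yr at a realistic 5% efficiency
stronger = joules_per_year(-20, 0.20)   # ≈ 63 J/yr at -20 dBm, 20% efficiency
aa_battery_j = 33_000                   # J in an AA-size 3.6 V lithium cell
```

Even the optimistic -20 dBm case harvests under 0.2% of one AA cell per year, which is the article's point.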

  • Does The Cloud Need Stabilizing? We identify the following cloud design principles to be the most important factors contributing to the high availability of cloud services: keep services “stateless” to avoid state corruption; design loosely coupled distributed services where nodes are dispensable/substitutable; leverage low-level infrastructure and sharding when building applications.

  • lyft/toasted-marshmallow: implements a JIT for marshmallow that speeds up dumping objects 10-25X (depending on your schema).

  • alexellis/faas: a framework for building serverless functions with Docker which has first class support for metrics. Any process can be packaged as a function enabling you to consume a range of web events without repetitive boiler-plate coding.

  • huwb/jitternator: In the past I've spent weeks painstakingly hunting down jitter issues - visible stutter in games. I decided to document the lessons and techniques I picked up along the way in the hope that it might help others.

  • lattner/ This document is published in the style of a "Swift evolution manifesto", outlining a long-term view of how to tackle a very large problem. It explores one possible approach to adding a first-class concurrency model to Swift, in an effort to catalyze positive discussion that leads us to a best-possible design. 

  • kubeless/kubeless: a Kubernetes-native serverless framework that lets you deploy small bits of code without having to worry about the underlying infrastructure plumbing. It leverages Kubernetes resources to provide auto-scaling, API routing, monitoring, troubleshooting and more.

  • Tabu search: a metaheuristic search method employing local search methods used for mathematical optimization. Local (neighborhood) searches take a potential solution to a problem and check its immediate neighbors (that is, solutions that are similar except for very few minor details) in the hope of finding an improved solution. Local search methods have a tendency to become stuck in suboptimal regions or on plateaus where many solutions are equally fit. Tabu search enhances the performance of local search by relaxing its basic rule. First, at each step worsening moves can be accepted if no improving move is available (like when the search is stuck at a strict local minimum). In addition, prohibitions (henceforth the term tabu) are introduced to discourage the search from coming back to previously-visited solutions.
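The two rules described above — accept the best non-tabu neighbor even when it worsens the objective, and keep a short memory of recent solutions to forbid revisiting them — fit in a few lines. A minimal sketch on a toy one-dimensional objective (the objective function and tabu-list size are illustrative):

```python
from collections import deque

def tabu_search(f, start, neighbors, iters=100, tabu_size=10):
    """Minimize f: the best non-tabu neighbor wins each step, even if worse."""
    current, best = start, start
    tabu = deque([start], maxlen=tabu_size)   # short-term memory of visited points
    for _ in range(iters):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        # Accept the best candidate even if it worsens f — this is what lets
        # the search march uphill out of a local minimum.
        current = min(candidates, key=f)
        tabu.append(current)
        if f(current) < f(best):
            best = current
    return best

# Toy objective: a local minimum at x=2 (f=5) and the global minimum at x=8 (f=0).
def f(x):
    return min((x - 2) ** 2 + 5, (x - 8) ** 2)

best = tabu_search(f, start=0, neighbors=lambda x: [x - 1, x + 1])
```

Plain greedy local search from x=0 would stop at the local minimum x=2; the tabu list forbids stepping back, forcing the search over the hump to x=8.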

  • An efficient bandit algorithm for realtime multivariate optimization: Here we focus on multivariate optimization of interactive web pages. We formulate an approach where the possible interactions between different components of the page are modeled explicitly. We apply bandit methodology to explore the layout space efficiently and use hill-climbing to select optimal content in realtime. Our algorithm also extends to contextualization and personalization of layout selection. Simulation results show the suitability of our approach to large decision spaces with strong interactions between content. We further apply our algorithm to optimize a message that promotes adoption of an Amazon service. After only a single week of online optimization, we saw a 21% increase in purchase rate compared to the average layout. Our technique is currently being deployed to optimize content across several locations.
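The explore/exploit loop at the heart of such a system can be sketched as a simple epsilon-greedy bandit over candidate layouts. This is a deliberately crude stand-in — the paper's method models interactions between page components and hill-climbs within a layout, neither of which this toy does — and the layouts and reward model are invented:

```python
import random

def epsilon_greedy(layouts, reward, rounds=5000, eps=0.1, rng=None):
    """Serve layouts, mostly exploiting the best observed purchase rate."""
    rng = rng or random.Random(0)
    shown = {l: 0 for l in layouts}
    purchases = {l: 0 for l in layouts}
    for _ in range(rounds):
        if rng.random() < eps or not any(shown.values()):
            choice = rng.choice(layouts)           # explore: try anything
        else:                                      # exploit: best rate so far
            choice = max(layouts, key=lambda l: purchases[l] / max(shown[l], 1))
        shown[choice] += 1
        purchases[choice] += reward(choice, rng)   # 1 = purchase, 0 = no purchase
    return max(layouts, key=lambda l: purchases[l] / max(shown[l], 1))

# Toy ground truth: layout "B" converts best.
rates = {"A": 0.02, "B": 0.05, "C": 0.03}
best = epsilon_greedy(list(rates), lambda l, rng: int(rng.random() < rates[l]))
```

The epsilon fraction of random traffic is what keeps the system discovering better layouts instead of locking onto an early lucky streak.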

  • Technical Session 5 Paper 3: Disk | Crypt | Net: rethinking the stack for high-performance video streaming: This motivates the author to build a system called Atlas which ideally does not use any memory and fetches content straight from disk to the NICs. Atlas puts SSDs directly in the TCP control loop by processing disk reads to completion and then transmitting. Atlas uses diskmap (a kernel-bypass framework) to achieve this design. Ilias states that this design falls slightly short of the ideal case due to diskmap limitations. Atlas outperforms the Netflix stack by 15% for unencrypted and 50% for encrypted traffic in terms of throughput, with half the number of cores used by Netflix. It halves the number of memory reads per packet sent. Their solution does involve some memory use, particularly due to inefficient re-use of the buffer cache limited by diskmap.

  • Paper 3: Re-architecting datacenter networks and stacks for low latency and high performance: On top of presenting a switch queuing algorithm, NDP proposes per-packet multipath forwarding and a novel transport protocol. Implementation: the authors implemented NDP in Linux hosts with DPDK, in a software switch, in a NetFPGA-based hardware switch, and in P4.


Hey, just letting you know I've written a novella: The Strange Trial of Ciri: The First Sentient AI. It explores the idea of how a sentient AI might arise as ripped-from-the-headlines deep learning techniques are applied to large social networks. Anyway, I like the story. If you do too please consider giving it a review on Amazon. Thanks for your support!
