We are leaving 3x-4x performance on the table just because of configuration.
Performance guru Martin Thompson gave a great talk at Strangeloop, Aeron: Open-source High-Performance Messaging, and one of the many interesting points he made was how much performance is being lost because we aren't configuring machines properly.
This point follows from the observation that "Loss, throughput, and buffer size are all strongly related."
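To see why these three are coupled, consider the bandwidth-delay product: to keep a link full, a sender needs bandwidth times round-trip time worth of data in flight, and the socket buffer has to hold at least that much. Here's a minimal sketch of the arithmetic; the link speed, RTT, and default buffer value are illustrative assumptions, not numbers from the talk:

```python
# Bandwidth-delay product (BDP): the amount of data that must be
# "in flight" to keep a link fully utilized. If the socket buffer is
# smaller than the BDP, throughput is capped no matter how fast the
# link is; if kernel queues overflow under bursts, you see loss instead.

link_gbps = 10    # assumed: 10 Gb/s link
rtt_ms = 1.0      # assumed: 1 ms round trip, e.g. within a data center

bdp_bytes = (link_gbps * 1e9 / 8) * (rtt_ms / 1e3)
print(f"BDP: {bdp_bytes / 1024:.0f} KiB")  # ~1221 KiB

# net.core.rmem_max defaults to around 208 KiB on many Linux
# distributions -- far below the BDP, so an untuned machine cannot
# fill this link with a single connection.
default_rmem_max = 212992
max_gbps = default_rmem_max * 8 / (rtt_ms / 1e3) / 1e9
print(f"Max throughput with default buffer: {max_gbps:.2f} Gb/s")  # ~1.70
```

Too small a buffer caps throughput; overflowing queues cause loss. Either way, buffer size is the lever.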
Here's a gloss of Martin's reasoning: this problem keeps recurring, and it goes unnoticed because most people don't know how to tune the OS's network parameters.
The separation of programmers and system administrators has become an anti-pattern. Developers don't talk to the people who have root access on the machines, and those people don't talk to the people who control network access. The result is machines that are never configured properly, which leads to a lot of loss. We are leaving 3x-4x performance on the table just because of configuration.
We need to work out how to bridge that gap: know what the parameters are and how to fix them.
So know your OS network parameters and how to tune them.
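As a concrete starting point, here's a minimal Python sketch that inspects a few of the Linux knobs involved. The /proc/sys paths and socket options are standard Linux interfaces; the 4 MiB request is an arbitrary example, not a recommendation from the talk:

```python
import socket

# Kernel-wide ceilings for socket buffers. setsockopt() requests above
# these limits are silently clamped, so per-application tuning alone
# isn't enough -- the machine itself has to be configured.
PARAMS = [
    "/proc/sys/net/core/rmem_max",            # max receive buffer (bytes)
    "/proc/sys/net/core/wmem_max",            # max send buffer (bytes)
    "/proc/sys/net/core/netdev_max_backlog",  # per-NIC input queue length
    "/proc/sys/net/ipv4/tcp_rmem",            # min/default/max TCP rcv buffer
    "/proc/sys/net/ipv4/tcp_wmem",            # min/default/max TCP snd buffer
]

for path in PARAMS:
    with open(path) as f:
        print(f"{path} = {f.read().strip()}")

# Ask for a 4 MiB receive buffer and see what we actually got.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
# Linux doubles the requested value to account for bookkeeping overhead;
# if the result is far below 8 MiB, rmem_max clamped the request.
print(f"Granted receive buffer: {granted} bytes")
sock.close()
```

Raising the ceilings themselves is a sysctl change (net.core.rmem_max, net.core.wmem_max, and friends), which is exactly the kind of root-level setting that falls into the gap between developers and admins.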
Related Articles
- Aeron: Do We Really Need Another Messaging System?
- Strategy: Exploit Processor Affinity For High And Predictable Performance
- 12 Ways To Increase Throughput By 32X And Reduce Latency By 20X
- Busting 4 Modern Hardware Myths - Are Memory, HDDs, And SSDs Really Random Access?
- Paper: Network Stack Specialization For Performance
- Strategy: Use Linux Taskset To Pin Processes Or Let The OS Schedule It?
- Big List Of 20 Common Bottlenecks
- The Secret To 10 Million Concurrent Connections - The Kernel Is The Problem, Not The Solution
- Russ’ 10 Ingredient Recipe For Making 1 Million TPS On $5K Hardware
- 42 Monster Problems That Attack As Loads Increase
- 9 Principles Of High Performance Programs