Building Highly Scalable IPv6-Only Cloud Hosting

This is a guest repost by Donatas Abraitis, Lead Systems Engineer at Hostinger International.

This article is about how we built a new, highly scalable cloud hosting solution using IPv6-only communication between commodity servers, what problems we ran into with the IPv6 protocol, and how we tackled them while handling more than ten million active users.

Why did we decide to run an IPv6-only network?

At Hostinger we care a great deal about innovative technologies, so we decided to build a new project, named Awex, on top of this protocol. If we can start today, why wait? Only the frontend (user-facing) services run in a dual-stack environment; everything else is IPv6-only for east-west traffic.

Architecture

We use pods. A pod is a cluster that shares the same VIP (Virtual IP) addresses, announced as anycast, and can handle HTTP/HTTPS requests in parallel. Hundreds of nodes per pod can serve users' requests simultaneously without saturating any single one. Parallelization is done using BGP and ECMP with resilient hashing to avoid traffic scattering. Hence every edge node runs a BGP daemon to announce the VIPs to the ToR switch. As the BGP daemon we run ExaBGP, using a single IPv6 session to announce both protocols (IPv4/IPv6). The BGP session is configured automatically during the server bootstrap step. Announcements differ depending on the server's role and include a /64 prefix per node plus many VIPs for north-south traffic. The /64 prefix is delegated specifically to containers: every edge node runs plenty of containers, and they communicate with containers on other nodes and with internal services.
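To make this concrete, here is a rough sketch of what an ExaBGP neighbor definition for a single edge node could look like. All addresses, AS numbers and prefixes below are placeholders rather than our production values, and the real file is rendered automatically during bootstrap:

    # Sketch of an ExaBGP neighbor block for one edge node (placeholder values).
    neighbor 2001:db8:1::1 {
        # The ToR switch is the peer; the session itself runs over IPv6 only.
        router-id 192.0.2.11;
        local-address 2001:db8:1::11;
        local-as 65011;
        peer-as 65000;

        # Both address families are carried over the single IPv6 session.
        family {
            ipv4 unicast;
            ipv6 unicast;
        }

        static {
            # Anycast VIPs for north-south traffic.
            route 192.0.2.100/32 next-hop self;
            route 2001:db8:100::100/128 next-hop self;
            # The /64 delegated to this node's containers.
            route 2001:db8:1:11::/64 next-hop self;
        }
    }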

Every edge node uses a Redis slave replica to get the upstreams for a particular application; every upstream is a list of thousands of containers (IPv6) spanning the nodes in a pod. These huge lists are generated in real time using consul-template. An edge node also has many public IPv4 (512) and global IPv6 (512) addresses. Wondering why? To cope with DDoS attacks. We use DNS to randomize the A/AAAA records in the client's response: a client points their domain to our CNAME record named route, which in turn is randomized by our custom service named Razor. We will talk about Razor in further posts.
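As a minimal sketch, assuming an nginx-style upstream block and a hypothetical Consul service named app-backend (the real templates and service names differ), such a consul-template file might look like this:

    upstream app_backend {
        # consul-template re-renders this block whenever the service catalog
        # changes; note that IPv6 addresses must be wrapped in brackets here.
    {{ range service "app-backend" }}
        server [{{ .Address }}]:{{ .Port }};
    {{ end }}
    }

consul-template can then trigger a reload command so the edge node picks up the new upstream list as containers come and go.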

Network gear

At first we decided to use OpenSwitch for the ToR switches; it is quite a young but interesting and promising community project. We tested this OS in our lab for a few months and even contributed some changes to OpenSwitch, like this patch. We filed a number of bugs, and most of them were eventually fixed, but not as fast as we needed, so we postponed experimenting with OpenSwitch for a while and gave Cumulus a try. By the way, we are still testing OpenSwitch in our lab because we plan to use it in the near future.

Cumulus lets us run a fully automated network, where we reconfigure the network on changes, including BGP neighbors, upstreams, firewall rules, bridges, etc. For instance, when we add a new node, Ansible automatically sees the change in the Chef inventory by looking at LLDP attributes and regenerates the network configuration for the particular switch. If we want to add a new BGP upstream or firewall rule, we just create a pull request to our GitHub repo and everything is done automatically, including syntax checks and deploying the change to production (a sketch of such an Ansible task follows the pull-request examples below). Every node is connected with a single 10GE interface in a Clos topology. Here are a few examples of pull requests:

Add IPv6 for internal ceph bridge
Remove IPv4 network for Ceph
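For the automation above, the switch-facing part essentially boils down to templating a configuration file from inventory data and reloading the routing daemon when the rendered file changes. A minimal Ansible sketch (the template name, destination path and handler are assumptions for illustration, not our actual playbooks):

    # Render the BGP/neighbor configuration for one ToR switch from inventory
    # data; the handler reloads the routing daemon only if the file changed.
    - name: Regenerate BGP configuration for ToR switch
      template:
        src: tor-bgp.conf.j2
        dest: /etc/quagga/Quagga.conf
      notify: reload routing daemon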

Problems we tackled during the process

  • Different formats for writing an IPv6 address: some services wrap the address in brackets ([2001:dead:beef::1]), others do not (2001:dead:beef::1), and the best ones produce things like [2001::dead::beef::1::::1] or ::ffff:<ipv4> (a small normalization sketch follows this list);
  • Libraries that are incompatible with the IPv6 protocol; for example, the Sensu monitoring framework doesn't support IPv6, so we moved to Prometheus;
  • A Cisco IOS bug: we couldn't use a single IPv6 iBGP session to handle both protocols, because Cisco includes the link-local address together with the global one as the next hop. There were two options for excluding the link-local address: use private ASs or use loopback interfaces as the update-source. We moved to private AS numbers per rack;
  • MTU issues, like receive-queue drops. We run many internal services on VMware ESXi nodes, and after launching the project in the lab we saw many drops on the receive side. After a deep investigation we figured out that the drops were caused by a bigger frame size than expected (1518 + 22). By default the NIC has an MTU of 1500 plus the extra underlying headers, including the Ethernet header, checksum and 802.1Q tag. First I tried enlarging the ring buffers for the receive queues, but that was only enough for a short while: they filled up too quickly and the vmxnet3 driver wasn't able to drain them fast enough. I logged in to the ESXi host and checked the vmxnet3 stats for a guest machine: "# of pkts dropped due to large hdrs: 126". These are large-header drops, so I decided to hook into the vmxnet3 driver and instrument vmxnet3_rx_error() to see what buffer length was hitting the queues. That was really disappointing, because the buffer size was 54 bytes and it wasn't even an IPv4 or IPv6 packet, just some VMware underlying headers. Finally, by adjusting the MTU on the nodes running on ESXi, we were able to handle all packets without dropping them.
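To illustrate the first point in the list above, a tiny normalization helper (a Python sketch for illustration, not code from our stack) is usually enough to paper over the bracket inconsistencies:

    import ipaddress

    def normalize_host(host: str) -> str:
        # Accept "2001:dead:beef::1", "[2001:dead:beef::1]" or an IPv4 literal
        # and return a canonical form: bare IPv4, or a compressed IPv6 address
        # in brackets. Garbage like "[2001::dead::beef::1::::1]" raises ValueError.
        addr = ipaddress.ip_address(host.strip("[]"))
        if addr.version == 6:
            return "[{}]".format(addr.compressed)
        return str(addr)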

Lessons learned

  • The IPv6 protocol is a much better fit, and more scalable, for larger infrastructures;
  • There are a lot of tools, services and libraries that support IPv6 only partially or not at all;
  • IPv6 allows us to define and control the address space more granularly than IPv4;
  • IPv6 has better performance even though its packet header is larger than IPv4's: no fragmentation, no header checksum, no NAT;
  • If you are not using IPv6, you are contributing to terrorism;
  • Lack of IPv6 is a bug, not just a missing feature;
  • We fell in love with IPv6 :)