
Election Night: 90,000 Requests Per Second

The 2021 regional election in Denmark pushed the TV2 Regionerne websites to 3 billion requests in a single day. Here is how the infrastructure held up.

· 5 min read · Sylvester Damgaard

The regional election was last week. All eight TV2 regional news sites ran on the Statamic platform I built and the Kubernetes infrastructure I operate. This was the first major election since the migration from Bazo, and the ultimate load test.

The numbers for election day:

  • 1 million unique visitors

  • 3 billion requests

  • 120 TB data transfer

  • Peak during the four-hour window as final results arrived: 1.2 billion requests, 50 TB transferred, ~90,000 requests per second

Everything held. No downtime, no degraded performance, no emergency scaling. The caching strategy I built for exactly this scenario worked as designed.

How it works

The key insight is that election-night traffic is predictable in shape but not in magnitude. I know when the peak will happen: when counting stations report their results. I don't know how many people will be watching. The architecture handles this by making the cache do almost all of the work.

Layer 1: Statamic application cache. Blueprint resolution and field augmentation are the expensive operations in Statamic, not reading flat files. I cache the augmented data aggressively with tagged invalidation. When a journalist publishes an election result update, only the relevant cache entries are invalidated.
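Statamic's cache lives in PHP, but the tagging pattern itself is language-agnostic. A minimal sketch of tagged invalidation, with illustrative keys and tags (this is not the actual Statamic implementation):

```python
from collections import defaultdict

class TaggedCache:
    """Toy tagged cache: invalidating a tag drops only the entries carrying it."""

    def __init__(self):
        self._store = {}                # key -> cached value
        self._tags = defaultdict(set)   # tag -> set of keys carrying that tag

    def put(self, key, value, tags=()):
        self._store[key] = value
        for tag in tags:
            self._tags[tag].add(key)

    def get(self, key):
        return self._store.get(key)

    def invalidate_tag(self, tag):
        # Drop the entries tagged with this tag; everything else stays warm.
        for key in self._tags.pop(tag, set()):
            self._store.pop(key, None)

cache = TaggedCache()
cache.put("page:/results/copenhagen", "<html>...</html>", tags=["results", "copenhagen"])
cache.put("page:/frontpage", "<html>...</html>", tags=["frontpage"])

# A journalist updates Copenhagen's numbers: only that page is invalidated.
cache.invalidate_tag("copenhagen")
assert cache.get("page:/results/copenhagen") is None
assert cache.get("page:/frontpage") is not None  # unrelated pages stay cached
```

The point of the pattern is the last two lines: a publish event purges a narrow slice of the cache instead of flushing everything while 90,000 requests per second are arriving.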

Layer 2: Fastly CDN. The edge cache serves the vast majority of requests. On election night the cache hit ratio stayed above 99.5%, so the origin servers (Kubernetes pods) handled fewer than 5,000 requests per second even at peak. Fastly absorbed the rest at the edge.
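The leverage of the edge cache is easy to quantify: origin load is peak traffic times the miss ratio, so small swings in hit ratio multiply origin load. A quick illustration (the 90,000 figure is from this post; the alternative hit ratios are hypothetical):

```python
def origin_rps(peak_rps: float, hit_ratio: float) -> float:
    """Requests per second that fall through the CDN to the origin."""
    return peak_rps * (1.0 - hit_ratio)

peak = 90_000
at_high = origin_rps(peak, 0.995)  # ~450 req/s reach the origin
at_low = origin_rps(peak, 0.95)    # ~4,500 req/s: a 4.5-point drop, tenfold origin load
```

This is why the origin fleet can stay small: the capacity question is not "how big is the peak" but "how big is the miss rate at the peak".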

Layer 3: Pre-warming. Before election night, I crawled every URL the sites would serve and pre-populated both application cache and CDN. When the first visitors arrived, every request was a cache hit from second one.
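Pre-warming is essentially a crawl of the site's own URL list before the audience arrives. A minimal standard-library sketch (the user agent string and concurrency settings are illustrative, not the actual crawler):

```python
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def warm(url: str) -> int:
    """Fetch a URL so the application cache and the CDN both store the response."""
    req = urllib.request.Request(url, headers={"User-Agent": "cache-warmer"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        resp.read()
        return resp.status

def warm_all(urls, workers=16):
    # Crawl concurrently; a miss taken now is a hit for every later visitor.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(warm, urls))
```

In practice the URL list would come from the sites' own sitemaps, and the crawl runs once against the application cache and again through the CDN so both layers are hot before the first real request.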

What I learned

Pre-scaling is everything. I provisioned the Kubernetes cluster for expected peak load two days before the election. Autoscaling websocket and application pods works, but the initial scale-up lag is too long for traffic that jumps from 5,000 to 90,000 req/sec in minutes. Start big, scale down later.

The one-person ops model is stressful during events like this. I sat in the TV2 newsroom watching Grafana dashboards on one screen and the election broadcast on another. Everything was green. But if it hadn't been, I was the only person who could fix it.

This event crystallized what I want to build next. The caching patterns, the auto-scaling logic, the monitoring setup: these should not be custom infrastructure that one person builds and maintains. They should be tools that any PHP team can use.

Sylvester Damgaard

The person behind Cbox. Has been writing code and running servers since 2000. Built the CMS and infrastructure behind TV2's regional news sites, co-founded a drone inspection startup, and makes open source packages for PHP teams.