Scaling with WebSockets
I gave a talk at the Laravel Copenhagen Meetup on scaling websocket connections in Laravel. Broadcasting, Pusher vs self-hosted, horizontal scaling, and what happens when 10,000 users connect at once.
I gave my talk "Scaling with WebSockets: Optimizing Realtime Apps" at the Laravel Copenhagen Meetup at Luxplus on April 4th, and again at the Laravel Aarhus Meetup at Emplate on April 9th. The talk came out of real problems I ran into at TV2 Regionerne, where live election coverage means thousands of concurrent connections hitting the websocket server at once.
The problem
Laravel's broadcasting system is elegant. You dispatch an event, it gets broadcast to a channel, and connected clients receive it in real-time. For a local development setup or a small app with a few dozen concurrent users, it works flawlessly out of the box with Pusher or Laravel Reverb.
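To make that flow concrete, here is a minimal sketch of a broadcast event. The `ResultUpdated` class and its payload are hypothetical names for this example, not code from the talk:

```php
<?php

namespace App\Events;

use Illuminate\Broadcasting\Channel;
use Illuminate\Broadcasting\InteractsWithSockets;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;
use Illuminate\Foundation\Events\Dispatchable;

// Hypothetical event: push fresh election results to every client
// subscribed to the public "results" channel.
class ResultUpdated implements ShouldBroadcast
{
    use Dispatchable, InteractsWithSockets;

    public function __construct(
        public array $results, // e.g. ['region' => 'Hovedstaden', 'counted' => 87.5]
    ) {}

    public function broadcastOn(): Channel
    {
        return new Channel('results');
    }
}

// Dispatching the event hands it to the configured broadcast driver
// (Pusher, Reverb, ...), which delivers it to all subscribers:
// ResultUpdated::dispatch(['region' => 'Hovedstaden', 'counted' => 87.5]);
```

The simplicity is the point: nothing in this class knows or cares how many clients are listening, which is exactly why the scaling problems show up at the infrastructure layer rather than in application code.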
The problems start when you scale. At TV2, election night means tens of thousands of people watching live result updates simultaneously. Each viewer has an open websocket connection. Each connection consumes memory on the server. And every broadcast event needs to be delivered to every connection.
What I covered
The talk walked through three approaches, from simplest to most resilient:
Pusher/Ably -- Managed service. Zero operational overhead. Works until the bill gets uncomfortable or you hit their connection limits. For most apps, this is the right answer and you should stop here.
Self-hosted with Laravel Reverb -- Laravel's own websocket server. Full control, no per-connection costs. But you need to think about horizontal scaling. A single Reverb instance handles thousands of connections. Multiple instances need a shared pub/sub layer (Redis) to ensure broadcasts reach all connected clients regardless of which instance they are on.
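Wiring up that shared pub/sub layer is mostly configuration. A sketch of the relevant environment settings, assuming Reverb's Redis-backed scaling mode and a Redis host reachable from every instance (exact option names may differ between Reverb versions):

```ini
# .env on every Reverb instance
BROADCAST_CONNECTION=reverb

# Enable Reverb's Redis pub/sub scaling mode so an event broadcast
# via one instance is relayed to clients connected to the others.
REVERB_SCALING_ENABLED=true

# All instances must point at the same Redis server.
REDIS_HOST=redis.internal
REDIS_PORT=6379
```

With scaling enabled, each instance publishes broadcasts to Redis and subscribes to the same channel, so it no longer matters which instance a given client happens to be connected to.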
Horizontal scaling patterns -- Load balancers with sticky sessions (or better, no sticky sessions with Redis pub/sub). Health checks for websocket servers. Graceful connection draining during deployments. How to monitor connection counts and detect memory leaks in long-running processes.
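As one way to wire up the load-balancing side, here is a minimal nginx sketch (hostnames, ports, and timeouts are placeholders). The `Upgrade`/`Connection` headers are what allow the proxied HTTP connection to become a websocket, and a generous read timeout keeps idle-but-alive connections from being cut:

```nginx
upstream reverb {
    least_conn;                      # no sticky sessions needed with Redis pub/sub
    server reverb-1.internal:8080;
    server reverb-2.internal:8080;
}

server {
    listen 443 ssl;
    server_name ws.example.com;

    location / {
        proxy_pass http://reverb;
        proxy_http_version 1.1;

        # Required for the HTTP -> websocket protocol upgrade.
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;

        # Long-lived connections: don't drop them as idle.
        proxy_read_timeout 300s;
    }
}
```

For graceful draining during deployments, the same idea applies in reverse: remove an instance from the upstream pool first, let existing connections close or reconnect elsewhere, and only then stop the process.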
The numbers from TV2
During the 2021 regional election, the websocket infrastructure handled:
~40,000 concurrent connections at peak
Broadcasting result updates every 2-3 seconds
Zero dropped connections during the 4-hour peak window
The key was pre-scaling. I knew the election date months in advance, load-tested against the exact expected traffic pattern, and provisioned accordingly. Autoscaling websocket servers is harder than autoscaling HTTP servers because each connection has state: a freshly started instance begins empty, and terminating an instance drops every connection it holds.
Good discussions after both talks. In Copenhagen, Anders Jenbo from Luxplus followed up with a talk on PHPStan. In Aarhus, Jan Keller from Monta presented "SOLID is wrong" which sparked a healthy debate. The Laravel Denmark community is growing, and the meetups in Copenhagen, Aarhus, and Odense are all picking up.