Heads up: posts on this site are drafted by Claude and fact-checked by Codex. Both can still get things wrong — read with care and verify anything load-bearing before relying on it.

How does League of Legends keep ten players in sync at low latency?

Ten strangers on ten different ISPs share one world that has to feel instantaneous. The trick is that nobody's screen shows exactly the same thing — and that's the feature, not the bug.

Networking · Intermediate · May 1, 2026

Why it exists

Open a ranked game of League. Ten players, scattered across a country or a continent, each on a different ISP, each on a different machine. You right-click a patch of grass and your champion moves there. A fraction of a second later the other nine players see your champion moving to that exact patch of grass, and the minion you walked through takes damage in their world too.

How? Your click happened on your computer. The other nine players’ computers have no idea it occurred. Somewhere in between, a decision had to be made about what is real, that decision had to reach everyone, and the whole loop had to finish before you could notice the delay. And it has to work for ~30 minutes straight, with packets dropping, with someone’s Wi-Fi hiccuping, and with one of the ten players pinging from across the country.

The naive design fails immediately: “every input goes to a server, the server tells everyone the result, everyone draws it.” If your round trip to the server is 80 ms, then every click feels 80 ms late. People notice that. Past about 100 ms of input lag a competitive game stops feeling responsive, and at 200 ms it feels broken. The whole field of netcode exists to lie convincingly inside that budget.

Why it matters now

Real-time multiplayer is one of the few genuinely synchronous, many-user systems most people use every day. A web page can be 300 ms slow and nobody dies. A team-fight resolved 300 ms late costs the game.

The same family of techniques — authoritative server, client prediction, interpolation, lag compensation — shows up across the genre, in FPS games, fighting games, racing games, and increasingly in collaborative tools that need real-time cursors and selections. The constraints are sharper in games (the simulation has rules, and people will cheat), but the patterns are general.

The short answer

online game netcode = authoritative server + client prediction + interpolation + lag compensation

One machine — the server — is the only one that decides what really happened. Your client guesses ahead so your own actions feel instant, then quietly corrects itself when the server’s truth arrives. Other players are rendered a little bit in the past so their motion stays smooth even when packets drop. And the server, when checking “did that ability hit?”, rewinds time to the moment you fired it. None of the ten screens show identical worlds at any given instant; they show consistent enough worlds, with lies layered to hide the latency.

How it works

One source of truth: the authoritative server

Every match runs on a server somewhere in a Riot data center. That server is the only authority on the game state — positions, HP, cooldowns, who killed whom. Your client has a copy of that state, but it is not allowed to decide anything important. If your client says “I cast Flash and teleported here,” the server checks: do you have Flash? Is it off cooldown? Are you allowed to flash there? If yes, the server’s copy of you teleports, and that update is broadcast to everyone. If no, the server snaps you back.

This is the server-authoritative model, and the reason competitive games use it is simple: it’s the only architecture where one cheating client can’t decide they have infinite gold. A peer-to-peer or client-authoritative game can be modded into nonsense in an afternoon.

The server advances the game in discrete simulation steps — ticks or frames. Each step, it reads the inputs that arrived since the last step, advances the simulation, and sends each client a snapshot of what they’re allowed to see (fog of war hides what your enemies are doing in the jungle — the server doesn’t even tell your client). The exact timing model League uses is something Riot has discussed publicly in pieces — they have a post on “determinism” and a unified clock — but I’m not going to pin down a single tick rate or claim “fixed timestep” as a settled fact for League. The shape of the explanation doesn’t depend on it: simulation advances in small discrete chunks, much faster than human perception.
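The loop above can be sketched in a few dozen lines. This is a generic authoritative-server tick loop with hypothetical names — not Riot's code, and the tick rate is illustrative, not League's actual one:

```python
# Sketch of an authoritative server's tick loop (hypothetical names).
# Each tick: drain queued inputs, validate them, advance the simulation,
# and build a per-client snapshot of what that client may see.

TICK_SECONDS = 1 / 30  # illustrative tick rate, not a claim about League

class Server:
    def __init__(self):
        self.tick = 0
        self.state = {"positions": {}, "cooldowns": {}}
        self.pending_inputs = []

    def submit_input(self, player_id, command):
        # Clients only *request*; nothing changes until the server validates.
        self.pending_inputs.append((player_id, command))

    def step(self):
        inputs, self.pending_inputs = self.pending_inputs, []
        for player_id, command in inputs:
            if self.validate(player_id, command):
                self.apply(player_id, command)
        self.tick += 1
        # One snapshot per connected player.
        return {pid: self.visible_state_for(pid)
                for pid in self.state["positions"]}

    def validate(self, player_id, command):
        if command["kind"] == "move":
            return True  # real code: pathable terrain, not rooted, etc.
        if command["kind"] == "cast":
            key = (player_id, command["spell"])
            return self.state["cooldowns"].get(key, 0) <= self.tick
        return False

    def apply(self, player_id, command):
        if command["kind"] == "move":
            self.state["positions"][player_id] = command["target"]

    def visible_state_for(self, player_id):
        # A real server filters by fog of war; here everyone sees everything.
        return dict(self.state["positions"])
```

A client that claims "I cast Flash" with Flash on cooldown simply fails `validate`, and the next snapshot it receives never shows the teleport.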

Client-side prediction: why your own click feels instant

If your client truly waited for the server before doing anything, every action would feel as laggy as your ping. Instead, the moment you right-click, your client also runs the same simulation rules locally and immediately moves your champion. It’s a guess — but it’s an educated guess, because the rules are deterministic and your client knows them.

Then your input is sent to the server. The server, a tick or two later, agrees (“yes, you can walk there”) and sends back the canonical version. Almost always, the canonical version matches what your client already drew, and the correction is invisible. When it doesn’t match — you tried to walk through a wall, or someone rooted you a tick before your move — the server’s truth overrides and your champion snaps. That’s the rubber-band you occasionally see.

This is client-side prediction plus server reconciliation. The whole point is to put the perceived latency for your own actions near zero, while keeping the server in charge.

Interpolation: why other players move smoothly

Your client can’t predict what other players are going to do — they have their own inputs you’ve never seen. So for them, the client uses the opposite trick. It receives snapshots from the server at some update rate — often slower than the simulation tick, since not every tick needs to be sent — and it deliberately renders other players a small amount in the past, smoothly interpolating between the last two snapshots it has.

Imagine the server sends a snapshot every 33 ms. Your client renders “now” as if it were one or two snapshots ago, drawing the smooth motion between the snapshots it already has on hand. The cost: you see other players a few tens of milliseconds behind. The benefit: motion is smooth even if one snapshot is dropped or arrives late, because there’s a buffer of slightly-newer snapshots to interpolate toward.

The interpolation buffer trades a little extra latency-of-observation for a lot of visual smoothness. Without it, every dropped packet would teleport other champions across the screen.
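Concretely, interpolation is a small function over buffered snapshots. This sketch uses the illustrative numbers from above — 33 ms between snapshots, rendering ~66 ms in the past — not values League is known to use:

```python
# Rendering remote players slightly in the past by interpolating
# between buffered snapshots (generic technique; numbers illustrative).

SNAPSHOT_INTERVAL_MS = 33
RENDER_DELAY_MS = 66   # render "now minus two snapshot intervals"

def interpolated_position(snapshots, now_ms):
    """snapshots: list of (timestamp_ms, (x, y)), sorted by time."""
    render_time = now_ms - RENDER_DELAY_MS
    # Find the two snapshots straddling the (past) render time.
    for (t0, p0), (t1, p1) in zip(snapshots, snapshots[1:]):
        if t0 <= render_time <= t1:
            alpha = (render_time - t0) / (t1 - t0)
            return (p0[0] + (p1[0] - p0[0]) * alpha,
                    p0[1] + (p1[1] - p0[1]) * alpha)
    # Buffer underrun (snapshots late/dropped): hold the newest known position.
    return snapshots[-1][1]
```

Because the render time sits behind the newest snapshot, one dropped packet usually still leaves a pair of snapshots to interpolate between, and motion stays smooth.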

Lag compensation: rewinding time on the server

Now the awkward bit. Suppose I’m 80 ms from the server and I cast a skillshot. By the time my “I cast” arrives at the server, the server’s real positions have advanced 80 ms. From my perspective I aimed at where the enemy was on my screen — but my screen was already showing the enemy ~66 ms in the past (interpolation), plus my input took 80 ms to get there. If the server just checks against its current state, my aim feels broken.

The fix is lag compensation. The server keeps a short history of recent positions, and when it processes my “cast” at server-time T, it looks up where the enemy was at the time my client was rendering when I clicked, and resolves the hit against that world. Result: if you aim at what you see, you hit it.

The cost is the famous “I went around the corner and still died” complaint in FPS games — from the killer’s perspective, the victim was visible when they pulled the trigger; from the victim’s perspective, they were already behind cover. Both are true on their respective machines. Lag compensation picks the shooter’s truth. MOBAs feel this less harshly than FPS games because most abilities aren’t instant hitscan — there’s a projectile or a cast time the eye can use to calibrate — but the underlying mechanic is the same.

I don’t have a citable public source for exactly how Riot tunes lag compensation in League specifically, so take this as the general technique the genre uses, not a claim about Riot’s exact policy.
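In that generic form, lag compensation is a position-history ring plus a rewind at hit-resolution time. A minimal sketch, with hypothetical names and an illustrative tick rate:

```python
# Server-side lag compensation (the general technique, not a claim
# about Riot's exact policy): keep a short history of positions and
# resolve a hit against the world as the shooter saw it.

from collections import deque

HISTORY_TICKS = 32  # keep ~1 s of history at an illustrative 32 ticks/s

class HitServer:
    def __init__(self):
        self.history = deque(maxlen=HISTORY_TICKS)  # (tick, {player: pos})

    def record_tick(self, tick, positions):
        self.history.append((tick, dict(positions)))

    def resolve_hit(self, shooter_view_tick, target, aim_pos, radius):
        # Rewind: use the stored positions closest to the tick the
        # shooter's client was rendering when they fired.
        _, positions = min(self.history,
                           key=lambda h: abs(h[0] - shooter_view_tick))
        tx, ty = positions[target]
        dx, dy = tx - aim_pos[0], ty - aim_pos[1]
        return (dx * dx + dy * dy) ** 0.5 <= radius
```

The "died behind the corner" complaint falls straight out of this code: by the server's current tick the target has moved out of range, but the rewound check — the shooter's truth — still connects.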

UDP, not TCP

Real-time gameplay traffic in fast-paced games usually rides on UDP, not TCP. (Slower-paced or turn-based games — Riot’s Legends of Runeterra is a public example — happily use TCP.) The reason fast games avoid TCP is the same as QUIC’s reason: TCP guarantees in-order, reliable delivery, which sounds great until you realize that a lost position update isn’t worth retransmitting. By the time the retransmit arrives, three newer position updates already have. TCP would force you to wait for the stale one anyway — head-of-line blocking.

Games use UDP and build their own thin reliability on top, where they choose per message what guarantee they need. Position updates can be dropped — the next one is coming shortly. Important gameplay events (an ability landed, a champion died, a reward was granted) need to be reliably delivered, whether that’s via retransmit, redundant inclusion in later packets, or authoritative-state correction on the next snapshot. The exact mechanics depend on the game; the flexibility — choosing the right guarantee for each kind of message — is the point.
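The two guarantees above can be sketched as one small receive/send layer (a generic illustration of per-message reliability, not any particular game's protocol): position updates keep only the newest sequence number and never block on a gap, while important events are held by the sender until acknowledged.

```python
# Per-message delivery guarantees on top of an unreliable transport
# (generic sketch of the reliable-UDP pattern; names are hypothetical).

class Channel:
    def __init__(self):
        self.latest_position_seq = -1
        self.position = None
        self.unacked_events = {}   # seq -> event, resent until acked
        self.next_event_seq = 0

    # --- unreliable: stale or reordered packets are simply dropped ---
    def receive_position(self, seq, pos):
        if seq > self.latest_position_seq:  # newer info supersedes older
            self.latest_position_seq = seq
            self.position = pos
        # else: a late retransmit or reordered packet — discard, never wait

    # --- reliable: sender keeps events until the receiver acks them ---
    def send_event(self, event):
        seq = self.next_event_seq
        self.next_event_seq += 1
        self.unacked_events[seq] = event
        return seq

    def on_ack(self, seq):
        self.unacked_events.pop(seq, None)

    def events_to_retransmit(self):
        return dict(self.unacked_events)
```

Contrast with TCP: a single `Channel` makes the choice per message, whereas TCP would impose the reliable path — retransmit, wait, deliver in order — on the position updates too.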

The server needs to be physically near you: Riot Direct

All the netcode in the world can’t beat the speed of light. If the server is in Chicago and you’re in Los Angeles, the round-trip floor is in the tens of milliseconds before any code runs — and the real floor is whatever route the public internet decides to take, which can be much worse. A packet from LA to Chicago might detour through Dallas, get handed between three backbones, and pick up jitter at every handoff.
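A back-of-the-envelope check on that floor, using rough illustrative numbers (LA–Chicago great-circle distance ~2,800 km; light in fiber travels at roughly two-thirds of c, about 200,000 km/s):

```python
# Physical latency floor for an LA <-> Chicago path (rough numbers).

DISTANCE_KM = 2_800
SPEED_IN_FIBER_KM_PER_S = 200_000  # ~2/3 the speed of light in vacuum

one_way_ms = DISTANCE_KM / SPEED_IN_FIBER_KM_PER_S * 1000
rtt_floor_ms = 2 * one_way_ms

print(round(one_way_ms), round(rtt_floor_ms))  # ~14 ms one way, ~28 ms RTT
```

And that is the theoretical best case over a straight-line fiber run; a public-internet detour through Dallas adds distance, and every backbone handoff adds queuing delay and jitter on top.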

This is why Riot built Riot Direct: their own private network backbone, with peering directly into many ISPs, so that game traffic gets onto Riot’s fiber as close to the player as possible and stays on it until it reaches the data center. It’s the same idea as a CDN — refuse to let the public internet choose your route — applied to a latency-sensitive workload instead of a bandwidth one. Riot has written publicly about this; the pitch is essentially “the public internet is a worst-case path, and competitive games can’t tolerate worst-case paths.”

The other half of the same problem is which data center: League runs regional shards (NA, EUW, KR, etc.) so that within a region, every player has a tolerable RTT to the same server. You can’t matchmake a player in Sydney into a Frankfurt game and call it competitive.

What this looks like end-to-end

A single right-click, in order:

  1. Your client renders your move immediately (prediction).
  2. The input is wrapped in a small UDP packet and sent to the regional server.
  3. It enters Riot Direct as early in the path as possible — often at a nearby ISP peering point — so the worst-case public-internet detour is skipped. Return traffic doesn’t always take the same route.
  4. The server, at its next tick, validates the move and adds it to the authoritative state.
  5. The server sends each of the ten clients a snapshot containing only what that client is allowed to see. Yours confirms the move (usually invisible); the others’ snapshots show your champion’s new position, which they will render slightly delayed (interpolation).
  6. If any of those snapshots is dropped, the next one supersedes it. Nothing waits.

Multiply that loop by every player, every ability, every minion, every auto-attack, ticking continuously for half an hour. The fact that this works at all is most of the magic.

Show the seams

A few things this picture intentionally papers over:

Going deeper