How does League of Legends keep ten players in sync at low latency?
Ten strangers on ten different ISPs share one world that has to feel instantaneous. The trick is that nobody's screen shows exactly the same thing — and that's the feature, not the bug.
Why it exists
Open a ranked game of League. Ten players, scattered across a country or a continent, each on a different ISP, each on a different machine. You right-click a patch of grass and your champion moves there. A fraction of a second later the other nine players see your champion moving to that exact patch of grass, and the minion you walked through takes damage in their world too.
How? Your click happened on your computer. The other nine players’ computers have no idea it occurred. Somewhere in between, a decision had to be made about what is real, that decision had to reach everyone, and the whole loop had to finish faster than you could notice it didn’t happen instantly. And it has to work for ~30 minutes straight, with packets dropping, with someone’s Wi-Fi hiccupping, and with one of the ten people pinging from across the country.
The naive design fails immediately: “every input goes to a server, the server tells everyone the result, everyone draws it.” If your round trip to the server is 80 ms, then every click feels 80 ms late. People notice that. Past about 100 ms of input lag a competitive game stops feeling responsive, and at 200 ms it feels broken. The whole field of netcode exists to lie convincingly inside that budget.
Why it matters now
Real-time multiplayer is one of the few genuinely synchronous, many-user systems most people use every day. A web page can be 300 ms slow and nobody dies. A team-fight resolved 300 ms late costs the game.
The same family of techniques — authoritative server, client prediction, interpolation, lag compensation — shows up across the genre, in FPS games, fighting games, racing games, and increasingly in collaborative tools that need real-time cursors and selections. The constraints are sharper in games (the simulation has rules, and people will cheat), but the patterns are general.
The short answer
online game netcode = authoritative server + client prediction + interpolation + lag compensation
One machine — the server — is the only one that decides what really happened. Your client guesses ahead so your own actions feel instant, then quietly corrects itself when the server’s truth arrives. Other players are rendered a little bit in the past so their motion stays smooth even when packets drop. And the server, when checking “did that ability hit?”, rewinds time to the moment you fired it. None of the ten screens show identical worlds at any given instant; they show consistent enough worlds, with lies layered to hide the latency.
How it works
One source of truth: the authoritative server
Every match runs on a server somewhere in a Riot data center. That server is the only authority on the game state — positions, HP, cooldowns, who killed whom. Your client has a copy of that state, but it is not allowed to decide anything important. If your client says “I cast Flash and teleported here,” the server checks: do you have Flash? Is it off cooldown? Are you allowed to flash there? If yes, the server’s copy of you teleports, and that update is broadcast to everyone. If no, the server snaps you back.
This is the server-authoritative model, and the reason competitive games use it is simple: it’s the only architecture where one cheating client can’t decide they have infinite gold. A peer-to-peer or client-authoritative game can be modded into nonsense in an afternoon.
The server advances the game in discrete simulation steps — ticks or frames. Each step, it reads the inputs that arrived since the last step, advances the simulation, and sends each client a snapshot of what they’re allowed to see (fog of war hides what your enemies are doing in the jungle — the server doesn’t even tell your client). The exact timing model League uses is something Riot has discussed publicly in blog posts — they have a post on “determinism” and a unified clock — but I’m not going to pin down a single tick rate or claim “fixed timestep” as a settled fact for League. The shape of the explanation doesn’t depend on it: simulation advances in small discrete chunks, much faster than human perception.
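In sketch form, the loop looks something like this. Everything here is illustrative, nothing is from Riot’s codebase, and the tick rate is a placeholder rather than a claim about League:

```python
import time

TICK_SECONDS = 1 / 30  # placeholder rate, not a claim about League's

def server_loop(state, clients):
    """One authoritative simulation step after another: read inputs,
    validate, advance, broadcast. All names here are illustrative."""
    tick = 0
    while state.match_running:
        start = time.monotonic()

        # 1. Drain every input that arrived since the last tick.
        for client in clients:
            for cmd in client.drain_inputs():
                # 2. The server, not the client, decides legality.
                if state.is_legal(client.player_id, cmd):
                    state.apply(client.player_id, cmd)
                # Illegal commands are simply dropped; the next
                # snapshot snaps the client back to the truth.

        # 3. Advance the simulation one discrete step.
        state.step(TICK_SECONDS)

        # 4. Send each client only what fog of war lets them see.
        for client in clients:
            client.send_snapshot(state.visible_to(client.player_id), tick)

        tick += 1
        # Sleep off whatever remains of this tick's time budget.
        time.sleep(max(0.0, TICK_SECONDS - (time.monotonic() - start)))
```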
Client-side prediction: why your own click feels instant
If your client truly waited for the server before doing anything, every action would feel as laggy as your ping. Instead, the moment you right-click, your client also runs the same simulation rules locally and immediately moves your champion. It’s a guess — but it’s an educated guess, because the rules are deterministic and your client knows them.
Then your input is sent to the server. The server, a tick or two later, agrees (“yes, you can walk there”) and sends back the canonical version. Almost always, the canonical version matches what your client already drew, and the correction is invisible. When it doesn’t match — you tried to walk through a wall, or someone rooted you a tick before your move — the server’s truth overrides and your champion snaps. That’s the rubber-band you occasionally see.
This is client-side prediction plus server reconciliation. The whole point is to put the perceived latency for your own actions near zero, while keeping the server in charge.
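A minimal sketch of the bookkeeping, assuming client and server share a deterministic simulate(state, cmd) function (that function and all the surrounding names are illustrative, not Riot’s):

```python
class PredictingClient:
    """Client-side prediction with server reconciliation, sketched.
    `simulate` must be deterministic and identical on both ends."""

    def __init__(self, simulate):
        self.simulate = simulate     # shared, deterministic rules
        self.predicted_state = None  # what we draw for our own champion
        self.pending = []            # inputs sent but not yet acknowledged
        self.next_seq = 0

    def on_local_input(self, cmd, network):
        # Apply the input locally right now: zero perceived latency.
        self.predicted_state = self.simulate(self.predicted_state, cmd)
        self.pending.append((self.next_seq, cmd))
        network.send({"seq": self.next_seq, "cmd": cmd})
        self.next_seq += 1

    def on_server_snapshot(self, authoritative_state, last_acked_seq):
        # Forget inputs the server has already incorporated...
        self.pending = [(s, c) for (s, c) in self.pending if s > last_acked_seq]
        # ...then rebuild the present by replaying the rest on top of the
        # server's truth. If the server agreed with our guesses, this lands
        # exactly where we already were and no snap is visible.
        state = authoritative_state
        for _, cmd in self.pending:
            state = self.simulate(state, cmd)
        self.predicted_state = state
```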
Interpolation: why other players move smoothly
Your client can’t predict what other players are going to do — they have their own inputs you’ve never seen. So for them, the client uses the opposite trick. It receives snapshots from the server at some update rate — often slower than the simulation tick, since not every tick needs to be sent — and it deliberately renders other players a small amount in the past, smoothly interpolating between the last two snapshots it has.
Imagine the server sends a snapshot every 33 ms. Your client renders “now” as if it were one or two snapshots ago, drawing the smooth motion between the snapshots it already has on hand. The cost: you see other players a few tens of milliseconds behind. The benefit: motion is smooth even if one snapshot is dropped or arrives late, because there’s a buffer of slightly-newer snapshots to interpolate toward.
The interpolation buffer trades a little extra latency-of-observation for a lot of visual smoothness. Without it, every dropped packet would teleport other champions across the screen.
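A sketch of that buffer, using a hypothetical 100 ms interpolation delay (a common choice in the genre, not a League-specific number) and treating positions as plain floats for brevity:

```python
def lerp(a, b, t):
    return a + (b - a) * t

class InterpolatedEntity:
    """Render a remote player slightly in the past by blending between
    the two snapshots that bracket the render time. Illustrative only."""

    INTERP_DELAY = 0.100  # render this far behind the newest data

    def __init__(self):
        self.snapshots = []  # (server_time, position), oldest first

    def on_snapshot(self, server_time, position):
        self.snapshots.append((server_time, position))
        self.snapshots = self.snapshots[-32:]  # keep a short history

    def position_at(self, now):
        render_time = now - self.INTERP_DELAY
        # Find the pair of snapshots that straddle render_time.
        for (t0, p0), (t1, p1) in zip(self.snapshots, self.snapshots[1:]):
            if t0 <= render_time <= t1:
                frac = (render_time - t0) / (t1 - t0)
                return lerp(p0, p1, frac)
        # No bracketing pair (startup, or a burst of loss ate the
        # buffer): hold the newest position we have.
        return self.snapshots[-1][1] if self.snapshots else None
```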
Lag compensation: rewinding time on the server
Now the awkward bit. Suppose my packets take 80 ms to reach the server and I cast a skillshot. By the time my “I cast” arrives at the server, the server’s real positions have advanced 80 ms. From my perspective I aimed at where the enemy was on my screen — but my screen was already showing the enemy ~66 ms in the past (interpolation), plus my input took 80 ms to get there. If the server just checks against its current state, my aim feels broken.
The fix is lag compensation. The server keeps a short history of recent positions, and when it processes my “cast” at server-time T, it looks up where the enemy was at the time my client was rendering when I clicked, and resolves the hit against that world. Result: if you aim at what you see, you hit it.
The cost is the famous “I went around the corner and still died” complaint in FPS games — from the killer’s perspective, the victim was visible when they pulled the trigger; from the victim’s perspective, they were already behind cover. Both are true on their respective machines. Lag compensation picks the shooter’s truth. MOBAs feel this less harshly than FPS games because most abilities aren’t instant hitscan — there’s a projectile or a cast time the eye can use to calibrate — but the underlying mechanic is the same.
I don’t have a citable public source for exactly how Riot tunes lag compensation in League specifically, so take this as the general technique the genre uses, not a claim about Riot’s exact policy.
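With that caveat, the general shape is easy to sketch: the server keeps a short per-entity position history and resolves the hit against a rewound copy of the world. The names, the 250 ms rewind cap, and the latency bookkeeping below are all illustrative:

```python
import bisect

class PositionHistory:
    """Server-side record of an entity's recent positions, so a hit can
    be resolved against the world the shooter was actually looking at."""

    MAX_REWIND = 0.250  # refuse to rewind further than this (anti-abuse)

    def __init__(self):
        self.times = []      # ascending server timestamps
        self.positions = []  # entity positions at those timestamps

    def record(self, server_time, position):
        self.times.append(server_time)
        self.positions.append(position)
        # Keep only the window we are allowed to rewind into.
        while self.times[-1] - self.times[0] > self.MAX_REWIND:
            self.times.pop(0)
            self.positions.pop(0)

    def position_at(self, t):
        # Clamp the rewind, then take the latest sample at or before t.
        t = max(t, self.times[-1] - self.MAX_REWIND)
        i = bisect.bisect_right(self.times, t) - 1
        return self.positions[max(i, 0)]

def resolve_skillshot(histories, shooter_latency, interp_delay, now, hits):
    # Rewind each target to the moment the shooter's screen was showing
    # when they clicked: now, minus their one-way latency, minus the
    # interpolation delay their client renders remote players at.
    shot_time = now - shooter_latency - interp_delay
    return [eid for eid, h in histories.items() if hits(h.position_at(shot_time))]
```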
UDP, not TCP
Real-time gameplay traffic in fast-paced games usually rides on UDP, not TCP. (Slower-paced or turn-based games — Riot’s Legends of Runeterra is a public example — happily use TCP.) The reason fast games avoid TCP is the same reason QUIC exists: TCP guarantees in-order, reliable delivery, which sounds great until you realize that a lost position update isn’t worth retransmitting. By the time the retransmit arrives, three newer position updates already have. TCP would force you to wait for the stale one anyway — head-of-line blocking.
Games use UDP and build their own thin reliability on top, where they choose per message what guarantee they need. Position updates can be dropped — the next one is coming shortly. Important gameplay events (an ability landed, a champion died, a reward was granted) need to be reliably delivered, whether that’s via retransmit, redundant inclusion in later packets, or authoritative-state correction on the next snapshot. The exact mechanics depend on the game; the flexibility — choosing the right guarantee for each kind of message — is the point.
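One common shape for that per-message choice, sketched with illustrative names (a generic pattern, not Riot’s protocol): reliable events ride inside every outgoing packet until the receiver acknowledges them, while position data is sent once and forgotten.

```python
class ReliableEventLayer:
    """Per-message reliability over UDP by redundant inclusion: unacked
    events are repeated in every packet until acked. Illustrative only."""

    def __init__(self):
        self.next_id = 0
        self.unacked = {}  # event_id -> event payload

    def queue_event(self, event):
        self.unacked[self.next_id] = event
        self.next_id += 1

    def build_packet(self, seq, positions):
        return {
            "seq": seq,                    # fine to lose; the next supersedes it
            "positions": positions,        # unreliable: never retransmitted
            "events": dict(self.unacked),  # reliable: repeated until acked
        }

    def on_ack(self, acked_event_ids):
        for eid in acked_event_ids:
            self.unacked.pop(eid, None)
```

The receiver deduplicates by event id, so an event that arrives five times still fires once; the cost of the redundancy is a few extra bytes per packet, which is cheap next to a retransmit round trip.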
The server needs to be physically near you: Riot Direct
All the netcode in the world can’t beat the speed of light. If the server is in Chicago and you’re in Los Angeles, the round-trip floor is in the tens of milliseconds before any code runs — and the real floor is whatever route the public internet decides to take, which can be much worse. A packet from LA to Chicago might detour through Dallas, get handed between three backbones, and pick up jitter at every handoff.
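To make “tens of milliseconds” concrete, here is the back-of-the-envelope floor for that LA-to-Chicago example, assuming an idealized straight fiber run and light in fiber at roughly two-thirds of c:

```python
# Idealized physical floor, LA <-> Chicago. Real routes are longer.
distance_km = 2_800                 # approximate great-circle distance
fiber_speed_km_per_ms = 200         # light in fiber: ~200,000 km/s
one_way_ms = distance_km / fiber_speed_km_per_ms   # ~14 ms
round_trip_ms = 2 * one_way_ms                     # ~28 ms, before any detours
```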
This is why Riot built Riot Direct: their own private network backbone, with peering directly into many ISPs, so that game traffic gets onto Riot’s fiber as close to the player as possible and stays on it until it reaches the data center. It’s the same idea as a CDN — refuse to let the public internet choose your route — applied to a latency-sensitive workload instead of a bandwidth one. Riot has written publicly about this; the pitch is essentially “the public internet is a worst-case path, and competitive games can’t tolerate worst-case paths.”
The other half of the same problem is which data center: League runs regional shards (NA, EUW, KR, etc.) so that within a region, every player has a tolerable RTT to the same server. You can’t matchmake a player in Sydney into a Frankfurt game and call it competitive.
What this looks like end-to-end
A single right-click, in order:
- Your client renders your move immediately (prediction).
- The input is wrapped in a small UDP packet and sent to the regional server.
- It enters Riot Direct as early in the path as possible — often at a nearby ISP peering point — so the worst-case public-internet detour is skipped. Return traffic doesn’t always take the same route.
- The server, at its next tick, validates the move and adds it to the authoritative state.
- The server sends each of the ten clients a snapshot containing only what that client is allowed to see. Yours confirms the move (usually invisible); the others’ snapshots show your champion’s new position, which they will render slightly delayed (interpolation).
- If any of those snapshots is dropped, the next one supersedes it. Nothing waits.
Multiply that loop by every player, every ability, every minion, every auto-attack, ticking continuously for half an hour. The fact that this works at all is most of the magic.
Show the seams
A few things this picture intentionally papers over:
- No two screens ever show the same instant. Each client sees its own champion in the present (prediction) and everyone else in the past (interpolation), with the server’s truth somewhere in between. The “real” game state only exists in the server’s RAM.
- Cheating isn’t impossible, just constrained. Server authority stops state cheats (infinite gold, teleport-anywhere). It doesn’t stop input cheats — scripts that click for you, map hacks that read your own client’s memory. Those are a different fight.
- The tick rate is a budget, not a free parameter. A higher tick rate means a snappier feel, and also more bandwidth and CPU per match. Pick poorly and you either have a sluggish game or a server cost you can’t afford at scale (see the rough arithmetic after this list). I’m being deliberately vague about Riot’s tick rate because I don’t have a public source I’d stand behind.
- Dropped packets are normal. Real networks lose 0.1–1% of packets routinely; mobile and Wi-Fi can be much worse. The whole architecture is built to not care about losing any individual snapshot — the next one will arrive shortly. The reliability layer is reserved for events that actually matter (deaths, ability casts, rewards).
- Spectators and replays use a different mode. Spectator delay is a feature (anti-stream-snipe) and gives the system room to batch and compress; it isn’t subject to the same input-latency budget the players are.
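To put numbers on that tick-rate budget, the rough arithmetic promised above, with made-up but plausible figures (these are not League’s real numbers):

```python
# Hypothetical per-match bandwidth, purely illustrative.
snapshot_bytes = 500     # assumed compressed snapshot size
snapshots_per_sec = 30   # assumed send rate
players = 10

per_client_bps = snapshot_bytes * snapshots_per_sec * 8  # 120 kbit/s per client
per_match_bps = per_client_bps * players                 # 1.2 Mbit/s per match
# Double the send rate and both numbers double for every concurrent
# match; a host running thousands of matches feels that immediately.
```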
Famous related terms
- Authoritative server — authoritative server = single source of truth + clients defer — the whole reason cheating is hard in a competitive online game.
- Client-side prediction — client prediction = run the rules locally + reconcile when truth arrives — why your own click feels instant.
- Interpolation buffer — interpolation buffer ≈ render other players slightly in the past — trades observation latency for smooth motion.
- Lag compensation — lag compensation = server rewinds time to the shooter's view + resolves hits there — why “if you aim at what you see, you hit.”
- Lockstep simulation — lockstep = every client runs the same deterministic simulation + only inputs are sent — the older RTS approach (StarCraft); cheap on bandwidth, brutal under any divergence in determinism.
- Rollback netcode — rollback ≈ predict + replay frames when the prediction is wrong — the variant that made modern fighting games playable online.
- Head-of-line blocking — HoL blocking ≈ “one slow car blocks the whole lane” — why games refuse TCP for game traffic. See QUIC.
- Riot Direct — Riot Direct = private backbone + ISP peering — bypasses the public internet’s worst-case routes for latency-critical traffic.
Going deeper
- Glenn Fiedler’s “Networking for Game Programmers” series — the canonical plain-English walkthrough of UDP, reliability, prediction, and interpolation. Old, still excellent.
- Valve’s “Source Multiplayer Networking” developer documentation — the classic reference for how Counter-Strike-era engines do prediction and lag compensation. Most modern explanations descend from this article.
- Riot Engineering’s blog posts on Riot Direct and on League’s networking — Riot has written publicly several times about why they run their own backbone and how they think about latency budgets. Worth searching for directly; the specifics evolve.
- Yahn Bernier, “Latency Compensating Methods in Client/Server In-game Protocol Design and Optimization” (2001) — the original paper that formalized the lag-compensation technique most competitive games still use.