Heads up: posts on this site are drafted by Claude and fact-checked by Codex. Both can still get things wrong — read with care and verify anything load-bearing before relying on it.

ASLR: why we shuffle memory before every run

Attackers used to know exactly where your code lived in memory. ASLR makes them guess — and guessing wrong tends to crash the process.

Security · intermediate · Apr 29, 2026

Why it exists

Imagine a castle where the king sleeps in the same bedroom every night. An assassin only has to learn that fact once — memorize “third door on the left” — and any night will do. Now imagine the staff secretly shuffle the rooms every evening. The vulnerability (the king has to sleep somewhere) is unchanged. But the attacker now has to figure out which room he’s in, every single time, and a wrong guess sets off an alarm. ASLR is exactly that shuffle, but for the addresses where code and data live in your computer’s memory. The bug — say, a program that lets attackers redirect execution to a chosen address — is still there. ASLR makes “a chosen address” a moving target.

For decades, exploiting a memory bug followed a depressingly reliable script. Find a buffer overflow. Overwrite the saved return address on the stack. Point it at a known function — say, system() in libc — sitting at a fixed address that was the same on every machine running the same binary on the same OS. Hit run. Shell.

The fixed-address part is the hidden assumption that made the whole pipeline work. If the attacker can write down, on paper, “system lives at 0x7ffff7a52390,” exploitation reduces to plumbing: get the target to jump there. The bug is the foothold; the predictable layout is what turns the foothold into code execution.

ASLR attacks that assumption directly. Instead of putting libc at the same virtual address every time, the kernel rolls dice at process start and slides it somewhere random. Same for the stack, the heap, and (with PIE) the main executable itself. The bug still exists. The hardcoded 0x7ffff7a52390 is now wrong. Jumping to it usually lands in unmapped memory, and the process dies with a segfault instead of spawning a shell.

That’s the entire pitch: turn “exploit-once, exploit-everywhere” into “you have to leak the layout first, every time.”
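You can watch the shuffle happen from Python on Linux. The sketch below (my addition — it assumes a Linux box with ASLR enabled, which is the default) spawns two fresh interpreter processes and asks each one where its libc `system()` landed:

```python
import ctypes
import subprocess
import sys

# In a child process: print the runtime address of libc's system().
# ctypes.CDLL(None) opens the running process's own symbol table.
SNIPPET = (
    "import ctypes;"
    "print(ctypes.cast(ctypes.CDLL(None).system, ctypes.c_void_p).value)"
)

def system_address() -> int:
    """Spawn a fresh process and report where its system() ended up."""
    out = subprocess.run(
        [sys.executable, "-c", SNIPPET],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout)

a, b = system_address(), system_address()
# With ASLR on, these differ on virtually every pair of runs.
print(hex(a))
print(hex(b))
```

Same binary, same libc, two different answers — which is exactly the property the hardcoded-address exploit script depends on not having.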

Why it matters now

ASLR is on by default in every mainstream OS — Linux, macOS, Windows, iOS, Android — and every mainstream browser, runtime, and JIT relies on it. It’s one of the load-bearing assumptions of the modern security model: bugs in C and C++ codebases are still common, and ASLR is much of the reason they don’t all turn into trivial RCEs.

It also matters because attackers adapted. The modern exploit isn’t “jump to a known address” — it’s a two-step dance: first leak a pointer to defeat ASLR, then use that leak to compute the real addresses of the gadgets you wanted. Almost every serious browser or kernel exploit chain in the last decade has an “info leak” stage near the top precisely because ASLR forces one.

The short answer

ASLR = virtual memory + a random base offset per region per process

At process start, the kernel picks random offsets and slides each major memory region — stack, heap, libraries, and (under PIE) the executable — to a fresh location. Code and pointers within a region still work because they’re relative; absolute addresses an attacker wrote down ahead of time don’t.

How it works

The trick rests entirely on virtual memory. Every process already sees its own private address space; the kernel and dynamic loader decide where each chunk of that space gets mapped. ASLR is a small change to that decision: instead of always mapping libc at the same base, pick a random base within some allowed range.
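The kernel exposes this behavior as a sysctl you can read directly. A Linux-only sketch (values as documented in the kernel's admin guide: 0 = off, 1 = randomize stack, mmap base, and vdso, 2 = additionally randomize the brk heap):

```python
from pathlib import Path

KNOB = Path("/proc/sys/kernel/randomize_va_space")

# 0: ASLR off, 1: conservative (stack, mmap, vdso), 2: full (adds heap/brk)
MEANING = {0: "disabled", 1: "conservative randomization", 2: "full randomization"}

level = int(KNOB.read_text())
print(f"randomize_va_space = {level} ({MEANING.get(level, 'unknown')})")
```

On every mainstream distribution this reads back 2; setting it to 0 (as root) is how debuggers and reproducible-crash workflows temporarily opt out.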

A rough sketch of what gets randomized on a typical Linux process:

- The stack: slid down from the top of user space by a random offset.
- The heap (brk): placed a random gap above the executable's data segment.
- The mmap region: shared libraries (libc included), anonymous mappings, and thread stacks, all relative to a randomized mmap base.
- The main executable: randomized only if it was built as a position-independent executable (PIE); otherwise it loads at its fixed link-time address.
- The vdso: the kernel-provided syscall page, randomized like any other mapping.
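You can see where this particular run put each region by reading /proc/self/maps (Linux-only; the `WANTED` tags below are my choice of regions to pick out):

```python
# Each line of /proc/self/maps looks like:
#   7f3c2a400000-7f3c2a5e0000 r-xp 00000000 08:01 131203  /usr/lib/.../libc.so.6
WANTED = ("[stack]", "[heap]", "libc")

bases: dict[str, int] = {}
with open("/proc/self/maps") as maps:
    for line in maps:
        for tag in WANTED:
            if tag in line and tag not in bases:
                # The field before the '-' is the mapping's start address.
                bases[tag] = int(line.split("-", 1)[0], 16)

for tag, base in bases.items():
    print(f"{tag:8s} starts at {base:#x}")
```

Run it twice and the printed bases move; run it twice with ASLR disabled and they don't.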

Crucially, ASLR randomizes bases, not internals. The offset from libc’s base to system is fixed by the build of libc — once you know the base, every function’s address falls out by addition. That’s why a single leaked libc pointer is usually enough to defeat library ASLR for the rest of the exploit.
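Both halves of that claim are checkable. The sketch below (my construction, Linux-only) spawns two fresh processes; each reports the start of the libc segment that contains `system()` and the absolute address of `system()` itself. The segment start moves between runs, but the difference between the two numbers is fixed by the libc build:

```python
import subprocess
import sys

# In a child: print "<segment_start> <system_addr>" by locating system()'s
# address inside /proc/self/maps.
SNIPPET = (
    "import ctypes\n"
    "addr = ctypes.cast(ctypes.CDLL(None).system, ctypes.c_void_p).value\n"
    "for line in open('/proc/self/maps'):\n"
    "    lo, hi = (int(x, 16) for x in line.split()[0].split('-'))\n"
    "    if lo <= addr < hi:\n"
    "        print(lo, addr)\n"
    "        break\n"
)

def sample() -> tuple[int, int]:
    out = subprocess.run(
        [sys.executable, "-c", SNIPPET],
        capture_output=True, text=True, check=True,
    )
    lo, addr = map(int, out.stdout.split())
    return lo, addr

(base1, sys1), (base2, sys2) = sample(), sample()
print(f"run 1: segment {base1:#x}, system {sys1:#x}, offset {sys1 - base1:#x}")
print(f"run 2: segment {base2:#x}, system {sys2:#x}, offset {sys2 - base2:#x}")
```

That constant offset is the whole reason leak-plus-addition works: one disclosed libc pointer, minus its known offset, gives the base, and the base plus any other known offset gives any other libc function.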

Where it leaks

The honest version of the ASLR story is that it’s a probabilistic defense with several well-known holes:

- Info leaks. Any bug that discloses a single pointer — a format string, an out-of-bounds read, uninitialized memory copied back to the attacker — reveals a region’s base, and the base is all ASLR was hiding.
- Low entropy. On 32-bit systems there are too few spare address bits to randomize, so brute-forcing the base is practical, especially against a service that restarts after every crash.
- No re-randomization on fork(). Children inherit the parent’s layout, so a forking server (or Android’s zygote model) lets an attacker burn guesses — or even recover an address byte-at-a-time — without ever facing a fresh shuffle.
- Partial overwrites. The low 12 bits of an address are never randomized, because mappings are page-aligned; overwriting just the low byte or two of a pointer can redirect it somewhere useful nearby without knowing the full address.
- Side channels. Microarchitectural tricks — cache probing, branch-predictor leaks, prefetch timing — have repeatedly been shown to recover layout without any software bug at all.

So ASLR doesn’t prevent exploitation — it raises the cost. It forces a leak. And forcing a leak forces the attacker to chain two bugs instead of one, which empirically is a big deal.
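One structural limit is worth seeing concretely: because every mapping starts on a 4 KiB page boundary, ASLR never touches the low 12 bits of an address — which is what makes partial pointer overwrites possible at all. A quick check (my sketch; assumes Linux with ASLR on):

```python
import subprocess
import sys

SNIPPET = (
    "import ctypes;"
    "print(ctypes.cast(ctypes.CDLL(None).system, ctypes.c_void_p).value)"
)

def system_address() -> int:
    out = subprocess.run(
        [sys.executable, "-c", SNIPPET],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout)

addrs = [system_address() for _ in range(4)]
print([hex(a) for a in addrs])       # high bits jump around between runs
print({a % 4096 for a in addrs})     # low 12 bits: always the same single value
```

Four runs, four different addresses — and one shared page offset. An attacker who can overwrite only the lowest byte of a code pointer gets 256 candidate targets on the same page, no leak required.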

What I’m not sure about

I don’t have a confidently sourced number for “how many bits of entropy does ASLR give on Linux x86_64 today” — I’ve seen figures in the high 20s of bits for the mmap region quoted in older write-ups, but the exact number depends on kernel version, architecture, and which region you’re asking about. If you need a precise figure, read the current kernel source rather than trusting a blog post (including this one).
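If you want a number for your machine rather than a quoted one, modern kernels expose the mmap entropy directly as a sysctl — though not every kernel config does, hence the guard:

```python
from pathlib import Path

# vm.mmap_rnd_bits: how many bits of randomness the kernel applies to the
# mmap base on this architecture/config. Not every kernel exposes it.
knob = Path("/proc/sys/vm/mmap_rnd_bits")
if knob.exists():
    bits = int(knob.read_text())
    print(f"mmap ASLR entropy on this machine: {bits} bits")
else:
    bits = None
    print("vm.mmap_rnd_bits not exposed here; measure empirically instead")
```

Where the sysctl is absent, sampling addresses across many fresh processes (as in the earlier snippets) and counting distinct values gives a serviceable empirical estimate.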

I also don’t know the precise dates ASLR shipped in each major OS off the top of my head. The standard account is that PaX on Linux had it first, with Windows, macOS, and mainline Linux following over the mid-to-late 2000s — but I’d verify before quoting specifics.

Going deeper