Why constant-time comparison is a thing
An ordinary equality check leaks the secret it's supposed to protect — one byte at a time, through the clock. Constant-time comparison exists because `==` is faster than it should be.
Why it exists
The first time you read security code and see something like `hmac.compare_digest(a, b)` instead of plain `a == b`, it looks paranoid. The two strings are right there. Why not just compare them?

Because `==` short-circuits.
Every standard string-equality implementation in every mainstream language walks the bytes left-to-right and bails out the moment it finds a mismatch. That’s the obviously correct way to write the function — why keep checking after you already know the answer? — and it’s a disaster the moment one of those strings is a secret.
Imagine a server that authenticates webhook calls by checking an HMAC in a request header against the expected value. An attacker who controls the header can submit guesses and time how long the comparison takes. A guess whose first byte matches the real tag takes a tiny bit longer to reject than one whose first byte is wrong, because the loop ran one more iteration. The attacker fixes byte 0, sweeps byte 1, and so on. They have not stolen the secret key; they have read the comparison’s reply byte by byte through the clock.
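The sweep described above can be sketched concretely. Rather than timing real requests (which is noisy), this toy simulation instruments a short-circuiting compare so it reports how many loop iterations it ran, the quantity an attacker estimates from response times. Everything here, the secret value and the function names, is hypothetical.

```python
# Toy demonstration of the byte-by-byte recovery idea. leaky_equal stands in
# for a server-side `==`; the iteration count stands in for elapsed time.

SECRET = b"\x7f\x03\xa9\x1c"  # the tag the "server" compares against

def leaky_equal(a: bytes, b: bytes) -> tuple[bool, int]:
    """Short-circuiting compare that also reports how far it got."""
    steps = 0
    for x, y in zip(a, b):
        steps += 1
        if x != y:
            return False, steps   # the early exit is the leak
    return len(a) == len(b), steps

def recover(n: int) -> bytes:
    """Fix one byte at a time by picking the guess that ran longest."""
    guess = bytearray(n)
    for i in range(n):
        best_byte, best_steps = 0, -1
        for candidate in range(256):
            guess[i] = candidate
            ok, steps = leaky_equal(bytes(guess), SECRET)
            if ok:                       # full match: the server said yes
                return bytes(guess)
            if steps > best_steps:       # deeper into the loop = longer prefix match
                best_byte, best_steps = candidate, steps
        guess[i] = best_byte
    return bytes(guess)

print(recover(len(SECRET)) == SECRET)  # True: the "clock" gave up the whole tag
```

In the real attack each iteration count is replaced by an averaged timing measurement over many requests, but the search structure is exactly this.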
That whole class of bug is called a timing side channel, and it’s the reason `compare_digest`, `crypto.timingSafeEqual`, and `subtle.ConstantTimeCompare` exist in your standard library.
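In practice you don't write the comparison yourself; you call one of those functions. Here is a minimal Python sketch of webhook-style verification, assuming a hex-encoded HMAC-SHA256 signature (real providers each define their own exact header and signing format, so treat the names here as illustrative):

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, payload: bytes, signature_hex: str) -> bool:
    """Recompute the expected tag and compare in constant time."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # compare_digest touches every byte; `==` would short-circuit and leak.
    return hmac.compare_digest(expected, signature_hex)

secret = b"whsec_demo"                     # illustrative secret
body = b'{"event":"ping"}'
good = hmac.new(secret, body, hashlib.sha256).hexdigest()

print(verify_webhook(secret, body, good))       # True
print(verify_webhook(secret, body, "00" * 32))  # False
```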
Why it matters now
Two reasons it keeps mattering.
First, the network is no longer the noise barrier it used to be. The classic objection — “you can’t actually measure nanosecond differences over the internet” — was always an overstatement, and modern infrastructure makes it worse. Your service and your attacker often share a cloud region; sometimes the same physical host. The 2003 Remote Timing Attacks Are Practical paper by Brumley and Boneh demonstrated this against OpenSSL across a LAN, and the gap has only narrowed since. Co-tenant attackers can average enough samples to extract sub-microsecond timing differences.
Second, modern services are covered in shared-secret comparisons: webhook signatures (Stripe, GitHub, Slack), API tokens, password reset tokens, session IDs, CSRF tokens, license keys, JWT signature checks. Each one is a place where `==` against attacker-controlled input is a slow, quiet leak.
The fix is mechanical and cheap. The cost of not applying it is that you hand attackers an oracle for any secret you compare with the wrong operator.
The short answer
`constant-time compare = always touch every byte + combine results without branching`
A constant-time equality function takes the same amount of time to run no matter where (or whether) the inputs differ. It walks both buffers to the end every time and folds the per-byte differences together with bitwise operations, so the CPU never takes a data-dependent branch. The function still returns “equal” or “not equal” — it just refuses to leak how it got there.
How it works
The pattern is small enough to memorize. In C-flavored pseudocode:
```c
int ct_equal(const uint8_t *a, const uint8_t *b, size_t n) {
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++) {
        diff |= a[i] ^ b[i];
    }
    return diff == 0;
}
```
Three things to notice:
- No early exit. The loop runs `n` times, period.
- `^` (XOR) gives 0 only when the bytes are equal. OR-folding all the XORs into `diff` means `diff` ends up zero iff every byte matched.
- The final `diff == 0` is one branch on a value that depends on the whole result, not on any intermediate position. An attacker who times the function learns nothing about where a difference was, only the public bit they were going to learn anyway: equal or not.
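For readers more comfortable in Python, here is a direct port of the same loop. It is illustrative only: a high-level runtime makes no timing guarantees, so in real Python code you should reach for `hmac.compare_digest` instead.

```python
def ct_equal(a: bytes, b: bytes) -> bool:
    """Python port of the C loop above. Illustrative, not a guarantee:
    use hmac.compare_digest in real code."""
    if len(a) != len(b):
        # Note: this branch leaks the length; callers are expected to
        # guarantee equal lengths, as the surrounding text explains.
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y   # accumulate any mismatch without branching on it
    return diff == 0
```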
A subtle but important rule: both inputs must be the same length before you call this, and the length check itself must not leak. Most real-world libraries either require equal lengths up front or compare against a fixed-length reference. If you let the lengths differ and short-circuit on length mismatch, you’ve leaked the length of the secret — which is sometimes most of the secret.
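One common way to sidestep the length problem, sometimes called the double-HMAC trick, is to reduce both sides to fixed-length digests under a fresh random key before comparing. A sketch, with a function name of my own invention:

```python
import hashlib
import hmac
import os

def equal_fixed_length(a: bytes, b: bytes) -> bool:
    """Compare via HMAC tags under an ephemeral random key, so both inputs
    are reduced to the same fixed length before the byte-by-byte compare.
    Known as the 'double HMAC' trick; sketched here for illustration."""
    key = os.urandom(32)  # fresh key: attacker cannot predict either tag
    tag_a = hmac.new(key, a, hashlib.sha256).digest()
    tag_b = hmac.new(key, b, hashlib.sha256).digest()
    return hmac.compare_digest(tag_a, tag_b)
```

Because both tags are always 32 bytes, the comparison itself can no longer reveal anything about the original lengths.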
Why the compiler keeps trying to ruin this
The genuinely hard part of constant-time code is not writing the loop. It’s keeping the optimizer from helpfully un-writing it. A sufficiently smart compiler can notice that once `diff` is non-zero it can never become zero again, and “optimize” the loop into an early exit. It can also lower `diff |= ...` into branchy code on some architectures.
This is why production constant-time primitives live in the standard library or in audited libraries (libsodium, BoringSSL, Go’s `crypto/subtle`), often with compiler barriers, volatile reads, or hand-written assembly. Rolling your own in a high-level language is usually fine for the algorithmic shape but offers no real guarantee that the binary the compiler emits is still constant-time. Use the stdlib function.
Show the seams
- “Constant-time” is aspirational at the hardware level. Modern CPUs have data caches, branch predictors, variable-latency instructions (integer divide, some multiplies on some chips), and SMT siblings that observe each other. A function can be branch-free at the source level and still leak through cache timing if it indexes a table with secret data. The cryptographic community calls the stricter goal constant-time-ish or isochronous, and getting there for things like AES has driven hardware features (AES-NI) precisely so the software can’t leak through table lookups.
- It’s not just `==`. Anything that branches on a secret leaks. String comparisons in databases, regex matches against secret-shaped fields, an early `return` in a signature verifier — all the same family. The question to ask code-review-style is: does the time to run this depend on bytes the attacker isn’t supposed to know? If yes, you have a channel.
- Length leaks are real. Some libraries deliberately compare a hashed version of both sides at fixed length to dodge this. Python’s `hmac.compare_digest` documents that it may use a non-constant-time path if the inputs are of different lengths or types — read the docs of whatever you call.
- The attack budget is non-trivial but not impossible. Recovering a 16-byte tag over the open internet, against a target with a lot of jitter, is harder than the textbook explanation makes it sound. It often takes millions of requests and statistical denoising. The defense is still cheap, so the cost-benefit is one-sided: you should always pay the trivial cost to remove the channel rather than argue about whether someone can afford to exploit it. Honest gap: I don’t have a clean, current public number for “fewest requests needed to recover an HMAC tag against a typical cloud service in 2026” — the answer depends heavily on the target and the network path, and most serious work in this area lives in academic side-channel papers rather than headline benchmarks.
The mental model: when one of the inputs is a secret, the comparison function is part of your cryptography, not part of your control flow. Treat it accordingly.
Famous related terms
- Timing attack — `timing attack = measure duration + infer secret` — the general family; constant-time comparison is one defense against one member of it.
- Side channel — `side channel ≈ information leak through a non-obvious observable (time, power, EM, cache state)` — timing is the most software-accessible kind.
- HMAC — `HMAC = hash + secret key` — the most common reason you’d reach for constant-time compare in a webhook handler. Different from a password hash; see password hashing.
- Spectre / Meltdown — `Spectre ≈ side channel via speculative execution + cache timing` — same family of bug, weaponized at the CPU microarchitecture level.
- AES-NI — `AES-NI = AES rounds as CPU instructions` — hardware support added partly so AES implementations could stop using secret-indexed lookup tables that leak via the cache.
Going deeper
- Remote Timing Attacks Are Practical, Brumley & Boneh (2003; journal version 2005). The paper that ended the “timing attacks aren’t realistic over a network” excuse.
- Go’s `crypto/subtle` package — short, readable source for `ConstantTimeCompare` and friends.
- BearSSL’s documentation on constant-time programming — Thomas Pornin’s notes are the clearest plain-English explanation of why this is so much harder than the four-line loop suggests.
- Python `hmac.compare_digest` — read the standard-library docs to see exactly what guarantees (and non-guarantees) it makes.