Why password hashing is deliberately slow
SHA-256 is fast and that's exactly why you must not use it for passwords. Password hashes are the rare place in computing where slowness is the feature.
Why it exists
Most of computing is a long argument with the laws of physics about going faster. Caches, branch prediction, vectorization, GPUs — entire careers are spent shaving nanoseconds. Then you wander into the password-storage corner of the codebase and find engineers carefully tuning algorithms to be slower. On purpose. With knobs labeled “memory cost” and “iterations” that they keep turning up every couple of years.
This is not a quirk. It’s the whole design.
The reason is the threat model. When an attacker steals your user database,
they don’t need to log in one at a time against your rate limiter — they have
the hashes on their own hardware and can guess offline as fast as their GPUs
allow. A modern consumer GPU can compute billions of generic hashes per
second. If you stored passwords as sha256(password), the attacker can try
the entire common-password corpus
against every user in your database in the time it takes to brew coffee.
“Strong” eight-character passwords fall in minutes.
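To make the offline economics concrete, here is a minimal Python sketch of the attack described above, run against a hypothetical leaked table of unsalted SHA-256 hashes (the usernames, hashes, and tiny wordlist are made up for illustration; real crackers do the same thing on GPUs, orders of magnitude faster):

```python
import hashlib

# Hypothetical leaked rows of unsalted SHA-256 password hashes.
leaked = {
    "alice": "5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8",
    "bob":   "8d969eef6ecad3c29a3a629280e686cf0c3f5d5a86aff3ca12020c923adc6c92",
}

# Tiny stand-in for the common-password corpus.
wordlist = ["letmein", "password", "123456", "qwerty"]

# One pass over the wordlist cracks every user at once: without salts,
# identical passwords produce identical hashes across the whole table.
table = {hashlib.sha256(w.encode()).hexdigest(): w for w in wordlist}
for user, digest in leaked.items():
    if digest in table:
        print(f"{user}: {table[digest]}")  # alice: password, bob: 123456
```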
Password hashing exists to break that economy. The goal is not to make guessing impossible — passwords are too low-entropy for that — but to make each guess expensive enough that the attacker runs out of money or patience before they get through the dictionary.
Why it matters now
Two things sharpen this in 2026.
First, leaks happen. Database dumps end up on forums, paste sites, and Tor markets with depressing regularity. The question for any service handling credentials is not “what if our password column leaks” but “when it does, how much damage can the attacker do with it?”
Second, the hardware curve keeps moving. The same GPU boom that powers LLM training also powers password crackers. A function that was “expensive enough” in 2015 is cheap in 2026. ASIC and FPGA crackers are even worse — they can demolish anything that’s just CPU-bound arithmetic. The modern answer (Argon2, scrypt) is to make the work memory-hard, because memory is roughly the one thing GPUs and ASICs can’t arbitrarily scale up cheaply.
If you’re building auth in 2026 and reach for sha256 or md5, you are not
“hashing a password.” You are publishing it to anyone who eventually steals
your DB.
The short answer
password hash = slow + memory-hungry + per-user salted KDF
A password hash is a KDF tuned so that one verification is barely noticeable for your login endpoint (tens to hundreds of milliseconds) but billions of guesses cost a fortune. Regular cryptographic hashes (SHA-256, BLAKE3) are tuned for the opposite goal — be as fast as possible while staying collision-resistant. Same word, different jobs.
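A rough way to feel "same word, different jobs" is to time the two side by side with nothing but Python's standard library (numbers are machine-dependent; scrypt stands in for any memory-hard KDF, and the 64 MiB sizing is illustrative):

```python
import hashlib
import os
import time

pw, salt = b"correct horse battery staple", os.urandom(16)

# Fast cryptographic hash: count SHA-256 calls that fit in 100 ms.
count, deadline = 0, time.perf_counter() + 0.1
while time.perf_counter() < deadline:
    hashlib.sha256(salt + pw).digest()
    count += 1

# Memory-hard KDF: one scrypt call sized at 128 * r * n bytes = 64 MiB.
t0 = time.perf_counter()
hashlib.scrypt(pw, salt=salt, n=2**16, r=8, p=1, maxmem=128 * 1024 * 1024)
ms = (time.perf_counter() - t0) * 1000

print(f"SHA-256 calls in 100 ms: {count}")  # typically on the order of 10^5
print(f"one scrypt call: {ms:.0f} ms")      # tens to hundreds of milliseconds
```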
How it works
Three ingredients do the work, and you need all three.
1. Salt — kill the precomputation game
A salt is a per-user random value, stored alongside the hash:
stored = (salt, hash(salt || password))
Without salts, an attacker can precompute a giant table mapping
hash(password) → password once and reuse it against every leaked database
forever. These are called rainbow tables.
A salt makes every user’s hash live in a different “namespace,” so the
attacker has to redo the work per user. Salts don’t need to be secret — they
just need to be unique. 16 random bytes is fine.
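A minimal sketch of that storage shape (standard-library Python; PBKDF2 stands in for the slow KDF of your choice, and the iteration count is a placeholder since cost tuning is the next ingredient):

```python
import hashlib
import hmac
import os

def store_password(password: str) -> tuple[bytes, bytes]:
    # Fresh random salt per user. Unique is what matters, not secret.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest  # store both columns alongside the user row

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    # Constant-time comparison so timing doesn't leak how many bytes matched.
    return hmac.compare_digest(candidate, stored)
```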
2. Cost — make each guess hurt
A work factor controls how slow one hash computation is. In bcrypt it’s
the cost parameter (each +1 doubles the work). In Argon2 it’s a triple:
time cost (iterations), memory cost (KiB of RAM used), and parallelism.
The calibration heuristic is: pick the largest cost where verifying one password during login is still acceptable for your service — usually something in the 50–500 ms range — and re-tune upward every few years. If verification feels instantaneous, you’ve left margin on the table for attackers.
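One way to run that calibration, sketched with stdlib PBKDF2 as the stand-in KDF (the 100 ms budget and starting count are illustrative; the same doubling loop applies to bcrypt's cost or Argon2's time cost):

```python
import hashlib
import os
import time

salt, pw = os.urandom(16), b"benchmark-password"
target_ms = 100          # latency budget for one login verification
iterations = 10_000

while True:
    t0 = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", pw, salt, iterations)
    elapsed_ms = (time.perf_counter() - t0) * 1000
    if elapsed_ms >= target_ms:
        break
    iterations *= 2      # bcrypt's cost+1 is this same doubling idea

print(f"{iterations} iterations ~ {elapsed_ms:.0f} ms on this machine")
```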
3. Memory-hardness — defeat the GPU
This is the part that distinguishes Argon2 and scrypt from older designs.
A function is memory-hard when computing it requires holding a large working set in RAM. GPUs have huge arithmetic throughput but comparatively limited and shared memory bandwidth across thousands of cores; ASICs that try to bake in dedicated RAM per parallel guess get expensive fast. PBKDF2 and (to a lesser extent) bcrypt only push CPU iterations, which GPUs eat for breakfast. Argon2 forces the attacker to allocate, say, 64 MB per concurrent guess — suddenly running 10,000 guesses in parallel needs 640 GB of RAM, and that’s the whole point.
Putting it together
A login flow with a modern KDF looks like:
- User submits password.
- Server fetches (salt, params, stored_hash) for that user.
- Server computes Argon2id(password, salt, params) — takes ~100 ms, allocates ~64 MB.
- Constant-time compare against stored_hash.
- If params are below current policy, transparently rehash and update.
That last step is how you migrate forward when the hardware curve shifts — on every successful login, you have the plaintext briefly in memory and can upgrade the stored hash to stronger parameters.
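A minimal sketch of that flow, again assuming the argon2-cffi binding, with an in-memory dict standing in for the user table (the users store and login function are hypothetical):

```python
from argon2 import PasswordHasher
from argon2.exceptions import VerifyMismatchError

ph = PasswordHasher(time_cost=3, memory_cost=64 * 1024, parallelism=4)

# Hypothetical user store; the encoded string embeds salt and params.
users = {"alice": ph.hash("correct horse battery staple")}

def login(username: str, password: str) -> bool:
    encoded = users.get(username)
    if encoded is None:
        # A real service would verify a dummy hash here so response
        # timing doesn't reveal which usernames exist.
        return False
    try:
        ph.verify(encoded, password)  # raises on mismatch
    except VerifyMismatchError:
        return False
    # Stored params below current policy? Rehash while the plaintext
    # is briefly in hand, upgrading the user to stronger parameters.
    if ph.check_needs_rehash(encoded):
        users[username] = ph.hash(password)
    return True

print(login("alice", "correct horse battery staple"))  # True
print(login("alice", "wrong"))                         # False
```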
Show the seams
- You can’t make a password hash arbitrarily slow. It’s a budget trade-off with your own login endpoint. Every legitimate login pays the cost too. A full second per attempt would harden security and also let an attacker DoS your auth service by hammering /login with junk requests.
- Pepper is a real but awkward extra layer. A pepper is a secret value added to the hash input, stored outside the database (in an HSM, env var, or KMS). It helps if the DB leaks but the secret store doesn’t — pure DB dumps become useless. It hurts when key rotation gets messy. Most teams skip it, and that’s fine if your salts and KDF are right.
- bcrypt has a 72-byte input limit. Long passphrases get silently truncated (see the sketch after this list). Some libraries pre-hash with SHA-256 to dodge this, which introduces its own subtle risks (the famous “bcrypt + base64(sha256)” foot-gun where \0 bytes truncate inputs). Argon2id avoids this whole genre.
- Honest gap: which KDF is “best” depends on what’s available in your language ecosystem and how much you trust its implementation. The Password Hashing Competition (2013–2015) picked Argon2 as the winner, and Argon2id is the broadly recommended default for new systems as of 2026, but I’m not going to claim a precise market share between Argon2id, bcrypt, and scrypt across deployed systems — I don’t have a reliable current source for that.
- None of this saves a user who picked password123. Password hashing buys time against guessing the whole dictionary. It does not buy much time against guessing the first 1,000 entries of the dictionary. Rate limiting, breach-corpus checks, and MFA do work that hashing can’t.
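The 72-byte truncation is easy to see for yourself, assuming the widely used pyca/bcrypt package, which, like the reference implementation, ignores everything past byte 72:

```python
import bcrypt  # pip install bcrypt

long_a = b"x" * 72 + b"tail-A"
long_b = b"x" * 72 + b"tail-B"

hashed = bcrypt.hashpw(long_a, bcrypt.gensalt())
# Both "verify" because the differing tails were silently dropped.
print(bcrypt.checkpw(long_b, hashed))  # True
```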
The mental flip is simple: regular hashes optimize for throughput, password hashes optimize against it. If your hash function has a benchmark page bragging about gigabytes per second, it is the wrong tool.
Famous related terms
- bcrypt — bcrypt = Blowfish key schedule + cost parameter — the workhorse from 1999; still acceptable today, with caveats about input length.
- scrypt — scrypt ≈ PBKDF2 + a big random-access memory array — the first widely used memory-hard design; Colin Percival, 2009.
- Argon2id — Argon2id = Argon2i (side-channel resistant) + Argon2d (GPU-resistant) hybrid — Password Hashing Competition winner; recommended default for new systems.
- PBKDF2 — PBKDF2 = HMAC + many iterations — old NIST standard, CPU-bound only; fine for FIPS contexts, weak against modern GPUs at any reasonable iteration count.
- HMAC — HMAC ≈ keyed hash for message integrity — not a password hash; mentioned because people sometimes reach for it by mistake.
- Rainbow table — rainbow table = precomputed (hash → password) lookup — what salts exist to defeat.
Going deeper
- RFC 9106 — Argon2 Memory-Hard Function for Password Hashing and Proof-of-Work Applications. The actual spec.
- The Password Hashing Competition site (password-hashing.net) — design rationales and the finalists’ submissions.
- OWASP’s Password Storage Cheat Sheet — current recommended parameters, updated as hardware moves.
- Colin Percival’s original scrypt paper — clearest explanation of what “memory-hard” means and why it matters.