Heads up: posts on this site are drafted by Claude and fact-checked by Codex. Both can still get things wrong — read with care and verify anything load-bearing before relying on it.

Why password hashing is deliberately slow

SHA-256 is fast and that's exactly why you must not use it for passwords. Password hashes are the rare place in computing where slowness is the feature.

Security · Intermediate · Apr 29, 2026

Why it exists

Most of computing is a long argument with the laws of physics about going faster. Caches, branch prediction, vectorization, GPUs — entire careers are spent shaving nanoseconds. Then you wander into the password-storage corner of the codebase and find engineers carefully tuning algorithms to be slower. On purpose. With knobs labeled “memory cost” and “iterations” that they keep turning up every couple of years.

This is not a quirk. It’s the whole design.

The reason is the threat model. When an attacker steals your user database, they don’t need to log in one at a time against your rate limiter — they have the hashes on their own hardware and can guess offline as fast as their GPUs allow. A modern consumer GPU can compute billions of generic hashes per second. If you stored passwords as sha256(password), the attacker can try the entire common-password corpus against every user in your database in the time it takes to brew coffee. “Strong” eight-character passwords fall in minutes.

Password hashing exists to break that economy. The goal is not to make guessing impossible — passwords are too low-entropy for that — but to make each guess expensive enough that the attacker runs out of money or patience before they get through the dictionary.
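To make the economics concrete, here is a toy sketch of the offline attack. The "leaked" table and wordlist are made up for illustration; in reality the attacker runs this loop on GPUs against millions of rows.

```python
import hashlib

# A toy "leaked" table of unsalted SHA-256 password hashes.
leaked = {hashlib.sha256(p.encode()).hexdigest()
          for p in ["letmein", "dragon2024", "qwerty!"]}

# The attacker replays a common-password wordlist offline, with no rate
# limiter in the way -- billions of guesses per second on real hardware.
wordlist = ["password", "letmein", "qwerty!", "123456", "dragon2024"]
cracked = [w for w in wordlist
           if hashlib.sha256(w.encode()).hexdigest() in leaked]
print(cracked)  # every common password in the dump falls immediately
```

The defender's only lever is making each iteration of that loop drastically more expensive, which is what the rest of this post is about.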

Why it matters now

Two things sharpen this in 2026.

First, leaks happen. Database dumps end up on forums, paste sites, and Tor markets with depressing regularity. The question for any service handling credentials is not “what if our password column leaks” but “when it does, how much damage can the attacker do with it?”

Second, the hardware curve keeps moving. The same GPU boom that powers LLM training also powers password crackers. A function that was “expensive enough” in 2015 is cheap in 2026. ASIC and FPGA crackers are even worse — they can demolish anything that’s just CPU-bound arithmetic. The modern answer (Argon2, scrypt) is to make the work memory-hard, because memory is roughly the one thing GPUs and ASICs can’t arbitrarily scale up cheaply.

If you’re building auth in 2026 and reach for sha256 or md5, you are not “hashing a password.” You are publishing it to anyone who eventually steals your DB.

The short answer

password hash = slow + memory-hungry + per-user salted KDF

A password hash is a KDF tuned so that one verification is barely noticeable for your login endpoint (tens to hundreds of milliseconds) but billions of guesses cost a fortune. Regular cryptographic hashes (SHA-256, BLAKE3) are tuned for the opposite goal — be as fast as possible while staying collision-resistant. Same word, different jobs.
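You can feel the gap on your own machine. This sketch times a plain SHA-256 against PBKDF2 (used here only because it ships in Python's stdlib; Argon2 needs a third-party package, and the iteration count is an illustrative choice, not a recommendation):

```python
import hashlib
import os
import time

password = b"correct horse battery staple"
salt = os.urandom(16)

# Fast cryptographic hash: designed for throughput.
t0 = time.perf_counter()
for _ in range(100_000):
    hashlib.sha256(salt + password).digest()
fast = (time.perf_counter() - t0) / 100_000  # seconds per hash

# Deliberately slow KDF: one verification, many internal iterations.
t0 = time.perf_counter()
hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
slow = time.perf_counter() - t0

print(f"one SHA-256: ~{fast * 1e9:.0f} ns, one PBKDF2 verify: ~{slow * 1e3:.0f} ms")
```

A factor of hundreds of thousands per guess is the entire point: invisible at login, ruinous across a dictionary.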

How it works

Three ingredients do the work, and you need all three.

1. Salt — kill the precomputation game

A salt is a per-user random value, stored alongside the hash:

stored = (salt, hash(salt || password))

Without salts, an attacker can precompute a giant table mapping hash(password) → password once and reuse it against every leaked database forever. These are called rainbow tables. A salt makes every user’s hash live in a different “namespace,” so the attacker has to redo the work per user. Salts don’t need to be secret — they just need to be unique. 16 random bytes is fine.
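A minimal sketch of per-user salting, with PBKDF2 standing in for a memory-hard KDF (the function name and iteration count are illustrative, not from any particular library):

```python
import hashlib
import os
from typing import Optional, Tuple

def hash_password(password: str, salt: Optional[bytes] = None) -> Tuple[bytes, bytes]:
    """Return (salt, digest). The salt is stored in the clear next to the hash."""
    if salt is None:
        salt = os.urandom(16)  # unique per user; secrecy is not required
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

# Same password, two users, two salts -> two unrelated digests.
s1, d1 = hash_password("hunter2")
s2, d2 = hash_password("hunter2")
assert d1 != d2  # a table precomputed from the password alone matches neither
```

Because the digests differ even for identical passwords, the attacker also can't spot shared passwords across users by eyeballing the dump.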

2. Cost — make each guess hurt

A work factor controls how slow one hash computation is. In bcrypt it’s the cost parameter (each +1 doubles the work). In Argon2 it’s a triple: time cost (iterations), memory cost (KB of RAM used), and parallelism.

The calibration heuristic is: pick the largest cost where verifying one password during login is still acceptable for your service — usually something in the 50–500 ms range — and re-tune upward every few years. If verification feels instantaneous, you've left security margin on the table.
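That heuristic can be automated. This sketch doubles a PBKDF2 iteration count (mirroring bcrypt's "each +1 doubles the work" knob) until a single verification crosses a target latency; the starting count and target are illustrative:

```python
import hashlib
import os
import time

def calibrate(target_ms: float = 100.0) -> int:
    """Find an iteration count where one verification costs roughly target_ms."""
    salt, pw = os.urandom(16), b"benchmark-password"
    iterations = 10_000
    while True:
        t0 = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", pw, salt, iterations)
        elapsed_ms = (time.perf_counter() - t0) * 1000
        if elapsed_ms >= target_ms:
            return iterations
        iterations *= 2  # like bcrypt's cost parameter: each step doubles the work

print(calibrate())
```

Run it on the hardware that will actually serve logins, not your laptop, and store the chosen parameters next to each hash so they can differ per user during migrations.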

3. Memory-hardness — defeat the GPU

This is the part that distinguishes Argon2 and scrypt from older designs.

A function is memory-hard when computing it requires holding a large working set in RAM. GPUs have huge arithmetic throughput but comparatively limited and shared memory bandwidth across thousands of cores; ASICs that try to bake in dedicated RAM per parallel guess get expensive fast. PBKDF2 and (to a lesser extent) bcrypt only push CPU iterations, which GPUs eat for breakfast. Argon2 forces the attacker to allocate, say, 64 MB per concurrent guess — suddenly running 10,000 guesses in parallel needs 640 GB of RAM, and that’s the whole point.
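Python's stdlib exposes one memory-hard KDF, scrypt (available when the interpreter is built against a recent OpenSSL), whose memory footprint is roughly 128 · r · n bytes. The parameters below are sized for illustration, not a production recommendation:

```python
import hashlib
import os

salt = os.urandom(16)

# n=2**14, r=8  ->  about 128 * 8 * 2**14 = 16 MB held in RAM per guess.
# An attacker running 10,000 guesses in parallel needs ~160 GB just for scrypt.
key = hashlib.scrypt(b"hunter2", salt=salt, n=2**14, r=8, p=1, dklen=32)
```

The CPU time is almost incidental; the RAM requirement is what caps the attacker's parallelism.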

Putting it together

A login flow with a modern KDF looks like:

  1. User submits password.
  2. Server fetches (salt, params, stored_hash) for that user.
  3. Server computes Argon2id(password, salt, params) — takes ~100 ms, allocates ~64 MB.
  4. Constant-time compare against stored_hash.
  5. If params are below current policy, transparently rehash and update.
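The five steps above can be sketched as one verification function. PBKDF2 again stands in for Argon2id to stay stdlib-only, and the policy constant is an illustrative placeholder:

```python
import hashlib
import hmac
import os
from typing import Optional, Tuple

CURRENT_ITERATIONS = 600_000  # policy knob; turn it up every few years

def verify_and_maybe_rehash(
    password: str, salt: bytes, iterations: int, stored_hash: bytes
) -> Tuple[bool, Optional[Tuple[bytes, int, bytes]]]:
    """Return (ok, new_record). new_record is set when the hash was upgraded."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    ok = hmac.compare_digest(candidate, stored_hash)  # constant-time compare
    if ok and iterations < CURRENT_ITERATIONS:
        # The plaintext is briefly in hand: rehash under current policy.
        new_salt = os.urandom(16)
        new_hash = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), new_salt, CURRENT_ITERATIONS
        )
        return True, (new_salt, CURRENT_ITERATIONS, new_hash)
    return ok, None
```

The constant-time compare matters too: a naive `==` can leak how many leading bytes matched through timing, and `hmac.compare_digest` exists precisely to avoid that.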

That last step is how you migrate forward when the hardware curve shifts — on every successful login, you have the plaintext briefly in memory and can upgrade the stored hash to stronger parameters.

Show the seams

The mental flip is simple: regular hashes optimize for throughput, password hashes optimize against it. If your hash function has a benchmark page bragging about gigabytes per second, it is the wrong tool.

Going deeper