Why-How
Short essays on famous ideas in technology and science. Each one starts with why it exists — the problem it solves — and only then explains how it works.
Browse by domain
- AI & ML 63 posts
- Computer Science 10 posts
- Networking 12 posts
- Systems 11 posts
- Math 5 posts
- Science 7 posts
- Security 12 posts
- Data 10 posts
Recent posts
- Why compression works at all Zip a photo and it shrinks; zip the zip and it doesn't. Compression isn't magic — it only ever exploits the patterns that were already there. Computer Science · May 14, 2026
- Why CPUs have three levels of cache Look at a CPU die shot and you'll find more area spent on memory than on math — and that memory is split into L1, L2, and L3. The split exists because no single cache can be both big and fast, so the chip builds a ladder instead. Systems · May 14, 2026
- Why deadlocks need four conditions A deadlock feels like bad luck, but it can only happen when four specific conditions all hold at once — and breaking any one makes it impossible. Systems · May 14, 2026
- Why garbage collectors pause your program A tracing collector can't safely move or free an object while your code is mid-read — so it freezes the program to get a consistent snapshot, and generational collection is one common trick for keeping the freeze short. Systems · May 14, 2026
- What is tool use (a.k.a. function calling)? A model that only emits text somehow ends up booking your flight. The trick isn't in the weights — it's in the contract between model, harness, and your code. AI & ML · May 7, 2026
- What does 'X parameters' mean in an LLM? Llama 3.1 70B, DeepSeek-V3 671B, Phi-4 14B — what is that number actually counting, and why is it the headline figure on every model release? AI & ML · May 4, 2026
- Why model merging works at all Take two fine-tunes of the same model, average their weights element-wise, and you often get a model better than either parent. Naively, this shouldn't work — neural net loss surfaces are wildly non-convex. The reason it works tells you something deep about where fine-tuning actually lives. AI & ML · May 4, 2026
- Why EUV lithography blasts tin droplets with lasers Every leading-edge chip is patterned by a machine that, tens of thousands of times per second, vaporizes a falling droplet of molten tin with a high-power laser. The setup is absurd — and there is no other way to make 13.5 nm light. Science · May 4, 2026
- Why Spectre still isn't fully patched Eight years after disclosure, new Spectre-class vulnerabilities keep landing. The reason isn't sloppy patching — it's that the attack exploits the same speculation that makes modern CPUs fast in the first place. Security · May 4, 2026
- How can I tell when an LLM is making the answer up? True answers and fabricated ones come out of the same pipe, in the same tone. There's no red light. But there are seams — places hallucinations cluster, shapes they tend to take, tells you can learn to read. AI & ML · May 2, 2026