Why beam search died for LLMs
Beam search was the default way to decode neural sequence models for years. Then chatbots arrived and quietly stopped using it. The reason is stranger than 'sampling is more creative.'
Why it exists
If you trained a neural sequence model in 2016, you almost certainly used beam search to generate from it. It was the standard decoder for NMT, summarization, and image captioning. The intuition is clean: greedy decoding picks the locally best token and gets stuck; beam search keeps the top-k partial sequences alive at every step, so it has a shot at finding a globally higher-probability sequence. More search, better answer.
Then LLMs arrived, ChatGPT shipped, and beam search quietly stopped being the default for open-ended generation. The OpenAI chat completions API doesn’t expose it; most production chat stacks just ship sampling. That’s the new default.
That’s the puzzle. The thing that made decoding “work” for a decade got dropped exactly when models got big enough that you’d think more search would help even more.
Why it matters now
Every time you call a chat model, something is choosing one of tens of thousands of tokens per step. That choice is a strategy, not a fact about the model. Picking the wrong strategy makes a state-of-the-art model produce slop — bland paragraphs, repetitive loops, weirdly short answers. Engineers who reach for beam search expecting “higher quality output” get the opposite, and the failure mode looks like the model itself is bad.
The shift also reshaped how people think about prompting. If decoding is sampling, then “the answer” isn’t a thing the model has — it’s a distribution the model has, and you’re drawing from it. That mental model is load-bearing for understanding why the same prompt gives different outputs, why temperature matters, and why two perfectly correct answers can sit side by side.
The short answer
beam search died for LLMs = maximum-likelihood decoding + open-ended generation = degenerate text
Beam search optimizes for the highest-probability sequence under the model. That works when there is one right answer and the model is well-calibrated about it (translation, transcription). It breaks when there are many valid continuations and the model’s own probability surface has a pathological mode at “boring repeating text.” For open-ended generation, the most probable sequence is worse than a randomly sampled one. So sampling won.
How it works
The mechanism is counter-intuitive enough that it’s worth walking through.
Beam search, briefly
At each step, keep the top-k partial sequences (the “beam”) ranked by cumulative log-probability. Expand each by one token, score all k·V candidates (k beams times a V-token vocabulary), keep the top k again. At the end, return the highest-scoring full sequence. With k=1 you get greedy decoding; as k grows, you approach exact MAP decoding (true exhaustive argmax over sequences requires more than just a wide beam, but the intuition is right).
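To make the mechanics concrete, here is a toy Python sketch. The `step_logprobs(prefix)` interface is hypothetical (a real decoder batches this on the GPU over full logit tensors), but the loop is the whole algorithm:

```python
import heapq

def beam_search(step_logprobs, bos, eos, k=4, max_len=50):
    """Toy beam search: keep the k best partial sequences by cumulative log-prob.

    step_logprobs(prefix) -> {token: logprob} for the next position.
    (Hypothetical interface, for illustration only.)
    """
    beams = [(0.0, [bos])]            # (cumulative log-prob, token sequence)

    for _ in range(max_len):
        candidates = []
        for score, seq in beams:
            if seq[-1] == eos:        # finished hypothesis: carry it along unchanged
                candidates.append((score, seq))
                continue
            for tok, lp in step_logprobs(seq).items():
                candidates.append((score + lp, seq + [tok]))
        # prune: keep only the k highest-scoring sequences
        beams = heapq.nlargest(k, candidates, key=lambda c: c[0])
        if all(seq[-1] == eos for _, seq in beams):
            break                     # every surviving beam has finished

    # return the single highest-scoring sequence found
    return max(beams, key=lambda c: c[0])
```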
For a translation system, this is great. There’s roughly one correct translation; the model concentrates probability on tokens near that translation; searching harder finds it.
What goes wrong on open-ended prompts
Holtzman et al.’s 2019 paper The Curious Case of Neural Text Degeneration (ICLR 2020) is the canonical write-up. They showed something weird: if you take a strong language model and ask it for the most likely continuation of a prompt, you get text that loops. Schematically (paraphrased — not a quote from the paper):
The unicorns were extremely friendly. The unicorns were extremely friendly. The unicorns were extremely friendly…
This isn’t a bug in beam search. The model genuinely assigns higher probability to the looping text than to a coherent paragraph. Why the distribution is shaped that way is still debated — Holtzman et al. document the effect; later work (e.g. Finlayson et al. 2024) traces it back to the softmax bottleneck and the way model errors compound on rare tokens. What’s solid is the observational fact: the argmax is degenerate, even though sampling from the same distribution produces human-like text. Sampling gives prose; maximizing gives mush.
Their conclusion is the load-bearing one: maximization is an inappropriate decoding objective for open-ended generation. The fix isn’t a smarter search; it’s a different objective. They proposed nucleus sampling (top-p), which is now ubiquitous.
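For concreteness, a minimal NumPy sketch of one top-p step over a next-token distribution (illustrative only, not the paper's reference implementation; the function name is mine):

```python
import numpy as np

def nucleus_sample(probs, p=0.9, rng=None):
    """Sample a token id from the smallest set of top tokens whose mass reaches p."""
    rng = rng or np.random.default_rng()
    order = np.argsort(probs)[::-1]                  # token ids, most probable first
    cum = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cum, p)) + 1        # size of the nucleus
    nucleus = order[:cutoff]
    renorm = probs[nucleus] / probs[nucleus].sum()   # renormalize inside the nucleus
    return int(rng.choice(nucleus, p=renorm))
```

The tail of the distribution, where the junk tokens live, never gets a chance; within the nucleus, the model's own relative probabilities are preserved.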
The “beam search curse” in translation, too
Even in machine translation — beam search’s home turf — the picture is messier than “more search is better.” Koehn and Knowles (2017, Six Challenges for Neural Machine Translation) documented the beam search curse: past modest beam widths, BLEU scores stop improving and eventually degrade. Larger beams find higher-probability sequences that are systematically too short relative to the reference; length-normalization heuristics push the sweet spot wider, but very large beams still hurt. The underlying fact is uncomfortable: even when MAP-decoding is roughly the right idea, doing it harder eventually hurts.
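The usual patch is length normalization: rank finished beams by per-token log-probability rather than the raw sum, so longer hypotheses aren't punished just for having more negative terms. A minimal form of it (the exponent is a tuning knob; real systems such as GNMT use slightly fancier penalties):

```python
def length_normalized_score(logprob_sum, length, alpha=0.6):
    """Rescore a finished hypothesis: divide its cumulative log-prob by length**alpha."""
    return logprob_sum / (length ** alpha)
```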
The shape of the lesson is the same in both worlds: the model’s probability-of-the-whole-sequence is not directly the thing you want to maximize.
Why sampling won
Sampling has three things going for it:
- It matches the model’s own objective. Models are trained to imitate the data distribution; drawing from that distribution gives outputs shaped like training data. Argmax-ing it produces a different beast — the mode, not a sample.
- It’s cheap. One forward pass per token, a single hypothesis in flight. Beam search with width k roughly scales decode memory and bandwidth with k — and with KV-cache-bound serving, that’s a real cost.
- It composes with the modern toolkit. Temperature, top-p, and top-k all live inside the sampling frame. They give you a knob for diversity without abandoning the model’s distribution (a sketch of how they compose follows this list).
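A toy sketch of how two of those knobs compose in a single decoding step (the function and its NumPy interface are mine; production stacks do the same arithmetic over batched GPU logits):

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, top_k=50, rng=None):
    """One sampling step: temperature rescales the logits, top-k truncates the
    tail, then we draw from the renormalized distribution."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature   # <1 sharpens, >1 flattens
    if top_k is not None and top_k < len(scaled):
        kth = np.sort(scaled)[-top_k]                         # value of the k-th largest logit
        scaled = np.where(scaled >= kth, scaled, -np.inf)     # mask everything below it
    probs = np.exp(scaled - scaled.max())                     # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```

A top-p truncation slots in at the same point as the top-k mask (see the nucleus_sample sketch earlier); the point is that all of these are filters applied to one distribution, not a different search procedure.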
There are exceptions where beam search still earns its keep: machine translation systems shipping to production, speech recognition with a clear ground-truth target, constrained decoding where you genuinely need the highest-scoring valid output. But for “talk to me,” it’s rarely the default anymore.
Where this gets murky
I’d flag a few things I’m not certain about and the post shouldn’t pretend otherwise:
- The exact reason language models concentrate probability on repeated phrases is still debated. Holtzman et al. document the effect; Finlayson et al.’s Closing the Curious Case of Neural Text Degeneration (ICLR 2024) attribute it to the softmax bottleneck and small per-token probability errors compounding. I haven’t read Finlayson et al. closely enough to summarize their argument in detail; readers should go to the paper.
- “Beam search isn’t the chat-API default” is a statement about commercial chat endpoints I’ve actually used. Open-source inference engines (Hugging Face Transformers, vLLM, etc.) still ship beam-search implementations; they’re just not what people reach for when running a chatbot.
Famous related terms
- Greedy decoding — greedy = beam search + k=1 — pick the argmax token each step. Fast, deterministic, prone to repetition.
- Nucleus sampling (top-p) — top-p = sampling + truncate the tail to mass p — the Holtzman et al. fix; default in most modern stacks.
- Top-k sampling — top-k = sampling + keep only the k highest-probability tokens — older cousin of top-p, less adaptive.
- Temperature — temperature = scale logits before softmax — the diversity dial that lives inside the sampling frame.
- MAP decoding — MAP = argmax over the whole sequence — what beam search is approximating, and what turns out to be the wrong objective for open-ended text.
- Minimum Bayes Risk (MBR) decoding — MBR = pick the candidate that minimizes expected loss vs. the model’s distribution — in practice approximated by sampling N candidates and returning the one most similar (by BLEU/COMET) to the others; see the sketch after this list. Seeing renewed interest as a sampling-era alternative to beam search in MT.
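A minimal sketch of that sampled approximation. The `similarity` argument stands in for whatever sentence-level metric you trust (BLEU, COMET, ...); nothing here is a specific library's API:

```python
def mbr_decode(candidates, similarity):
    """Sampled MBR: return the candidate with the highest average similarity to
    the other candidates, i.e. the lowest expected risk under the sample."""
    def consensus(cand):
        others = [c for c in candidates if c is not cand]
        return sum(similarity(cand, o) for o in others) / max(len(others), 1)
    return max(candidates, key=consensus)

# Usage sketch (hypothetical model/metric calls):
#   samples = [model.sample(prompt) for _ in range(32)]
#   best = mbr_decode(samples, similarity=sentence_bleu)
```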
Going deeper
- Holtzman, Buys, Du, Forbes, Choi (2019). The Curious Case of Neural Text Degeneration. The paper that named the failure mode and proposed nucleus sampling.
- Koehn and Knowles (2017). Six Challenges for Neural Machine Translation. The “beam search curse” appears here.
- Hugging Face blog: How to generate text. Side-by-side examples of greedy vs. beam vs. sampling vs. top-p on the same prompt — the fastest way to feel the difference.