Signal and Noise: A Framework for Reducing Uncertainty in Language Model Evaluation
Developing large language models is expensive and involves making decisions with small experiments, typically by evaluating on large, multi-task evaluation suites. In this work, we analyze specific properties...
(6/6) A huge thanks to my collaborators! @valentinhofmann.bsky.social @ianmagnusson.bsky.social Yuling Gu @nlpnoah.bsky.social @hanna-nlp.bsky.social @kylelo.bsky.social @jessedodge.bsky.social
📄 Paper: arxiv.org/abs/2508.13144
🌐 Blog: allenai.org/blog/signal-noise
💻 Code: github.com/allenai/signal-and-noise
19.08.2025 16:46
(5/6) SNR naturally suggests a way to improve benchmarks; we introduce 3 "interventions" in our work! For example:
Simply using the top 16 MMLU subtasks by SNR gives better decision accuracy and lower scaling-law error than using the full task (only 6 subtasks for an AutoBencher task); see the sketch below.
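For illustration, a minimal sketch of this intervention in Python, assuming per-subtask SNRs have already been computed; the helper and values below are hypothetical, not the released code:

```python
def top_k_subtasks_by_snr(subtask_snrs, k=16):
    """Keep the k subtasks with the highest signal-to-noise ratio.

    subtask_snrs: dict mapping subtask name -> precomputed SNR.
    Hypothetical helper; a sketch of the intervention, not the
    implementation in github.com/allenai/signal-and-noise.
    """
    ranked = sorted(subtask_snrs, key=subtask_snrs.get, reverse=True)
    return ranked[:k]

# Toy MMLU-style subtask SNRs; keep the top 2 of these 3.
print(top_k_subtasks_by_snr(
    {"abstract_algebra": 1.2, "anatomy": 3.4, "astronomy": 2.7}, k=2))
# -> ['anatomy', 'astronomy']
```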
19.08.2025 16:46
(4/6) 🧐 How do we know SNR is meaningful? We can (1) calculate the % of model pairs ranked correctly at a small scale vs. 1B scale, and (2) fit scaling laws to predict task performance.
Higher SNR predicts better decision accuracy, and tasks with lower noise have lower scaling-law error!
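As a hedged sketch, decision accuracy can be computed as pairwise ranking agreement between scales; the function name and data here are illustrative, not the paper's exact implementation:

```python
from itertools import combinations

def decision_accuracy(small_scores, large_scores):
    """Fraction of model pairs whose ranking at the small scale
    matches their ranking at the large (e.g., 1B) scale.
    Sketch only: ties count as disagreements here.
    """
    pairs = list(combinations(range(len(small_scores)), 2))
    agree = sum(
        (small_scores[i] - small_scores[j])
        * (large_scores[i] - large_scores[j]) > 0
        for i, j in pairs
    )
    return agree / len(pairs)

# All 3 pairs keep their ordering across scales -> accuracy 1.0.
print(decision_accuracy([0.30, 0.35, 0.40], [0.50, 0.58, 0.66]))
```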
19.08.2025 16:46
(3/6) We landed on a simple metric, the signal-to-noise ratio (SNR): the dispersion of scores across models divided by the variation across the final checkpoints of a single model.
This lets us estimate SNR with a small number of models (around 50) at any compute scale!
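In code, a minimal sketch of this ratio, assuming standard deviations as the dispersion and variation estimators (the paper's exact estimators may differ):

```python
import numpy as np

def snr(model_scores, checkpoint_scores):
    """Signal-to-noise ratio of a benchmark.

    model_scores:      final scores, one per model (e.g., ~50 models
                       at one compute scale) -> signal.
    checkpoint_scores: one model's scores over its final training
                       checkpoints -> noise.
    Assumes std as the estimator for both; a sketch, not the
    paper's exact formulation.
    """
    signal = np.std(model_scores)      # dispersion across models
    noise = np.std(checkpoint_scores)  # jitter across final checkpoints
    return signal / noise

# Well-separated models with tiny checkpoint jitter -> high SNR.
models = [0.31, 0.35, 0.42, 0.47, 0.55]
checkpoints = [0.468, 0.471, 0.469, 0.472, 0.470]
print(f"SNR = {snr(models, checkpoints):.0f}")
```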
19.08.2025 16:46
(2/6) Consider these training curves: 150M, 300M, and 1B param models trained on 25 pretraining corpora. Many benchmarks can separate models but are too noisy, and vice versa! 🧐
We want low noise and high signal: *both* low variance during training and a high spread of scores.
19.08.2025 16:46
Evaluating language models is tricky: how do we know if our results are real or due to random chance?
We find an answer with two simple metrics: signal, a benchmark's ability to separate models, and noise, a benchmark's random variability between training steps 🧵
19.08.2025 16:46
[Image: the RewardBench 2 leaderboard on Hugging Face.]
RewardBench 2 is here! We took our time learning from our first reward-model evaluation tool to build one that is substantially harder and more correlated with both downstream RLHF and inference-time scaling.
02.06.2025 16:31