
David Heineman

@davidheineman.com

Pre-doc @ai2.bsky.social davidheineman.com

29 Followers · 176 Following · 6 Posts · Joined 11.11.2024

Latest posts by David Heineman @davidheineman.com

Signal and Noise: A Framework for Reducing Uncertainty in Language Model Evaluation
Developing large language models is expensive and involves making decisions with small experiments, typically by evaluating on large, multi-task evaluation suites. In this work, we analyze specific pr...

(6/6) A huge thanks to my collaborators! @valentinhofmann.bsky.social @ianmagnusson.bsky.social Yuling Gu @nlpnoah.bsky.social @hanna-nlp.bsky.social @kylelo.bsky.social @jessedodge.bsky.social

πŸ“„: arxiv.org/abs/2508.13144
πŸ“: allenai.org/blog/signal-noise
πŸ’»: github.com/allenai/signal-and-noise

19.08.2025 16:46 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

(5/6) SNR naturally gives a way to improve benchmarks, we introduce 3 β€œinterventions” in our work! For example:

❗️ Simply using the top 16 MMLU subtasks by SNR exhibits better decision accuracy and lower scaling law error than using the full task (only 6 for an AutoBencher task)

19.08.2025 16:46 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
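The subtask-selection intervention above can be sketched in a few lines. This is a minimal sketch assuming per-subtask SNR values are already computed; the function name and all numbers are illustrative, not from the paper's codebase:

```python
def top_subtasks_by_snr(subtask_snr, k=16):
    """Keep only the k subtasks with the highest SNR.

    subtask_snr: dict mapping subtask name -> SNR estimate.
    Averaging scores over this subset, rather than the full suite,
    is the intervention described in the thread.
    """
    ranked = sorted(subtask_snr, key=subtask_snr.get, reverse=True)
    return ranked[:k]

# toy MMLU-style subtask SNRs (made-up values for illustration)
snr_by_subtask = {"abstract_algebra": 1.2, "anatomy": 3.4,
                  "astronomy": 2.8, "virology": 0.9}
print(top_subtasks_by_snr(snr_by_subtask, k=2))  # ['anatomy', 'astronomy']
```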

(4/6) 🧐 How do we know SNR is meaningful? We can (1) calculate the % of models ranked correctly at a small scale vs. the 1B scale and (2) fit scaling laws to predict task performance.

Higher SNR is predictive of better decision accuracy, and tasks with lower noise have lower scaling law error!

19.08.2025 16:46 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
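Decision accuracy here can be read as pairwise ranking agreement between scales. A minimal sketch under that assumption; the function name and scores are hypothetical, not the paper's exact implementation:

```python
from itertools import combinations

def decision_accuracy(small_scores, large_scores):
    """Fraction of model pairs ordered the same way by the small-scale
    scores as by the large-scale (e.g. 1B) scores."""
    pairs = list(combinations(small_scores, 2))
    agree = sum(
        (small_scores[a] - small_scores[b]) * (large_scores[a] - large_scores[b]) > 0
        for a, b in pairs
    )
    return agree / len(pairs)

# toy benchmark scores for models pretrained on three corpora
small = {"corpus_A": 0.41, "corpus_B": 0.38, "corpus_C": 0.45}
large = {"corpus_A": 0.55, "corpus_B": 0.50, "corpus_C": 0.61}
print(decision_accuracy(small, large))  # all 3 pairs agree -> 1.0
```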

(3/6) πŸ”Ž We landed on a simple metric, the signal-to-noise ratio: the ratio between the dispersion of scores across models and the variation across the final checkpoints of a single model.

This allows estimating SNR with a small number of models (around 50) at any compute scale!

19.08.2025 16:46 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
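That definition can be sketched directly. A minimal sketch, assuming standard deviation as the dispersion/variation estimator; the paper's exact estimators may differ:

```python
import statistics

def snr(model_scores, checkpoint_scores):
    """Hypothetical signal-to-noise ratio for one benchmark.

    model_scores: final score of each model (dispersion across
        these is the "signal").
    checkpoint_scores: scores of the last few checkpoints of one
        model (variation across these is the "noise").
    """
    signal = statistics.pstdev(model_scores)      # spread across models
    noise = statistics.pstdev(checkpoint_scores)  # step-to-step wobble
    return signal / noise

# toy numbers: well-separated models + stable checkpoints -> high SNR
print(snr([0.30, 0.45, 0.60], [0.598, 0.600, 0.602]))
```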

(2/6) Consider these training curves: 150M, 300M and 1B param models on 25 pretraining corpora. Many benchmarks separate models well but are too noisy, and vice versa! 😧

We want ⭐ low noise and high signal ⭐: *both* low variance during training and a high spread of scores.

19.08.2025 16:46 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 1

Evaluating language models is tricky: how do we know if our results are real or due to random chance?

We find an answer with two simple metrics: signal, a benchmark’s ability to separate models, and noise, a benchmark’s random variability between training steps 🧡

19.08.2025 16:46 πŸ‘ 15 πŸ” 4 πŸ’¬ 2 πŸ“Œ 0
The RewardBench 2 Leaderboard on HuggingFace.

RewardBench 2 is here! We spent a long time learning from our first reward-model evaluation tool to build one that is substantially harder and more correlated with both downstream RLHF and inference-time scaling.

02.06.2025 16:31 πŸ‘ 20 πŸ” 8 πŸ’¬ 1 πŸ“Œ 1