
Valentin Hofmann

@valentinhofmann

Assistant Professor @cislmu.bsky.social @lmu.de

2,351
Followers
167
Following
39
Posts
05.10.2023
Joined

Latest posts by Valentin Hofmann @valentinhofmann


📢 Life update 📢

After a wonderful time at @ai2.bsky.social, I've joined @cislmu.bsky.social at @lmu.de as a tenure-track assistant professor in NLP. Thrilled to be back in Europe and to start a lab in Munich's flourishing AI ecosystem! 🎉

03.03.2026 14:58 👍 19 🔁 0 💬 1 📌 0

Demographic cues (e.g., names, dialect) are widely used to study how LLM behavior may change depending on user demographics. Such cues are often assumed to be interchangeable.

🚨 We show they are not: different cues yield different model behavior for the same group and different conclusions about LLM bias. 🧵👇

27.01.2026 13:07 👍 18 🔁 10 💬 1 📌 0

Introducing Bolmo, a new family of byte-level language models built by "byteifying" our open Olmo 3, and to our knowledge the first fully open byte-level LM to match or surpass SOTA subword models across a wide range of tasks. 🧵

15.12.2025 17:19 👍 75 🔁 15 💬 1 📌 4

Excited to see our #COLM2025 paper on fluid benchmarking highlighted by @eval-eval.bsky.social! They are worth a follow if you are into LLM eval research. 🔬

31.10.2025 17:25 👍 2 🔁 0 💬 0 📌 0

There's plenty of evidence for political bias in LLMs, but very few evals reflect realistic LLM use cases, which is where bias actually matters.

IssueBench, our attempt to fix this, has been accepted at TACL, and I will be at #EMNLP2025 next week to talk about it!

New results 🧵

29.10.2025 16:11 👍 32 🔁 11 💬 1 📌 0

Check out this #EMNLP2025 paper led by @minhducbui.bsky.social and @carolin-holtermann.bsky.social showing that dialect prejudice remains a major issue in current LLMs.

Example: GPT-5 associates German dialect speakers with being uneducated and steers them toward stereotyped jobs (e.g., farmworkers).

👇

14.10.2025 16:01 👍 7 🔁 0 💬 0 📌 0

Thanks, Jordan! Your ACL 2021 paper was a huge source of inspiration for us!

19.09.2025 19:04 👍 1 🔁 0 💬 0 📌 0

We did not specifically analyze novel models as your paper did. While I am optimistic that Fluid Benchmarking improves over static IRT-based methods in this regime as well, there are definitely limitations, which we discuss in the paragraph below.

Would be exciting to run more experiments on this!

19.09.2025 18:52 👍 0 🔁 0 💬 0 📌 0

In our experiments, we find that this dynamic approach consistently outperforms static IRT-based methods. The improvements are especially pronounced in terms of variance, which poses a major challenge for static IRT-based methods. We discuss this in more detail in the paragraph below.

19.09.2025 18:52 👍 0 🔁 0 💬 1 📌 0

Great question! The key difference is that we use IRT to dynamically adapt the subset of items to a model's capability, rather than to determine a static, "globally optimal" subset of items as in prior work. With Fluid Benchmarking, each model is evaluated on a different subset of items.

19.09.2025 18:52 👍 0 🔁 0 💬 1 📌 0

LM benchmark design requires 3 decisions, how to:
🐟 select test cases
🐠 score LM on each test
🦈 aggregate scores to estimate perf

fluid benchmarking is simple:
🐣 find max informative test cases
🐥 estimate 'ability', not simple avg perf

why care? turn ur grey noisy benchmarks to red ones!

17.09.2025 18:17 👍 5 🔁 2 💬 0 📌 0

Last but not least, a huge shoutout to my incredible coauthors @davidheineman.com, @ianmagnusson.bsky.social, @kylelo.bsky.social, @jessedodge.bsky.social, @maartensap.bsky.social, Pang Wei Koh, Chun Wang, @hanna-nlp.bsky.social, and @nlpnoah.bsky.social! 🤗

16.09.2025 17:16 👍 3 🔁 0 💬 0 📌 0

For details, check out our paper, blog, code, and data:

📄 arxiv.org/abs/2509.11106
✍️ allenai.org/blog/fluid-b...
💻 github.com/allenai/flui...
📊 huggingface.co/datasets/all...

Looking forward to chatting more at #COLM2025! 👋

16.09.2025 17:16 👍 2 🔁 0 💬 1 📌 0

Overall, our work shows that LLM evaluations can be substantially improved by moving beyond static benchmarking, which has been the universal practice until now and assumes a globally optimal set of evaluation questions for all models.

16.09.2025 17:16 👍 2 🔁 0 💬 1 📌 0

These advantages (and more) come while simultaneously reducing evaluation cost.

Example: on MMLU, Fluid Benchmarking achieves lower step-to-step variance and higher validity than standard methods while using 50 times fewer questions. ⚡

16.09.2025 17:16 👍 1 🔁 0 💬 1 📌 0

Fluid Benchmarking substantially reduces step-to-step variance during pretraining.

It also increases validity: results generalize better to other benchmarks targeting the same capability. One reason: it automatically avoids mislabeled questions, cutting label errors by 99%! 🤯

16.09.2025 17:16 👍 1 🔁 0 💬 1 📌 0

In our experiments, we apply Fluid Benchmarking to evaluation during pretraining, a setting where capabilities evolve rapidly.

We find that Fluid Benchmarking dynamically adapts to these changes, administering easier questions early in training and more difficult ones later.

16.09.2025 17:16 👍 1 🔁 0 💬 1 📌 0

Fluid Benchmarking repeats this loop until the number of administered questions reaches the allotted budget.

Adaptive question selection means that LLMs face different sets of questions, but ability estimation aligns results in a common space.

16.09.2025 17:16 👍 1 🔁 0 💬 1 📌 0
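This loop can be sketched in a few lines of Python under the standard 2PL IRT model. This is a toy illustration only, not the released Fluid Benchmarking code: the item pool, the `answer` oracle, and all function names are invented for the example.

```python
import math

def p_correct(theta, a, b):
    # 2PL IRT: probability of a correct answer given ability theta,
    # item discrimination a, and item difficulty b.
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def information(theta, a, b):
    # Fisher information of a 2PL item at ability theta: a^2 * p * (1 - p).
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def estimate_theta(responses):
    # Grid-search maximum-likelihood ability estimate over [-4, 4].
    best, best_ll = 0.0, -math.inf
    for i in range(161):
        theta = -4.0 + i * 0.05
        ll = sum(math.log(p_correct(theta, a, b) if c
                          else 1.0 - p_correct(theta, a, b))
                 for c, a, b in responses)
        if ll > best_ll:
            best, best_ll = theta, ll
    return best

def fluid_eval(pool, answer, budget):
    # pool: {item_id: (a, b)}; answer(item_id) -> bool; budget: max #items.
    remaining, responses, theta = dict(pool), [], 0.0
    while remaining and len(responses) < budget:
        # Administer the currently most informative question...
        item = max(remaining, key=lambda i: information(theta, *remaining[i]))
        a, b = remaining.pop(item)
        responses.append((answer(item), a, b))
        # ...then re-estimate ability. Theta lives in a common space even
        # though each model ends up seeing a different subset of questions.
        theta = estimate_theta(responses)
    return theta
```

For a hypothetical model that deterministically solves every item easier than b = 1, the loop homes in on an ability estimate near that threshold after only a handful of administered items.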

In Fluid Benchmarking, we start with an initial ability estimate from one question.

To select the next question, we use Fisher information. Essentially, we favor a question whose difficulty (b) is close to the ability estimate (θ) and whose discrimination (a) is high.

Then we update the estimate.

16.09.2025 17:16 👍 1 🔁 0 💬 1 📌 0
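The selection rule can be illustrated with the standard 2PL information function; the three-item pool and the `fisher_information` helper below are made up for the example.

```python
import math

def p_correct(theta, a, b):
    # 2PL IRT response curve.
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_information(theta, a, b):
    # For a 2PL item: I(theta) = a^2 * p * (1 - p), maximal when
    # difficulty b is near theta and discrimination a is high.
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

# Hypothetical item pool as (discrimination a, difficulty b) pairs:
# too easy, well matched, too hard.
items = [(0.8, -2.0), (1.6, 0.1), (1.2, 3.0)]
theta = 0.0  # current ability estimate

# The well-matched, highly discriminating item wins.
best = max(items, key=lambda item: fisher_information(theta, *item))
```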

In addition, IRT models each LLM's ability, which can be estimated from its responses to questions with known difficulty and discrimination.

The IRT ability estimate can be used to summarize performance, like accuracy, but it additionally accounts for question characteristics.

16.09.2025 17:16 👍 0 🔁 0 💬 1 📌 0
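A minimal sketch of such an ability estimate, assuming the standard 2PL model and a simple grid-search MLE; the `estimate_ability` helper and the response patterns are invented for illustration.

```python
import math

def p_correct(theta, a, b):
    # 2PL IRT response curve.
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def estimate_ability(responses):
    # Grid-search maximum likelihood over theta in [-4, 4].
    # responses: one (correct, discrimination a, difficulty b) per question.
    best_theta, best_ll = 0.0, -math.inf
    for i in range(801):
        theta = -4.0 + i * 0.01
        ll = sum(math.log(p_correct(theta, a, b) if correct
                          else 1.0 - p_correct(theta, a, b))
                 for correct, a, b in responses)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta

# A model that also solves the harder questions gets a higher ability
# estimate than one that only solves the easy question.
strong = estimate_ability([(True, 1.0, -1.0), (True, 1.0, 0.5), (True, 1.0, 1.5)])
weak = estimate_ability([(True, 1.0, -1.0), (False, 1.0, 0.5), (False, 1.0, 1.5)])
```

Unlike plain accuracy, the estimate weights each response by the question's difficulty and discrimination, so two models with the same accuracy on different questions can land at different abilities.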

To get a question's difficulty, we use item response theory (IRT): we analyze responses of hundreds of LLMs to see how often a question is answered correctly.

IRT also measures the discrimination of a question, meaning how reliably it separates stronger from weaker LLMs.

16.09.2025 17:16 👍 1 🔁 0 💬 1 📌 0
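The underlying 2PL model can be written down directly; this is the generic textbook formulation, not necessarily the paper's exact parameterization, and all item values below are hypothetical.

```python
import math

def p_correct(theta, a, b):
    # 2PL IRT: difficulty b shifts the curve along the ability axis, while
    # discrimination a controls how sharply the item separates weak from
    # strong models.
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An average model (theta = 0) on an easy vs. a hard question:
easy = p_correct(0.0, a=1.5, b=-1.0)   # well above 50% chance of success
hard = p_correct(0.0, a=1.5, b=2.0)    # well below

# A highly discriminating item separates two models more sharply
# than a flat, low-discrimination one.
gap_sharp = p_correct(1.0, 2.5, 0.0) - p_correct(-1.0, 2.5, 0.0)
gap_flat = p_correct(1.0, 0.5, 0.0) - p_correct(-1.0, 0.5, 0.0)
```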

Test theory says: questions are most informative when matched to a test taker's ability.

For LLMs, that means evaluating weaker models on easier questions and stronger models on harder ones.

But how do we know a question's difficulty, or an LLM's ability, before evaluation? ๐Ÿค”

16.09.2025 17:16 👍 1 🔁 0 💬 1 📌 0

📢 New #COLM2025 paper 📢

Standard benchmarks give every LLM the same questions. This is like testing 5th graders and college seniors with *one* exam! 🥴

Meet Fluid Benchmarking, a capability-adaptive eval method delivering lower variance, higher validity, and reduced cost.

🧵

16.09.2025 17:16 👍 41 🔁 10 💬 3 📌 1

I am delighted to share our new #PNAS paper, with @grvkamath.bsky.social @msonderegger.bsky.social and @sivareddyg.bsky.social, on whether age matters for the adoption of new meanings. That is, as words change meaning, does the rate of adoption vary across generations? www.pnas.org/doi/epdf/10....

29.07.2025 12:31 👍 49 🔁 13 💬 3 📌 1

Attending #ICML2025? Don't miss this TokShop panel, which will explore:

🔮 The Future of Tokenization 🔮

Featuring a stellar lineup of panelists - mark your calendar! ✨

16.07.2025 15:28 👍 4 🔁 0 💬 0 📌 0

LLMs can appear unbiased on the surface but still perpetuate racist views in subtle ways.

What causes this discrepancy? 🔍

In our upcoming #ACL2025 paper, we find a pattern akin to racial colorblindness: LLMs suppress race in ambiguous contexts, leading to biased outcomes.

10.06.2025 18:13 👍 6 🔁 0 💬 0 📌 0

📣 We are extending the submission deadline by 24 hours to avoid a conflict with the ACL camera-ready deadline.

📅 New Submission Deadline: May 31, 2025 (23:59 AoE)

📩 OpenReview: openreview.net/group?id=ICM...

30.05.2025 21:52 👍 1 🔁 1 💬 0 📌 0

Huge congrats, Adam!!! 🎉

29.05.2025 16:15 👍 1 🔁 0 💬 0 📌 0

Got a good tokenization paper under review at COLM, but the scores were a letdown? 😬

Why bother with a rebuttal when the perfect venue is right around the corner?

Submit your paper to the #ICML2025 Tokenization Workshop (TokShop) by May 30! 🚀

28.05.2025 08:24 👍 11 🔁 4 💬 0 📌 0

Beyond text: Modern AI tokenizes images too! Vision models split photos into patches, treating each 16x16 pixel square as a "token." 🖼️➡️🔤 #VisualTokenization

Interested in tokenization? Join our workshop: tokenization-workshop.github.io
The submission deadline is coming up on May 30!

26.05.2025 19:55 👍 4 🔁 2 💬 0 📌 0
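The patch step can be sketched in a few lines of Python. A toy illustration with plain nested lists; `patchify` is a made-up helper, and real pipelines use tensors and add a linear projection per patch.

```python
# Visual tokenization sketch: split an image into non-overlapping
# 16x16 patches, the "tokens" a vision transformer consumes.

PATCH = 16

def patchify(image):
    """image: 2D list with height and width divisible by PATCH.
    Returns a list of flattened 16x16 patches, in raster order."""
    height, width = len(image), len(image[0])
    patches = []
    for top in range(0, height, PATCH):
        for left in range(0, width, PATCH):
            patches.append([image[top + r][left + c]
                            for r in range(PATCH) for c in range(PATCH)])
    return patches

# A 224x224 image becomes (224 / 16) ** 2 = 196 tokens of 256 pixels each,
# as in the original ViT setup.
tokens = patchify([[0] * 224 for _ in range(224)])
```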