
Michael Lepori

@michael-lepori

PhD student at Brown interested in deep learning + cog sci, but more interested in playing guitar.

124 Followers · 220 Following · 16 Posts · Joined 16.05.2025

Latest posts by Michael Lepori @michael-lepori

Language Models Struggle to Use Representations Learned In-Context
Though large language models (LLMs) have enabled great success across a wide variety of tasks, they still appear to fall short of one of the loftier goals of artificial intelligence research: creating...

Check out the paper for more analyses and details! Huge shout out to my advisors at Google (@tallinzen.bsky.social, Ann Yuan, and Katja Filippova) for supervising this project over the summer!

Paper Link: arxiv.org/abs/2602.04212

26.02.2026 17:28 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Overall, we show that while language models can learn novel, structured representations in context, they are a long way from being able to use these representations as a general-purpose in-context world model. Frontier models mitigate the problem somewhat, but do not eliminate it.

26.02.2026 17:28 πŸ‘ 3 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

These results were generated using relatively small instruction-tuned models. Do state-of-the-art reasoning models (like Gemini and GPT-5) also struggle to use representations that are learned in context? Broadly speaking, yes!

26.02.2026 17:28 πŸ‘ 5 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

We find that models also struggle to deploy their in-context representations in this task.

26.02.2026 17:28 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

We expand our investigation to a novel task: adaptive world modeling. This task combines graph tracing with a few-shot learning task that systematically maps tokens at one point in the topology to tokens at another (e.g., token_i -> token_i+2).

26.02.2026 17:28 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
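A minimal sketch of the adaptive world modeling setup described in the post above. The ring graph, token names, and helper names here are invented for illustration; the idea is just that few-shot pairs map each token to the token two steps further along the graph's topology (token_i -> token_i+2).

```python
# Hypothetical illustration of adaptive world modeling: graph tracing plus a
# few-shot mapping over the graph's topology. Tokens and graph are made up.
ring = ["apple", "bird", "chair", "door", "egg", "fish"]  # nodes of a cycle graph

def shift(token, steps=2):
    """Map a token to the token `steps` positions ahead on the ring."""
    return ring[(ring.index(token) + steps) % len(ring)]

# Few-shot examples demonstrating the token_i -> token_i+2 mapping.
few_shot = [(t, shift(t)) for t in ring[:3]]
# Held-out query: the model must apply the same mapping to a new token.
query = "door"
target = shift(query)  # "fish"
```

Solving the query requires combining the in-context graph structure (where each token sits on the ring) with the abstract mapping induced from the few-shot pairs.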

Surprisingly, we find that models perform dramatically worse in the instruction condition, despite having encoded the in-context representations just as well as in the prefilled condition!

26.02.2026 17:28 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

...and another where the model is instructed to generate the next word in the sequence. The prefilled condition is identical to the standard graph tracing task, whereas the instruction condition requires the model to delay its prediction for several tokens.

26.02.2026 17:28 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

First, we study whether in-context representations are deployable when an instruction-tuned model is tasked with performing next-word prediction. We consider two conditions: one in which the sequence is formatted as a prefilled model response...

26.02.2026 17:28 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
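The two conditions in these posts can be sketched as chat-formatted prompts. The exact wording and message schema below are assumptions; only the structural contrast matters: in the prefilled condition the walk sits in the assistant turn and the model simply continues it, while in the instruction condition the walk sits in the user turn and the model's prediction is delayed by the instruction scaffolding.

```python
# Hedged sketch of the two evaluation conditions; prompt text is invented.
walk = "apple bird chair bird apple door"  # toy random-walk token sequence

prefilled_condition = [
    # The sequence is prefilled as the start of the model's own response,
    # so the very next generated token continues the walk directly.
    {"role": "assistant", "content": walk},
]

instruction_condition = [
    # The model must process an instruction before emitting its prediction,
    # delaying the continuation by several tokens.
    {"role": "user", "content": f"Continue this sequence with the next word: {walk}"},
]
```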

We consider an in-context learned representation to be flexibly deployable if it can be used in new contexts. Otherwise, we call the representation inert.

26.02.2026 17:28 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Among other things, that work shows that LM representations come to reflect the graph's topology. This indicates that models can learn novel, structured representations in context. We push this a step further and ask whether these representations can be used to solve new tasks!

26.02.2026 17:28 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Our starting point is an excellent paper from @corefpark.bsky.social et al. This work defines an in-context graph tracing task, which involves next-word prediction on a sequence of words generated by a random walk on a graph. (Figure lifted from their beautiful paper!)

26.02.2026 17:28 πŸ‘ 5 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
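The graph tracing task described above can be sketched in a few lines. The graph, token names, and function below are illustrative assumptions, not the paper's actual stimuli: a sequence is generated by a random walk over a small graph, and the model is scored on predicting valid next tokens.

```python
import random

def random_walk_sequence(adjacency, start, length, seed=0):
    """Generate a token sequence via a random walk on `adjacency`."""
    rng = random.Random(seed)
    node = start
    walk = [node]
    for _ in range(length - 1):
        node = rng.choice(adjacency[node])  # step to a random neighbor
        walk.append(node)
    return walk

# A toy 4-cycle: apple - bird - chair - door - apple
graph = {
    "apple": ["bird", "door"],
    "bird": ["apple", "chair"],
    "chair": ["bird", "door"],
    "door": ["chair", "apple"],
}

walk = random_walk_sequence(graph, "apple", 20)
prompt = " ".join(walk)  # the model must continue this sequence
```

A model that has inferred the graph's topology from the prompt should assign probability only to neighbors of the final token.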

AI is now being deployed for long-horizon tasks. This has renewed the relevance of a longstanding ambition: to build systems capable of flexibly adapting to different environments. We ask whether LLMs can already accomplish this goal in controlled, synthetic settings.

26.02.2026 17:28 πŸ‘ 4 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

🚨New preprint! In-context learning underlies LLMs’ real-world utility, but what are its limits? Can LLMs learn completely novel representations in context and flexibly deploy them to solve tasks? In other words, can LLMs construct an in-context world model? Let’s see! πŸ‘€

26.02.2026 17:28 πŸ‘ 37 πŸ” 5 πŸ’¬ 1 πŸ“Œ 1

Huge shoutout to my collaborators and advisors @jennhu.bsky.social, Ishita Dasgupta, @romapatel.bsky.social, @thomasserre.bsky.social, and Ellie Pavlick for their contributions to this project!

26.02.2026 00:25 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I'm excited to share that this paper was accepted at ICLR 2026! We show that language models encode one of the most basic ingredients of a world model: the ability to distinguish plausible from implausible states. Check out the paper for more details!

See you in Rio!
Paper: arxiv.org/abs/2507.12553

26.02.2026 00:22 πŸ‘ 30 πŸ” 6 πŸ’¬ 3 πŸ“Œ 0

I had a great time helping out on this project with @jennhu.bsky.social and Michael Franke! If you're interested in the intersection of interpretability and cogsci, check it out!

21.05.2025 16:13 πŸ‘ 9 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0