Our ICML 2025 workshop on Actionable Interpretability drew massive interest. But the same questions kept coming up: What does "actionable" mean? Is it achievable? How?
We're ready to answer.
🧵
Excited to share our new dataset, FOL-Traces!
We introduce a large-scale dataset of programmatically verified FOL reasoning traces for studying structured logical inference + process fidelity.
Happy to hear thoughts from others working on reasoning in LLMs!
Check it out here 👇
paper: arxiv.org/abs/2505.14932
dataset: huggingface.co/datasets/fo...
work w/ @sarahliaw.bsky.social and Dani Yogatama
If you want to chat about interpretability & training dynamics & reasoning and munch on mezzes, come hang out with me in Rabat 🇲🇦
9/9
I wanted to study reasoning acquisition in training by complexity + process fidelity but wasn't able to find a dataset. So we built one that's rigorously annotated and large enough to train a small LM. Now I'm excited about what we can do with it
8/9
[Image: bar graph of model accuracy on the two-step prediction task, one color per step]
[Image: bar chart of model accuracy across three complexity thresholds: 10-19, 20-29, and 30+]
A harder task: last-step prediction, ¬(¬Sunny(x) ∧ ¬Breezy(x)) → [MASK], or last-two-step prediction. Most LLMs achieve <50% accuracy on both tasks.
(n.b. since FOL is verifiable, we count as correct any generation that's logically equivalent to the target expression.)
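The equivalence check behind "correct" can be sketched with sympy's propositional logic, treating Sunny(x) and Breezy(x) as atoms S and B for a single x. This is a minimal illustration, not the paper's actual verifier; `is_correct` and the variable names are made up here.

```python
from sympy import symbols
from sympy.logic.boolalg import And, Or, Not, Equivalent
from sympy.logic.inference import satisfiable

S, B = symbols("S B")

gold = Not(And(Not(S), Not(B)))   # ¬(¬S ∧ ¬B)
generation = Or(S, B)             # a model's predicted last step: S ∨ B

def is_correct(pred, target):
    # Two formulas are equivalent iff their non-equivalence is unsatisfiable.
    return satisfiable(Not(Equivalent(pred, target))) is False

print(is_correct(generation, gold))  # True: this is de Morgan's law
```

Any syntactically different but semantically equivalent generation passes the same check, which is what makes the task verifiable.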
7/9
[Image: table of model accuracies on the component, operator, and predicate prediction tasks]
e.g. masked prediction: we mask a random operator and have LLMs guess it: ¬(¬Sunny(x) ∧ ¬Breezy(x)) → (Sunny(x) [MASK] Breezy(x)). LLMs average only ~45.7% accuracy:
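The masking setup can be sketched like this. Whitespace tokenization, the ASCII operator set, and `mask_random_operator` are assumptions for illustration, not the dataset's actual serialization:

```python
import random

# Toy sketch of masked-operator prediction: hide one randomly chosen
# binary operator in a serialized expression and keep it as the label.
OPS = {"&", "|", "->"}  # ASCII stand-ins for ∧, ∨, →

def mask_random_operator(tokens, rng):
    positions = [i for i, t in enumerate(tokens) if t in OPS]
    i = rng.choice(positions)
    masked = list(tokens)
    label = masked[i]
    masked[i] = "[MASK]"
    return masked, label

expr = "~ ( ~ Sunny(x) & ~ Breezy(x) ) -> ( Sunny(x) | Breezy(x) )"
masked, label = mask_random_operator(expr.split(), random.Random(0))
print(" ".join(masked))  # expression with one operator replaced by [MASK]
print(label)             # the held-out operator the model must predict
```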
6/9
...resulting in a bunch of reasoning traces that are verifiably correct with measurable programmatic complexity. And we find that they're very hard for LLMs!
Let's consider an example w/ de Morgan's law: ¬(¬Sunny(x) ∧ ¬Breezy(x)) → (Sunny(x) ∨ Breezy(x))
5/9
[Image: flowchart of the pipeline: First-Order Logic with human rules, symbolic generation, then LLM instantiation into reasoning examples]
So how do we strike a balance? We propose using First-Order Logic (FOL) as a middle ground. We:
1. programmatically generate a large pool of random FOL expressions
2. progressively simplify them, verifying equivalence at each step
3. chain the simplification steps together
4. instantiate them in natural language w/ LLMs
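Steps 1-2 can be sketched with sympy. This is a propositional toy, not the paper's generator; `random_formula`, the atom set, and the depth parameter are all illustrative assumptions:

```python
import random
from sympy import symbols
from sympy.logic.boolalg import And, Or, Not, Equivalent, simplify_logic
from sympy.logic.inference import satisfiable

ATOMS = symbols("P Q R")

def random_formula(depth, rng):
    # Step 1: recursively build a random formula over the atoms.
    if depth == 0:
        return rng.choice(ATOMS)
    op = rng.choice(["and", "or", "not"])
    if op == "not":
        return Not(random_formula(depth - 1, rng))
    ctor = And if op == "and" else Or
    return ctor(random_formula(depth - 1, rng), random_formula(depth - 1, rng))

rng = random.Random(0)
f = random_formula(3, rng)
g = simplify_logic(f)  # step 2: a simplified form of the same formula

# Verify the simplification: equivalence holds iff non-equivalence is unsat.
assert satisfiable(Not(Equivalent(f, g))) is False
```

Chaining each simplification step (step 3) then yields a trace where every link is machine-checked, which is what makes the traces verifiable by construction.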
4/9
We mostly interface with LLMs through words, but evaluating NL reasoning is messy. On the other hand, something like math reasoning gives us concrete, objectively correct answers, but it's narrow and doesn't look like NL.
3/9
There are many evals and benchmarks in this field, but natural language (NL) reasoning is tricky: meaning depends on context (commonsense), shared assumptions (pragmatics), and what's unsaid (abduction). Pattern shortcuts/heuristics ≠ logical inference.
2/9
[Image: paper title card for FOL-Traces, a dataset for evaluating logical reasoning in language models]
New dataset coming to #eacl
What is (correct) reasoning in LLMs? How do you rigorously define/measure process fidelity? How might we study its acquisition in large-scale training? We built a gigantic set of verifiably correct reasoning traces over first-order logic expressions!
1/9
gemini summarized my google search when i was tryna look for an anti-new years resolution blog post. it says in highlight: "Approximately 80% to 88% of New Year's resolutions fail by mid-February, ..."
one of my new years "considerations" is to be less silent #onhere. so i guess i'll be #here and maybe also #there til february 15th
If you're interested in interpretability driven evaluations, I'd love to hear from you! And stay tuned for more work from us :)
Really excited to receive Coefficient Giving's Technical AI Safety Research Grant via Berkeley Existential Risk Initiative w/
@nsaphra.bsky.social! We aim to use interpretability to predict potential AI model failures before they cause harm, i.e. before deployment.
Excited to share our paper: "Chain-of-Thought Is Not Explainability"! We unpack a critical misconception in AI: models explaining their steps (CoT) aren't necessarily revealing their true reasoning. Spoiler: the transparency can be an illusion. (1/9) 🧵
we've reached that point in this submission cycle, no amount of coffee will do ☕
INCOMING
a leaf falls on moo deng the pygmy hippo, blocking her vision
moo deng is upset presumably because she can't see!
titled: peer review
[Image: the CDS building, which looks like a Jenga tower]
Life update: I'm starting as faculty at Boston University
@bucds.bsky.social in 2026! BU has SCHEMES for LM interpretability & analysis, I couldn't be more pumped to join a burgeoning supergroup w/ @najoung.bsky.social @amuuueller.bsky.social. Looking for my first students, so apply and reach out!
or if you're awesome and happen to be in sf, also message me
pls message me if you wanna meet up for coffee and chat about ai/physics/llms/interpretability
really excited to be headed to OFC in SF! so excited to revisit optical physics ✨
Transformers employ different strategies through training to minimize loss, but how do these trade off, and why?
Excited to share our newest work, where we show remarkably rich competitive and cooperative interactions (termed "coopetition") as a transformer learns.
Read on 👇
i use the same template and need help getting a butterfly button help
New paper, accepted as *spotlight* at #ICLR2025! 🧵👇
We show that a competition dynamic between several algorithms splits a toy model's ICL abilities into four broad phases across train/test settings! This means ICL is akin to a mixture of different algorithms, not a monolithic ability.
Starlings move in undulating curtains across the sky. Forests of bamboo blossom at once. But some individuals don't participate in these mystifying synchronized behaviors, and scientists are learning that they may be as important as those that do.
New piece out!
We explain why Fully Autonomous Agents Should Not be Developed, breaking "AI Agent" down into its components & examining each through the lens of ethical values.
With @evijit.io, @giadapistilli.com and @sashamtl.bsky.social
huggingface.co/papers/2502....
Brian Hie harnessed the powerful parallels between DNA and human language to create an AI tool that interprets genomes. Read his conversation with Ingrid Wickelgren: www.quantamagazine.org/the-poetry-f...
How do tokens evolve as they are processed by a deep Transformer?
With José A. Carrillo, @gabrielpeyre.bsky.social and @pierreablin.bsky.social, we tackle this in our new preprint: A Unified Perspective on the Dynamics of Deep Transformers arxiv.org/abs/2501.18322
ML and PDE lovers, check it out!
it's finally raining in la :)