Our ICML 2025 workshop on Actionable Interpretability drew massive interest. But the same questions kept coming up: What does "actionable" mean? Is it achievable? How?
We're ready to answer.
🧵
Can you solve this algebra puzzle? 🧩
cb=c, ac=b, ab=?
A small transformer can learn to solve problems like this!
And since the letters don't have inherent meaning, this lets us study how context alone imparts meaning. Here's what we found: 🧵⬇️
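To make the setup concrete, here is a minimal sketch (my own illustration, not the paper's code) of how such puzzle instances could be generated, assuming the hidden operation is a small cyclic group whose letter assignment is re-randomized per puzzle:

```python
import random
import string

def make_puzzle(n=4, n_context=3, seed=None):
    """Emit instances like 'cb=c, ac=b, ab=?'. The letters secretly
    name elements of the cyclic group Z_n, but the assignment is
    re-randomized per puzzle, so the letters carry no fixed meaning:
    it must be inferred from the context equations alone."""
    rng = random.Random(seed)
    letters = rng.sample(string.ascii_lowercase, n)  # letters[i] <-> i
    def op(x, y):  # hidden group operation, lifted to letters
        return letters[(letters.index(x) + letters.index(y)) % n]
    pairs = [(rng.choice(letters), rng.choice(letters))
             for _ in range(n_context + 1)]
    context = ", ".join(f"{x}{y}={op(x, y)}" for x, y in pairs[:-1])
    qx, qy = pairs[-1]
    return f"{context}, {qx}{qy}=?", op(qx, qy)

prompt, answer = make_puzzle(seed=0)
print(prompt, "->", answer)
```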
Hello world 👋
My first paper at UT Austin!
We ask: what happens when medical "evidence" fed into an LLM is wrong? Should your AI stay faithful, or should it play it safe when the evidence is harmful?
We show that frontier LLMs accept counterfactual medical evidence at face value. 🧵
Check out @hibaahsan.bsky.social's paper on spotting (problematic) racial biases in LLMs for healthcare applications 👇
3/ A separate team at Northeastern located where certain signals live inside Olmo and made targeted edits that reduced biased clinical predictions. This kind of audit is only possible because Olmo exposes all its components.
buff.ly/HkChr4Q
Chantal (and Vinith) find that you can jailbreak LLMs with syntax! Some examples: cshaib.github.io/syntax_domai...
Now to appear at #EMNLP2025 (Findings). We've added more models and experiments: arxiv.org/abs/2502.13319
Can we distill *circuits* from teacher models into smaller students?
Who is going to be at #COLM2025?
I want to draw your attention to a COLM paper by my student @sfeucht.bsky.social that has totally changed the way I think and teach about LLM representations. The work is worth knowing.
And you can meet Sheridan at COLM, Oct 7!
bsky.app/profile/sfe...
Can we quantify what makes some text read like AI "slop"? We tried 👇
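For a concrete flavor of what "quantify" could mean here, a minimal sketch (my own illustration, not the paper's actual metrics) of one simple repetition signal, distinct-n, where slop-like text tends to score low:

```python
def distinct_n(text: str, n: int = 3) -> float:
    """Fraction of n-grams that are unique; lower = more repetitive."""
    tokens = text.lower().split()
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

print(distinct_n("the quick brown fox jumps over the lazy dog"))  # 1.0
print(distinct_n("very good very good very good very good"))     # low
```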
Our new paper asks: what is the goal of "natural language verbalization" interpretability approaches? If a verbalizer is supposed to tell us something about what's in the target LM and NOT just what's in the verbalizer LM, how do we actually evaluate that?
Wouldn't it be great to have questions about LM internals answered in plain English? That's the promise of verbalization interpretability. Unfortunately, our new paper shows that evaluating these methods is nuanced, and verbalizers might not tell us what we hope they do. 🧵👇 1/8
Thrilled to share that our research showing how LLMs can be influenced by bias from "spun" medical literature is now featured in Northeastern's Khoury news! This work offers critical insights as AI enters healthcare.
The full paper can be found at arxiv.org/abs/2502.07963
This Friday, NEMI 2025 is at Northeastern in Boston: 8 talks, 24 roundtables, 90 posters, 200+ attendees. Thanks to
goodfire.ai/ for sponsoring! nemiconf.github.io/summer25/
If you can't make it in person, the livestream will be here:
www.youtube.com/live/4BJBis...
📢 How factual are LLMs in healthcare?
We're excited to release FactEHR, a new benchmark to evaluate factuality in clinical notes. As generative AI enters the clinic, we need rigorous, source-grounded tools to measure what these models get right, and what they don't. 🏥 🤖
Chatted with @byron.bsky.social at ICML about my recent work, so look out for his upcoming "Tokenization is More Than More Than Compression".
An overview of our AI-in-the-loop expert study pipeline: given a claim from a subreddit, we extract the PIO elements and retrieve the evidence automatically. The claim, its context, and the retrieved evidence are then presented to a medical expert, who provides a judgment and a rationale for the factuality of the claim.
Are we fact-checking medical claims the right way? 🩺🤔
Probably not. In our study, even experts struggled to verify Reddit health claims using end-to-end systems.
We show why, and argue fact-checking should be a dialogue, with patients in the loop.
arxiv.org/abs/2506.20876
🧵 1/
Are LLMs mindless token-shifters, or do they build meaningful representations of language? We study how LLMs copy text in-context, and physically separate out two types of induction heads: token heads, which copy literal tokens, and concept heads, which copy word meanings.
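For readers new to induction heads, here is a minimal sketch (my illustration, not the paper's code) of the standard induction score: repeat a random token sequence and measure how strongly each head attends from a position in the second copy back to the token just after its previous occurrence. GPT-2 via Hugging Face transformers is an assumption, chosen to keep the sketch runnable:

```python
import torch
from transformers import GPT2LMHeadModel

torch.manual_seed(0)
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

L = 50  # length of the random segment; the input is that segment twice
seq = torch.randint(0, model.config.vocab_size, (1, L))
inp = torch.cat([seq, seq], dim=1)  # shape (1, 2L)

with torch.no_grad():
    attn = model(inp, output_attentions=True).attentions
    # attn: one (1, n_heads, 2L, 2L) tensor per layer

q = torch.arange(L, 2 * L)  # query positions in the second copy
k = q - (L - 1)             # the token *after* the previous occurrence
for layer, a in enumerate(attn):
    a = a[0]                           # (n_heads, 2L, 2L)
    scores = a[:, q, k].mean(dim=-1)   # per-head mean attention to target
    for head, s in enumerate(scores.tolist()):
        if s > 0.3:  # arbitrary threshold for this sketch
            print(f"layer {layer:2d} head {head:2d}: induction score {s:.2f}")
```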
I'm searching for some comp/ling experts to provide a precise definition of "slop" as it refers to text (see: corp.oup.com/word-of-the-...)
I put together a google form that should take no longer than 10 minutes to complete: forms.gle/oWxsCScW3dJU...
If you can help, I'd appreciate your input! 🙏
Job ad! We (@gregdnlp.bsky.social, @mattlease.bsky.social and I) are hiring a postdoc fellow within the CosmicAI Institute, to do galactic work with LLMs and generative AI! If you would like to push the frontiers of foundation models to help solve mysteries of the universe, please apply!
LLMs are known to perpetuate social biases in clinical tasks. Can we locate and intervene upon LLM activations that encode patient demographics like gender and race? 🧵
Work w/ @arnabsensharma.bsky.social, @silvioamir.bsky.social, @davidbau.bsky.social, @byron.bsky.social
arxiv.org/abs/2502.13319
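As background on the general technique, a minimal linear-probing sketch (assumptions mine, not the paper's code): extract a hidden state per note and test whether a logistic probe can read off the patient attribute. GPT-2 stands in for the clinical LLM, and the notes and labels are hypothetical placeholders:

```python
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tok = AutoTokenizer.from_pretrained("gpt2")  # stand-in target model
lm = AutoModel.from_pretrained("gpt2")
lm.eval()

def note_rep(text, layer=6):
    """Hidden state of the final token at a chosen layer."""
    ids = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hs = lm(**ids, output_hidden_states=True).hidden_states
    return hs[layer][0, -1].numpy()

# Hypothetical placeholder data; substitute real clinical notes
# (with appropriate approvals) to run the actual analysis.
notes = ["Patient presents with chest pain and dyspnea ...",
         "Patient presents with fatigue and joint pain ..."]
labels = [0, 1]  # demographic attribute, e.g. self-reported gender

X = [note_rep(n) for n in notes]
probe = LogisticRegression(max_iter=1000).fit(X, labels)
print("train accuracy:", probe.score(X, labels))
# On real data, held-out accuracy well above chance suggests the
# attribute is linearly decodable at this layer: a candidate site
# for intervention (e.g., projecting out the probe direction).
```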
🚨 Do LLMs fall for spin in medical literature? 🤔
In our new preprint, we find that LLMs are susceptible to biased reporting of clinical treatment benefits in abstracts, more so than human experts. [1/7]
Full Paper: arxiv.org/abs/2502.07963
🧵👇
📢 Can we trace a small distilled model back to its teacher? 🤔 New work (w/ @chantalsh.bsky.social, @silvioamir.bsky.social & @byron.bsky.social) finds some footprints left by LLMs in distillation! [1/6]
📄 Full paper: arxiv.org/abs/2502.06659
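To illustrate the footprint idea (my own toy framing, not the paper's method): if distillation leaves stylistic traces, a simple classifier over student generations should predict the teacher at above-chance accuracy. The strings and labels below are hypothetical placeholders:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical generations from students distilled from two
# different teachers; substitute real model outputs.
student_outputs = [
    "Certainly! Here is a step-by-step explanation ...",
    "Sure! Let's break this down step by step ...",
    "The answer is 42. Reasoning: first, note that ...",
    "Answer: 42. To see why, observe that ...",
]
teacher_labels = ["teacher_A", "teacher_A", "teacher_B", "teacher_B"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(student_outputs, teacher_labels)
# Above-chance held-out accuracy would suggest the teacher leaves
# a detectable footprint in the student's outputs.
print(clf.predict(["Certainly! Here is how to think about it ..."]))
```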
DeepSeek R1 shows how important it is to study the internals of reasoning models. Try our code: here, @canrager.bsky.social shows a method for auditing AI bias by probing the internal monologue.
dsthoughts.baulab.info
I'd be interested in your thoughts.
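A minimal sketch of the paired-prompt flavor of such an audit (my illustration; see dsthoughts.baulab.info for the actual method). The prompt, names, and model choice are assumptions; R1-style models expose their monologue between <think> tags, which is what gets inspected:

```python
from transformers import pipeline

# Small distilled R1 variant, just to keep the sketch runnable.
gen = pipeline("text-generation",
               model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B")

template = ("A patient named {name} reports chronic pain and asks "
            "for a stronger prescription. Should the request be "
            "granted? Answer yes or no, with reasoning.")
for name in ["Emily", "Jamal"]:  # hypothetical demographic proxies
    out = gen(template.format(name=name), max_new_tokens=256,
              do_sample=False)[0]["generated_text"]
    # Diff the <think> traces across the paired prompts to spot
    # whether the demographic cue changes the model's reasoning.
    print(f"--- {name} ---\n{out}\n")
```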
📣 We're hiring 2 Machine Learning researchers to join SOLACE-AI @kingscollegelondon.bsky.social, funded by @wellcometrust.bsky.social. This is your chance to develop cutting-edge AI to directly impact global health responses to climate emergencies. jobs.ac.uk/job/DLM377
OLMo 2 is out 🥳 7B and 13B models trained on 5T tokens, and meticulously instruction tuned using the Tulu 3 recipe.
Simply the best fully open models yet.
Really proud of the work & the amazing team at
@ai2.bsky.social
And Sheridan Feucht investigates the "implicit vocabulary" of LLMs via token erasure: arxiv.org/abs/2406.20086 (w/David Atkinson and @davidbau.bsky.social)
Somin Wadhwa has some intriguing findings on distillation with "chain of thought" sequences (e.g., this works better when "reasoning" follows labels, and individual tokens seem to be sufficient): arxiv.org/abs/2406.14511 (w/ Silvio Amir)
Chantal Shaib reports on syntactic "templates" that LLMs like to repeat: arxiv.org/abs/2407.00211 (w/ @yanai.bsky.social and @jessyjli.bsky.social)
I'll be @ #EMNLP2024 if anyone wants to find snobby coffee / despair about the election / or I guess talk research. Some work to be presented 👇