I knew from the first sentence that it would be MACE! Excited to see it revisited!
Congrats Dr Vagrant!!!
The must-read paper on LLMs, language, and thought that I reference here:
Dissociating language and thought in large language models
arxiv.org/abs/2301.06627
by @kmahowald.bsky.social @neuranna.bsky.social Idan Blank @nancykanwisher.bsky.social @joshtenenbaum.bsky.social @evfedorenko.bsky.social
Huge thanks to @wiair.bsky.social for hosting me -- I had an absolutely wonderful time chatting with @j-novikova-nlp.bsky.social and @malikeh97.bsky.social 🤩
New book! I have written a book, called Syntax: A cognitive approach, published by MIT Press.
This is open access; MIT Press will post a link soon, but until then, the book is available on my website:
tedlab.mit.edu/tedlab_websi...
Hiring a postdoc for the Normativity Lab at Johns Hopkins (2026 start). Looking for multiagent systems expertise (RL/generative agents) + interdisciplinary background in AI and cognitive science/econ/cultural evolution.
apply.interfolio.com/177701
🧑‍🔬 I'm recruiting PhD students in Natural Language Processing @unileipzig.bsky.social Computer Science, together with @scadsai.bsky.social!
Topics include, but aren't limited to:
🌍 Linguistic Interpretability
🌍 Multilingual Evaluation
🌍 Computational Typology
Please share!
#NLProc #NLP
I thought it was very good! Some people strongly prefer Babel for its perspective (the POV character of BoBH is a white woman), but I had the same criticisms as you and I liked BoBH better, especially in terms of character development. It also talks a lot more about research as a career!
Have you read Blood over Bright Haven? (No translation magic there, unfortunately, but much better on both other points IMO)
Surprising to me that on the chart it's labelled as being darker than The Secret History!
References to two papers next to one another in a bibliography section:
Making FETCH! happen: Finding emergent dog whistles through common habitats by Kuleen Sasse, Carlos Alejandro Aguirre, Isabel Cachola, Sharon Levy, and Mark Dredze. ACL 2025.
Making "fetch" happen: The influence of social and linguistic context on nonstandard word growth and decline by Ian Stewart and Jacob Eisenstein. EMNLP 2018.
Accidental bibliography achievement unlocked!
(I highly recommend checking out both papers)
Congratulations!!!
The recording of my keynote from #COLM2025 is now available!
Btw the PI of this work, Dr Kelly Lambert, has a cool book called "The Lab Rat Chronicles" that describes lots of behavioral findings from rat experiments! (Written pre-driving rats, unfortunately)
two rats in cars from the University of Richmond study where they trained rats to drive tiny cars to get to treats and concluded that the rats love driving so much they'll do it without any incentive
the only kind of Rat Race I'm down for
Congratulations! Took me a second to understand you weren't talking about Lexical Functional Grammar though...
Canadian researchers should be aware that there is a motion before the Parliamentary Standing Committee on Science and Research to force the Tri-Councils to hand over disaggregated peer review data on all applications:
Applicants' names, profiles, and demographics
Reviewers' names, profiles, comments, and scores
Isn't mis- (or at least under-)specification inevitable? (I'm thinking of arxiv.org/abs/1804.04268)
Finally out in TACL:
🌍 EWoK (Elements of World Knowledge) 🌍: A cognition-inspired framework for evaluating basic world knowledge in language models
tl;dr: LLMs learn basic social concepts way more easily than physical & spatial concepts
Paper: direct.mit.edu/tacl/article...
Website: ewok-core.github.io
🎉 Excited to share a major update to our "Mixture of Cognitive Reasoners" (MiCRo) paper!
We ask: What benefits can we unlock by designing language models whose inner structure mirrors the brain's functional specialization?
More below 🧠👇
cognitive-reasoners.epfl.ch
DM'd you, thanks!
The organizers mentioned that the videos will be up a few weeks after the conference! I expect it'll be at www.youtube.com/@colm_conf
I still have that card! Still working on that second ice cream 🥲
It used to be 5 "no"s for ice cream/pizza! Has the exchange rate gone up?
I'm on the job market looking for CS/iSchool faculty and related positions! I'm broadly interested in doing research with policymakers and communities impacted by AI to inform and develop mitigations to harms and risks. If you've included any of my work in syllabi or policy docs, please let me know!
Grateful to keynote at #COLM2025. Here's what we're missing about AI alignment: Humans don't cooperate just by aggregating preferences; we build social processes and institutions to generate norms that make it safe to trade with strangers. AI needs to play by these same systems, not replace them.
Inspired to share some papers that I found at #COLM2025!
"Register Always Matters: Analysis of LLM Pretraining Data Through the Lens of Language Variation" by Amanda Myntti et al. arxiv.org/abs/2504.01542
Title: Large Language Models Assume People are More Rational than We Really are
Authors: Ryan Liu*, Jiayi Geng*, Joshua C. Peterson, Ilia Sucholutsky, Thomas L. Griffiths
Affiliations: Department of Computer Science & Department of Psychology, Princeton University; Computing & Data Sciences, Boston University; Center for Data Science, New York University
Email: ryanliu at princeton.edu and jiayig at princeton.edu
LLMs Assume People Are More Rational Than We Really Are by Ryan Liu* & Jiayi Geng* et al.:
LMs are bad (too rational) at predicting human behaviour, but aligned with humans in assuming rationality in others' choices.
arxiv.org/abs/2406.17055
Title: Neologism Learning for Controllability and Self-Verbalization
Authors: John Hewitt, Oyvind Tafjord, Robert Geirhos, Been Kim
Affiliation: Google DeepMind
Email: {johnhew, oyvindt, geirhos, beenkim} at google.com
Neologism Learning by John Hewitt et al.:
Training new token embeddings on examples with a specific property (e.g., short answers) leads to finding "machine-only synonyms" for these tokens that elicit the same behaviour (short answers = "lack").
arxiv.org/abs/2510.08506
Title: Hidden in plain sight: VLMs overlook their visual representations
Authors: Stephanie Fu, Tyler Bonnen, Devin Guillory, Trevor Darrell
Affiliation: UC Berkeley
Hidden in Plain Sight by Stephanie Fu et al. [Outstanding paper award]:
VLMs are worse than vision-only models on vision-only tasks -- LMs are biased and underutilize their (easily accessible) visual representations!
hidden-plain-sight.github.io