
Maria Ryskina

@mryskina

Postdoc @vectorinstitute.ai | organizer @queerinai.com | previously MIT, CMU LTI | πŸ€ rodent enthusiast | she/they 🌐 https://ryskina.github.io/

161
Followers
173
Following
39
Posts
10.07.2025
Joined

Latest posts by Maria Ryskina @mryskina

I knew from the first sentence that it would be MACE! Excited to see it revisited!

20.01.2026 18:48 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Congrats Dr Vagrant!!!

12.01.2026 21:00 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

The must-read paper on LLMs, language, and thought that I reference here:

Dissociating language and thought in large language models
arxiv.org/abs/2301.06627
by @kmahowald.bsky.social @neuranna.bsky.social Idan Blank @nancykanwisher.bsky.social @joshtenenbaum.bsky.social @evfedorenko.bsky.social

07.01.2026 16:19 πŸ‘ 15 πŸ” 3 πŸ’¬ 0 πŸ“Œ 0

Huge thanks to @wiair.bsky.social for hosting me -- I had an absolutely wonderful time chatting with @j-novikova-nlp.bsky.social and @malikeh97.bsky.social 🀩

07.01.2026 16:05 πŸ‘ 3 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

New book! I have written a book, called Syntax: A cognitive approach, published by MIT Press.

This is open access; MIT Press will post a link soon, but until then, the book is available on my website:
tedlab.mit.edu/tedlab_websi...

24.12.2025 19:55 πŸ‘ 122 πŸ” 41 πŸ’¬ 2 πŸ“Œ 3

Hiring a postdoc for the Normativity Lab at Johns Hopkins (2026 start). Looking for multiagent systems expertise (RL/generative agents) + interdisciplinary background in AI and cognitive science/econ/cultural evolution.
apply.interfolio.com/177701

16.12.2025 15:54 πŸ‘ 6 πŸ” 11 πŸ’¬ 0 πŸ“Œ 1

πŸ§‘β€πŸ”¬I’m recruiting PhD students in Natural Language Processing @unileipzig.bsky.social Computer Science, together with @scadsai.bsky.social!

Topics include, but aren’t limited to:

πŸ”ŽLinguistic Interpretability
🌍Multilingual Evaluation
πŸ“–Computational Typology

Please share!

#NLProc #NLP

11.12.2025 13:36 πŸ‘ 41 πŸ” 25 πŸ’¬ 1 πŸ“Œ 3

I thought it was very good! Some people strongly prefer Babel for its perspective (the POV character of BoBH is a white woman), but I had the same criticisms as you and I liked BoBH better, especially in terms of character development. It also talks a lot more about research as a career!

06.12.2025 23:08 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Have you read Blood over Bright Haven? (No translation magic there, unfortunately, but much better on both other points IMO)

06.12.2025 14:26 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Surprising to me that on the chart it's labelled as being darker than The Secret History!

06.12.2025 14:23 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
References to two papers next to one another in a bibliography section:

Making FETCH! happen: Finding emergent dog whistles through common habitats by Kuleen Sasse, Carlos Alejandro Aguirre, Isabel Cachola, Sharon Levy, and Mark Dredze. ACL 2025.

Making β€œfetch” happen: The influence of social and linguistic context on nonstandard word growth and decline by Ian Stewart and Jacob Eisenstein. EMNLP 2018.


Accidental bibliography achievement unlocked!
(I highly recommend checking out both papers)

04.12.2025 21:05 πŸ‘ 6 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

Congratulations!!!

08.11.2025 00:14 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Gillian Hadfield - Alignment is social: lessons from human alignment for AI
Current approaches conceptualize the alignment challenge as one of eliciting individual human preferences and training models to choose outputs that satisfy those preferences. To the extent…

The recording of my keynote from #COLM2025 is now available!

06.11.2025 21:35 πŸ‘ 10 πŸ” 3 πŸ’¬ 0 πŸ“Œ 0

Btw the PI of this work, Dr Kelly Lambert, has a cool book called "The Lab Rat Chronicles" that describes lots of behavioral findings from rat experiments! (Written pre-driving rats, unfortunately)

06.11.2025 16:30 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
two rats in cars from the University of Richmond study where they trained rats to drive tiny cars to get to treats and concluded that the rats love driving so much they'll do it without any incentive


the only kind of Rat Race I'm down for

06.11.2025 14:43 πŸ‘ 18 πŸ” 1 πŸ’¬ 2 πŸ“Œ 0

Congratulations! Took me a second to understand you weren't talking about Lexical Functional Grammar though...

05.11.2025 13:22 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Canadian researchers should be aware that there is a motion before the Parliamentary Standing Committee on Science and Research to force the Tri-Councils to hand over disaggregated peer review data on all applications:
Applicants' names, profiles, demographics
Reviewers' names, profiles, comments, and scores

30.10.2025 20:33 πŸ‘ 144 πŸ” 170 πŸ’¬ 13 πŸ“Œ 50
Incomplete Contracting and AI Alignment We suggest that the analysis of incomplete contracting developed by law and economics researchers can provide a useful framework for understanding the AI alignment problem and help to generate a syste...

Isn't mis- (or at least under-)specification inevitable? (I'm thinking of arxiv.org/abs/1804.04268)

21.10.2025 19:22 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Finally out in TACL:
🌎EWoK (Elements of World Knowledge)🌎: A cognition-inspired framework for evaluating basic world knowledge in language models

tl;dr: LLMs learn basic social concepts way easier than physical&spatial concepts

Paper: direct.mit.edu/tacl/article...
Website: ewok-core.github.io

20.10.2025 17:36 πŸ‘ 70 πŸ” 10 πŸ’¬ 1 πŸ“Œ 2

πŸš€ Excited to share a major update to our β€œMixture of Cognitive Reasoners” (MiCRo) paper!

We ask: What benefits can we unlock by designing language models whose inner structure mirrors the brain’s functional specialization?

More below πŸ§ πŸ‘‡
cognitive-reasoners.epfl.ch

20.10.2025 12:05 πŸ‘ 31 πŸ” 9 πŸ’¬ 2 πŸ“Œ 2

DM'd you, thanks!

19.10.2025 14:04 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

The organizers mentioned that the videos will be up a few weeks after the conference! I expect it'll be at www.youtube.com/@colm_conf

19.10.2025 00:19 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I still have that card! Still working on that second ice cream πŸ₯²

17.10.2025 17:59 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

It used to be 5 "no"s for ice cream/pizza! Has the exchange rate gone up?

17.10.2025 17:36 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I'm on the job market looking for CS/ischool faculty and related positions! I'm broadly interested in doing research with policymakers and communities impacted by AI to inform and develop mitigations to harms and risks. If you've included any of my work in syllabi or policy docs please let me know!

16.10.2025 23:19 πŸ‘ 7 πŸ” 6 πŸ’¬ 2 πŸ“Œ 0

Grateful to keynote at #COLM2025. Here's what we're missing about AI alignment: Humans don't cooperate just by aggregating preferences; we build social processes and institutions to generate norms that make it safe to trade with strangers. AI needs to play by these same systems, not replace them.

15.10.2025 23:00 πŸ‘ 15 πŸ” 3 πŸ’¬ 1 πŸ“Œ 0

Inspired to share some papers that I found at #COLM2025!

"Register Always Matters: Analysis of LLM Pretraining Data Through the Lens of Language Variation" by Amanda Myntti et al. arxiv.org/abs/2504.01542

14.10.2025 18:16 πŸ‘ 26 πŸ” 8 πŸ’¬ 1 πŸ“Œ 0
Title: Large Language Models Assume People are More Rational than We Really are
Authors: Ryan Liu*, Jiayi Geng*, Joshua C. Peterson, Ilia Sucholutsky, Thomas L. Griffiths
Affiliations: Department of Computer Science & Department of Psychology, Princeton University; Computing & Data Sciences, Boston University; Center for Data Science, New York University
Email: ryanliu at princeton.edu and jiayig at princeton.edu


LLMs Assume People Are More Rational Than We Really Are by Ryan Liu* & Jiayi Geng* et al.:

LMs are bad (too rational) at predicting human behaviour, but aligned with humans in assuming rationality in others’ choices.

arxiv.org/abs/2406.17055

14.10.2025 00:43 πŸ‘ 4 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Title: Neologism Learning for Controllability and Self-Verbalization
Authors: John Hewitt, Oyvind Tafjord, Robert Geirhos, Been Kim
Affiliation: Google DeepMind
Email: {johnhew, oyvindt, geirhos, beenkim} at google.com


Neologism Learning by John Hewitt et al.:

Training new token embeddings on examples with a specific property (e.g., short answers) leads to finding β€œmachine-only synonyms” for these tokens that elicit the same behaviour (short answers=’lack’).

arxiv.org/abs/2510.08506

14.10.2025 00:43 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Title: Hidden in plain sight: VLMs overlook their visual representations
Authors: Stephanie Fu, Tyler Bonnen, Devin Guillory, Trevor Darrell
Affiliation: UC Berkeley


Hidden in Plain Sight by Stephanie Fu et al. [Outstanding paper award]:

VLMs are worse than vision-only models on vision-only tasks – LMs are biased and underutilize their (easily accessible) visual representations!

hidden-plain-sight.github.io

14.10.2025 00:43 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0