
Rachel Ryskin

@ryskin

Cognitive scientist @ UC Merced http://raryskin.github.io PI of Language, Interaction, & Cognition (LInC) lab: http://linclab0.github.io

504 Followers · 329 Following · 24 Posts · Joined 24.10.2023

Latest posts by Rachel Ryskin @ryskin

I'm excited to share our new work, led by grad student Rajvi Agravat, using iEEG recordings from 54 pediatric, adolescent, and young adult participants, combined with deep-neural-network audio source separation, to show how the brain prioritizes speech in audio containing both speech and music. www.biorxiv.org/content/10.6...

13.03.2026 19:54 👍 17 🔁 6 💬 1 📌 1
Optimized feature gains explain and predict successes and failures of human selective listening - Nature Human Behaviour Griffith et al. show that human-like auditory attentional strategies naturally arise from the optimization of feature gains for selective listening.

Excited to announce a new paper from our lab, by Ian Griffith @iangriffith.bsky.social with help from Preston Hess @phess2.bsky.social, introducing a model of attentional selection. www.nature.com/articles/s41...
@mitbcs.bsky.social @mitscience.bsky.social
Here is a summary. (1/n)

13.03.2026 13:10 👍 18 🔁 6 💬 1 📌 1

🧠🧪🧵 1/37
Our new paper on how pinniped (seal and sea lion) brains evolved to unlock vocal plasticity is this week's @science.org cover.

www.science.org/doi/10.1126/...

12.03.2026 18:06 👍 84 🔁 34 💬 5 📌 6
Evidence Against Syntactic Encapsulation in Large Language Models Transformer-based large language models (LLMs) have recently demonstrated exceptional performance in a variety of linguistic tasks. LLMs primarily combine information across words in a sentence using...

Evidence against syntactic encapsulation in large language models

onlinelibrary.wiley.com/doi/10.1111/...

10.03.2026 14:19 👍 7 🔁 2 💬 0 📌 0

Congratulations @judithfan.bsky.social on winning the Lila R. Gleitman Prize for early-career contributions to Cognitive Science 🥳 Amazing!!

cognitivesciencesociety.org/gleitman-pri...

10.03.2026 20:07 👍 71 🔁 10 💬 4 📌 0
Title section of the paper: "Cross-Modal Taxonomic Generalization in (Vision) Language Models" by Tianyang Xu, Marcelo Sandoval-Castañeda, Karen Livescu, Greg Shakhnarovich, Kanishka Misra.

What is the interplay between representations learned from (language) surface forms alone and those learned from more grounded evidence (e.g., vision)?

Excited to share new work understanding "Cross-modal taxonomic generalization" in (V)LMs

arxiv.org/abs/2603.07474

1/

10.03.2026 20:53 👍 33 🔁 12 💬 1 📌 0

What if you could automatically transcribe children's speech sounds from their first babbles to full sentences?

Screening for speech delays. Comparing how kids learn to talk across languages. Following how sounds evolve month by month.

We're building toward this with BabAR 🧵 (sound on 🔊)

09.03.2026 13:07 👍 52 🔁 19 💬 3 📌 6
Task learning increases information redundancy of neural responses in macaque visual cortex How does the brain optimize sensory information for decision-making in new tasks? One hypothesis suggests that learning reduces redundancy in neural representations to improve efficiency, whereas anot...

RIP redundancy reduction?

Beautiful work by Liu & colleagues showing that neural redundancy increases with learning, as predicted by a Bayesian model:
www.science.org/doi/10.1126/...

07.03.2026 11:08 👍 68 🔁 25 💬 2 📌 1

Of all the analogies, this one about horses is the dumbest.

06.03.2026 05:48 👍 42 🔁 8 💬 5 📌 1

The canvas() function from the ggview R package is very useful for previewing/tweaking a ggplot into publication-ready format: it renders a plot "as it would appear if saved to a file with the specified dimensions".

cran.r-project.org/web/packages...

#RStats #ggplot #ggview
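As a minimal sketch of that workflow (assuming ggplot2 and ggview are installed; argument names for canvas() follow the CRAN documentation, but double-check your installed version):

```r
# Sketch: preview a ggplot at its final saved dimensions with ggview
library(ggplot2)
library(ggview)

p <- ggplot(mtcars, aes(wt, mpg)) +
  geom_point() +
  theme_minimal(base_size = 10)

# Render in the viewer as it would appear in a 6 x 4 inch file,
# so text and point sizes are true to the published figure
p + canvas(width = 6, height = 4, units = "in", dpi = 300)

# Once the layout looks right, save with matching settings
ggsave("figure1.png", p, width = 6, height = 4, units = "in", dpi = 300)
```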

05.03.2026 01:53 👍 53 🔁 10 💬 1 📌 4

Fantastic tool for teaching about neural networks!

05.03.2026 05:10 👍 3 🔁 0 💬 0 📌 0
Research Coordinator I - TCH Neurosurgery

My group is hiring a full time research coordinator to work with our collaborators in Houston on understanding speech and language development in children with epilepsy. Great for folks looking to get direct experience with clinical/translational research. Please repost! jobs.bcm.edu/job/Research...

03.03.2026 20:25 👍 14 🔁 10 💬 1 📌 0

Very cool! Looking forward to reading!

27.02.2026 00:57 👍 1 🔁 0 💬 0 📌 0
Learning through prediction: a case of verb bias learning Linguistic prediction, which emerges from experience, is a pervasive process in language comprehension. However, how prediction develops as learning unfolds and how it drives the learning process r...

📣 Check out how verb-specific knowledge updates incrementally as distributional learning takes place: our first paper on prediction and learning (with Amanda Owen Van Horne @telllab.bsky.social and Yi-Lun Weng) is out!

26.02.2026 05:25 👍 4 🔁 3 💬 1 📌 1

📣📣📣 Job alert: Max Planck Research Group Leader position (W2 BBESG), Multimodal Language Department, Max Planck Institute for Psycholinguistics. lnkd.in/eaq5MW9a

26.02.2026 20:32 👍 17 🔁 20 💬 1 📌 2
GitHub - BenjaFried/modsoc_Julia: Modeling Social Behavior in Julia Modeling Social Behavior in Julia. Contribute to BenjaFried/modsoc_Julia development by creating an account on GitHub.

Interested in using my textbook, Modeling Social Behavior, but wishing code was in a more general programming language than NetLogo? The incomparable Ben Fried has translated all modeling code into JULIA, utilizing the excellent Agents.jl package. github.com/BenjaFried/m...

24.02.2026 20:46 👍 48 🔁 15 💬 0 📌 0
Pace of ecology drives the tempo of visual perception across the animal kingdom Nature Ecology & Evolution - Using phylogenetic comparative methods across 237 species from disparate phyla, the authors show that species with fast-paced ecologies have higher temporal...

Our new paper is now out showing how time perception in animals is linked to their ecology. Using data from 237 species, we show that temporal perception is faster in species that fly and in pursuit predators. www.nature.com/articles/s41... 🌐

24.02.2026 13:22 👍 140 🔁 60 💬 3 📌 2
ConversationAlign: Open-source software for analyzing patterns of lexical use and alignment in conversation transcripts

Big paper release! ConversationAlign - methods for computing lexical and affective alignment between interlocutors in dyadic conversation transcripts. Open Access in Behavior Research Methods. link.springer.com/epdf/10.3758...

20.02.2026 18:15 👍 45 🔁 14 💬 4 📌 0
Abstract of the paper

Figure 1 - experimental setup

Figure 2 - accuracy over time

Figure 3 - semantic similarity within/across games

I always thought preschoolers were too egocentric to do well on communication tasks where they had to talk about novel referents. Old papers reported they'd say stuff like "this one looks like my uncle's hat."

@vboyce.bsky.social shows that this is wrong!

osf.io/preprints/ps...

12.02.2026 23:38 👍 29 🔁 9 💬 0 📌 0

I wrote a short article on AI Model Evaluation for the Open Encyclopedia of Cognitive Science 📕👇

Hope this is helpful for anyone who wants a super broad, beginner-friendly intro to the topic!

Thanks @mcxfrank.bsky.social and @asifamajid.bsky.social for this amazing initiative!

12.02.2026 22:22 👍 53 🔁 22 💬 0 📌 1

AI agents are becoming a serious threat to research data quality.

Today we're rolling out Bot authenticity checks on @joinprolific.bsky.social, detecting agentic AI with 100% accuracy in testing.

Comes with a native Qualtrics integration! More info:

www.prolific.com/resources/in...

04.02.2026 15:08 👍 13 🔁 7 💬 2 📌 1
How does a deep neural network look at lexical stress in English words? Despite their success in speech processing, neural networks often operate as black boxes, prompting the following questions: What informs their decisions, and h

Out today! "How Does a Deep Neural Network Look at Lexical Stress in English Words?" w/ I. Allouche, I. Asael, R. Rousso, V. Dassa, A. Bradlow, S.-E. Kim & @keshet.bsky.social doi.org/10.1121/10.0... 1/

11.02.2026 14:41 👍 19 🔁 3 💬 3 📌 0

Students need to remember the Inigo Montoya method for emails and greetings:
"Hello, my name is Inigo Montoya. You killed my father. Prepare to die."

Polite Greeting
Name
Relevant Personal Link
Manage Expectations

Keep it BRIEF.

10.02.2026 16:44 👍 47 🔁 11 💬 3 📌 0

This study was an amazing collaborative experience. I'm really really grateful to all the wonderful people who contributed and made this happen.

It's the closest I have ever come to finding something like a "universal" in human cognition.

09.02.2026 12:32 👍 39 🔁 20 💬 6 📌 0
Distinct neuronal populations in the human brain combine content and context - Nature Single-neuron recordings in humans reveal largely separate content and context neurons whose coordinated activity flexibly places memory items in context.

Recently published in @nature.com: the human brain stores what happened and the context in mostly separate neurons, binding them only when needed, which enables flexible memory (and hopefully avoids confusion) 🧪 www.nature.com/articles/s41...

20.01.2026 20:50 👍 17 🔁 5 💬 1 📌 0
Apes Share Human Ability to Imagine (YouTube video by Johns Hopkins University)

Imagination in bonobos!

I am thrilled to share a new paper w/ Amalia Bastos, out now in @science.org

We provide the first experimental evidence that a nonhuman animal can follow along a pretend scenario & track imaginary objects. Work w/ Kanzi, the bonobo, at Ape Initiative

youtu.be/NUSHcQQz2Ko

05.02.2026 19:18 👍 292 🔁 110 💬 10 📌 10

It was such a treat for us. Thanks for making the trip down and sharing your fascinating work!

04.02.2026 03:56 👍 1 🔁 0 💬 1 📌 0

How do diverse context structures reshape representations in LLMs?
In our new work, we explore this via representational straightening. We found LLMs are like a Swiss Army knife: they select different computational mechanisms reflected in different representational structures. 1/

04.02.2026 02:54 👍 38 🔁 11 💬 1 📌 1

The Visual Learning Lab is hiring TWO lab coordinators!

Both positions are ideal for someone looking for research experience before applying to graduate school. Application deadline is Feb 10th (approaching fast!), with flexible summer start dates.

30.01.2026 23:21 👍 48 🔁 41 💬 1 📌 0

The cerebellum supports high-level language?? Now out in @cp-neuron.bsky.social, we systematically examined language-responsive areas of the cerebellum using precision fMRI and identified a *cerebellar satellite* of the neocortical language network!
authors.elsevier.com/a/1mUU83BtfH...
1/n 🧵👇

22.01.2026 17:21 👍 69 🔁 20 💬 2 📌 4