
Ryan Panela

@ryanapanela

πŸ‡¨πŸ‡¦πŸ‡΅πŸ‡­ || Graduate Student || Cognitive & Computational Neuroscience || UofT & Rotman Research

116
Followers
580
Following
6
Posts
01.01.2025
Joined

Latest posts by Ryan Panela @ryanapanela

With some trepidation, I'm putting this out into the world:
gershmanlab.com/textbook.html
It's a textbook called Computational Foundations of Cognitive Neuroscience, which I wrote for my class.

My hope is that this will be a living document, continuously improved as I get feedback.

09.01.2026 01:27 πŸ‘ 585 πŸ” 237 πŸ’¬ 16 πŸ“Œ 10
Neural signatures of engagement and event segmentation during story listening in background noise Speech in everyday life is often masked by background noise, making comprehension effortful. Characterizing brain activity patterns when individuals listen to masked speech can help clarify the mechan...

Finally out: www.eneuro.org/content/earl...

fMRI during naturalistic story listening in noise, looking at event-segmentation and inter-subject correlation (ISC) signatures. Listeners stay engaged and comprehend the gist even in moderate noise.

with @ayshamota.bsky.social @ryanaperry.bsky.social @ingridjohnsrude.bsky.social

09.01.2026 19:43 πŸ‘ 18 πŸ” 11 πŸ’¬ 1 πŸ“Œ 1

New Preprint 🚨

This research with @alexbarnett.bsky.social, Yulia Lamekina, @barense.bsky.social, and @bjherrmann.bsky.social examines how background noise shapes event segmentation during continuous speech listening and its consequences for memory.

osf.io/e67qr_v1
@auditoryaging.bsky.social

16.01.2026 18:13 πŸ‘ 9 πŸ” 4 πŸ’¬ 0 πŸ“Œ 0

Building Bridges in Brain Data.

The event will focus on open science practices, innovative methods, and community in the neurosciences, with opportunities to engage in collaborative projects or explore new tools. No prior expertise is required.

Registration for BrainHack 2026 is still open!

14.01.2026 16:36 πŸ‘ 4 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0
Mobile Eye-Tracking Glasses Capture Ocular and Head Markers of Listening Effort To extend the assessment of listening effort beyond a sound booth, we validated mobile eye-tracking glasses (Pupil Labs Neon) by comparing them to a stationary system (Eyelink DUO) in a controlled env...

New work from the lab: www.biorxiv.org/content/10.1...

Mobile eye-tracking glasses assess listening effort through pupil size and eye movements as well as a stationary eye tracker does. The mobile glasses additionally show that people reduce their head movements when listening becomes more effortful.

20.09.2025 13:41 πŸ‘ 3 πŸ” 1 πŸ’¬ 0 πŸ“Œ 1

Excited to share the publication of our work exploring the application of LLMs to event segmentation and memory research.

For researchers interested in applying these validated methods, an open-source module is available on GitHub (github.com/ryanapanela/EventRecall).

17.12.2025 18:07 πŸ‘ 12 πŸ” 3 πŸ’¬ 0 πŸ“Œ 0
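The automated recall assessment idea can be sketched roughly as follows: each recalled sentence is compared against the event units of the original story via a similarity score over text representations, and an event counts as recalled if some sentence matches it closely enough. A minimal sketch using toy bag-of-words vectors in place of LLM embeddings; the function names and threshold are illustrative, not the EventRecall module's actual API:

```python
import math
from collections import Counter

def bow_vector(text):
    """Toy stand-in for an LLM text embedding: bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def events_recalled(events, recall_sentences, threshold=0.5):
    """Mark each event as recalled if any recall sentence is similar enough."""
    event_vecs = [bow_vector(e) for e in events]
    recall_vecs = [bow_vector(s) for s in recall_sentences]
    return [any(cosine(e, r) >= threshold for r in recall_vecs)
            for e in event_vecs]

events = ["the knight enters the castle", "the dragon guards the gold"]
recall = ["a knight went into the castle"]
print(events_recalled(events, recall))  # [True, False]
```

With real LLM embeddings the same structure applies; only the vectorizer and similarity threshold change.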

Such cool work!

01.08.2025 23:04 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

🚨 New preprint 🚨

Prior work has mapped how the brain encodes concepts: If you see fire and smoke, your brain will represent the fire (hot, bright) and smoke (gray, airy). But how do you encode features of the fire-smoke relation? We analyzed fMRI with embeddings extracted from LLMs to find out 🧡

24.06.2025 13:49 πŸ‘ 32 πŸ” 8 πŸ’¬ 1 πŸ“Œ 2
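Relating LLM embeddings to fMRI is commonly done in the spirit of representational similarity analysis: build a dissimilarity matrix over stimuli from the embeddings, another from the voxel patterns, and rank-correlate their off-diagonal entries. A minimal pure-Python sketch with toy vectors; this illustrates the general RSA recipe, not the specific pipeline of the preprint:

```python
import math

def dissimilarity_matrix(vectors):
    """Pairwise Euclidean distances between stimulus vectors."""
    n = len(vectors)
    return [[math.dist(vectors[i], vectors[j]) for j in range(n)]
            for i in range(n)]

def upper_triangle(m):
    """Off-diagonal upper-triangle entries of a square matrix."""
    n = len(m)
    return [m[i][j] for i in range(n) for j in range(i + 1, n)]

def rank(values):
    """Ranks starting at 1 (assumes no ties, for simplicity)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = r + 1.0
    return ranks

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman rank correlation: Pearson on the ranks."""
    return pearson(rank(x), rank(y))

def rsa_score(embeddings, voxel_patterns):
    """Correlate model and brain representational geometries."""
    rdm_model = upper_triangle(dissimilarity_matrix(embeddings))
    rdm_brain = upper_triangle(dissimilarity_matrix(voxel_patterns))
    return spearman(rdm_model, rdm_brain)

# Brain patterns whose pairwise geometry mirrors the embeddings -> score 1.0
print(rsa_score([[0, 0], [0, 1], [5, 5]],
                [[1, 1, 0], [1, 2, 0], [9, 9, 9]]))
```

A high score means stimuli that are similar in embedding space also evoke similar neural patterns, which is the logic behind testing whether relational features are encoded.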
Finding the music of speech: Musical knowledge influences pitch processing in speech Few studies comparing music and language processing have adequately controlled for low-level acoustical differences, making it unclear whether differe…

Short speech utterances can be looped, and after a few repetitions the speaker sounds like they are singing; once the switch from speech to song happens, it never seems to go back. In this paper we show evidence that musical knowledge is activated after the switch. www.sciencedirect.com/science/arti...

14.03.2025 23:22 πŸ‘ 10 πŸ” 2 πŸ’¬ 1 πŸ“Œ 1

And that's a wrap on #ARO2025. Grateful for the opportunity to give my first international conference talk and for the chance to (re-)connect with an incredible group of researchers.

See you next year in Puerto Rico for #ARO2026.

@auditoryaging.bsky.social

01.03.2025 00:56 πŸ‘ 10 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0
Event Segmentation Applications in Large Language Model Enabled Automated Recall Assessments Understanding how individuals perceive and recall information in their natural environments is critical to understanding potential failures in perception (e.g., sensory loss) and memory (e.g., dementi...

New Preprint 🚨

This research with @bjherrmann.bsky.social, @alexbarnett.bsky.social, and @barense.bsky.social extends previous work exploring how LLMs can simulate human event segmentation, with applications for automated recall assessments.

arxiv.org/abs/2502.13349

24.02.2025 17:13 πŸ‘ 13 πŸ” 7 πŸ’¬ 0 πŸ“Œ 0