
Yash Mehta

@yashsmehta

Cognitive Science PhD student, Johns Hopkins 🧠 Previously: HHMI Janelia 🇺🇸, AutoML Lab 🇩🇪, Gatsby Unit UCL 🇬🇧 www.yashsmehta.com 🇮🇳

39 Followers · 37 Following · 8 Posts · Joined 18.11.2024

Latest posts by Yash Mehta @yashsmehta

High-dimensional structure underlying individual differences in naturalistic visual experience Han and Bonner reveal that individual visual experience arises from high-dimensional neural geometry distributed across multiple representational scales. By characterizing the full dimensional spectru...

Human visual cortex representations may be much higher-dimensional than earlier work suggested, but are these higher dimensions of cortical activity actually relevant to behavior? Our new paper tackles this by studying how different people experience the same movies. 🧵 www.cell.com/current-biol...

30.01.2026 18:52 👍 60 🔁 16 💬 2 📌 2

High-dimensional representations in the visual cortex: new paper from our lab, check it out!

12.12.2025 22:07 👍 2 🔁 0 💬 0 📌 0

Dimensionality reduction may be the wrong approach to understanding neural representations. Our new paper shows that across human visual cortex, dimensionality is unbounded and scales with dataset size; we show this across nearly four orders of magnitude. journals.plos.org/ploscompbiol...

11.12.2025 15:32 👍 224 🔁 64 💬 7 📌 10
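As a toy illustration of that scaling claim (my sketch, not the paper's cross-validated estimator): for data whose covariance eigenspectrum follows a power law, the number of principal components needed to capture a fixed share of the variance keeps growing as more samples are drawn, so no fixed cutoff dimensionality ever suffices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with a power-law covariance spectrum (lambda_i ~ 1/i):
# variance is spread over ever more dimensions.
D = 1000                               # ambient dimensionality (e.g. voxels)
eigvals = 1.0 / np.arange(1, D + 1)

def dim90(n_samples):
    """PCs needed to capture 90% of sample variance from n_samples draws."""
    X = rng.standard_normal((n_samples, D)) * np.sqrt(eigvals)
    X -= X.mean(axis=0)
    s = np.linalg.svd(X, compute_uv=False) ** 2   # sample eigenspectrum
    cum = np.cumsum(s) / s.sum()
    return int(np.searchsorted(cum, 0.9)) + 1

for n in (50, 200, 800):
    print(n, dim90(n))                 # estimated dimensionality keeps rising
```

With a power-law spectrum the 90%-variance dimensionality is limited only by how much data you collect, which is one way to read "unbounded and scales with dataset size."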

Our modeling framework offers a new avenue for understanding the computational principles of synaptic plasticity and learning in the brain. Research at HHMI Janelia, with fantastic collaborators Danil Tyulmankov, Adithya Rajagopalan, Glenn Turner, James Fitzgerald and @janfunkey.bsky.social!

18.11.2024 18:18 👍 1 🔁 0 💬 0 📌 0

We applied our technique to behavioral data from Drosophila in a probabilistic reward-learning experiment. Our findings reveal an active forgetting component in reward learning in flies 🪰, improving predictive accuracy over previous models. (4/5)

18.11.2024 18:18 👍 0 🔁 0 💬 1 📌 0
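A toy contrast of what an active-forgetting term changes (the parameter values and the Rescorla-Wagner baseline are illustrative assumptions, not the fitted fly model): without decay, the association for a cue that stops appearing stays frozen; with a decay term it fades on every trial.

```python
# Plain Rescorla-Wagner learner vs. one with an active-forgetting term.
eta, lam = 0.5, 0.2                    # learning rate, forgetting rate (assumed)
x = [1, 1, 1, 0, 0, 0, 0, 0]           # cue shown on the first 3 trials only
r = [1, 1, 1, 0, 0, 0, 0, 0]           # rewarded whenever shown

w_rw, w_forget = 0.0, 0.0
for xt, rt in zip(x, r):
    if xt:                             # plain RW: updates only when cue present
        w_rw += eta * (rt - w_rw)
    # forgetting model: same learning term, plus decay on every trial
    w_forget += eta * xt * (rt - w_forget) - lam * w_forget

print(round(w_rw, 3), round(w_forget, 3))
```

The plain learner retains its full association after the cue disappears, while the forgetting model's association decays trial by trial, which is the kind of signature a model-comparison fit can detect.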

This method uncovers complex rules inducing long nonlinear time dependencies, involving factors like postsynaptic activity and current synaptic weights. We validate it through simulations, successfully recovering known rules like Oja's and more intricate ones. (3/5)

18.11.2024 18:18 👍 0 🔁 0 💬 1 📌 0
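The Oja's-rule check can be reproduced in miniature (a minimal simulation sketch, not the paper's inference pipeline): Oja's update Δw = η·y·(x − y·w), with y = w·x, drives the weight vector toward the leading eigenvector of the input covariance, which gives a known ground truth to recover.

```python
import numpy as np

rng = np.random.default_rng(1)

# Oja's rule in 2D: dw = eta * y * (x - y * w), with y = w @ x.
C = np.array([[3.0, 1.0],
              [1.0, 2.0]])             # input covariance (ground truth)
L = np.linalg.cholesky(C)

w = rng.standard_normal(2)
eta = 0.01
for _ in range(20000):
    x = L @ rng.standard_normal(2)     # input sample with covariance C
    y = w @ x                          # postsynaptic activity
    w += eta * y * (x - y * w)         # Hebbian growth + implicit normalization

evals, evecs = np.linalg.eigh(C)
v = evecs[:, -1]                       # leading eigenvector of C
print(abs(w @ v), np.linalg.norm(w))   # alignment and norm both near 1
```

Because the fixed point is known analytically, rules like this make good validation targets before moving to rules with longer, nonlinear time dependencies.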
NeurIPS 2024: Model-Based Inference of Synaptic Plasticity Rules Inferring the synaptic plasticity rules that govern learning in the brain is a key challenge in neuroscience. We present a novel computational method to infer these rules from experimental data, appli...

website: yashsmehta.com/plasticity-p... Our approach approximates plasticity rules using parameterized functions: either truncated Taylor series for theoretical insights or multilayer perceptrons. We optimize these parameters via gradient descent over entire trajectories to match observed data (2/5)

18.11.2024 18:18 👍 0 🔁 0 💬 1 📌 0
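A minimal sketch of the trajectory-fitting idea, under stated assumptions: the feature terms, the Oja-like ground-truth rule, and finite-difference gradient descent with backtracking are my illustrative stand-ins for the paper's Taylor/MLP parameterization and autodiff pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fit a truncated-Taylor plasticity rule to an "observed" weight trajectory
# by gradient descent on the trajectory-matching loss.
X = rng.standard_normal(100)                       # presynaptic input stream

def features(x, y, w):
    # candidate Taylor-style terms: pre, post, Hebbian, weight-dependent decay
    return np.array([x, y, x * y, y * y * w])

def simulate(theta, w0=0.1):
    w, traj = w0, []
    for x in X:
        y = w * x                                  # postsynaptic activity
        # clip keeps the toy simulation bounded for unstable candidate rules
        w = float(np.clip(w + theta @ features(x, y, w), -5.0, 5.0))
        traj.append(w)
    return np.array(traj)

true_theta = np.array([0.0, 0.0, 0.05, -0.05])     # Oja-like: eta*y*(x - y*w)
target = simulate(true_theta)                      # stands in for observed data

def loss(theta):
    return np.mean((simulate(theta) - target) ** 2)

theta = np.zeros(4)
cur = loss(theta)
lr, eps = 0.1, 1e-5
for _ in range(200):
    grad = np.array([(loss(theta + eps * e) - loss(theta - eps * e)) / (2 * eps)
                     for e in np.eye(4)])
    cand = theta - lr * np.clip(grad, -1.0, 1.0)   # clipped gradient step
    new = loss(cand)
    if new < cur:                                  # keep only improving steps
        theta, cur = cand, new
    else:
        lr *= 0.5                                  # backtrack on overshoot
print(theta.round(3), cur)
```

Because the update is linear in the Taylor coefficients but the trajectory is not, the loss must be optimized through the whole simulated rollout, which is exactly why gradient descent over entire trajectories is needed.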

🚀 Excited to share that our paper has been accepted at #NeurIPS! 🎉 We developed a deep learning framework that infers local learning algorithms in the brain by fitting behavioral or neural activity trajectories during learning. We validated it on synthetic data and tested it on 🪰 behavioral data (1/5 🧵)

18.11.2024 18:18 👍 13 🔁 2 💬 1 📌 1

Interesting approach to estimate the physiological update rules of synapses: yashsmehta.com/plasticity-p...

18.11.2024 13:53 👍 24 🔁 5 💬 2 📌 0

๐Ÿ™Œ๐Ÿผ๐Ÿ™Œ๐Ÿผ

18.11.2024 16:50 👍 1 🔁 0 💬 0 📌 0

Thank you, Konrad!

18.11.2024 16:35 👍 1 🔁 0 💬 0 📌 0