Kanaka Rajan's Avatar

Kanaka Rajan

@kanakarajanphd

Associate Professor at Harvard & Kempner Institute. Applying computational frameworks & machine learning to decode multi-scale neural processes. Marathoner. Rescue dog mom. https://www.rajanlab.com/

3,641
Followers
243
Following
47
Posts
11.09.2023
Joined

Latest posts by Kanaka Rajan @kanakarajanphd

Interesting physics punctuated by incredible videos!

03.03.2026 22:30 👍 7 🔁 1 💬 0 📌 0
Neuroscience has a species problem
If neuroscience is serious about building general principles of brain function, cross-species dialogue must become a core organizing principle.

Differences between species should be treated as informative constraints that refine theory, not as inconsistencies to be explained away, writes @suthanalab.bsky.social.

#neuroskyence

www.thetransmitter.org/animal-model...

19.02.2026 17:04 👍 22 🔁 9 💬 0 📌 1

Can we predict a thought before it happens?

To know what one neuron will do next, you have to know what the entire brain is doing right now.

In our latest @kempnerinstitute.bsky.social Deeper Learning blog, @duranrin.bsky.social introduces POCO, a tool paving the way for adaptive neurotechnology.

26.02.2026 15:33 👍 21 🔁 6 💬 0 📌 0

Same task, different strategy ↔️

Why do identical neural network models develop different internal strategies to solve the same problem?

@annhuang42.bsky.social explores the factors driving variability in task-trained networks in our latest @kempnerinstitute.bsky.social Deeper Learning blog.

09.02.2026 19:07 👍 46 🔁 10 💬 1 📌 0
Gradient Descent as Loss Landscape Navigation: a Normative... Learning rules (prescriptions for updating model parameters to improve performance) are typically assumed rather than derived. Why do some learning rules work better than others, and under what...

Enormous thanks to John Vastola for leading this work, @gershbrain.bsky.social for the collaboration & @harvardmed.bsky.social, @kempnerinstitute.bsky.social for their support ✨

Read the full paper here & let us know what you think: openreview.net/forum?id=oMi...
(6/6)

16.12.2025 19:29 ๐Ÿ‘ 5 ๐Ÿ” 1 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

For any ML folks out there, our framework helps clarify why certain algorithms work under specific assumptions. It can justify design choices & suggest new directions, but empirical testing is still essential to validate what works in practice (5/6)

16.12.2025 19:29 ๐Ÿ‘ 5 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

What does this mean for understanding real brains? 🧠

Neurons face very different constraints from AI models, with limited information & noisy signals. Learning rules that look "messy" next to those used in AI might actually be optimal within a biological system (4/6)

16.12.2025 19:29 ๐Ÿ‘ 3 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0
Post image

The answer depends on your constraints, like how far ahead you can plan, how much of the landscape you can see, and what kinds of moves you can make.

Our framework can derive the optimal strategy in each case. (3/6)

16.12.2025 19:29 ๐Ÿ‘ 5 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0
Post image

Standard learning methods only ask what the next best step is & take it. We reframed learning as navigating a landscape, where the goal is to find the best path over many steps.

This lets us ask a new question: what's the optimal way to navigate? 🗺️ (2/6)

16.12.2025 19:29 ๐Ÿ‘ 3 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0
Post image

New paper for #neurips2025!

AI models adjust millions of internal settings to get better at a task. But how are these adjustments determined? For decades, we've mostly figured this out through trial & error.

We took a different approach... 🧵 (1/6)

🔗 openreview.net/forum?id=oMi...

16.12.2025 19:29 ๐Ÿ‘ 49 ๐Ÿ” 14 ๐Ÿ’ฌ 3 ๐Ÿ“Œ 2
Conference poster schedule

Workshop poster schedule

Celebrating the Rajan Lab's papers at #NeurIPS2025! Stop by to chat with these talented students and postdocs 🎉

04.12.2025 16:08 👍 16 🔁 3 💬 0 📌 0
Survey: What are neuroscience's most transformative new tools?
Which new tools (including artificial intelligence, deep-learning methods, genetic tools and advanced neuroimaging) are making the largest impact?

To identify the most transformative tools and technologies in the past five years, @thetransmitter.bsky.social surveyed readers and contributors and worked with a market-research firm to interview neuroscientists around the world. See what they had to say: bit.ly/3LYTNoB

#StateOfNeuroscience

21.11.2025 14:04 👍 13 🔁 5 💬 0 📌 0
Post image

๐Ÿ“Excited to share that our paper was selected as a Spotlight at #NeurIPS2025!

arxiv.org/pdf/2410.03972

It started from a question I kept running into:

When do RNNs trained on the same task converge/diverge in their solutions?
🧵⬇️

24.11.2025 16:43 👍 108 🔁 27 💬 5 📌 6
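One common way to quantify whether two networks trained on the same task converged to the same solution is to compare their hidden-state geometry, e.g. with linear centered kernel alignment (CKA). This is a generic sketch on synthetic activations, not necessarily the analysis used in the paper:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two activation matrices (samples x units).
    1.0 means identical representations up to rotation/isotropic scaling."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
H1 = rng.standard_normal((200, 64))              # hidden states, network 1
Q = np.linalg.qr(rng.standard_normal((64, 64)))[0]
H2 = H1 @ Q                                      # "converged": same geometry, rotated basis
H3 = rng.standard_normal((200, 64))              # "diverged": unrelated dynamics
```

Here `linear_cka(H1, H2)` comes out at 1.0 (CKA is invariant to orthogonal transforms), while `linear_cka(H1, H3)` is much lower — the kind of contrast a convergence/divergence analysis looks for.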

Awesome work co-led by my student @annhuang42.bsky.social and @neurostrow.bsky.social on disentangling, then comparing recurrent and externally driven dynamics!

Look out for more incredible work from @annhuang42.bsky.social coming up at NeurIPS 👀

13.11.2025 20:58 👍 19 🔁 2 💬 0 📌 0
Post image

Congrats to Ann Huang for an excellent presentation at last week's @kempnerinstitute.bsky.social all-hands! 👏

Ann shared exciting updates about our InputDSA tool - more to come soon. Thrilled she had the chance to present to our engaged and supportive community.

16.10.2025 21:58 👍 30 🔁 2 💬 1 📌 0
GitHub - yuvenduan/POCO: Official Implementation for POCO: Scalable Neural Forecasting through Population Conditioning

(8/8) To apply POCO to your own work, find our open-source code on GitHub below 👇

github.com/yuvenduan/POCO

12.09.2025 20:46 ๐Ÿ‘ 4 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0
POCO: Scalable Neural Forecasting through Population Conditioning Predicting future neural activity is a core challenge in modeling brain dynamics, with applications ranging from scientific investigation to closed-loop neurotechnology. While recent models of populat...

(7/8) Thanks to @deisseroth.bsky.social, @mishaahrens.bsky.social & Chris Harvey for their contributions, and to @kempnerinstitute.bsky.social & @harvardmed.bsky.social for supporting computational neuroscience research.

Read the paper here: arxiv.org/abs/2506.14957

12.09.2025 20:46 ๐Ÿ‘ 6 ๐Ÿ” 2 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0
Post image

(6/8) Combined with its prediction speed and steady improvement from longer recordings/more sessions, POCO shows enormous potential for use in larger brains & real-time neurotechnologies like "neuro-foundation models" for brain-computer interfaces (BCI).

12.09.2025 20:46 ๐Ÿ‘ 2 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

(5/8) Other time-series forecasting models perform well on synthetic/simulated data 🤖

POCO dominates in context-dense predictions based on REAL neural data 🧠

12.09.2025 20:32 ๐Ÿ‘ 2 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0
Post image

(4/8) Beyond neural predictions, POCO's learned unit embeddings independently reproduce brain region clustering without any anatomical labels.

That means at single-cell resolution across entire brains, POCO mimics biological organization purely from neural activity patterns ✨

12.09.2025 20:32 ๐Ÿ‘ 2 ๐Ÿ” 1 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

(3/8) POCO forecasts how the brain will behave up to ~15 seconds into the future across behavioral data & species 🔮

After pre-training, POCO's speed & flexibility allow it to adapt to new recordings with minimal fine-tuning, opening the door for real-time applications.

12.09.2025 20:32 ๐Ÿ‘ 2 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0
Post image

(2/8) POCO was trained on spontaneous & task-specific behavior data from zebrafish, mice, & C. elegans. It combines a local forecaster with a population encoder capturing brain-wide patterns, so we track each neuron individually AND how the whole brain affects each cell 🧠

12.09.2025 20:32 ๐Ÿ‘ 2 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0
Post image

(1/8) New paper from our team!

Yu Duan & Hamza Chaudhry introduce POCO, a tool for predicting brain activity at the cellular & network level during spontaneous behavior.

Find out how we built POCO & how it changes neurobehavioral research 👇

arxiv.org/abs/2506.14957

12.09.2025 20:32 ๐Ÿ‘ 53 ๐Ÿ” 14 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0
Post images (×4)

Thanks for having me at @camp_course and the @iitmadras Brain Center during my visit to India this summer! 🥭

It was lovely to be back home, and a pleasure to work with the young scientists there who are finding their path in computational neuroscience 🧠

09.09.2025 20:23 👍 21 🔁 3 💬 1 📌 0
Neural population-based approaches have opened new windows into neural computations and behavior Neural manifold properties can help us understand how animal brains deal with complex information, execute flexible behaviors and reuse common computations.

Brilliant piece by @mattperich.bsky.social on neural manifolds 🌟

His essay in @thetransmitter.bsky.social shows how this view changes the game in computational neuroscience, reproducing behavioral flexibility within finite neural constraints 🧠

www.thetransmitter.org/neural-dynam...

14.08.2025 17:34 👍 49 🔁 11 💬 0 📌 0
The Crearte Foundation | #ArtScience on Instagram: "When a neurobiologist is also a comic artist, science can have a whole new storyline. In a 2022 collaboration between the Rajan Lab at Harvard Med..." 6 likes, 0 comments - crearte.ca on July 28, 2025

Check out @jordancollver.bsky.social's great illustration of modular RNNs trained to work like a biological brain 🦾🧠

Thanks to Crearte for featuring our collaboration!

www.instagram.com/crearte.ca/p...

06.08.2025 19:04 👍 19 🔁 4 💬 2 📌 0

(7/7) Congrats to Riley & Ryan on this work. Also huge thanks to collaborators Felix Berg, @raymondrchua.bsky.social, John Vastola, @joshlunger.bsky.social, Billy Qian & everyone who helps us kick the tires.

02.07.2025 18:33 ๐Ÿ‘ 5 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

(6/7) A 4096-unit agent that remembers, plans & navigates risks gives a "window-sized" brain we can watch neuron-by-neuron. ForageWorld is a perfect sandbox for testing cognitive map theories & offers a blueprint for ultra-efficient autonomous AI systems in a naturalistic world.

02.07.2025 18:33 ๐Ÿ‘ 7 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0
Post image

(5/7) Analyzing the trained agent reveals an interpretable neural GPS: past & future positions can be linearly decoded over long horizons from the agent's 'neural' activity, and a lightweight "predict-its-own-position" signal sharpens its compass even further.

02.07.2025 18:33 ๐Ÿ‘ 4 ๐Ÿ” 1 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0
Post image

(4/7) What we see is planning & recall over hundreds of timesteps!

After a quick wander, the agent switches from exploring to visiting patches from memory: revisiting food last seen 500-1000 steps earlier, skirting predator zones & timing resource visits.

02.07.2025 18:33 ๐Ÿ‘ 5 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0