
Friedemann Zenke

@fzenke

Computational neuroscientist at the FMI. www.zenkelab.org

747 Followers · 315 Following · 24 Posts · Joined 07.01.2025

Latest posts by Friedemann Zenke @fzenke


Congrats to Fabian Mikulash, a postdoc in the @fzenke.bsky.social lab, for being awarded a Marie Skłodowska-Curie Actions fellowship! His project aims to develop a new theory—tested with real brain data—explaining how neurons decide when to trust what we see versus what we expect 🧠

13.02.2026 12:35 👍 7 🔁 1 💬 0 📌 0

Our paper is out in @natneuro.nature.com!

www.nature.com/articles/s41...

We develop a geometric theory of how neural populations support generalization across many tasks.

@zuckermanbrain.bsky.social
@flatironinstitute.org
@kempnerinstitute.bsky.social

1/14

10.02.2026 15:56 👍 273 🔁 100 💬 7 📌 1
A functional influence based circuit motif that constrains the set of plausible algorithms of cortical function
There are several plausible algorithms for cortical function that are specific enough to make testable predictions of the interactions between functionally identified cell types. Many of these algorithms are based on some variant of predictive processing. Here we set out to experimentally distinguish between two such predictive processing variants. A central point of variability between them lies in the proposed vertical communication between layer 2/3 and layer 5, which stems from the diverging assumptions about the computational role of layer 5. One assumes a hierarchically organized architecture and proposes that, within a given node of the network, layer 5 conveys unexplained bottom-up input to prediction error neurons of layer 2/3. The other proposes a non-hierarchical architecture in which internal representation neurons of layer 5 provide predictions for the local prediction error neurons of layer 2/3. We show that the functional influence of layer 2/3 cell types on layer 5 is incompatible with the hierarchical variant, while the functional influence of layer 5 cell types on prediction error neurons of layer 2/3 is incompatible with the non-hierarchical variant. Given these data, we can constrain the space of plausible algorithms of cortical function. We propose a model for cortical function based on a combination of a joint embedding predictive architecture (JEPA) and predictive processing that makes experimentally testable predictions.

Our work with @georgkeller.bsky.social on testing predictive processing (PP) models in cortex is out on bioRxiv now! www.biorxiv.org/content/10.6... A short thread below on our findings and thoughts on where to go beyond PP.

30.01.2026 14:37 👍 42 🔁 14 💬 2 📌 1
Attention-like regulation of theta sweeps in the brain's spatial navigation circuit
Spatial attention supports navigation by prioritizing information from selected locations. A candidate neural mechanism is provided by theta-paced sweeps in grid- and place-cell population activity, which sample nearby space in a left-right-alternating pattern coordinated by parasubicular direction signals. During exploration, this alternation promotes uniform spatial coverage, but whether sweeps can be flexibly tuned to locations of particular interest remains unclear. Using large-scale Neuropixels recordings in freely-behaving rats, we show that sweeps and direction signals are rapidly and dynamically modulated: they track moving targets during pursuit, precede orienting responses during immobility, and reverse during backward locomotion — without prior spatial learning. Similar modulation occurs during REM sleep. Canonical head-direction signals remain head-aligned. These findings identify sweeps as a flexible, attention-like mechanism for selectively sampling allocentric cognitive maps.

The hippocampal map has its own attentional control signal!
Our new study reveals that theta #sweeps can be instantly biased towards behaviourally relevant locations. See 📹 in post 4/6 and preprint here 👉
www.biorxiv.org/content/10.6...
🧵(1/6)

28.01.2026 10:03 👍 183 🔁 62 💬 4 📌 10

With some trepidation, I'm putting this out into the world:
gershmanlab.com/textbook.html
It's a textbook called Computational Foundations of Cognitive Neuroscience, which I wrote for my class.

My hope is that this will be a living document, continuously improved as I get feedback.

09.01.2026 01:27 👍 585 🔁 237 💬 16 📌 10
Careers on Simons Foundation

Joint junior faculty position in Computational Neuroscience, between the Center for Computational Neuroscience at @flatironinstitute.org and the CUNY Graduate Center @thegraduatecenter.bsky.social . Application deadline: 16 Jan 2026!

www.simonsfoundation.org/flatiron/car...
cuny.jobs/new-york-ny/...

06.01.2026 03:14 👍 38 🔁 22 💬 1 📌 1

Thanks, Rich!

20.12.2025 12:45 👍 1 🔁 0 💬 0 📌 0

Thanks so much.

20.12.2025 12:45 👍 0 🔁 0 💬 0 📌 0

Thank you!

19.12.2025 09:33 👍 0 🔁 0 💬 0 📌 0

I’m very grateful to the FMI, the tenure committee, inspiring colleagues, and all the hidden supporters who made this possible. Huge thanks to past and present group members for their curiosity and creativity. Excited for the next chapter.

19.12.2025 07:44 👍 55 🔁 5 💬 7 📌 0
Three types of remapping with linear decoders: A population-geometric perspective
Author summary: Place cells of the hippocampus form unique activity patterns in different environments, a process called remapping. However, it is not clear what the relationship is between changes in ...

I’m happy to share some recent work out in PLOS Computational Biology with @guille-martin.bsky.social and Christian Machens at @champalimaudr.bsky.social . We use neural coding and population geometry to study different perspectives on hippocampal remapping.

journals.plos.org/ploscompbiol...

09.12.2025 15:09 👍 28 🔁 6 💬 1 📌 1
Lindsay Lab - Postdoc Position
Artificial neural networks applied to psychology, neuroscience, and climate change

Spread the word: I'm looking to hire a postdoc to explore the concept of attention (as studied in psych/neuro, not the transformer mechanism) in large Vision-Language Models. More details here: lindsay-lab.github.io/2025/12/08/p...
#MLSky #neurojobs #compneuro

08.12.2025 23:53 👍 125 🔁 91 💬 2 📌 0

Finally got the job ad—looking for 2 PhD students to start spring next year:

www.gao-unit.com/join-us/

If comp neuro, ML, and AI4Neuro are your thing, or you just nerd out over brain recordings, apply!

I'm at NeurIPS. DM me here / on the conference app, or email me if you want to meet 🏖️🌮

03.12.2025 09:36 👍 81 🔁 51 💬 1 📌 5

Come work with us!!!

04.12.2025 07:36 👍 12 🔁 6 💬 0 📌 1
Joint modelling of brain and behaviour dynamics with artificial intelligence - Nature Reviews Neuroscience
Artificial intelligence is rapidly advancing our mechanistic understanding of the shared structure between the brain and higher-order behaviours. In this Review, Mathis and Mathis synthesize state-of-...

Joint modelling of brain and behaviour dynamics with artificial intelligence

www.nature.com/articles/s41...

03.12.2025 16:59 👍 118 🔁 29 💬 2 📌 2

Thanks! There is a notable difference, though: in Nejad et al. (2025), L5 is trained with a reconstruction loss, i.e., an autoencoder (see Eqs. 4–6 from the methods below). L2/3 then predicts the autoencoder's latent state via a supervised next-step loss. That shouldn't be conflated with a JEPA.
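The distinction between those two objectives can be made concrete with a minimal numpy sketch. Everything here is illustrative (layer names, dimensions, and weight matrices are made up, not taken from Nejad et al.); the point is only where each loss lives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-ins for L5 (autoencoder) and L2/3 (predictor).
D_IN, D_LAT = 8, 3
W_enc = rng.normal(size=(D_LAT, D_IN))    # hypothetical L5 encoder
W_dec = rng.normal(size=(D_IN, D_LAT))    # hypothetical L5 decoder
W_pred = rng.normal(size=(D_LAT, D_LAT))  # hypothetical L2/3 next-step predictor

x_t, x_next = rng.normal(size=D_IN), rng.normal(size=D_IN)

# (1) L5 trained with a reconstruction loss, i.e. an autoencoder:
# the error is computed back in input space.
z_t = W_enc @ x_t
recon_loss = np.mean((W_dec @ z_t - x_t) ** 2)

# (2) L2/3 trained with a supervised next-step loss on L5's latent:
# the target is produced by the already-trained (here fixed) encoder.
z_next_target = W_enc @ x_next
next_step_loss = np.mean((W_pred @ z_t - z_next_target) ** 2)

# A JEPA instead optimizes prediction purely in representation space,
# with the target encoder itself shaped by the predictive objective
# (e.g. an EMA copy) and no reconstruction of the input anywhere.
print(round(recon_loss, 3), round(next_step_loss, 3))
```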

29.11.2025 13:58 👍 4 🔁 0 💬 1 📌 0
Predicting upcoming visual features during eye movements yields scene representations aligned with human visual cortex
Scenes are complex, yet structured collections of parts, including objects and surfaces, that exhibit spatial and semantic relations to one another. An effective visual system therefore needs unified ...

🚨New Preprint!
How can we model natural scene representations in visual cortex? One solution lies in active vision: predict the features of the next glimpse! arxiv.org/abs/2511.12715

+ @adriendoerig.bsky.social , @alexanderkroner.bsky.social , @carmenamme.bsky.social , @timkietzmann.bsky.social
🧵 1/14

18.11.2025 12:34 👍 85 🔁 28 💬 3 📌 5

6/ Finally, we build a hierarchical JEPA version of our model and outline how its architecture could map onto cortical microcircuits, toward a predictive-processing framework with mechanistic links to neuroanatomy. Read the full story here 👇
🔗 doi.org/10.1101/2025...

27.11.2025 08:27 👍 12 🔁 1 💬 1 📌 0

5/ Importantly, RPL captures representational motifs across multiple species and cortical areas: on the one hand, successor-like structures resembling those in human V1; on the other, abstract sequence representations comparable to those in macaque PFC.

27.11.2025 08:27 👍 7 🔁 0 💬 1 📌 0

4/ From raw video streams and without supervision, RPL learns: invariant object identity, equivariant motion variables (position, velocity, orientation, etc.), and a world model that allows simulating plausible motion trajectories entirely in latent space.

27.11.2025 08:27 👍 11 🔁 0 💬 1 📌 0

3/ Recent studies indicate that, aside from plausibility, representation-space predictive models like JEPAs also learn more abstract representations than input-space generative models, which tend to focus on low-level details (cf @yann-lecun.bsky.social)

27.11.2025 08:26 👍 10 🔁 1 💬 1 📌 0

2/ RPL operates entirely in latent space, avoiding the anatomical issues of predictive coding models that compute prediction errors in input space. Instead, the network predicts future internal representations through a specific recurrent circuit structure.
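The core idea, prediction errors computed between internal states rather than reconstructed inputs, can be sketched as a toy recurrent network in numpy. None of the dimensions, nonlinearities, or weights here reflect the actual RPL circuit; this is only a sketch of where the error signal lives.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes, not the preprint's architecture.
D_IN, D_H = 6, 4
W_in = 0.5 * rng.normal(size=(D_H, D_IN))   # feedforward encoding
W_rec = 0.5 * rng.normal(size=(D_H, D_H))   # recurrent predictor

xs = rng.normal(size=(10, D_IN))            # a short input stream
h_prev = np.tanh(W_in @ xs[0])
loss = 0.0
for x in xs[1:]:
    h_pred = np.tanh(W_rec @ h_prev)        # predict the next internal state
    h_now = np.tanh(W_in @ x)               # the actual next representation
    loss += np.mean((h_pred - h_now) ** 2)  # error computed in latent space,
                                            # never back in input space
    h_prev = h_now
loss /= len(xs) - 1
print(round(loss, 3))
```

Gradient descent on `loss` would shape `W_rec` (and, in a full JEPA-style setup with a suitably regularized target, `W_in`) without any decoder reconstructing the stimulus.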

27.11.2025 08:25 👍 7 🔁 0 💬 1 📌 0

1/6 New preprint 🚀 How does the cortex learn to represent things and how they move without reconstructing sensory stimuli? We developed a circuit-centric recurrent predictive learning (RPL) model based on JEPAs.
🔗 doi.org/10.1101/2025...
Led by @atenagm.bsky.social @mshalvagal.bsky.social

27.11.2025 08:24 👍 141 🔁 42 💬 3 📌 4

Excited to see the paper fully published. It's an important milestone for training SNNs with exact gradients, replacing our earlier trick of a "delay line augmentation" to capture temporal relationships. Delays can now be learnt alongside weights naturally. Amazing work @mbalazs98.bsky.social !

25.11.2025 18:24 👍 21 🔁 8 💬 0 📌 0
Exploiting heterogeneous delays for efficient computation in low-bit neural networks
Neural networks rely on learning synaptic weights. However, this overlooks other neural parameters that can also be learned and may be utilized by the brain. One such parameter is the delay: the brain...

Psst - neuromorphic folks. Did you know that you can solve the SHD dataset with 90% accuracy using only 22 kb of parameter memory by quantising weights and delays? Check out our preprint with @pengfei-sun.bsky.social and @danakarca.bsky.social, or read the TLDR below. 👇🤖🧠🧪 arxiv.org/abs/2510.27434
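As a rough illustration of how quantising both parameter types shrinks memory, here is a generic uniform quantiser in numpy. This is not the scheme from the preprint; the bit widths, ranges, and synapse count are made-up assumptions chosen only to show the arithmetic.

```python
import numpy as np

def quantize(x, n_bits, x_max):
    """Uniform symmetric quantiser: clip to [-x_max, x_max] and snap
    onto 2**n_bits evenly spaced levels (a generic sketch)."""
    levels = 2 ** n_bits - 1
    x = np.clip(x, -x_max, x_max)
    return np.round((x + x_max) / (2 * x_max) * levels) / levels * 2 * x_max - x_max

rng = np.random.default_rng(2)
weights = rng.normal(scale=0.5, size=1000)
delays = rng.uniform(0, 25, size=1000)   # ms, illustrative range

w_q = quantize(weights, n_bits=2, x_max=1.0)  # 2-bit weights: 4 levels
d_q = np.round(delays)                        # delays on a 1 ms grid: 5 bits cover 0-25 ms

# Parameter memory: 1000 synapses * (2 + 5) bits = 7000 bits, i.e. 875 bytes,
# versus 1000 * 2 * 32 bits = 8 kB for float32 weights and delays.
print(len(np.unique(w_q)), int(d_q.max()))
```

The appeal for neuromorphic hardware is that the delay adds temporal expressivity at a few bits per synapse, which low-bit weights alone cannot provide.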

13.11.2025 17:40 👍 43 🔁 16 💬 3 📌 3
SNUFA 2025: Spiking Neural networks as Universal Function Approximators

Spiking NN fans - the #SNUFA workshop (Nov 5-6) agenda is finalised and online now. Make sure to register (free) soon. (Note you can register for either day and come to both.)

Agenda: snufa.net/2025/
Registration: www.eventbrite.co.uk/e/snufa-2025...

Thanks to all who voted on abstracts!

🤖🧠🧪

23.10.2025 16:17 👍 31 🔁 16 💬 0 📌 8
SNUFA 2025: Spiking Neural networks as Universal Function Approximators

Message for participants of the #SNUFA 2025 spiking neural network workshop. We got almost 60 awesome abstract submissions, and we'd now like your help to select which ones should be offered talks. Follow the "abstract voting" link at snufa.net/2025/ to take part. It should take <15m. Thanks! ❤️

01.10.2025 19:16 👍 18 🔁 10 💬 0 📌 1

Interested in doing a Ph.D. to work on building models of the brain/behavior? Consider applying to graduate schools at CU Anschutz:
1. Neuroscience www.cuanschutz.edu/graduate-pro...
2. Bioengineering engineering.ucdenver.edu/bioengineeri...

You could work with several comp neuro PIs, including me.

27.09.2025 20:30 👍 52 🔁 30 💬 1 📌 4

I’m super excited to finally put my recent work with @behrenstimb.bsky.social on bioRxiv, where we develop a new mechanistic theory of how PFC structures adaptive behaviour using attractor dynamics in space and time!

www.biorxiv.org/content/10.1...

24.09.2025 09:52 👍 219 🔁 86 💬 9 📌 9

Truly honored (and a little overwhelmed) to see our work featured in The Transmitter's "This Paper Changed My Life." Huge thanks to @neural-reckoning.org for the kind words - and to our amazing community that keeps pushing spiking neural network research forward 🙏

17.09.2025 14:50 👍 45 🔁 8 💬 0 📌 0