Thanks, David!
How are neural manifolds and single-neuron response properties related to circuit structure?
How degenerate are these relationships?
Theory and a plethora of examples can be found in the following paper, out today in Neuron.
It was a privilege to co-supervise first author @lpezon.bsky.social!
These workshops are absolutely amazing. I had the privilege to attend the theoretical neuroscience workshop in 2024; it was a memorable week.
Applications are now open for the summer school: Mathematical Methods in Computational Neuroscience
Apply before March 15: www.compneuronrsn.org
Located in beautiful Eresfjord, Norway
July 6-24
Supported by the @kavlifoundation.org
In collaboration with @kavlintnu.bsky.social
Turns out deep neural networks are better at extracting relevant features than I am...
Meet "DeepUnitMatch", a DNN version of our neuron tracking tool:
www.biorxiv.org/content/10.6...
It has been a privilege to continue the collaboration with Célian, and to co-supervise the talented Suyash & Wentao!
Summer school on Neuro AI in Cambridge.
Registration Deadline: 16 Feb 2026
Speaker line-up:
G. Bellec
A. Billard
R. Bogacz
R. Ponte Costa
W. Gerstner
M. Giugliano
L. Hunt
M. Sahani
P. Series
P. Tino
www.fens.org/news-activit...
Thanks! But is the tail exponent really a continuous function of the spectrum in L1 norm? Can't small errors in the last eigenvalues have a large effect on the estimated tail exponent? On noisy data, I would be surprised if the two-step approach you suggest would suffice to guarantee consistency...
Estimating the power-law exponent of the tail of the eigenspectrum from noisy data is a difficult problem. Here is an approach based on eigenmoment estimation.
A hard stats problem remains open: Can we find an estimator of the tail exponent for which we can PROVE unbiasedness, consistency, etc.?
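To see why the problem is hard, here is a deliberately naive baseline (my own sketch, not the eigenmoment approach from the paper): fit the decay exponent of a spectrum λ_i ~ i^(-α) by least squares on the log-log rank plot. It recovers α exactly on a clean power law, but the choice of `fit_range` is arbitrary, and on noisy spectra the fit is biased, which is exactly why a provably unbiased, consistent estimator remains open.

```python
import numpy as np

def tail_exponent_loglog(eigvals, fit_range=(10, 100)):
    """Estimate alpha in lambda_i ~ i^(-alpha) by least squares on the
    log-log rank plot, using ranks fit_range[0]..fit_range[1].
    Naive on purpose: the fit range is an arbitrary choice, and noise in
    the smallest eigenvalues biases the slope."""
    lam = np.sort(np.asarray(eigvals))[::-1]   # eigenvalues, descending
    lo, hi = fit_range
    ranks = np.arange(lo, hi + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(lam[lo - 1:hi]), 1)
    return -slope

# Synthetic spectrum with known exponent alpha = 1.5
true_alpha = 1.5
lam = np.arange(1, 1001, dtype=float) ** (-true_alpha)
alpha_hat = tail_exponent_loglog(lam)
print(alpha_hat)  # recovers 1.5 on clean, noise-free data
```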
Talk on "High-dimensional neuronal activity from low-dimensional latent dynamics" at NeurIPS 2025 with @haydari.bsky.social now accessible online:
neurips.cc/virtual/2025...
Please reach out if you have any comments or ideas for follow-up work!
If you recently finished your PhD in ML for life science and are looking for a new job early next year, please apply! Gรถttingen is a lovely town and a great scientific environment.
Congratulations!
Very excited about this new work from the omnipotent Owen, with me and Ashok Litwin-Kumar! Can we reconcile low- and high-dimensional activity in neural circuits by recognizing that these circuits ~multitask~?
(Plausibly, yes!)
I'm more and more convinced that low-dimensional manifolds in the brain are just an artifact of the experimental designs and analyses we use...
"Low-D neural activity" would probably look high-D if you plot the spectrum on a log-log scale. @engeltatiana.bsky.social made a similar point recently on @braininspired.bsky.social. It also agrees with our claim with @bio-emergent.bsky.social that low-D and high-D can be two sides of the same coin!
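A toy illustration of the "two sides of the same coin" point (my own sketch, not from either paper): a covariance spectrum decaying as i^(-1.5) looks low-dimensional by the usual variance-explained criterion, yet is scale-free, with no dimensionality cutoff, on a log-log plot.

```python
import numpy as np

# Hypothetical power-law covariance spectrum: lambda_i ~ i^(-1.5)
n = 10_000
ranks = np.arange(1, n + 1)
lam = ranks ** -1.5

# "Low-D" by the usual criterion: how many modes capture 90% of variance?
var_explained = np.cumsum(lam) / lam.sum()
k90 = int(np.searchsorted(var_explained, 0.9)) + 1
print(k90)  # a few dozen modes out of 10,000

# ...but on a log-log plot the spectrum is a straight line (scale-free):
slope = np.polyfit(np.log(ranks), np.log(lam), 1)[0]
print(slope)  # slope -1.5, with no cutoff anywhere
```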
Best part about being a scientist is the people I get to work with. Valentin (@bio-emergent.bsky.social) and I got to give a talk at NeurIPS, bridging a gap between low- and high-dim perspectives of the brain. Thankfully, the audience was (somewhat) more awake than the San Diego desert.
Thank you for having me on BrainInspired, Paul @braininspired.bsky.social! It was such an honor to be on my favorite show, a rare place where we can leisurely talk about manifolds, latent circuits, power laws, and other esoteric ideas, and still be taken seriously, knowing they are all real.
Tomorrow at #NeurIPS2025! Oral at 10 am in UL Ballroom 20D and poster #2016 at 11 am. @haydari.bsky.social and I are looking forward to hearing your thoughts.
Excited to present our latest work at #Neurips25! Together with @avm.bsky.social, we discover channels to infinity: regions in neural network loss landscapes where parameters diverge to infinity (in regression settings!)
We find that MLPs in these channels can take derivatives and compute GLUs!
1. New preprint resolving a conundrum in systems neuroscience with an AI scientist, and humans Reilly Tilbury, Dabin Kwon, @haydari.bsky.social, @jacobmratliff.bsky.social, @bio-emergent.bsky.social, @carandinilab.net, @kevinjmiller.bsky.social, @neurokim.bsky.social
www.biorxiv.org/content/10.1...
Excited to share our new work with @engeltatiana.bsky.social!
RNNs are often used to explore how the brain may solve specific tasks. We show that, depending on the architecture, RNNs find distinct circuit solutions, behaving differently when exposed to novel stimuli.
www.nature.com/articles/s42...
To the math/comp neuro folks in Oxford. I'll be giving a math neuro talk in the Partial Differential Equations Seminar at the MI on Monday 27 Oct at 16:30 (details in the link).
www.maths.ox.ac.uk/node/74242
I'll be talking about the "universality" of spatially extended mean-field PDEs (see thread below)
A study led by Cina Aghamohammadi is now out in @natcomms.nature.com! We developed a mathematical framework for partitioning spiking variability, which revealed that spiking irregularity is nearly invariant for each neuron and decreases along the cortical hierarchy.
www.nature.com/articles/s41...
Thanks, Matthijs!
Message for participants of the #SNUFA 2025 spiking neural network workshop. We got almost 60 awesome abstract submissions, and we'd now like your help to select which ones should be offered talks. Follow the "abstract voting" link at snufa.net/2025/ to take part. It should take <15 min. Thanks!
New in @pnas.org: doi.org/10.1073/pnas...
We study how humans explore a 61-state environment with a stochastic region that mimics a "noisy TV."
Results: Participants keep exploring the stochastic part even when it's unhelpful, and novelty-seeking best explains this behavior.
#cogsci #neuroskyence
Thanks, Dimitri!
Thanks a lot, Alireza!
Big shout out to co-authors Ali Haydaroğlu @haydari.bsky.social (co-first), Shuqi Wang @shuqiw.bsky.social, Matteo Carandini
@carandinilab.net, and Kenneth Harris
@kenneth-harris.bsky.social
Looking forward to meeting you in San Diego!
2/2
"High-dimensional neuronal activity from low-dimensional latent dynamics: a solvable model" will be presented as an oral at #NeurIPS2025!
Feeling very grateful that reviewers and chairs appreciated concise mathematical explanations, in this age of big models.
www.biorxiv.org/content/10.1...
1/2
Submissions (short!) due for the SNUFA spiking neural networks conference in <2 weeks!
forms.cloud.microsoft/e/XkZLavhaJe
More info at snufa.net/2025/
Note that we normally get around 700 participants, and recordings go on YouTube and get 100s-1000s of views.
Please repost.