Trump either fundamentally misunderstands the aspirations of the Iranian people (which I doubt), or he simply lacks the will to challenge the status quo of the Islamic Republic.
@shahabbakht
|| assistant prof at University of Montreal || leading the systems neuroscience and AI lab (SNAIL: https://www.snailab.ca/) || associate academic member of Mila (Quebec AI Institute) || #NeuroAI || vision and learning in brains and machines
I couldn't help feeling happy about the assassination, but I'm in no way happy with the extent of Israel's use of AI and data in targeted military operations.
A deep ethical dissonance …
Great report in FT on the operation to kill Khamenei: www.ft.com/content/bf99...
"Nearly all the traffic cameras in Tehran had been hacked for years, their images encrypted and transmitted to servers in Tel Aviv and southern Israel, according to two people familiar with the matter."
One thing nobody imagined the Islamic regime could achieve was uniting Israel and Arab nations to join forces in a military strike. And yet … voilà. It looks like that's exactly what's about to happen.
❤️
❤️
I have waited almost my entire life for this moment …
Ali Khamenei, the dictator, is dead.
Hoping for brighter days and a democratic #Iran.
This paper on how the brain may do gradient descent is very cool: www.nature.com/articles/s41...
Our new paper is now out showing how time perception in animals is linked to their ecology. Using data from 237 species, we show that temporal perception is faster in species that fly and in pursuit predators www.nature.com/articles/s41...
This study is super cool (connecting ecology and perception): it suggests that some aspects of animals' perception (temporal precision) are shaped by their environment (which somehow resonates with our proposal on internal foraging perspectives on perceptual selection www.sciencedirect.com/science/arti...)
This is a critical methodological point about the Platonic Representation Hypothesis paper.
I mistakenly thought the PRH paper used CKA as its main similarity metric.
Another motivation for thinking more deeply about metrics of similarity and alignment.
Though it's quite interesting that this subtle methodological detail turned out to be so important in the main message.
Thanks for the correction!
Yes, in the main text, your paper mainly relied on local similarity.
Now I actually remember: my first reaction reading your paper was wondering why the CKA results weren't used in the main text.
What you describe sounds like the "lumpers vs. splitters" in this paper from @summerfieldlab.bsky.social lab: lumpers generalize more/retain less, and splitters generalize less/forget more. They gave a nice explanation based on rich vs lazy training regimes in ANNs.
www.nature.com/articles/s41...
Looking forward to reading the promised post on continual learning, @lampinen.bsky.social :)
Cool! This is generative inference's prediction for human perception in this illusion: the squares are no longer squares!
Try it for yourself here:
huggingface.co/spaces/ttoos...
What is the relationship between memorization and generalization in AI? Is there a fundamental tradeoff? In infinitefaculty.substack.com/p/memorizati... I've reviewed some of the evolving perspectives on memorization & generalization in machine learning, from classic perspectives through LLMs.
Another good reason for being cautious with representational similarity analysis: arxiv.org/abs/2602.14486
The famous Platonic Representation Hypothesis was largely driven by CKA's bias.
But the hypothesis still holds for shared local relationships.
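For concreteness, here is a minimal sketch of linear CKA, the similarity metric in question (my own illustration, not code from the paper). Note two properties relevant to the bias discussion: linear CKA is invariant to orthogonal rotations of the features, and even fully independent random features can score well above zero when feature dimension is large relative to the number of samples.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between feature matrices X (n x d1) and Y (n x d2)."""
    X = X - X.mean(axis=0)  # center each feature
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 64))

# A rotated copy of X: linear CKA is invariant to orthogonal transforms, so this is 1.
Q, _ = np.linalg.qr(rng.standard_normal((64, 64)))
print(linear_cka(X, X @ Q))

# Independent random features: not near 0 at this n/d ratio, illustrating
# why a high CKA alone can be a misleading signal of shared representation.
Z = rng.standard_normal((100, 64))
print(linear_cka(X, Z))
```

The second print is the instructive one: the score for unrelated Gaussian features is substantially above zero here, which is the kind of baseline effect worth checking before reading representational convergence into a CKA number.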
The revised version of our paper on the impact of top-down feedback is now out @elife.bsky.social:
doi.org/10.7554/eLif...
tl;dr: we show that using human-brain-like feedback/anatomy in a deep RNN leads to human-like visual biases!
This work was led by @tmshbr.bsky.social
#NeuroAI 🧠🧪
Excited to launch Principia, a nonprofit research organisation at the intersection of deep learning theory and AI safety.
Our goal is to develop theory for modern machine learning systems that can help us understand complex network behaviors, including those critical for AI safety and alignment.
Thrilled to finally share this work! 🧠
Using a new reinforcement-free task, we show that mice (like humans) extract abstract structure from sound (unsupervised), and that dCA1 is causally required, building factorised, orthogonal subspaces of abstract rules.
Led by Dammy Onih!
www.biorxiv.org/content/10.6...
I don't think AI's success in coding will automatically translate to other fields. That level of performance only works where the output is as easily verifiable as code, and not many domains fit that bill. 2/2
"The experience that tech workers have had over the past year, of watching AI go from 'helpful tool' to 'does my job better than I do', is the experience everyone else is about to have. Law, finance, medicine, accounting, …"
I'm not sure … 1/2
fortune.com/2026/02/11/s...
… especially whenever controversies around representational similarity resurface.
You're comparing two fields at very different stages of theoretical maturity. Neuroscience (and NeuroAI) are still largely pre-theoretic. I often return to Hasok Chang's Inventing Temperature as a parallel for where we actually stand in theoretical neuroscience, …
Definitely not enough.
My bet is on the ecological relevance of training data and temporal prediction as the core objective.
Architecture is difficult to constrain, given that ANNs and brains rely on substantially different functional mechanisms.
Thatβs just my view, though; I could be wrong.
Exactly my point. The emerging view seems to be that, assuming equal trainability (big assumption though), the architectures may not play as big of a role as the training objective and data.
The problem is the training objective and lack of recurrence, not the 50-layer architecture.
Also see @mschrimpf.bsky.social thread here: bsky.app/profile/msch...