Excited to share new work on how the brain makes social inferences from visual input!
(With @lisik.bsky.social, @shariliu.bsky.social, @tianminshu.bsky.social, and Minjae Kim!) www.biorxiv.org/content/10.6...
Very cool, excited to take a look!
Looks cool Gasper! Just curious, why do you think the absence/presence of language in animals would affect their personhood / legal status?
Woohoo, many congratulations!!!
Sure, I'm not defending the vibe-coded model; my comment was more about the tone of the conversation... If you do add your model to BrainScore, I too will appreciate an opportunity to compare the results directly (we have implemented a version of that model in our lab, but applied to a different dataset).
We all want to understand the brain, let's help each other
Hi Nima - could you please contribute positively to the NeuroAI community by staying cordial & working to improve the field together, not tear people down? My lab has benefited from your paper, yet this kind of discourse is stalling positive change, not promoting it.
Our workshop on LLMs, Cognitive Science, Linguistics, and Neuroscience explored why LLM capabilities in logic and reasoning lag behind their linguistic capacity, and how a different model (the human brain) could point the way forward.
bit.ly/45QWwaM
AI Psychology (some also use machine psychology)
An excellent retort to "that BOLD" paper making the rounds lately. A great example of needing to understand the assumptions of an analysis method.
New preprint!
Why do some insights from spikes translate to field potentials while others don't? In this paper we compare visual memory representations in spikes and LFPs to propose a general framework that answers this question.
www.biorxiv.org/content/10.6...
(1/10)
If you've missed this piece about the different modes of empiricism in computer science versus the social sciences, I can highly recommend it. doomscrollingbabel.manoel.xyz/p/the-empiri...
Hopkins Cog Sci is hiring! We have two open faculty positions: one in vision, and one in language. Please repost!
It responds to pictures, so no, it doesn't just operate downstream from the language network!
Many thanks to @evfedorenko.bsky.social and @nancykanwisher.bsky.social who brought me on to this project years ago, as well as to the whole author team: @carinakauf.bsky.social @ruimingao.bsky.social Selena She @hopekean.bsky.social T. Goldhaber, A. Nieto-Castañón, R. Varley 11/end
Overall, we find that semantic reasoning recruits its own neural machinery, distinct from the language network and other large-scale brain networks. This is cool! Lots of exciting follow-up work to do to establish precisely what these regions do. 10/
Finally, we examined vATL, a putative semantic hub. vATL voxels localized with the semantic>perceptual contrast respond to both sentence and pic semantics - AND to passive sentence reading. Thus, vATL is not sensitive to semantic task load, whereas our semantic regions are 9/
Single-subject brain plots show that these semantic regions are adjacent but largely separate from language, MD, and DMN regions. 8/
Semantic regions are also distinct from fronto-parietal multiple demand & default mode networks, other candidate systems that could be supporting semantic reasoning. 7/
These semantic regions are distinct from the language network - the latter shows a preference for linguistic stimuli, whereas semantic regions do not. 6/
We discover a set of brain regions that respond to semantic reasoning over both sentences and pics. The results are stable across our 3 experiments. These regions are located in left frontal cortex, left temporo-parietal cortex, and right cerebellum. 5/
Using group-constrained, subject-specific analysis (GcSS), we search for brain regions that respond to semantic>perceptual tasks (~matched for difficulty) for both sentences and pictures. To do so, we leverage data from 3 experiments with the same general design structure. 4/
Critically, we posit that semantic reasoning can operate over various input types - e.g., sentences and pictures. 3/
Humans have the remarkable capacity to sift through vast amounts of stored world knowledge to extract information that is immediately relevant to their goals. We call this process ~semantic reasoning~ and set out to discover its neural basis. 2/
The last chapter of my PhD (expanded) is finally out as a preprint!
"Semantic reasoning takes place largely outside the language network"
www.biorxiv.org/content/10.6...
What is semantic reasoning? Read on!
Still the best course if you want to actually understand Bayesian stats.
Kudos to @ruimingao.bsky.social for wrangling a large in-house dataset to investigate the consistency of the language regions when localized with different tasks!
Find Taha's poster today at the @dataonbrainmind.bsky.social workshop #NeurIPS2025
We leave the "how" to future work (by you and others and maybe us eventually)
I'll keep ROSE in mind as a testable prediction of the "how" claims