Dear Guido, I am incredibly sorry for your loss and speechless. Let me know if there is anything I can do to help. ❤️
We've posted a new fMRI study of semantic relations (has-part, is-a, made-of, etc.), a key aspect of language. We find that relations are represented in the same brain regions as are other semantic concepts, though voxels tend to be selective for only one relation or another.
doi.org/10.64898/202...
Interesting work! The link didn't work for me, so I'll repost it here in case it's the same issue for anyone else: www.biorxiv.org/node/5254288...
This wins the internet for me.
OK, my son just complained he no longer sees the "teeth", so somewhere around the age of 4-5, likely his prior became too strong for him to see the alternative interpretation. The Batman logo is now no longer bistable for him.
1/7 Can infants recognise the world around them? 👶🧠 As part of the FOUNDCOG project, we scanned 134 awake infants using fMRI. Published today in Nature Neuroscience, our research reveals 2-month-old infants already possess complex visual representations in VVC that align with DNNs.
Human visual cortex representations may be much higher-dimensional than earlier work suggested, but are these higher dimensions of cortical activity actually relevant to behavior? Our new paper tackles this by studying how different people experience the same movies. 🧵 www.cell.com/current-biol...
Finally out in eLife!!
"Early foveal cortex predicts the features of saccade targets through feedback from higher cortical areas."
elifesciences.org/articles/107...
🚨 New paper out in Science Advances 🚨
With @suryagayet.bsky.social and @peelen.bsky.social, in two fMRI studies we investigate mental object rotations that are driven by the scene context, rather than purely by cognitive operations. 🧵 www.science.org/doi/10.1126/...
I have a PhD opening for my #VIDI BrainShorts project 📽️🧠🤖! Are you or do you know an ambitious, recent (or almost) MSc graduate with a background in NeuroAI and interest in large-scale data collection and video perception? Check out our vacancy! (deadline Feb 15).
werkenbij.uva.nl/en/vacancies...
Why you shouldn't trust data collected on MTurk (anymore):
link.springer.com/article/10.3...
While this work highlights problems with MTurk, data quality also depends on the task & filtering.
That said, we have also migrated to @cloudresearch.bsky.social Connect for data quality like in the good old days!
Now in press at Nature Communications!
www.nature.com/articles/s41...
Check it out if you are interested in category selectivity, the organization of visual cortex, and topographic models!
A teaser figure showing the process of metamers rendered differentially (MRD). Target scene parameters are used to render a target scene. A new scene is initialized from some starting point, and renders are created from this scene. The loss between the new and target renders is measured. MRD allows gradients of the loss with respect to the scene parameters (e.g. lighting, geometry, or material) to be computed for gradient-based optimization.
Legit super excited about this work coming out. My amazing doctoral student @ben.graphics has been working on an idea to use physically based differentiable rendering (PBDR) to probe visual understanding. Here, we generate physically-grounded metamers for vision models. 1/4
arxiv.org/abs/2512.12307
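The loop the teaser figure describes (render, compare to target, push gradients back into scene parameters) can be sketched in a few lines. This is a hedged toy, not the paper's method: the "renderer" here is just a fixed linear map from scene parameters to pixels with hand-derived gradients, standing in for a real differentiable renderer such as Mitsuba 3; `render`, `A`, and the learning rate are all illustrative assumptions.

```python
import numpy as np

# Toy stand-in for a differentiable renderer: rendering is a fixed linear
# map from scene parameters (lighting, material, ...) to image pixels.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 4))      # 4 scene parameters -> 64 "pixels"

def render(theta):
    return A @ theta

theta_target = rng.standard_normal(4)
target_image = render(theta_target)   # render the target scene

theta = np.zeros(4)                   # initialize a new scene
lr = 0.01
for _ in range(500):
    residual = render(theta) - target_image
    loss = 0.5 * np.sum(residual ** 2)   # image-space loss
    grad = A.T @ residual                # d(loss)/d(theta), by hand here
    theta -= lr * grad                   # gradient-based scene optimization

# the optimized scene parameters reproduce the target render
assert np.allclose(render(theta), target_image, atol=1e-3)
```

In a real PBDR system the gradient through the renderer is supplied by automatic differentiation rather than derived by hand; the interesting cases (metamers) are where different scene parameters yield matching model responses, not matching pixels.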
But it always comes back
Ok, this is nuts. Once you see it you cannot unsee it. Do you see it?
(OP @drgbuckingham.bsky.social )
A "universal" pattern of cortical brain oscillations may be less ubiquitous than previously proposed.
By @claudia-lopez.bsky.social
#neuroskyence
www.thetransmitter.org/brain-waves/...
🚨 Preprint 🚨
How does the brain represent natural images?
Using MEG + multivariate analysis, we disentangle the contributions of retinotopy, spatial frequency, shape, and texture.
Together, our results reveal how visual features jointly and dynamically support human object recognition.
link 👇
Hopkins Cog Sci is hiring! We have two open faculty positions: one in vision, and one in language. Please repost!
Amazing news, Alex! Huge congrats, and very well deserved!
@cimcyc.bsky.social is hiring!
SIX postdoc positions are coming up for collaborative projects bridging different areas of psychological science.
An amazing opportunity to boost a postdoc career in a cutting-edge research center with outstanding teams!
👇🏽
cimcyc.ugr.es/en/informati...
Very thoughtful thread on why it matters to compute the right noise ceiling & why communication is so important to prevent this issue from spreading. Kudos to Sam for being so transparent!
In brief:
NC for best possible R^2 == data reliability (expressed as a correlation r)
NC for best possible r == sqrt(reliability)
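The two ceilings above can be checked numerically. A minimal sketch, under simple assumptions not taken from the preprint: responses are signal plus i.i.d. Gaussian noise, and "reliability" is the correlation between two repeated measurements of the same signal; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
signal = rng.standard_normal(n)
rep1 = signal + rng.standard_normal(n)   # repeat 1 (noise sd = signal sd)
rep2 = signal + rng.standard_normal(n)   # repeat 2

# split-half style reliability: correlation between repeats (~0.5 here)
reliability = np.corrcoef(rep1, rep2)[0, 1]

# the best possible model predicts the true signal exactly
best_r = np.corrcoef(signal, rep1)[0, 1]         # ceiling for correlation
ss_res = np.sum((rep1 - signal) ** 2)
ss_tot = np.sum((rep1 - rep1.mean()) ** 2)
best_r2 = 1 - ss_res / ss_tot                    # ceiling for R^2

# ceiling for r is sqrt(reliability); ceiling for R^2 is reliability itself
assert abs(best_r - np.sqrt(reliability)) < 0.02
assert abs(best_r2 - reliability) < 0.02
```

The practical upshot: comparing a model's r against a ceiling computed for R^2 (or vice versa) makes the model look closer to (or further from) the ceiling than it really is.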
We recently stumbled upon a surprisingly common misunderstanding in computing noise ceilings that can be quite consequential. So if you care about noise ceilings, please check out Sander's thread and our preprint!
New preprint w/ Malin Styrnal & @martinhebart.bsky.social
Have you ever computed noise ceilings to understand how well a model performs? We wrote a clarifying note on a subtle and common misapplication that can make models appear quite a lot better than they are.
osf.io/preprints/ps...
Super happy to announce that our Research Training Group "PIMON" is funded by the @dfg.de ! Starting in October, we will have exciting opportunities for PhD students who want to explore object and material perception & interaction in Gießen @jlugiessen.bsky.social ! Just look at this amazing team!
New Correspondence with @davidpoeppel.bsky.social in Nat Rev Neurosci. www.nature.com/articles/s41...
Here, we critique a recent paper by Rosas et al. We argue that "Bottom-up" and "Top-down" neuroscience have various meanings in the literature.
PDF: rdcu.be/eSKYI
Investigating individual-specific topographic organization has traditionally been a resource-intensive and time-consuming process. But what if we could map visual cortex organization in thousands of brains? Here we offer the community a toolbox that can do just that! tinyurl.com/deepretinotopy
(3) The community can now start to apply Fernanda's tool retrospectively to countless existing anatomical scans to investigate how individual differences in retinotopic organization relate to measures of individual differences in function.
Really curious to hear how the community receives this!
(2) From a more practical point of view, depending on the goal of a study and the required fidelity, we can now confidently say that we may no longer need to collect retinotopic mapping data, freeing up scan time for other tasks. 3/4
(1) If we can accurately predict individual-specific function from structure alone, this highlights that brain structure can act as a strong constraint on brain function in normally-developing individuals. To me, this offers a new paradigm for studying how structure and function are related. 2/4
Really excited to see this preprint out! Fernanda did an amazing job at demonstrating how you can accurately predict retinotopy from T1w scans alone. This is important for several reasons: 1/4