
Derek Arnold

@visnerd

Vision Scientist, Aphant

409 Followers · 167 Following · 56 Posts · Joined 12.10.2023

Latest posts by Derek Arnold @visnerd


⚑SYMPOSIA SPOTLIGHT #2

More symposia highlights from our upcoming conference!

09.03.2026 21:30 πŸ‘ 6 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0
| Angelika Stefan

Please repost!

I have a wonderful colleague seeking a postdoc with expertise in *Bayesian Network Analysis*

Position for 1 year in Liverpool, starting as early as next month (April)!

Informal EOIs to Angelika Stefan: astefan1.github.io

10.03.2026 07:56 πŸ‘ 1 πŸ” 4 πŸ’¬ 0 πŸ“Œ 0
Aphantasia and visual working memory: No direct evidence of impaired visual working memory in aphantasics, either in behavioral performance or the accuracy of a multivoxel pattern classifier Visual mental imagery and visual working memory are often thought to be closely related. After all, both have been argued to involve the temporary mai…

#Aphantasia and visual working memory: No direct evidence of impaired visual working memory in aphantasics, either in behavioral performance or the accuracy of a multivoxel pattern classifier www.sciencedirect.com/science/arti...

10.03.2026 09:24 πŸ‘ 3 πŸ” 3 πŸ’¬ 0 πŸ“Œ 1

⚑SYMPOSIA SPOTLIGHT #1

We are super excited to announce a series of symposia taking place throughout our conference.

Here is a preview of some featured sessions and speakers. Of course, there is more to come. Stay tuned!

04.03.2026 21:17 πŸ‘ 7 πŸ” 4 πŸ’¬ 0 πŸ“Œ 0

**Postdoc position in human category learning**

@thecharleywu.bsky.social, Frank JΓ€kel and I are seeking a postdoctoral fellow to lead a joint project on human category learning at the Centre for Cognitive Science @tuda.bsky.social.

www.career.tu-darmstadt.de/tu-darmstadt...

23.02.2026 08:53 πŸ‘ 39 πŸ” 28 πŸ’¬ 1 πŸ“Œ 1
Which perceptual categories do observers experience during multistable perception? Abstract. Multistable perceptual phenomena provide insights into the mind’s dynamic states within a stable external environment, and the neural underpinnin

Everyone: Binocular rivalry describes an alternation between exclusive percepts from each eye.

Me + Prof Bex:
Which perceptual categories do observers experience during multistable perception? url: royalsocietypublishing.org/rspb/article...

18.02.2026 18:59 πŸ‘ 4 πŸ” 3 πŸ’¬ 0 πŸ“Œ 0

New paper in Imaging Neuroscience by Melinda Sabo, Tijl Grootswagers, et al:

Multiple partially overlapping neural modules orchestrate conflict processing

doi.org/10.1162/IMAG...

23.02.2026 06:29 πŸ‘ 9 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0

Bots have made their way to Prolific experiments. Our lab has stopped online testing of adults entirely now for this reason - we want to know if what we study is real. Probably data collected 2-3 years ago are ok, but moving forward we just can't know. www.pnas.org/doi/10.1073/...

19.02.2026 15:14 πŸ‘ 170 πŸ” 98 πŸ’¬ 6 πŸ“Œ 11
A teaser figure showing the process of metamers rendered differentially (MRD). Target scene parameters are used to render a target scene. A new scene is initialized from some starting point, and renders are created from this scene. The loss between the initial and target scenes is measured. MRD allows the gradients wrt the loss to be propagated to the scene parameters (e.g. lighting, geometry or material) for gradient-based optimization.

Legit super excited about this work coming out. My amazing doctoral student @ben.graphics has been working on an idea to use physically based differentiable rendering (PBDR) to probe visual understanding. Here, we generate physically-grounded metamers for vision models. 1/4

arxiv.org/abs/2512.12307

17.12.2025 21:17 πŸ‘ 53 πŸ” 15 πŸ’¬ 4 πŸ“Œ 3

Coming to APCV & EPC from outside of Auckland, New Zealand? Here is some more information about Student Travel Awards.

More available on our website:
visualneuroscience.auckland.ac.nz/epc-apcv-2026/

12.02.2026 19:49 πŸ‘ 4 πŸ” 4 πŸ’¬ 0 πŸ“Œ 1

Anyone want to do a PhD with me at the Sunny Coast? I'm recruiting, and I wanna do some fun psychophysics (but the possibilities for the PhD are very broad). Domestic students only, sadly.

In case y'all happen to know someone:
@nataliepeluso.com
@reubenrideaux.bsky.social
@visnerd.bsky.social

10.02.2026 21:12 πŸ‘ 10 πŸ” 7 πŸ’¬ 1 πŸ“Œ 3
Visual language models show widespread visual deficits on neuropsychological tests - Nature Machine Intelligence Tangtartharakul and Storrs use standardized neuropsychological tests to compare human visual abilities with those of visual language models (VLMs). They report that while VLMs excel in high-level obje...

Our latest paper, β€œVisual language models show widespread visual deficits on neuropsychological tests”, is now out in Nature Machine Intelligence: www.nature.com/articles/s42...

Non-paywalled version:
arxiv.org/abs/2504.10786

Tweet thread below from first author @genetang.bsky.social...

09.02.2026 02:40 πŸ‘ 70 πŸ” 36 πŸ’¬ 1 πŸ“Œ 2

No, but we probably should have done. Still can of course, but I doubt it would work, because we asked people for absolute ratings, as opposed to asking them to rate their own experiences relative to one another. That approach may be better for detecting within-participant variance

09.02.2026 00:15 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Now available without randomly placed repeated figures...

doi.org/10.1016/j.co...

08.02.2026 11:34 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Saw him in Brisbane. Brilliant

07.02.2026 10:33 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Imagery modulates the pupillary response, but this does not reliably index differences in imagery vividness. Most people report that they can imagine seeing things in their mind’s eye. But there are large individual differences. A small proportion of people r…

New Paper: Pupillary responses are not a reliable index of differences in imagery vividness.

Our search for more reliable metrics of imagery continues...
www.sciencedirect.com/science/arti...

04.02.2026 05:20 πŸ‘ 37 πŸ” 12 πŸ’¬ 1 πŸ“Œ 0

If you keep it short (an hour or less) and include coffee and cake, you can always can it if it does not work, but the cake should prove popular

23.01.2026 08:14 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Interested in other experiences.

I think it has a lot to do with how time poor you are. Individual meetings are usually better, but can get challenging depending on other demands on your time.

Lab meetings create opportunities for cross pollination, but are often unproductive virtue signaling

23.01.2026 07:35 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
People report having consistent idiosyncratic β€˜diets’ of imagined sensations when they re-experience the past, and pre-experience the future To some extent, humans can re-experience the sensations of past events and pre-experience the future. These capacities are inter-related. But there are substantial individual differences. At the extre...

New Preprint: People in general have idiosyncratic imagined experiences characterised by salience differences. Some have more salient imagined sensations of smell than of visual imagery, while most have the opposite - and these differences shape people's daily lives.
www.biorxiv.org/content/10.6...

20.12.2025 23:57 πŸ‘ 7 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0
Corollary Discharge Dysfunction to Inner Speech and its Relationship to Auditory Verbal Hallucinations in Patients with Schizophrenia Spectrum Disorders. Abstract: Background and Hypothesis. Auditory-verbal hallucinations (AVH) - the experience of hearing voices in the absence of auditory stimulation - are a cardi

New paper with Tom Whitford, using EEG to investigate inner speech in people with auditory verbal hallucinations in schizophrenia.

academic.oup.com/schizophreni...

22.10.2025 07:06 πŸ‘ 5 πŸ” 5 πŸ’¬ 2 πŸ“Œ 0

It's really not. As described here, N is a subjective self-report. You may as well ask how many fairies people can see dancing on the head of a pin. Conceptually, this is simply not a verifiable performance metric.

21.10.2025 21:04 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

SAVE THE DATE! Australasian Society for Experimental Psychology (EPC) & Asia-Pacific Conference on Vision (APCV) Joint Meeting from 1-4 July at the University of Auckland, NZ.
#PsychSciSky #VisionScience #neuroskyence

More information to follow!
visualneuroscience.auckland.ac.nz/epc-apcv-2026/

20.10.2025 20:54 πŸ‘ 12 πŸ” 5 πŸ’¬ 1 πŸ“Œ 0

I so wish : )

Failing that - I'll raise a glass to your continued good health at that time : )

20.08.2025 05:08 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

If the DV is RTs - it would be important to control for local image contrasts. If the DV is recognition, controlling for ~all image properties is futile, as these are what we recognize. If you want to know what properties we rely on, well that is a different question (it's some of them)

20.08.2025 05:07 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

The problem - if there is one, is you didn't control for oriented contrast energy, spatial frequency content, local or long range curvature, etc., etc... Rotating an image causes big changes in these properties. Deciding if control is futile or sensible depends on context.

20.08.2025 05:03 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Exactly - attention kicks in and re-weights image properties etc. - but as you say, the images are cool, and I want a coaster : )

20.08.2025 04:57 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

If you are worried that detection or RTs might be related to contrast diffs etc. - sure, control for that type of thing. But I think claiming to control for ~all image stats is futile if you still want to be able to recognize things in the image

20.08.2025 04:35 πŸ‘ 0 πŸ” 0 πŸ’¬ 2 πŸ“Œ 0

Obv it depends on context, but if you control for all image stats, you could not recognize - as that depends on image stats we have learnt to associate with meaning. So I never understand when papers claim to have controlled for image stats - as they haven't if people can still recognize things

20.08.2025 04:32 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Bluesky is not a great platform for nuance : )

I also find it really hard to follow conversations here, and think people should use tildes more often

If there is any disagreement - it is with the idea that controlling for low-level confounds is a sensible goal.

20.08.2025 04:12 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

The info that is mapped to semantics is correlated image structure, that is changed when the images are reoriented. So it is a super cool demo of anagram images (I want a coaster), but it does not show that 'high-level' effects are driven by identical stimuli. You have to change the stimuli

20.08.2025 02:47 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0