
tal boger

@talboger

third-year phd student at jhu psych | perception + cognition https://talboger.github.io/

308 Followers · 45 Following · 57 Posts · Joined 23.11.2024

Latest posts by tal boger @talboger

[Image: Algorithmic complexity for dice randomness and 3x3 grid randomness (z-scored)]

[Image: Results from a permutation test computing 10,000 iterations of shuffled mean absolute error (within-person, across-tasks) vs. the observed mean absolute error.]

Their data only include each sequence's algorithmic complexity score (not the raw sequences), but even so, the same patterns emerge. Pairwise correlations are significant (especially in dice + grid), and a permutation test shows just how strong this stability is.

06.03.2026 16:29 👍 0 🔁 0 💬 0 📌 0
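For readers curious what that permutation test looks like mechanically, here is a minimal sketch with synthetic data. The 10,000 shuffles and the within-person, across-task mean absolute error statistic come from the thread; the sample size, effect size, and all the numbers below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: one complexity score per person per task
# (the real analysis uses z-scored algorithmic complexity from the
# dice and grid tasks; these values are invented).
n_people = 200
task_a = rng.normal(size=n_people)
task_b = 0.5 * task_a + rng.normal(scale=0.8, size=n_people)  # correlated with task_a

# Observed statistic: mean absolute error within-person, across tasks.
observed_mae = np.mean(np.abs(task_a - task_b))

# Null distribution: break the within-person pairing by shuffling
# which task-b score goes with which person, 10,000 times.
n_iter = 10_000
shuffled_maes = np.empty(n_iter)
for i in range(n_iter):
    shuffled_maes[i] = np.mean(np.abs(task_a - rng.permutation(task_b)))

# One-sided p-value: how often does a shuffled pairing match people
# to themselves as well as the true pairing does?
p = (np.sum(shuffled_maes <= observed_mae) + 1) / (n_iter + 1)
print(f"observed MAE = {observed_mae:.3f}, p = {p:.4f}")
```

If behavior is stable within a person, the observed MAE should sit far below the shuffled distribution, which is exactly the comparison shown in the figure above.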

Our work used longer (250-trial) lab tasks with a smaller sample. But the pudding.cool article collects data from tons of people, and the sequences it collects are extremely short (10-12 items), making it a super strong test for this stability.

06.03.2026 16:27 👍 0 🔁 0 💬 1 📌 0

Beyond being a great read, the article collected within-subject randomization data for over 52,000 people across these 3 tasks. Last year, I (+ @samiyousif.bsky.social and others) put out work demonstrating that random behavior is stable across tasks and time. talboger.github.io/files/Boger_...

06.03.2026 16:27 👍 0 🔁 0 💬 1 📌 0
We think this cool study we found is flawed. Help us reproduce it. Are 25-year-olds really more random than 60-year-olds?

A few years ago, my favorite website (@puddingviz.bsky.social) put out this great piece analyzing a study of how randomizing ability changes with age. It includes demos where readers produce sequences of random coin flips, dice rolls, and locations in a 3x3 grid. pudding.cool/2022/04/rand...

06.03.2026 16:26 👍 5 🔁 2 💬 1 📌 0

I am very excited to announce that over the holidays, my first ever paper (w/ @samiyousif.bsky.social) was published in Cognitive Science! Here, we describe a new illusion of *number*: The Crowd Size Illusion!

onlinelibrary.wiley.com/doi/10.1111/...

05.01.2026 17:04 👍 36 🔁 12 💬 0 📌 1

Well this is exciting!

The Department of Psychological & Brain Sciences at Johns Hopkins University (@jhu.edu) invites applications for a full-time tenured or tenure-track faculty member in Cognitive Psychology, in any area and at any rank!

Application + more info: apply.interfolio.com/178146

02.12.2025 03:18 👍 93 🔁 55 💬 1 📌 3

Congratulations (and thank you) to @talboger.bsky.social, who lectured in front of nearly 500 @jhu.edu undergraduates today on the psychology of music! They didn't see it coming, and then they loved it :)

11.11.2025 22:09 👍 15 🔁 2 💬 1 📌 0

(from lapidow & @ebonawitz.bsky.social's awesome 2023 explore-exploit paper)

14.10.2025 21:45 👍 3 🔁 0 💬 0 📌 0
[Image: methods from lapidow & bonawitz, 2023. children are "dropped"]

[Image: a falling child]

can't believe the IRB approved this part – hope the children are ok!

14.10.2025 21:44 👍 67 🔁 7 💬 2 📌 2

What a lovely 'spotlight' of @talboger.bsky.social's work on style perception! Written by @aennebrielmann.bsky.social in @cp-trendscognsci.bsky.social.

See Aenne's paper below, as well as Tal's original work here: www.nature.com/articles/s41...

08.10.2025 17:27 👍 29 🔁 4 💬 0 📌 0

When a butterfly becomes a bear, perception takes center stage.

Research from @talboger.bsky.social, @chazfirestone.bsky.social and the Perception & Mind Lab.

06.10.2025 20:02 👍 35 🔁 8 💬 2 📌 2

Out today!

www.cell.com/current-biol...

06.10.2025 14:56 👍 39 🔁 11 💬 1 📌 1

important question for dev people: when reporting demographics for a paper involving both kids and adults, we want some consistency in how we report that information. so do you call the kids "men" and "women", or do you call the adults "boys" and "girls"?

01.10.2025 15:33 👍 4 🔁 0 💬 1 📌 0

sami is such a creative, thoughtful, and fun mentor. anyone who gets to work with him is so lucky!

15.09.2025 18:23 👍 2 🔁 0 💬 0 📌 1
Can we "see" value? Spatiotopic "visual" adaptation to an imperceptible dimension
In much recent philosophy of mind and cognitive science, repulsive adaptation effects are considered a litmus test, a crucial marker that distinguis…

Visual adaptation is viewed as a test of whether a feature is represented by the visual system.

In a new paper, Sam Clarke and I push the limits of this test. We show spatially selective, putatively "visual" adaptation to a clearly non-visual dimension: Value!

www.sciencedirect.com/science/arti...

28.08.2025 20:18 👍 42 🔁 15 💬 2 📌 1

It's true: This is the first project from our lab that has a "Merch" page!

Get yours @ www.perceptionresearch.org/anagrams/mer...

19.08.2025 19:28 👍 33 🔁 4 💬 3 📌 1

The present work thus serves as a 'case study' of sorts. It yields concrete discoveries about real-world size, and it also validates a broadly applicable tool for psychology and neuroscience. We hope it catches on!

19.08.2025 16:39 👍 7 🔁 0 💬 1 📌 0

Though we manipulated real-world size, you could generate anagrams of happy faces and sad faces, tools and non-tools, or animate and inanimate objects, overcoming low-level confounds associated with such stimuli. Our approach is perfectly general.

19.08.2025 16:39 👍 5 🔁 0 💬 1 📌 0

Overall, our work confronts the longstanding challenge of disentangling high-level properties from their lower-level covariates. We found that, once you do so, most (but not all) of the relevant effects remain.

19.08.2025 16:39 👍 11 🔁 0 💬 1 📌 0

(Never fear, though: As we say in our paper, that last result is consistent with the original work, which suggested that mid-level features, the sort preserved in 'texform' stimuli, may well explain these search advantages.)

19.08.2025 16:39 👍 8 🔁 0 💬 1 📌 0
[Image: whereas previous work shows efficient visual search for real-world size, we did not find a similar effect with anagrams. our study included a successful replication of these previous findings with ordinary objects (i.e., non-anagram images).]

Finally, visual search. Previous work shows targets are easier to find when they differ from distractors in their real-world size. However, in our experiments with anagrams, this was not the case (even though we easily replicated this effect with ordinary, non-anagram images).

19.08.2025 16:38 👍 11 🔁 0 💬 1 📌 0
[Image: people prefer to view real-world large objects as larger than real-world small objects, even with visual anagrams.]

Next, aesthetic preferences. People think real-world large objects look better when displayed large, and vice versa for small objects. Our experiments show that this is true with anagrams too!

19.08.2025 16:37 👍 15 🔁 2 💬 1 📌 0
[Image: results from the real-world size Stroop effect with anagrams. performance is better when displayed size is congruent with real-world size.]

First, the "real-world size Stroop effect". If you have to say which of two images is larger (on the screen, not in real life), it's easier if displayed size is congruent with real-world size. We found this to be true even when the images were perfect anagrams of one another!

19.08.2025 16:36 👍 16 🔁 0 💬 1 📌 0

Then, we placed these images in classic experiments on real-world size, to see if observed effects arise even under such highly controlled conditions.

(Spoiler: Most of these effects *did* arise with anagrams, confirming that real-world size per se drives many of these effects!)

19.08.2025 16:35 👍 13 🔁 0 💬 1 📌 0
[Image: anagrams we generated, where rotating the object changes its real-world size.]

We generated images using this technique (see examples). The two images in each pair differ in real-world size but are otherwise identical* in lower-level features, because they're the same image down to the last pixel.

(*avg orientation, aspect-ratio, etc, may still vary. ask me about this!)

19.08.2025 16:35 👍 31 🔁 2 💬 4 📌 1
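The "same image down to the last pixel" point can be made concrete with a toy NumPy check. The array below is a stand-in for a real anagram image, not the actual stimuli:

```python
import numpy as np

# A stand-in "image": any square array. For a real visual anagram,
# this would be the rendered pixel data.
img = np.arange(16, dtype=np.uint8).reshape(4, 4)

# The two members of an anagram pair are the same pixels, rotated 90 degrees.
rotated = np.rot90(img)

# Rotation permutes pixel positions but preserves the multiset of
# pixel values exactly, so any statistic that ignores position
# (mean luminance, contrast, intensity histogram) is identical
# across the pair.
assert np.array_equal(np.sort(img, axis=None), np.sort(rotated, axis=None))
assert img.mean() == rotated.mean()
print("pixel values identical; only their arrangement differs")
```

This is why the pair controls low-level confounds so tightly; the asterisked caveat in the post (average orientation, aspect ratio) covers the position-dependent statistics that rotation does change.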
[Image: depiction of the "visual anagrams" model by Geng et al.]

This challenge may seem insurmountable. But maybe it isn't! To overcome it, we used a new technique from Geng et al. called "visual anagrams", which allows you to generate images whose interpretations vary as a function of orientation.

19.08.2025 16:34 👍 25 🔁 0 💬 1 📌 1
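For the curious: the core idea behind Geng et al.'s technique is to make one set of pixels satisfy a different prompt under each view, by aligning each view's denoising estimate back to a canonical orientation and averaging. Below is a heavily simplified toy sketch of that view-averaging idea; the `toy_denoiser`, the scalar targets, and the step size are invented stand-ins, not their diffusion-model implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_denoiser(x, prompt):
    # Stand-in for a text-conditioned diffusion denoiser: returns an
    # "error estimate" that nudges x toward a prompt-specific target.
    target = {"rabbit": 0.2, "elephant": 0.8}[prompt]
    return x - target

# Start from noise and iteratively update a SINGLE image so that it
# satisfies both prompts: one under the identity view, one rotated 90°.
x = rng.normal(size=(8, 8))
for _ in range(50):
    # Estimate the error in each view, then align both estimates back
    # to the canonical orientation before averaging (the key move in
    # view-consistent generation).
    eps_identity = toy_denoiser(x, "rabbit")
    eps_rotated = np.rot90(toy_denoiser(np.rot90(x), "elephant"), k=-1)
    x = x - 0.1 * (eps_identity + eps_rotated) / 2

# In this toy, the image settles at a compromise between both targets;
# a real diffusion model instead finds pixels that genuinely read as
# one object upright and another rotated.
print(round(float(x.mean()), 2))
```

The interesting design constraint (which the real paper discusses) is that the set of views must commute with the denoiser's noise statistics, which is why rotations and flips work so cleanly.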
[Image: the mind encodes differences in real-world size. but differences in size also carry differences in shape, spatial frequency, and contrast.]

Take real-world size. Tons of cool work shows that it's encoded automatically, drives aesthetic judgments, and organizes neural responses. But there's an interpretive challenge: Real-world size covaries with other features that may cause these effects independently.

19.08.2025 16:33 👍 18 🔁 0 💬 2 📌 1

The problem: We often study "high-level" image features (animacy, emotion, real-world size) and find cool effects. But high-level properties covary with lower-level features, like shape or spatial frequency. So what seem like high-level effects may have low-level explanations.

19.08.2025 16:33 👍 19 🔁 0 💬 2 📌 1

On the left is a rabbit. On the right is an elephant. But guess what: They're the *same image*, rotated 90°!

In @currentbiology.bsky.social, @chazfirestone.bsky.social & I show how these images, known as "visual anagrams", can help solve a longstanding problem in cognitive science. bit.ly/45BVnCZ

19.08.2025 16:32 👍 352 🔁 105 💬 19 📌 30

Out today! www.nature.com/articles/s41...

05.08.2025 21:58 👍 61 🔁 18 💬 3 📌 2