If you want to take your mind off awful politics and look at awful science stuff instead, this is a good read: www.sciencedetective.org/scientific-d...
bro just one more future study bro, bro I swear just one more future study and it'll fix the inference bro
One funny effect of pretending that methodological issues can just be ignored because of "future studies" is that it probably prevents those future studies from happening. Like why bother actually addressing hard issues when you can just get away with hand waving?
No. I played Oxyd Magnum on an Atari for quite a while though…
Screenshot of the "Does that use a lot of energy?" online app
Hannah Ritchie has built a fun little tool where you can compare energy usage of various products and activities.
This is super helpful imho, because it's so hard to develop intuitions even just about the scales involved here.
hannahritchie.substack.com/p/does-that-...
You know I'm no measurement expert, but that does sound like a Rasch-scale situation to me
Wow! In that case, thank god for her strong moral convictions.
That's interesting. I guess the fact that you either can or can't climb a route might do a lot of heavy lifting (sorry) here?
Thank god fairies are so smol and weak
"40% of papers about subarachnoid haemorrhage in animals contained manipulated images."
We have to face up to the fact that in some fields, over half of published science might be fake.
Seeing that comments on this are getting a little more political, I want to highlight that my gripe is very much with *the headline* specifically. Neither Alan Milburn nor the author of the article seems to be so seriously confused about the situation.
Link to full article: archive.is/pA2lv
"Donation after circulatory death donors rose from 118 in 2000 (2% of all donors) to 8129 in 2025 (49%)."
Figures are for the US.
Chart of trends in recovered organs from circulatory deaths
The number of organs available for donation has risen massively in the past five years.
It seems to be the result of technological advances in preserving organs after circulatory death.
jamanetwork.com/journals/jam...
An array of 9 purple discs on a blue background. Figure from Hinnerk Schulz-Hildebrandt.
A nice shift in perceived colour between central and peripheral vision. The fixated disc looks purple while the others look blue.
The effect presumably comes from the absence of S-cones in the fovea.
From Hinnerk Schulz-Hildebrandt:
arxiv.org/pdf/2509.115...
The only feature I missed in Signal compared to WhatsApp was transcription of voice messages. So, I had Claude code a pipeline to do this locally on Mac using sigtop+ffmpeg+whisper at the click of a button. github.com/rubenarslan/...
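For the curious, here is a minimal sketch of what such a pipeline can look like (my illustration, not the actual code from the repo; the export directory is hypothetical, and it assumes the voice notes have already been exported with sigtop, ffmpeg is on the PATH, and the openai-whisper Python package is installed):

    # Sketch: transcribe exported Signal voice messages locally.
    import pathlib
    import subprocess

    import whisper  # pip install openai-whisper; model downloads on first use

    EXPORT_DIR = pathlib.Path("~/signal-attachments").expanduser()  # hypothetical export dir
    model = whisper.load_model("base")  # small local model; bigger ones are slower but more accurate

    for audio in sorted(EXPORT_DIR.glob("**/*.m4a")):
        wav = audio.with_suffix(".wav")
        # Convert to 16 kHz mono WAV, which whisper handles reliably.
        subprocess.run(
            ["ffmpeg", "-y", "-i", str(audio), "-ar", "16000", "-ac", "1", str(wav)],
            check=True, capture_output=True,
        )
        result = model.transcribe(str(wav))
        print(f"{audio.name}: {result['text'].strip()}")

The real version in the repo is wired up as a one-click action; this just shows the three moving parts (sigtop export, ffmpeg conversion, whisper transcription).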
New newspaper headline for your Intro to Causal Inference lecture just dropped
Online Studies

Psychological Science requires that authors who use samples from online data collection include a statement in the Method section explicitly addressing their approach to preventing and detecting automated or AI-generated responses.

Rationale

As large language models and other generative AI tools become more accessible, the risk of data contamination by non-human respondents has increased dramatically in research. Psychological science (and the social sciences generally) is particularly susceptible to this issue given its growing reliance on online data collection. Preventing automated responses during data collection and detecting them afterward often involve methodological trade-offs. For instance, technical barriers that aim to prevent LLM use (e.g., blocking copy-pasting functionalities) may eliminate behavioral indicators needed for detection (e.g., pasting rather than typing). This policy aims to enhance transparency and reproducibility of reported results by requiring authors to articulate their approach across both prevention and detection dimensions, enabling readers and reviewers to assess the likelihood of reported data being influenced by automated responses.

Scope

This policy applies to any submission with at least one study that includes data collected online without direct human supervision (e.g., via crowdsourcing platforms, student participants who complete the study online, online recruitment ads, or remote survey distribution tools).

Required Reporting

Authors must include in the Methods section either: A statement confirming that procedures were in place to prevent and/or detect and exclude automated or AI-generated responses, including a description of those procedures (e.g., explicit participant instructions against LLM use, disabled copy-paste functionality, CAPTCHA use, IP filtering, consistency checks, attention checks, adversarial prompting) as well as the types of automated responses that these procedures are suitable …
Maybe of interest: The submission guidelines of Psychological Science now demand an explicit statement on measures taken to reduce the risk of AI-generated responses for all online studies!
www.psychologicalscience.org/publications...
Usually you can control whether the machine turns on though, not only your intentions to turn it on, right?
But yeah, the heuristic isn't perfect. Machine failure is better captured by what Julia said: when you can assume that it's unrelated to what you were going to measure, it's probably fine.
Exactly!
It takes a lot of accepting scientific imperfection ;) For me it personally makes sense when simply thinking about "what you can control". You can control what people are told, but not what they do. The manipulation is always a mediation path with some loss along the way.
not sure I understand your argument, can you explain some more? The homogeneity assumption itself would just be causally ignorant/naive, right?
My personal feeling is that ppl always knew it was bad to only test psych students, but convinced each other that it was fine in the same way as many QRPs.
bsky.app/profile/anne...
After learning about collider bias, reading this gave me the second-worst existential crisis about learning anything causal from data *even when you can run experiments*. Really a must-read if you're not already familiar with this stuff.
Per-protocol analysis strikes again!
Folks, if you randomize but then don't analyze some of the people who got randomized (maybe because they didn't adhere to instructions, maybe because they dropped out), randomization will no longer do all the heavy causal inference lifting.
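A tiny simulation (my own illustrative sketch, not from the linked piece) shows how bad this can get: the treatment below has zero true effect, but adherence depends on an unobserved health variable that also drives the outcome, so the per-protocol contrast is badly biased while the intent-to-treat contrast is not.

    # Per-protocol vs. intent-to-treat under non-random adherence.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    treat = rng.integers(0, 2, n)          # randomized assignment
    health = rng.normal(size=n)            # unobserved prognostic factor
    # Only healthier people manage to adhere when assigned to treatment.
    adhere = np.where(treat == 1, health > 0, True)
    outcome = health + rng.normal(size=n)  # true treatment effect is zero

    itt = outcome[treat == 1].mean() - outcome[treat == 0].mean()
    pp = (outcome[(treat == 1) & adhere].mean()
          - outcome[(treat == 0) & adhere].mean())
    print(f"intent-to-treat: {itt:+.3f}")  # ~ 0.00, unbiased
    print(f"per-protocol:    {pp:+.3f}")   # ~ +0.80, pure selection bias

Dropping the non-adherers conditions on a post-randomization variable, which re-opens the confounding path that randomization had closed.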
Collider bias strikes again
Always different journals, I should add. Makes me think about @ruben.the100.ci's blog post about a major bug in peer review:
www.the100.ci/2020/06/24/m...
Wonderful!
We regret to inform you that your paper cannot be considered for publication, but we encourage you to submit it to our GOLD Open Access sister journal
Thanks! Would be great to receive it, but no pressure of course.