The Stand Up for Science rally at the Institut Pasteur
One year on, we Stand Up for Science one more time at @pasteur.fr
#StandUpForScience
@standupforscifr.bsky.social
✊
Current NIH leadership wants you to think it is using rigorous, consistent & scientific processes to screen studies to align them with agency priorities.
But the process that they have put down on paper is a sham.
It’s important to know NIH is not following its own guidance. Here’s why:
🧵1/
If you want to take your mind off awful politics and look at awful science stuff instead, this is a good read: www.sciencedetective.org/scientific-d...
The problem cannot be reduced to the for-profit nature of scholarly publishers. In domains where there is real competition, like groceries, cars, or TVs, the free market delivers products that constantly get better and cheaper. www.experimental-history.com/p/the-one-sc...
Only 2 weeks left to apply to a 3-day hackathon to build tools to improve the visibility of replications!
Apply by 16 March: indico.uni-muenster.de/e/marco2
This is consistent with earlier psychometric work suggesting that 5-7 response options is the optimal scale length, but good to see that the finding holds up in contemporary research. Also good to see that labeling scale points, whether fully labeled or anchored only at the endpoints, has little impact on findings. academic.oup.com/ijpor/articl...
No, you see the smartphones will moderate the effect of school absence for 13.5 year olds living in exurbs in Wyoming. Very nuanced.
This is the swamp in which we work. To pretend that these men are exceptional mistakes the deep misogyny & racism built into American higher ed. These same men have fashioned themselves as heterodox thinkers against the world of woke but this is who they really are. www.chronicle.com/article/unma...
The boys’ club: How Epstein’s influence shaped the exclusion of women in STEM ctmirror.org/2026/02/23/t...
Thank you to @joshuasweitz.bsky.social. The origin of the current assault on science and public health has a direct tie to COVID contrarianism. But Bhattacharya, Prasad and the rest couldn’t rise without help and patronage. 1/ joshuasweitz.substack.com/p/grievance-...
A different kind of reproducibility crisis.
Paper on statistical power necessary for interaction effects
doi.org/10.1177/2515...
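A back-of-envelope sketch (my own, not taken from the linked paper) of why interaction effects demand so much statistical power: in a 2x2 between-subjects design, the interaction contrast (a difference of differences) has twice the standard error of a main-effect contrast at equal cell sizes, so an interaction of the same raw size needs roughly four times the sample for comparable power.

```python
from math import sqrt
from statistics import NormalDist

def contrast_power(effect, se, alpha=0.05):
    """Approximate two-sided z-test power for a normally distributed
    estimate with the given raw effect and standard error (ignores the
    negligible chance of rejecting in the wrong direction)."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    return 1 - NormalDist().cdf(z_crit - effect / se)

n = 100          # observations per cell in a 2x2 design, sigma = 1
d = 0.3          # raw size of both the main effect and the interaction

se_main = sqrt(1 / n)    # main effect: average of 2n vs 2n observations
se_inter = 2 / sqrt(n)   # interaction: difference of differences over 4 cells

print(round(contrast_power(d, se_main), 2))   # main effect: ~0.85
print(round(contrast_power(d, se_inter), 2))  # interaction: ~0.32
```

Same effect size, same total N, yet power drops from ~85% to ~32% once the effect lives in an interaction, which is the usual argument for planning much larger samples when interactions are the target.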
Take a load off your minds everyone, it was regular old misogyny the whole time
Recent work has shown how vulnerable online survey research is to LLMs. Motivated by this, we examined our online Posner cueing data from Prolific. It's concerning. We now must carefully consider when (or whether?) online behavioral data can be trusted.
see our comment:
www.pnas.org/doi/10.1073/...
How the processes of recuperation vitiate apparent methodological advances in medical research, from meta-analysis to Mendelian randomization
Babbage, 1830, discussing the problem that scientists selectively report findings that they want to be true.
Confirmation bias is a strong human tendency. This is why we need to design science in a way that prevents confirmation bias from leading us away from the truth.
This article raises the larger point that it’s not just individual decisions or moral failings that the Epstein files reveal. It’s a culture and clique that was self-reinforcing and enabling.
New paper, on a worrying trend in meta-science: the practice of anonymising datasets on, e.g., published articles. We argue that this is at odds with norms established in research synthesis, explore arguments for anonymisation, provide counterpoints, and demonstrate implications and epistemic costs.
Moldy maybe?
I'm monotonously monotonous. The age effect does, however, seem to be monotonic.
Well, isn't it an interaction of insult quality and age effect? Or is it my cohort which didn't use sunscreen???
Yeah, what does that make us 60-year olds? Desiccated husks?
I think you just called us older people raisins. I'm deeply offended...
New blog post about the age-period-cohort identification problem!
In which, for the first time ever, I ask "What's the mechanism?" and also suggest that sometimes you may actually *not* be interested in causal inference.
www.the100.ci/2026/02/13/o...
Happy birthday to one of my favourite haters, Charles Darwin
A meta-analysis on reducing discrimination finds:
1) passive interventions, such as short-term education or bias reminders, are ineffective
2) targeting behavior directly to inhibit bias (eg making individuals accountable or changing social norms) is helpful
psycnet.apa.org/doiLanding?d...
It must be very hard to publish null results

Publication practices in the social sciences act as a filter that favors statistically significant results over null findings. While the problem of selection on significance (SoS) is well-known in theory, it has been difficult to measure its scope empirically, and it has been challenging to determine how selection varies across contexts. In this article, we use large language models to extract granular and validated data on about 100,000 articles published in over 150 political science journals from 2010 to 2024. We show that fewer than 2% of articles that rely on statistical methods report null-only findings in their abstracts, while over 90% of papers highlight significant results. To put these findings in perspective, we develop and calibrate a simple model of publication bias. Across a range of plausible assumptions, we find that statistically significant results are estimated to be one to two orders of magnitude more likely to enter the published record than null results. Leveraging metadata extracted from individual articles, we show that the pattern of strong SoS holds across subfields, journals, methods, and time periods. However, a few factors such as pre-registration and randomized experiments correlate with greater acceptance of null results. We conclude by discussing implications for the field and the potential of our new dataset for investigating other questions about political science.
I have a new paper. We look at ~all stats articles in political science post-2010 & show that 94% have abstracts that claim to reject a null. Only 2% present only null results. This is hard to explain unless the research process has a filter that only lets rejections through.
New: "Effect of Artificial Intelligence on Learning: A Meta-Meta-Analysis" by Wagenmakers and colleagues revealing evidence for "severe publication bias and extreme between-study heterogeneity" in existing meta-analyses of the effects of AI on learning: osf.io/preprints/ps...
**Just Published**
We tested whether purpose in life mediates the relation between #loneliness and future risk of death.
Loneliness seems to set in motion an erosion of purpose in life, which has downstream consequences for length of life.
Paper (open access)
doi.org/10.1016/j.so...