I've started puttering with shinylive/webr: github.com/coatless-tut...
Postdoc recruitment
Want to help build and understand the future of scientific collaboration? We are seeking a postdoc in computational meta-science.
UF (Gainesville, FL)
$55-60k (1-3 years)
High intellectual agency
Deadline March 10
Send us your idea. Details attached!
I bet that many faculty and/or alumni from institutions that had training grants would chip in to get something like this together. (I'm a Northwestern alum and faculty at UW Madison.)
Also this is basically the same thing as the CR3 cluster-robust standard error, implemented in clubSandwich: jepusto.github.io/clubSandwich/
Visiting Poverty Scholars Program, 2026-2027

The Institute for Research on Poverty is calling for applications for its Visiting Poverty Scholars Program. The program funds up to four poverty scholars per year to visit IRP or any one of its U.S. Collaborative of Poverty Centers (CPC) partners for five days in order to interact with its resident faculty, present a poverty-related seminar, and become acquainted with staff and resources. Visiting scholars will confer with a faculty host, who will arrange for interactions with others on campus. The application deadline is 11:59 p.m. Central on Friday, April 3, 2026.

Eligibility: Applicants must be PhD-holding, U.S.-based poverty scholars at any career level who are from economically disadvantaged backgrounds.
#FundSocSci
www.irp.wisc.edu/visiting-pov...
Map showing "One-year change in ZIP Code home prices between January 2025 and January 2026" with Wisconsin seeing some of the highest increases
it's almost like Wisconsin needs a statewide housing strategy…
Thinking of running an RCT in postsecondary education?
MDRC has created a fantastic set of resources to help you in projecting minimum effect sizes, randomizing, and processing data
Proud to have helped advise this project!
www.mdrc.org/the-rct
Aerial photo of Madison's state capitol building with both sides of the isthmus visible
Madison, Wisconsin β 2026
Katie Fitzgerald and Beth Tipton (@statstipton.bsky.social) make a similar argument here: doi.org/10.3102/1076...
(This is not solely about meta-analysis, either. I would argue the same if a field relied on narrative / interpretive review methods.)
But I think it is critical that journals very carefully consider how their selection criteria might distort the published record in a way that hinders the systematic accumulation of evidence.
I think we could agree that there's no need for journals to publish poorly conducted studies, e.g., where assignment to condition was haphazard, where implementation of an intervention was compromised, where there were major confounds, where instrumentation was bad, etc.
Things that I did not assert and that I would not argue for:
1) that journals should publish all studies ever done
2) that journals should be indifferent to the nature of the evidence.
My argument was that the point of journals should be to curate the scientific record, and that this requires using systems of evaluation that allow for accumulation of evidence across individual studies.
Relevant to both, yes, but I worry about the cure being worse than the disease. Sample reliability coefficients are noisy, so it's not obvious that one should routinely use them for artifact correction (for r or for d).
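To make the "cure worse than the disease" worry concrete, here is a minimal Python sketch of attenuation correction with a noisy sample reliability estimate. All of the numbers (true correlation, reliability, sample sizes, the normal approximations for sampling error) are illustrative assumptions, not values from the thread; the point is only that dividing by a noisy sqrt(reliability) inflates the variability of the corrected estimate.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative values (assumptions, not from the thread)
rho = 0.30        # true correlation between constructs
rel = 0.70        # true reliability of the measure
attenuated = rho * np.sqrt(rel)  # what an infinitely large study would observe

n_meta = 10_000   # replications of "one study"
n = 80            # per-study sample size

# Sampling noise in the observed correlation (rough normal approximation)
r_obs = rng.normal(attenuated, (1 - attenuated**2) / np.sqrt(n), n_meta)

# Sampling noise in the sample reliability coefficient (also an assumption)
rel_hat = np.clip(rng.normal(rel, 0.08, n_meta), 0.30, 0.99)

# Classic disattenuation: divide by the square root of the reliability
r_corrected = r_obs / np.sqrt(rel_hat)

print(f"uncorrected r: mean {r_obs.mean():.3f}, SD {r_obs.std():.3f}")
print(f"corrected r:   mean {r_corrected.mean():.3f}, SD {r_corrected.std():.3f}")
```

Under these assumptions the corrected estimates center near the true correlation but are noticeably more variable than the uncorrected ones, which is the trade-off being weighed above.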
Hunter & Schmidt (2007, methods.sagepub.com/book/mono/me...) describe this as the artifact of direct range restriction. It is much better known for correlations, but your example is a great illustration that the issue is relevant for SMDs too.
What about Hedges (2007, doi.org/10.3102/1076...)? He describes several different ways of defining SMDs for cluster-randomized experiments, though in practice I've only ever standardized by total variance.
I agree with your main point that d = 22 is ridiculous in substantive terms and should not be included in a meta-analysis. But I would also note that this is partly because there is no universal SMD metric. There are many different ways of defining SMD, which are not all commensurable.
which will usually be only a small part of the total variation in scores. I would think that the GRIM calculations would need to take this into account to determine whether a set of reported scores are plausible or not. Does your PubPeer comment do so? I couldn't tell from what you wrote.
In this article, the Ms and SDs in Table 2 are calculated by first averaging the individual scores at the classroom level, and then taking M and SD across classrooms (of which there were only a few per condition). So, roughly, the SD in the SMD is based only on between-classroom variation...
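A minimal Python sketch of why the choice of standardizer matters so much here. The setup (number of classrooms, students per classroom, ICC, effect size) is entirely made up for illustration; it just mimics the situation described above, where the SD is computed across a handful of classroom means rather than across students.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy setup: a few classrooms per condition, low ICC, so
# between-classroom variation is a small slice of total variation.
n_classrooms, n_students = 4, 25
icc = 0.05
tau = np.sqrt(icc)          # between-classroom SD
sigma_w = np.sqrt(1 - icc)  # within-classroom SD (total variance = 1)
delta = 0.3                 # treatment effect, in total-SD units

def classroom_scores(shift):
    """Simulate scores for one condition: classroom means plus student noise."""
    means = rng.normal(shift, tau, n_classrooms)
    return means[:, None] + rng.normal(0, sigma_w, (n_classrooms, n_students))

treat, ctrl = classroom_scores(delta), classroom_scores(0.0)

# SMD standardized by the total (student-level) SD
sd_total = np.sqrt((treat.var(ddof=1) + ctrl.var(ddof=1)) / 2)
d_total = (treat.mean() - ctrl.mean()) / sd_total

# SMD standardized by the SD of the classroom means only
cm_t, cm_c = treat.mean(axis=1), ctrl.mean(axis=1)
sd_between = np.sqrt((cm_t.var(ddof=1) + cm_c.var(ddof=1)) / 2)
d_between = (cm_t.mean() - cm_c.mean()) / sd_between

print(f"d standardized by total SD:             {d_total:.2f}")
print(f"d standardized by between-classroom SD: {d_between:.2f}")
```

With balanced classrooms the two numerators are identical, so the between-classroom d is simply the total-SD d inflated by the ratio of the two SDs, several-fold here and far more when the ICC is smaller.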
It must be very hard to publish null results

Publication practices in the social sciences act as a filter that favors statistically significant results over null findings. While the problem of selection on significance (SoS) is well-known in theory, it has been difficult to measure its scope empirically, and it has been challenging to determine how selection varies across contexts. In this article, we use large language models to extract granular and validated data on about 100,000 articles published in over 150 political science journals from 2010 to 2024. We show that fewer than 2% of articles that rely on statistical methods report null-only findings in their abstracts, while over 90% of papers highlight significant results. To put these findings in perspective, we develop and calibrate a simple model of publication bias. Across a range of plausible assumptions, we find that statistically significant results are estimated to be one to two orders of magnitude more likely to enter the published record than null results. Leveraging metadata extracted from individual articles, we show that the pattern of strong SoS holds across subfields, journals, methods, and time periods. However, a few factors such as pre-registration and randomized experiments correlate with greater acceptance of null results. We conclude by discussing implications for the field and the potential of our new dataset for investigating other questions about political science.
I have a new paper. We look at ~all stats articles in political science post-2010 & show that 94% have abstracts that claim to reject a null. Only 2% present only null results. This is hard to explain unless the research process has a filter that only lets rejections through.
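The logic of that filter is easy to see in a small Python simulation. Everything here (per-study sample size, the 100x selection ratio, the publication probabilities) is an illustrative assumption, not a parameter from the paper: even when every study tests a true null, a filter that strongly favors significant results makes the published record look overwhelmingly like rejections.

```python
import numpy as np

rng = np.random.default_rng(0)

n_studies = 100_000
n = 100            # per-study sample size (assumption)
true_effect = 0.0  # every study tests a true null

# Each study: one-sample t-test of a sample mean against zero
samples = rng.normal(true_effect, 1.0, size=(n_studies, n))
t = samples.mean(axis=1) / (samples.std(axis=1, ddof=1) / np.sqrt(n))
significant = np.abs(t) > 1.96

# Selection filter: significant results are far more likely to be published
# (a 100x ratio, chosen only for illustration).
p_pub_sig, p_pub_null = 0.50, 0.005
published = rng.random(n_studies) < np.where(significant, p_pub_sig, p_pub_null)

share_sig_produced = significant.mean()            # ~5% by construction
share_sig_published = significant[published].mean()
print(f"significant among all studies run:  {share_sig_produced:.1%}")
print(f"significant among published studies: {share_sig_published:.1%}")
```

Roughly 5% of the studies run are (falsely) significant, but the large majority of the *published* studies are, which is the sense in which abstracts full of rejections are hard to explain without a filter.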
Aha very interesting. I would be interested to hear more about whatever alternative dissemination model you have in mind. (I'm by no means a proponent of the current journal-focused system, but I also don't have any vision for a better way to run things.)
I would also push back on the idea that non-significant finding = no new knowledge. A precisely estimated zero might well amount to new knowledge: knowledge that an intervention is ineffective, or that there is no relation between two constructs.
The purpose of journals is to build a scientific record, so if that record is difficult or impossible to accumulate and build on, then something is very wrong.
Among other reasons, selecting on statistical significance makes it much, much more difficult to accumulate evidence across studies, whether using quantitative meta-analysis methods or other synthesis techniques.
More of this type of careful meta-research, please. #SystematicReview #MetaAnalysis
Groundhog Harassed By Dipshits In Stupid Hats
If you're not already familiar, you might like Reichardt (2011, doi.org/10.1002/ev.364).