
David Slichter

@davidslichter

Labor econ, econometrics, econ of ed. Associate Prof at Binghamton. Fellow at IZA. Website: https://sites.google.com/site/slichterdavid/

155 Followers · 307 Following · 110 Posts · Joined 12.06.2025

Latest posts by David Slichter @davidslichter

In my experience, when I've been able to do model tests, the sort of analysis you're describing ("we controlled for the obvious confounders") is more often than not basically fine. Maybe not zero bias, but small enough bias and variance to get pretty good estimates. YMMV

03.03.2026 03:24 👍 2 🔁 0 💬 0 📌 0

It's totally fine to file-drawer a paper if you conclude that there is too much modeling and/or sampling error to learn anything about the original question. This is totally different from selectively hiding informative analyses.

14.02.2026 23:56 👍 1 🔁 0 💬 1 📌 0

Why stop at 100%?

14.02.2026 23:43 👍 0 🔁 0 💬 0 📌 0

Good point, resolving uncertainty isn't the right term to use. The key is really just that the posterior should look different from the prior.

14.02.2026 23:34 👍 0 🔁 0 💬 0 📌 0

Precision relative to prior uncertainty. Quality of paper = (value of reducing prior uncertainty about this topic) * (amount of uncertainty reduction done by this paper).

14.02.2026 21:27 👍 1 🔁 0 💬 1 📌 0

Obviously the consequences for accuracy depend on whether the truth is in the permissible interval or not. But if you think social science is full of imprecise estimates of small effects (as I do), the truth is probably unpublishable a large fraction of the time.

14.02.2026 20:20 👍 1 🔁 0 💬 0 📌 0

Consider an estimator called "three". Whatever parameter you want, my estimate is 3. Not a great estimator! You shouldn't update at all based on my estimate. Selective reporting depending on what parameter estimate you get is a weaker version of that.

14.02.2026 20:20 👍 0 🔁 0 💬 1 📌 0
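The "three" estimator in the post above can be checked in a few lines. A minimal, purely illustrative sketch (the N(0, 2) prior over true parameters is invented for the example): an estimate that never varies is exactly uncorrelated with the truth, so observing it moves you nowhere from your prior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw many "true" parameters from an (invented) prior.
truths = rng.normal(loc=0.0, scale=2.0, size=100_000)

# The estimator "three": whatever the truth is, report 3.
estimates = np.full_like(truths, 3.0)

# The estimate carries no information about the truth: the sample
# covariance between estimate and truth is exactly zero, so the
# conditional mean of the truth given the estimate is just the prior mean.
cov = np.cov(truths, estimates)[0, 1]
print(cov)  # 0.0
```

Selective reporting is a partial version of the same thing: the reported number is partly determined by the truth and partly by the reporting rule, so it is less informative than an unselected estimate.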

I agree that we shouldn't generally update much based on any single study, or even sometimes based on entire literatures. But if the scientific process prioritized accuracy more, maybe we could update more. Prioritizing statistical significance is incompatible with prioritizing accuracy.

14.02.2026 20:14 👍 1 🔁 0 💬 0 📌 0

If every paper measured exactly the truth, there would be no problem focusing on the surprising results. But in a world where most effects are small, estimates are imprecise, and researchers make errors, surprising/big results are likely to be wrong.

14.02.2026 20:08 👍 1 🔁 0 💬 1 📌 0

I think it is bad that the literature says a bunch of things which are false, and which are in fact selected for being especially likely to be false. Field experts learn to discount but we are not the only ones who read papers. Also bad that "experts" are selected for saying false things.

14.02.2026 19:40 👍 2 🔁 0 💬 1 📌 0

The process you are describing is one in which published findings are interesting but usually false.

We can either try to be interesting or we can try to be right. I think the role of science in society is to be right.

14.02.2026 19:39 👍 0 🔁 0 💬 1 📌 0

The whole point of "correlation is not causation" is that correlations are usually consistent with multiple interpretations. But authors who are skillful at framing correlations can often lead an audience to fixate, to an unwarranted extent, on one specific interpretation. (end)

13.02.2026 18:14 👍 4 🔁 0 💬 0 📌 0

The thing I like about this is that it drives home the extent to which papers can bias you in the direction of believing an IV simply by having placed you in the frame of mind of interpreting a correlation a certain way. (6/n)

13.02.2026 18:14 👍 2 🔁 0 💬 1 📌 0

Once you reveal the actual X from that paper, you can explain that this won a Nobel Prize. Enjoy the resulting facial expressions. (5/n)

13.02.2026 18:14 👍 5 🔁 0 💬 1 📌 1

For instance, try asking a roomful of undergrads or early PhD students why they think mortality rates among early European settlers are correlated with GDP per capita in 1995. (4/n)

13.02.2026 18:14 👍 3 🔁 0 💬 1 📌 0

A fun classroom exercise when teaching IV is to tell the class Z and Y for some famous papers, and ask them to guess what X is. The results are usually entertaining. (3/n)

13.02.2026 18:14 👍 5 🔁 0 💬 1 📌 0

For instance, if asked, "why do you think that the kids who were randomly admitted to a certain middle school wind up with different test scores from the kids who were randomly not admitted?" people will realize that this probably means the school affects scores. (2/n)

13.02.2026 18:14 👍 3 🔁 0 💬 1 📌 0

My personal rule for IVs is that, if there's really only one reason why Z and Y are correlated, then you should be able to tell me Z and Y, and I'd be able to figure out what X is without you telling me. (1/n)

13.02.2026 18:14 👍 9 🔁 2 💬 1 📌 0

My vision: Authors make the case that it is a priori plausible that beta could be at least as small as b, but also plausible that it could be at least as large as b', then argue that their sampling + modeling error is small relative to b'-b. Refs' job is to evaluate these arguments.

13.02.2026 16:28 👍 1 🔁 0 💬 0 📌 0

Distribution of z-stats on Medline, via onlinelibrary.wiley.com/doi/10.1111/...

13.02.2026 15:48 👍 1 🔁 0 💬 0 📌 0

Think of assessing the traditional unbiasedness condition E(estimate|truth) = truth for an estimator where you estimate some true parameter by randomly sampling from the space of published estimates of it.

13.02.2026 15:33 👍 2 🔁 0 💬 0 📌 0
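A quick Monte Carlo of that unbiasedness check, with invented numbers (a small true effect of 0.2, SE = 1, and "publication" only when |z| > 1.96): each individual study is unbiased, but sampling at random from the published subset is not.

```python
import numpy as np

rng = np.random.default_rng(1)

# Many independent studies of the same small true effect, each with SE = 1.
truth = 0.2
estimates = truth + rng.normal(0.0, 1.0, size=200_000)

# "Published" estimates: only those that are statistically significant.
published = estimates[np.abs(estimates) > 1.96]

# E(estimate | truth) = truth holds before the filter, fails after it.
print(round(estimates.mean(), 3))  # close to 0.2
print(round(published.mean(), 3))  # far above 0.2: the filter induces bias
```

The significance filter discards most of the (accurate) estimates near the small truth and keeps the noise-inflated tails, so the published average badly overstates the effect.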

Without this, the current standard is to treat null results as presumptively imprecise, and non-null results as presumptively precise. But you want to judge precision only from the SEs, while significance is as much about the point estimates as about the SEs.

13.02.2026 12:07 👍 2 🔁 0 💬 1 📌 0

Yes, imprecise results shouldn't be publishable. The issue is that there is currently no expectation for authors to explain what constitutes a priori uncertainty based on theory, existing evidence, or auxiliary info.

13.02.2026 12:07 👍 2 🔁 0 💬 1 📌 0

Making it permissible to say that coffee affects dementia risk, but not permissible to say that it doesn't, means that the public cannot tell from the literature whether coffee affects dementia risk or not.

13.02.2026 11:22 👍 4 🔁 0 💬 1 📌 0

I think the right paradigm is that SEs + modeling error need to be small compared with a priori uncertainty about a parameter value. This means you learn something. If you select for estimates which disagree with priors, then you are selecting for publishing things which are likely to be false.

13.02.2026 11:14 👍 11 🔁 0 💬 1 📌 1
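The "SE small relative to prior uncertainty" paradigm can be made concrete with the standard normal-normal conjugate update; a sketch with made-up scales (prior SD = 1):

```python
# Normal prior on a parameter, normal sampling error on its estimate.
# In the conjugate update, precisions (inverse variances) add, so the
# posterior SD shows how much a study reduces prior uncertainty.
def posterior_sd(prior_sd, se):
    return (1.0 / prior_sd**2 + 1.0 / se**2) ** -0.5

prior_sd = 1.0
print(round(posterior_sd(prior_sd, se=0.2), 3))  # SE << prior SD: big reduction
print(round(posterior_sd(prior_sd, se=5.0), 3))  # SE >> prior SD: posterior ~ prior
```

When the SE dwarfs the prior SD, the posterior SD is nearly the prior SD: the study taught you almost nothing, regardless of where its point estimate landed.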

If these properties don't matter, why care about endogeneity or measurement error or other standard concerns?

13.02.2026 11:10 👍 2 🔁 0 💬 0 📌 0

Ideally, published estimates would be unbiased in the sense of E(true parameter | published parameter) = published parameter. That's unrealistic because it requires shrinking towards correct priors. More realistically, E(published estimate | true parameter) = true parameter would be fine.

13.02.2026 11:10 👍 2 🔁 2 💬 2 📌 0
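The gap between the two unbiasedness conditions shows up clearly in a small simulation (prior and noise scales invented for illustration): an estimator can satisfy E(estimate | truth) = truth while E(truth | estimate) falls well short of the estimate, which is exactly why the stronger condition requires shrinkage toward the prior.

```python
import numpy as np

rng = np.random.default_rng(2)

# True parameters drawn from a prior; each study estimates its own truth
# with independent noise of the same scale.
n = 200_000
truths = rng.normal(0.0, 1.0, size=n)
estimates = truths + rng.normal(0.0, 1.0, size=n)

# E(estimate | truth) = truth: regressing estimates on truths gives slope ~ 1.
slope_est_on_truth = np.polyfit(truths, estimates, 1)[0]

# E(truth | estimate) != estimate: regressing truths on estimates gives
# slope ~ 0.5 here, i.e. estimates must be shrunk halfway toward the
# prior mean before they are unbiased in the Bayesian sense.
slope_truth_on_est = np.polyfit(estimates, truths, 1)[0]
print(round(slope_est_on_truth, 2), round(slope_truth_on_est, 2))
```

The 0.5 shrinkage factor is specific to these invented scales (signal and noise variances equal); with less noise it would be closer to 1.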

Definitely a step in the right direction, but the main issue with a cap is that some classes attract better students than others. There's no reason Math 55 should have the same cap as Math 21.

07.02.2026 17:56 👍 0 🔁 0 💬 0 📌 0

My opinion is that nobody should pay attention to any of the Nobels.

16.01.2026 20:24 👍 0 🔁 0 💬 0 📌 0

I have simply accepted that publication in an elite journal means that the paper was interesting, not that it was correct. There are lots of empirical indicators that peer review does not sort more reliable findings to more prestigious journals.

22.12.2025 02:58 👍 1 🔁 0 💬 0 📌 0