For now, the figure raises a simple question:
Is economics shifting away from formal, journal-based critique — even as concerns about credibility move center stage?
Full blog post here: i4replication.org/the-vanishin...
Where have all the comments gone?
For decades, the American Economic Review regularly published formal comments — papers that replicate, reassess, or challenge earlier AER articles.
In our latest blog post, we show: they’ve nearly disappeared.
We have a new blog post @i4replication.bsky.social on the steady decline of replications published as comments in the American Economic Review. i4replication.org/the-vanishin...
Or simply voters, because they take the view that international climate policy will not be decided in German boiler rooms. But sure, anyone unwilling to believe that reaches for the lobby conspiracy story.
You can see it that way if you want to, and if you refuse to accept that it is also (for cost reasons) fairly close to the popular will.
Of course it is about the earlier plans. Why else the mystical reference to the "fossil lobby" in the opening post? I know which law is in force, and which one Habeck originally wanted. What amuses and annoys me are these odd insinuations, which elsewhere would be called conspiracy narratives.
I only thought this thread was criticizing the criticism of the "initial plans" as overblown. But OK, if the brand-new heating law (because of the blending quota) is now perceived as more ambitious than the "existing" one, then everything is fine as far as environmental policy goes.
It's not about "the existing GEG". It's about what was once planned under the "heating hammer". That is what all of this is being contrasted with here, working with "open knife" metaphors on the one side, but not on the other.
But "affordability" for citizens would also be in bad shape if you pushed them into heat pumps on a tight timetable. Why would the knife be any more folded shut?
The thread does talk about the CO2 price and blending quotas (without which its logic collapses, by the way). So where does the "abolition of all environmental standards" come from?
May be. I would think that for mid-level academic staff, the sometimes even stranger selection mechanisms within academia matter more than what happens "on the Olympus of economics" (as Handelsblatt puts it). But who knows.
Yes, I can see that. It really does seem to be about different expectations regarding "market affinity" (i.e., willingness to intervene and regulate, public debt). But drawing conclusions about the freedom of research and science from that is rather adventurous.
Mainstream economists (under 50) are no longer "ordo" anyway.
‘The Value of Robust Null Results in Agricultural Economics’
Very good and valuable initiative by the Journal of Agricultural Economics and the guest editors Laure Kuhfuss and @jensrommel.bsky.social to launch this special issue on such an important topic 👏
onlinelibrary.wiley.com/page/journal...
Screenshot of an article abstract.
Title: Statistics in Service of Metascience: Measuring Replication Distance with Reproducibility Rate
Authors: Erkan O. Buzbas and Berna Devezer
Abstract: Motivated by the recent putative reproducibility crisis, we discuss the relationship between the replicability of scientific studies, the reproducibility of results obtained in these replications, and the philosophy of statistics. Our approach focuses on challenges in specifying scientific studies for scientific inference via statistical inference and is complementary to classical discussions in the philosophy of statistics. We particularly consider the challenges in replicating studies exactly, using the notion of the idealized experiment. We argue against treating reproducibility as an inherently desirable property of scientific results, and in favor of viewing it as a tool to measure the distance between an original study and its replications. To sensibly study the implications of replicability and results reproducibility on inference, such a measure of replication distance is needed. We present an effort to delineate such a framework here, addressing some challenges in capturing the components of scientific studies while identifying others as ongoing issues. We illustrate our measure of replication distance by simulations using a toy example. Rather than replications, we present purposefully planned modifications as an appropriate tool to inform scientific inquiry. Our ability to measure replication distance serves scientists in their search for replication-ready studies. We believe that likelihood-based and evidential approaches may play a critical role towards building statistics that effectively serve the practical needs of science.
Keywords: replication distance; reproducibility rate; philosophy of statistics; scientific inference; idealized experiment; minimum viable experiment
I'd like to re-up this paper we published last year bec I believe it makes a fundamental contribution to theoretical metascience but it is woefully underappreciated. We address a key challenge in estimating the reproducibility of a result: The distance of a replication study from the original. 1/n
This is such a strong comment, published in 2022. It has 14 Google Scholar citations. The replicated paper has 65 citations in 2025 alone - and counting. www.aeaweb.org/articles?id=...
It must be very hard to publish null results.
Publication practices in the social sciences act as a filter that favors statistically significant results over null findings. While the problem of selection on significance (SoS) is well-known in theory, it has been difficult to measure its scope empirically, and it has been challenging to determine how selection varies across contexts. In this article, we use large language models to extract granular and validated data on about 100,000 articles published in over 150 political science journals from 2010 to 2024. We show that fewer than 2% of articles that rely on statistical methods report null-only findings in their abstracts, while over 90% of papers highlight significant results. To put these findings in perspective, we develop and calibrate a simple model of publication bias. Across a range of plausible assumptions, we find that statistically significant results are estimated to be one to two orders of magnitude more likely to enter the published record than null results. Leveraging metadata extracted from individual articles, we show that the pattern of strong SoS holds across subfields, journals, methods, and time periods. However, a few factors such as pre-registration and randomized experiments correlate with greater acceptance of null results. We conclude by discussing implications for the field and the potential of our new dataset for investigating other questions about political science.
I have a new paper. We look at ~all stats articles in political science post-2010 & show that 94% have abstracts that claim to reject a null. Only 2% present only null results. This is hard to explain unless the research process has a filter that only lets rejections through.
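The selection-on-significance filter described above is easy to see in a toy simulation. The sketch below is not the paper's calibrated model; it is a minimal illustration under assumed parameters (an invented mix of true-null and modest real effects, a crude z-test, a filter that publishes only significant estimates) showing how such a filter skews the published record toward large effects:

```python
import random
import statistics

random.seed(42)

def run_study(true_effect, n=50):
    """Simulate one study: draw n noisy observations around true_effect,
    return (effect estimate, significant-at-5% flag via a crude z-test)."""
    sample = [true_effect + random.gauss(0, 1) for _ in range(n)]
    est = statistics.mean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    return est, abs(est / se) > 1.96  # ~5% false positives under a true null

# Assumed mix: 800 true nulls, 200 modest real effects of 0.3
studies = [run_study(0.0) for _ in range(800)] + \
          [run_study(0.3) for _ in range(200)]

all_ests  = [est for est, _ in studies]
published = [est for est, sig in studies if sig]  # significance filter

print(f"share of studies published:    {len(published) / len(studies):.2f}")
print(f"mean effect, all studies:      {statistics.mean(all_ests):.3f}")
print(f"mean effect, published record: {statistics.mean(published):.3f}")
```

Only a minority of studies pass the filter, and their average estimated effect is well above the average across all studies run, which is the basic mechanism behind the inflated published record the paper documents.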
That is the case in all fields. However, in all fields, probably including labor, non-convincing models make it into the published literature only when their results are significant.
Sometimes there is good econ content on LinkedIn now, e.g. onlinelibrary.wiley.com/doi/epdf/10....
New post on @voxdev.bsky.social about our long-term study of rural electrification in Rwanda. We revisited villages connected to the grid ~10 years ago. Modest short-term impacts persist—but adoption remains extremely low. voxdev.org/topic/energy...
To underpin my credibility: I don't live in Essen. In fact, I indulged in the "Oh, I live in a much cooler city" arrogance towards Essen for many years. But I think I was wrong. It's an underrated place, and your initial post conveyed this nicely.
Let's put it this way: it's a similarly authentic city. But the people's accents... In any case, Shooshan's approach to cities is worth emulating. Look for what is special, what is valuable. Not just picturesque facades (which in Germany is anyhow a proxy for pre-war economic irrelevance...).
Let's put it like this: Essen & the Ruhrgebiet are a lot more interesting - historically, socially, culturally - than many "pretty" places. If you want an open-air museum while shopping, go somewhere else. But if you open your mind to the hyper-interesting dimensions, you will even find it pretty.
Another seminal 'causal' economic history paper revisited, another interesting replication saga. onlinelibrary.wiley.com/doi/abs/10.1...
Yes, I agree, and I am generally also in Katrin Auspurg's camp on this one. Without thinking very hard about the "model space" I cannot see how multiverse analysis should be insightful. At least not from a "policing replication" perspective to detect p-hacking.
New submission format at SBE:
“Replications as Registered Reports”
link.springer.com/journal/1118...
You can get "in-principle acceptance" before data collection even begins; the final paper gets published regardless of the results, provided the study is conducted rigorously.
#EconSky
Interesting post. But all this is well known. It is of course not the purpose of a robustness replication to run 10000 unreasonable specifications. In fact, the pivotal step of any robustness replication is to think about theoretically justified specifications. I left a comment under the post:
The pilot replicability assessment for our large meta-replication project is out in Q Open - and it offers some interesting insights. The replicated paper is perfectly reproducible & very robust, but fairly negligent when it comes to construct validity and pre-specification. doi.org/10.1093/qope...
I'm delighted about my interview in the SZ @szde.bsky.social! Wealth inequality in Germany is extremely high. This has structural causes, above all business ownership at the top end. From this follow higher returns for the rich, which drives inequality further sz.de/li.3372503
This week on Jung & Naiv:
- Today the French top economist Gabriel Zucman is the guest. LIVE from 5 p.m., later at youtu.be/F7TZSj-fWyI