Not sure offhand and don't have the data handy, but I would bet that trolls are mostly young men (assuming we trust them to report their demos). But I doubt that would explain away associations between violence and age and gender
Thanks! The vast majority are choosing outparty leaders, though it's more diffuse when it's unclear who the party leader is. Most other actors are politicians too, but some other elites get mentioned.
For those interested in measuring political violence, check out Lily and Nathan's new review paper below. See also my forthcoming paper at POQ (w/ @llopez.bsky.social and Lucas Lothamer) introducing our own measure scottaclifford.com/wp-content/u...
Thanks!
Looks interesting! Can you share a link to an ungated version?
After nearly a decade measuring American public support for political violence, @nathankalmoe.bsky.social and I have published a somewhat comprehensive guide to measuring these attitudes. This includes historical comparisons and responses to common critiques. doi.org/10.1093/poq/...
My take on the partisan expressive responding literature is now in print. Open access: doi.org/10.1017/S000...
My job market paper is now available as a preprint! 🚨
Using a conjoint survey experiment, I test how features of state-level immigrant integration policy affect perceptions of fairness and support.
3 key points, the big takeaway, the link, and a bonus below ⬇️🧵
The research and analytics team at @statesunited.org is searching for a researcher to support our survey research program. Come join our fully remote team! Great mission, great pay, and excellent benefits.
recruiting.paylocity.com/recruiting/j...
New w/@scottclifford.bsky.social.
Lots of work uses agree-disagree scales, and a lit review shows these are 1) frequently just measured in one direction (agree = higher trait) and 2) correlated with each other.
This raises potentially big problems for the conclusions drawn from these scales.
link.springer.com/article/10.1...
🚨 New paper out at @ajpseditor.bsky.social 🚨
Do the public hold meaningful attitudes? Using the case of abortion policy preferences, we provide strong evidence that policy preferences can be coherent, stable over time, and causally explain vote choice.
doi.org/10.1111/ajps...
Very excited to see this out at @bjpols.bsky.social! In this article, I show that contemporary political news coverage makes it challenging for readers to learn information that is helpful for democratic accountability, even for very politically engaged audiences.
A brief summary:
Nick Vivyan, Chris Hanretty (@chanret.bsky.social) and I have a new book out: "Idiosyncratic Issue Opinion and Political Choice". The core of the book argues that citizens' views about political issues reduce neither to an ideological orientation nor to a lack of substance. (1/10)
🚨 New paper (conditionally accepted at @thejop.bsky.social):
We test whether social desirability bias actually distorts answers in online surveys.
Short version:
It mostly doesn't.
w. @timallinger.bsky.social @kristianvsf.bsky.social @morganlcj.bsky.social
URL: osf.io/preprints/os...
a graph showing JEPS has less selection on significance than other journals
When we look across journals, we see the same patterns repeated. The main exception is the Journal of Experimental Political Science, which has the highest rate of null-only reporting and lowest rate of rejection-only reporting. Kudos to them.
It must be very hard to publish null results

Publication practices in the social sciences act as a filter that favors statistically significant results over null findings. While the problem of selection on significance (SoS) is well-known in theory, it has been difficult to measure its scope empirically, and it has been challenging to determine how selection varies across contexts. In this article, we use large language models to extract granular and validated data on about 100,000 articles published in over 150 political science journals from 2010 to 2024. We show that fewer than 2% of articles that rely on statistical methods report null-only findings in their abstracts, while over 90% of papers highlight significant results. To put these findings in perspective, we develop and calibrate a simple model of publication bias. Across a range of plausible assumptions, we find that statistically significant results are estimated to be one to two orders of magnitude more likely to enter the published record than null results. Leveraging metadata extracted from individual articles, we show that the pattern of strong SoS holds across subfields, journals, methods, and time periods. However, a few factors such as pre-registration and randomized experiments correlate with greater acceptance of null results. We conclude by discussing implications for the field and the potential of our new dataset for investigating other questions about political science.
I have a new paper. We look at ~all stats articles in political science post-2010 & show that 94% have abstracts that claim to reject a null. Only 2% present only null results. This is hard to explain unless the research process has a filter that only lets rejections through.
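Back-of-the-envelope arithmetic makes the size of that filter concrete. The sketch below is my own illustration, not the paper's calibrated model: it assumes a few hypothetical pre-publication shares of significant results and asks what odds ratio of publication would turn that mix into the reported ~94% significant / ~2% null-only split.

```python
# Illustrative sketch (my assumptions, not the paper's calibrated model):
# how strongly must publication favor significant results to turn an
# assumed pre-publication mix into the observed published shares?

def selection_ratio(pub_sig: float, pub_null: float, true_sig: float) -> float:
    """Odds ratio of a significant vs. null result entering the record."""
    published_odds = pub_sig / pub_null          # odds in the published record
    underlying_odds = true_sig / (1 - true_sig)  # odds among all studies run
    return published_odds / underlying_odds

# Published shares from the abstract: >90% significant, <2% null-only.
# true_sig is a hypothetical share of significant results among all studies.
for true_sig in (0.3, 0.5, 0.7):
    r = selection_ratio(pub_sig=0.94, pub_null=0.02, true_sig=true_sig)
    print(f"if {true_sig:.0%} of studies are significant, filter ratio ~ {r:.0f}x")
```

Across these assumptions the implied filter runs from roughly 20x to 110x, which is consistent with the abstract's "one to two orders of magnitude" estimate.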
Reframing policy arguments with opponents' moral foundations did not change policy opinions across 5 issue areas:
www.tandfonline.com/doi/full/10....
Do ordinary Republicans and Democrats really avoid each other in everyday life? In a new working paper with Delia Baldassarri, we present descriptive and experimental evidence to challenge the view that partisanship drives the formation of social relationships.
osf.io/preprints/so...
1/15
The image features the large letters "JEPS" and the text "Call for Editor" against a dark background.
CALL FOR EDITOR -
@jepsjournal.bsky.social seeks an editor or editorial team with a commitment to publishing articles that represent the substantive and methodological diversity of experimental work in the discipline.
https://cup.org/4ae3Tul
cc @apsa.bsky.social @experimentsapsa.bsky.social
This is a belated post about our paper in @poqjournal.bsky.social.
We analyzed 100 survey experiments fielded by TESS (tessexperiments.org), using only information from the proposals to identify intended hypotheses.
Here are some of the things we learned:
Thankfully, no!
JEPS has been pretty steady. Arguably there's much less to gain for experiments with relatively clean and simple data, so any change will likely be concentrated in specific types of research.
I am re-upping this as a reminder that @polbehavior.bsky.social is looking for a new editor! Editing was the most rewarding part of my career and I encourage folks to think about applying.
Here's a suggestion for a New Year's resolution: If you see influential bad research, say something. One part of the whole replication crisis story is that a lot of psychological researchers privately knew that a lot of stuff was bad, but it wasn't discussed publicly.
This book examines how moral rhetoric is a crucial, and potentially unifying, part of how we experience democratic representation.
Shared Morals: The Role of Moral Rhetoric in Party Politics by @jaeheejung.bsky.social, Coming Soon
https://cup.org/4pCplQa
#Politics #PoliSci 🗺️
Perhaps others have seen it already, but I found this preprint (first posted in September) deeply troubling: it raises concerns that LLMs used for classification tasks in research open up new researcher degrees of freedom, which the authors call "LLM-hacking" (akin to p-hacking).
arxiv.org/pdf/2509.08825
Clear evidence that conservatives at universities don't face higher obstacles than liberals in establishing student groups and inviting outside speakers.
"These results fail to offer support for the view that conservative students encounter more difficulty in efforts to access campus resources."
As @seanjwestwood.bsky.social's terrifying new PNAS article demonstrates, LLMs can now pass almost every attention check, mirror personas, stay consistent across pages, and systematically bias responses in the aggregate.
So here's a different angle: verify physical presence, not text.
New paper with Salvo Nunnari that aims to detect (partisan) motivated reasoning (MR) using experimental designs based on information order. This provides a flexible way to detect deviations from Bayesian updating that can be explained by MR.
osf.io/preprints/so...
Yes!! And I'll add some evidence that these agree-disagree type questions seem to be *creating* some conspiracy beliefs.
www.cambridge.org/core/journal...