@killianmcloughlin.bsky.social @williambrady.bsky.social
Interested in why moral conflict is so common on social media?
Join us at #SPSP 2026 for our symposium. We'll present new findings on how platforms shape digital discourse and explore pathways toward healthier online environments.
Saturday, the 28th | 9:30–10:40 AM
Room E270, Level 2
Do ethnic minority interest parties grow through programs, or people? Schaaf, Otjes & Spierings show that DENK's support in the Netherlands stems mainly from personal & religious networks, while online ties matter less. #ComparativePolitics
Read more:
link.springer.com/article/10.1...
Intervening on a central node in a network likely does little given that its connected neighbors will "flip it back" immediately. Happy to see this position supported now.
"Change is most likely [..] if it spreads first among relatively poorly connected nodes."
www.nature.com/articles/s41...
Depolarization is not "a scalable solution for reducing societal-level conflict... achieving lasting depolarization will likely require... moving beyond individual-level treatments to address the elite behaviors and structural incentives that fuel partisan conflict" www.pnas.org/doi/10.1073/...
Partisan views on "more crime": used to move fairly closely, but now radically different (90% say up for GOP, 29% for Dems).
I mean, we are living in two different realities now, and this really hasn't always been the case.
I'm very excited to share that my paper "Cleavage theory meets civil society: A framework and research agenda" with @eborbath.bsky.social & Swen Hutter has now been published online in @wepsocial.bsky.social (w/ open access funding thanks to @wzb.bsky.social!)
www.tandfonline.com/doi/full/10....
New preprint!
Cognitive bottlenecks make LLMs more morally aligned with people.
We made AI "think" more like people by narrowing its focus to a few key moral cues.
This AI better predicted people's moral judgments & was more trusted.
Models as Prediction Machines: How to Convert Confusing Coefficients into Clear Quantities

Abstract: Psychological researchers usually make sense of regression models by interpreting coefficient estimates directly. This works well enough for simple linear models, but is more challenging for more complex models with, for example, categorical variables, interactions, non-linearities, and hierarchical structures. Here, we introduce an alternative approach to making sense of statistical models. The central idea is to abstract away from the mechanics of estimation, and to treat models as "counterfactual prediction machines," which are subsequently queried to estimate quantities and conduct tests that matter substantively. This workflow is model-agnostic; it can be applied in a consistent fashion to draw causal or descriptive inference from a wide range of models. We illustrate how to implement this workflow with the marginaleffects package, which supports over 100 different classes of models in R and Python, and present two worked examples. These examples show how the workflow can be applied across designs (e.g., observational study, randomized experiment) to answer different research questions (e.g., associations, causal effects, effect heterogeneity) while facing various challenges (e.g., controlling for confounders in a flexible manner, modelling ordinal outcomes, and interpreting non-linear models).
Figure illustrating model predictions. On the X-axis the predictor, annual gross income in Euro. On the Y-axis the outcome, predicted life satisfaction. A solid line marks the curve of predictions on which individual data points are marked as model-implied outcomes at incomes of interest. Comparing two such predictions gives us a comparison. We can also fit a tangent to the line of predictions, which illustrates the slope at any given point of the curve.
A figure illustrating various ways to include age as a predictor in a model. On the x-axis age (predictor), on the y-axis the outcome (model-implied importance of friends, including confidence intervals). Illustrated are 1. age as a categorical predictor, resulting in predictions that bounce around a lot with wide confidence intervals, 2. age as a linear predictor, which forces a straight line through the data points with a very tight confidence band, and 3. age splines, which lie somewhere in between: they smoothly follow the data but have more uncertainty than the straight line.
Ever stared at a table of regression coefficients & wondered what you're doing with your life?
Very excited to share this gentle introduction to another way of making sense of statistical models (w @vincentab.bsky.social)
Preprint: doi.org/10.31234/osf...
Website: j-rohrer.github.io/marginal-psy...
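The "prediction machine" idea from the preprint can be sketched in a few lines of numpy. This is a hypothetical example with simulated data, not code from the paper (the marginaleffects package automates these queries for fitted models); the variable names and the quadratic model are my own illustration:

```python
import numpy as np

# Hypothetical data: life satisfaction as a curved function of income.
rng = np.random.default_rng(0)
income = rng.uniform(10_000, 100_000, 500)
satisfaction = 2 + 1.5 * np.log(income) + rng.normal(0, 0.5, 500)

# Fit a quadratic model, whose raw coefficients are hard to interpret.
X = np.column_stack([np.ones_like(income), income, income**2])
beta, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)

def predict(inc):
    """Query the fitted model as a counterfactual prediction machine."""
    inc = np.asarray(inc, dtype=float)
    return beta[0] + beta[1] * inc + beta[2] * inc**2

# Prediction: model-implied satisfaction at incomes of interest.
p20, p50 = predict(20_000), predict(50_000)

# Comparison: difference between two counterfactual predictions.
comparison = p50 - p20

# Slope: tangent to the prediction curve (numerical derivative).
eps = 1.0
slope_30k = (predict(30_000 + eps) - predict(30_000 - eps)) / (2 * eps)
```

The quantities `comparison` and `slope_30k` correspond to the comparisons and tangent slopes in the figure described above: substantively meaningful numbers obtained by querying the model, without ever interpreting the quadratic coefficients directly.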
The CHES EU team has published a new research note in @electoralstudies.bsky.social describing some trends across the 25 years now covered by our trend file and exploring two new items included in the 2024 wave of the survey: doi.org/10.1016/j.el...
Here's a summary thread:
1/
We live in an era of democratic backsliding. But the terminology of "backsliding" isn't up to the task of making sense of the deep crisis of liberal democracy around the world. I've just finished a working paper that lays out what I think is going on.
tl;dr it's about the state and society
Screenshot of the article "How Convincing Is a Crowd? Quantifying the Persuasiveness of a Consensus for Different Individuals and Types of Claims"
We know that a consensus of opinions is persuasive, but how reliable is this effect across people and types of consensus, and are there any kinds of claims where people care less about what other people think? This is what we tested in our new(ish) paper in @psychscience.bsky.social
Title page of article "Electoral Hope" in journal Political Studies.
I have a new article out at @polstudies.bsky.social. In "Electoral Hope", I make the case that supposedly irrational "wishful thinking" is actually a crucial part of how voters make rational sense of their role in democracies.
OA link: doi.org/10.1177/0032...
Can moral language make pro-immigrant messages as effective as anti-immigrant messages?
@kristinabsimonsen.bsky.social shows that pro-immigrant actors are not always bound to lose against the anti-immigrant side www.cambridge.org/core/journal... #FirstView #OpenAccess
New paper in Science Advances @science.org
Can changing how we argue about politics online improve the quality of replies we get?
T HeideJorgensen, @gregoryeady.bsky.social & I use an LLM to manipulate counter-arguments to see how people respond to different approaches to arguments
Thread 1/n
I have a new paper on "The Psychology of Virality" with @steverathje.bsky.social
We explain how similar psychological processes (e.g., preferential attention to negativity, social motives) drive the spread of information across online and offline contexts: www.sciencedirect.com/science/arti...
Really enjoyed my conversation with @chrislhayes.bsky.social about how protests can shape public opinion. He also generously invited me to share a bit of my personal story which helps put the research in context.
Apple: podcasts.apple.com/us/podcast/w...
Spotify: open.spotify.com/episode/2Byd...
The English language is filled with trait words like "caring" and "smart".
These words are the currency of personality/social psych, yet key questions remain about their evolution, function, and structure
We take on these questions in a preprint led by @yuanzeliu.bsky.social
osf.io/preprints/ps...
Here is the link to the preprint: osf.io/preprints/ps...
Huge thanks to my collaborators: @williambrady.bsky.social @nourkteily.bsky.social , who helped shape this project from its inception. And to @joshcjackson.bsky.social and @ycleong.bsky.social , who helped expand the scope of this project.
Our results suggest social media can reshape the public square, by pulling new topics into (and pushing some people out of) moral debate + turning up the heat on already moralized topics. Key Q: How can we design community norms and/or algorithms to curb moralization without chilling civic engagement?
Finding 4: Moralization both spread into new topics (hobbies, entertainment) AND intensified within already moralized topics (e.g., politics). But we only observed intensification on Twitter/X, which, again, suggests platform design may matter when it comes to mitigating runaway moralization.
Finding 3: Moralization rose in two ways: the same users used 3% more moral words each year, BUT extreme voices also gained share. On Reddit we saw only the first (+0.3%/yr); extremists didn't take over. Hint that long comments, downvotes & lighter engagement ranking blunt selection effects.
What user dynamics drive increases in moralization? Are the same people becoming more moralizing (e.g., social learning), or are the types of people engaged online changing, such that high moralizers dominate discourse (e.g., selection effects)? We find the answer is BOTH.
Finding 2: Moralization increased relatively less in traditional media. The rate of moral words in the Corpus of Contemporary American English increased, but the increase occurred almost entirely in a single year (1.11% to 1.31% in 2016), and moral words actually decreased over time in the News on the Web corpus.
Finding 1: Moralization increased significantly on social media. The rate of moral words increased on Twitter/X by 41% from 2013-2021 (from 1.28% of words in posts to 1.80%), and word embeddings showed topics shifted 0.296 SD toward morality. Moral words also increased on Reddit, to a lesser degree: by 6%.
Social media lets people share their perspectives globally and instantaneously for the first time in history. But it can also incentivize people to boil complex issues into simplistic, moralized narratives. This might create a moralizing shift in discourse, which we identify and explain here.
New preprint! We developed new measurement tools to examine moralization in ~2B Twitter/X & Reddit posts and ~5M traditional media texts.
Key finding: moralization increased markedly on social media from 2013-2021, more than in traditional media, and was associated with multiple user dynamics.
Thanks to everybody who chimed in!
I arrived at the conclusion that (1) there's a lot of interesting stuff about interactions and (2) the figure I was looking for does not exist.
So, I made it myself! Here's a simple illustration of how to control for confounding in interactions:
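The point can also be shown with a small simulation (my own sketch with made-up data, not necessarily the content of the figure in this thread): when a confounder's effect on the outcome varies with the moderator, adjusting for the confounder alone leaves the interaction estimate biased; you also need the confounder-by-moderator term.

```python
import numpy as np

# Hypothetical setup: does the effect of x on y differ by moderator m?
# A confounder c affects both x and y, and c's effect on y varies with m.
rng = np.random.default_rng(1)
n = 5_000
m = rng.integers(0, 2, n)           # binary moderator
c = rng.normal(0, 1, n)             # confounder
x = 0.8 * c + rng.normal(0, 1, n)   # x is confounded by c
# True x-by-m interaction is 0.5; c's effect is 1 when m=0, 2 when m=1.
y = 1.0 * x + 0.5 * x * m + (1.0 + 1.0 * m) * c + rng.normal(0, 1, n)

def ols(X, y):
    """Ordinary least squares via numpy's least-squares solver."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
# Adjusting for c alone: the x-by-m estimate (4th coefficient) is biased,
# because c's effect differs across levels of m.
b_naive = ols(np.column_stack([ones, x, m, x * m, c]), y)
# Also including the c-by-m interaction recovers the true value of 0.5.
b_full = ols(np.column_stack([ones, x, m, x * m, c, c * m]), y)
```

Under this data-generating process, `b_naive[3]` lands well away from the true interaction of 0.5, while `b_full[3]` recovers it: controlling for a confounder of the focal predictor is not enough when you are estimating a moderated effect.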
Plutopopulism: Wealth and Trump's Financial Base
Sean Kates, Eric Manning, Tali Mendelberg and Omar Wasow

Abstract: Comparative scholarship suggests authoritarian candidates often rely on backing from the wealthy. The wealthy are also said to play an important role in American campaign finance. Studies of Donald Trump, however, found that he drew significant support from white Americans with less education and privilege. We evaluate wealthy and non-wealthy Americans' financial support for Trump, compared to other candidates, by constructing a comprehensive dataset of property values matched to contributions and voter files. We find Trump underperformed among wealthy Republican donors while mobilizing new non-wealthy donors. Trump also diversified the donorate, especially by education. That is, Trump built an unusual coalition of wealthy and non-wealthy donors. Our results support an alternative, "plutopopulist" model of Trump's financial base.
Is America an oligarchy?
Bernie and AOC say yes, touring the country to "Fight Oligarchy." Other Democratic leaders aren't so sure.
With the debate heating up, I wanted to share a few insights from our recent paper on who's funding American politics. cup.org/4cfm0Az
Title page of an academic article titled "Plutopopulism: Wealth and Trump's Financial Base." Authors listed are Sean Kates (University of Pennsylvania), Eric Manning (Princeton University), Tali Mendelberg (Princeton University), and Omar Wasow (University of California, Berkeley). The abstract below the title states: Comparative scholarship suggests authoritarian candidates often rely on backing from the wealthy. The wealthy are also said to play an important role in American campaign finance. Studies of Donald Trump, however, found that he drew significant support from white Americans with less education and privilege. We evaluate wealthy and non-wealthy Americans' financial support for Trump, compared to other candidates, by constructing a comprehensive dataset of property values matched to contributions and voter files. We find Trump underperformed among wealthy Republican donors while mobilizing new non-wealthy donors. Trump also diversified the donorate, especially by education. That is, Trump built an unusual coalition of wealthy and non-wealthy donors. Our results support an alternative, "plutopopulist" model of Trump's financial base. This study demonstrates the importance of studying both non-wealthy and wealthy Americans, the group who give the most but whose individual behavior has been studied the least. Open access link to paper: http://cup.org/4cfm0Az
Were wealthy donors key to Trump's campaigns in 2016 and 2020?
I'm thrilled to announce a new paper in which Sean Kates, Eric Manning, Tali Mendelberg and I analyzed data on 108 million (!) homeowner-voters.
See "Plutopopulism: Wealth and Trump's Financial Base." Open access: cup.org/4cfm0Az 1/