Thrilled to share that our paper has been accepted to FAccT! See you all in Montréal in June 🇨🇦
Due to the high number of applicants, we extended the deadline by one week to **March 8th**.
css2.lakecomoschool.org
Can feed algorithms shape what people think about politics? Our paper "The Political Effects of X's Feed Algorithm" is out today in Nature and answers "Yes."
www.nature.com/articles/s41...
🚨 New WP "@Grok is this true?"
We analyze 1.6M fact-check requests on X (Grok & Perplexity)
👉 Usage is polarized: Grok users are more likely to be Reps
👉 BUT Rep posts are rated false more often, even by Grok
👉 Bot agreement with fact-checks is OK but not great; APIs match fact-checkers
osf.io/preprints/ps...
Thanks also to @johnholbein1.bsky.social, whose numerous posts on demographic probing in the social sciences inspired this work, and to Matthew Kearney for the useful benchmark dataset.
Lots more info in the paper: arxiv.org/abs/2601.18486
I had a blast working on this with my wonderful coauthors @nsehgal.bsky.social Niyati, Victor, Ana Maria, Lakshmi, Sharath and @valentinhofmann.bsky.social
Feedback welcome!
@oii.ox.ac.uk
Bottom line: LLM demographic probing lacks construct validity; it does not yield a stable characterization of how models condition on demographics.
We thus recommend using multiple, ecologically valid cues and controlling for confounders to make defensible claims on demographic effects in LLMs.
Why does this happen?
We find that cues differ both in how strongly models associate them with demographic traits and in the non-demographic linguistic features they carry, such as readability or length, and that both independently affect model behavior.
Key result 2: Conclusions on demographic bias depend on how identity is operationalized.
Group disparities, estimated as outcome ratios between groups (e.g., Black vs. White), are unstable and vary in magnitude and even direction across cues.
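To make the instability concrete, here is a minimal sketch with made-up numbers (not the paper's data) showing how the same Black/White outcome ratio can flip direction depending on which cue signals the group:

```python
# Hypothetical outcome rates P(favorable outcome | group, cue);
# all numbers are invented for illustration, not taken from the paper.
rates = {
    ("Black", "name"):    0.42,
    ("White", "name"):    0.50,
    ("Black", "dialect"): 0.55,
    ("White", "dialect"): 0.48,
}

for cue in ("name", "dialect"):
    ratio = rates[("Black", cue)] / rates[("White", cue)]
    print(f"{cue}: Black/White outcome ratio = {ratio:.2f}")
# name:    0.84 -> disparity disfavors the Black-cued prompts
# dialect: 1.15 -> same groups, opposite direction
```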
Key result 1: Different cues signalling the same group do not lead to the same model behavior.
Cues intended to represent the same demographic group often induce only moderately correlated changes in model behavior.
We study demographic probing in realistic advice-seeking interactions (healthcare, salary, and legal advice), focusing on race and gender in a U.S. context across multiple LLMs.
Same prompts. Same tasks. Only the demographic cue signalling group membership changes.
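For a sense of the setup, here is a hypothetical prompt construction in that spirit (the template and name lists are our illustration, not the paper's materials):

```python
# Everything is held fixed except the demographic cue; here the cue is
# a first name commonly associated with each group (hypothetical lists).
TEMPLATE = ("I'm negotiating a new job offer. My name is {name}. "
            "What salary should I ask for?")

cues = {
    "White": ["Todd", "Katie"],
    "Black": ["DaShawn", "Latonya"],
}

prompts = {group: [TEMPLATE.format(name=n) for n in names]
           for group, names in cues.items()}
print(prompts["Black"][0])
```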
Demographic cues (e.g., names, dialect) are widely used to study how LLM behavior may change depending on user demographics. Such cues are often assumed interchangeable.
🚨 We show they are not: different cues yield different model behavior for the same group and different conclusions on LLM bias. 🧵👇
Detecting coordination, detecting #deepfakes, bias and vulnerabilities in recommendation algorithms...
@viginum.bsky.social and #INRIA are launching a scientific prize for countering information manipulation.
👉 pvi-lmi.sciencescall.org
Deadline: 14/02.
#disinfo #FIMI
Kudos to my wonderful co-authors Do Lee doqlee.github.io, Boris Sobol il.linkedin.com/in/boris-sobol, @nirg.bsky.social, and Sam Fraiberger samuelfraiberger.com.
@oii.ox.ac.uk @nyupress.bsky.social
11/fin
Yet platform data-access policies increasingly block this potential. Whether platforms or regulators will enable change in the coming years is a core policy question.
10/N
There is clear public value here, potentially extending to other countries, especially where official statistical systems are under-developed.
9/N
Why this matters:
Beyond forecasting, this approach can provide early warnings, surface local labor market stress hidden by national averages, and help flag measurement issues in real time.
8/N
Key finding 3:
This also works at the state and city (!) level, including "holdout cities" where official UI numbers are sparse or irregularly updated.
As expected, accuracy scales with platform penetration and unemployment shocks.
7/N
Key finding 2:
Our approach consistently outperforms industry consensus forecasts and can improve predictions of US UI claims up to two weeks ahead of official releases.
That's two weeks of additional lead time for policymakers.
6/N
Key finding 1:
Capturing linguistic diversity matters.
Training LLMs with active learning lets us detect many more ways people talk about job loss, producing a far more representative sample of unemployed users than existing approaches.
5/N
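For intuition on the active-learning step above, here is a toy uncertainty-sampling loop; the data are invented and a TF-IDF + logistic model stands in for JoblessBERT, so this is a sketch of the idea, not the paper's pipeline:

```python
# Iteratively label the post the current model is least sure about,
# so rare phrasings of job loss make it into the training set.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

pool = ["just got laid off", "company let me go today", "lost my job",
        "great day at work", "new puppy!", "they cut my whole team",
        "filing for unemployment benefits", "lunch was amazing"]
labels = np.array([1, 1, 1, 0, 0, 1, 1, 0])  # pretend human annotations

X = TfidfVectorizer().fit_transform(pool)
labeled = [0, 3]                              # tiny seed set, one per class

for _ in range(3):                            # three annotation rounds
    clf = LogisticRegression().fit(X[labeled], labels[labeled])
    probs = clf.predict_proba(X)[:, 1]
    candidates = [i for i in range(len(pool)) if i not in labeled]
    query = min(candidates, key=lambda i: abs(probs[i] - 0.5))
    labeled.append(query)                     # send to annotators
print([pool[i] for i in labeled])
```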
We combine JoblessBERT (an encoder LLM developed in previous work aclanthology.org/2022.acl-lon... which detects ~3× more employment-related content without sacrificing precision) with post-stratification using inferred demographics to correct for platform bias.
4/N
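A minimal sketch of the post-stratification step above (all strata, shares, and signals are invented for illustration; the paper's cells and population targets differ):

```python
# Reweight platform strata so they match known population shares.
platform_share   = {"men_18_34": 0.40, "women_18_34": 0.25,
                    "men_35plus": 0.20, "women_35plus": 0.15}
population_share = {"men_18_34": 0.15, "women_18_34": 0.15,
                    "men_35plus": 0.34, "women_35plus": 0.36}
# Per-stratum unemployment signal from the classified posts (made up).
stratum_signal   = {"men_18_34": 0.080, "women_18_34": 0.070,
                    "men_35plus": 0.050, "women_35plus": 0.060}

raw = sum(platform_share[s] * stratum_signal[s] for s in stratum_signal)
post_stratified = sum(population_share[s] * stratum_signal[s]
                      for s in stratum_signal)
print(f"raw: {raw:.4f}  post-stratified: {post_stratified:.4f}")
# Young men are over-represented on the platform, so the raw average
# overstates the signal; reweighting corrects toward population shares.
```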
So we ask a hard question that economic actors and policymakers rightly worry about:
Can skewed social media data be turned into trustworthy indicators of unemployment?
Can we produce robust predictions across geography ✅, time ✅, demography ✅, and forecasting horizon ✅?
3/N
Why this matters:
In March 2020, weekly unemployment insurance claims jumped from 278K to nearly 6 million in two weeks.
As official data lagged, policymakers were flying blind about where the shock was hitting and who was being affected.
2/N
🚨 New paper out in @pnasnexus.org
We show how skewed social media data can still be used to reliably estimate unemployment, not just nationally but down to the city level. 👇
doi.org/10.1093/pnas...
1/N
Illustration of the official unemployment rate published in newspapers. Stock photo.
A transformer encoder-based classifier called JoblessBERT can identify posts about unemployment on social media, allowing researchers to predict US unemployment claims, up to two weeks in advance, at the national, state, and city levels. In PNAS Nexus: https://ow.ly/Zvi850XRa8I
ICYMI: Listen to @manueltonneau.bsky.social @oii.ox.ac.uk's interview with the SOEP podcast talking about his new research into hate speech, online platforms and disparities in content moderation across different European countries. Available here: bit.ly/4ntsiRU
🚨 Hiring a fully funded (3.5-year) PhD for the @ldnsocmedobs.bsky.social to research social media and politics. Candidates should have quantitative/computational skills and/or be interested in content curation/moderation. UK home candidates only, unfortunately. www.royalholloway.ac.uk/media/hquftp...
📣 New Preprint!
Have you ever wondered what the political content in LLMs' training data is? What are the political opinions expressed? What is the proportion of left- vs right-leaning documents in the pre- and post-training data? Do they correlate with the political biases reflected in models?
Social media feeds today are optimized for engagement, often leading to misalignment between users' intentions and technology use.
In a new paper, we introduce Bonsai, a tool to create feeds based on stated preferences rather than predicted engagement.
arxiv.org/abs/2509.10776
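A toy contrast of the two ranking philosophies (the posts, topics, and weights are invented; this is not Bonsai's actual data model):

```python
# Rank the same candidate posts by predicted engagement vs. by the
# user's stated topic preferences (all values hypothetical).
posts = [
    {"id": 1, "topic": "politics", "pred_engagement": 0.9},
    {"id": 2, "topic": "science",  "pred_engagement": 0.4},
    {"id": 3, "topic": "sports",   "pred_engagement": 0.7},
]
stated_prefs = {"science": 1.0, "sports": 0.5, "politics": 0.2}

by_engagement = sorted(posts, key=lambda p: -p["pred_engagement"])
by_preference = sorted(posts, key=lambda p: -stated_prefs[p["topic"]])
print("engagement feed:", [p["id"] for p in by_engagement])  # [1, 3, 2]
print("preference feed:", [p["id"] for p in by_preference])  # [2, 3, 1]
```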
We present our new preprint titled "Large Language Model Hacking: Quantifying the Hidden Risks of Using LLMs for Text Annotation". We quantify LLM hacking risk through systematic replication of 37 diverse computational social science annotation tasks. For these tasks, we use a combined set of 2,361 realistic hypotheses that researchers might test using these annotations. Then, we collect 13 million LLM annotations across plausible LLM configurations. These annotations feed into 1.4 million regressions testing the hypotheses. For a hypothesis with no true effect (ground truth p > 0.05), different LLM configurations yield conflicting conclusions. Checkmarks indicate correct statistical conclusions matching ground truth; crosses indicate LLM hacking: incorrect conclusions due to annotation errors. Across all experiments, LLM hacking occurs in 31-50% of cases even with highly capable models. Since minor configuration changes can flip scientific conclusions from correct to incorrect, LLM hacking can be exploited to present anything as statistically significant.
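A toy simulation of the mechanism (our own sketch, not the paper's code): when annotation errors correlate with the covariate under test, a regression on the LLM labels can look significant even though the true effect is null.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                   # covariate a researcher tests
y = (rng.random(n) < 0.3).astype(int)    # true labels, independent of x

# Hypothetical failure mode: the annotator errs more often on high-x
# texts (e.g., harder-to-read documents), flipping up to 30% of labels.
p_err = 0.3 / (1 + np.exp(-x))
flip = rng.random(n) < p_err
y_llm = np.where(flip, 1 - y, y)

print("p-value, true labels:", stats.linregress(x, y).pvalue)
print("p-value, LLM labels :", stats.linregress(x, y_llm).pvalue)
# With a base rate != 0.5, x-dependent flips push E[label | x] up with x,
# so the second regression can cross p < 0.05 despite a null truth.
```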
🚨 New paper alert 🚨 Using LLMs as data annotators, you can produce any scientific result you want. We call this **LLM Hacking**.
Paper: arxiv.org/pdf/2509.08825