Embracing Platform Transparency in a Digital World to Strengthen Democracy
A broad range of views on democracy to help break the stalemate caused by partisan conflict.
Next up in our "100 Ideas" series: a new piece by @jatucker.bsky.social of @csmapnyu.org, "Embracing Platform Transparency in a Digital World to Strengthen Democracy"
Part of @nyulaw.bsky.social Democracy Project's "100 Ideas in 100 Days"
Read the full piece here: democracyproject.org/posts/embrac...
12.02.2026 14:38
👍 3
🔁 1
💬 1
📌 1
Work With Us - NYU’s Center for Social Media, AI, and Politics
@csmapnyu.org is hiring two postdocs.
Amazing group, highly recommend applying.
24.02.2026 18:43
👍 6
🔁 2
💬 0
📌 0
Google Trends chart showing interest in commercial coding agents increasing dramatically in early 2026
You can just research things. New from @jatucker.bsky.social & me at @brookings: Coding agents like Claude Code and Codex will likely accelerate research AND undermine institutional structures we built to support it.
03.03.2026 23:15
👍 32
🔁 12
💬 1
📌 5
Last week the story was that TikTok censored anti-Trump/ICE/Pretti videos after the U.S. ownership change. We investigated with a large set of US TikTok data and found some interesting results, short thread...
04.02.2026 17:52
👍 215
🔁 90
💬 6
📌 16
NYU's Center for Social Media and Politics
Strengthening democracy by conducting rigorous research, advancing evidence-based public policy, and training the next generation of scholars.
CSMaP is now the Center for Social Media, AI, and Politics (CSMaP). Our research has expanded beyond social media to include digital media more broadly, especially generative AI and large language models and their role in politics and public life.
Learn more at csmapnyu.org
28.01.2026 16:36
👍 2
🔁 1
💬 0
📌 0
Is Joe Rogan really just a voice of the right? Our new @csmapnyu.org piece for @goodauth.bsky.social shows his show is just as much a space for the left and the center. A look inside today's surprisingly complicated podcast information ecosystem. 🎙️
goodauthority.org/news/podcast...
17.12.2025 16:39
👍 26
🔁 13
💬 5
📌 4
Congratulations to the authors:
@zevesanderson.com, Wei Zhong, @jatucker.bsky.social 🎉
📄 Read the preprint: osf.io/preprints/so...
#AI #SyntheticMedia #Misinformation #PoliticalCommunication #MediaLiteracy #AIPolicy #ResponsibleAI
01.12.2025 18:28
👍 1
🔁 0
💬 0
📌 1
The results reveal both the promise and the limits of AI labeling. Labels communicate provenance when correctly applied, but they do not reliably shift belief, change engagement, or reduce misinformation risk, suggesting that labeling alone is unlikely to counter the influence of synthetic political visuals.
01.12.2025 18:28
👍 0
🔁 0
💬 1
📌 0
🔎 The team finds evidence of a mixed pattern: exposure to labeled synthetic images can make some participants view unlabeled synthetic ones as more likely to be human-made, but this is offset by a broader skepticism, also triggered by label exposure, that any given image is human-made.
01.12.2025 18:28
👍 0
🔁 0
💬 1
📌 0
• Belief and engagement remained unchanged. Labels did not reduce belief that the depicted event occurred, nor did they affect intentions to like, share, comment, or seek more information.
📌 A follow-up experiment tested whether labeled synthetic images create an “implied authenticity effect.”
01.12.2025 18:28
👍 0
🔁 0
💬 1
📌 0
👉 Key findings:
• AI labels can improve transparency when properly applied. Participants reliably inferred that labeled images were more likely created with AI, even when the labeled image was in fact authentic. +
01.12.2025 18:28
👍 0
🔁 0
💬 1
📌 0
This enabled comparisons across both true and false political visuals. 🔍
01.12.2025 18:28
👍 0
🔁 0
💬 1
📌 0
To build realistic stimuli, the team created synthetic images using ChatGPT-written prompts and Midjourney outputs, and paired them with visually similar real photos. They also found synthetic images of events that never happened, and matched them with authentic images from comparable contexts.
01.12.2025 18:28
👍 0
🔁 0
💬 1
📌 0
Across two online experiments, participants viewed both authentic and AI-generated political images — some labeled “Made with AI,” others unlabeled — and rated:
• who created the image (provenance)
• whether the event happened (veracity)
• how likely they’d be to like, share, or comment (engagement)
01.12.2025 18:28
👍 0
🔁 0
💬 1
📌 0
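To make the design described in the post above concrete, here is a minimal sketch of how label effects on the three rated outcomes could be estimated as simple differences in means. This is illustrative only, not the authors' analysis code; the column names and toy numbers are hypothetical.

```python
# Hypothetical sketch of the labeled-vs-unlabeled comparison described above.
# Column names and values are invented for illustration; they are not taken
# from the paper's replication materials.
import pandas as pd

# Toy data: each row is one participant-image rating.
df = pd.DataFrame({
    "labeled":    [1, 1, 1, 0, 0, 0],                    # 1 = shown a "Made with AI" label
    "provenance": [0.90, 0.80, 0.85, 0.45, 0.50, 0.40],  # perceived likelihood the image is AI-made
    "veracity":   [0.55, 0.60, 0.50, 0.60, 0.55, 0.60],  # belief the depicted event occurred
    "engagement": [0.25, 0.30, 0.20, 0.30, 0.25, 0.30],  # intention to like/share/comment
})

# Average effect of the label on each outcome: mean(labeled) - mean(unlabeled).
means = df.groupby("labeled").mean()
print(means.loc[1] - means.loc[0])
# The thread's pattern, in these terms: a clear positive shift on "provenance",
# effects near zero on "veracity" and "engagement".
```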
As generative AI becomes more accessible, synthetic political images are reshaping how people see and interpret events. One question remains: Do AI labels help the public navigate this environment?
Our new preprint, It Works When It Works, tests exactly that.
🔗 osf.io/preprints/so...
01.12.2025 18:28
👍 12
🔁 3
💬 2
📌 1
9/
Congratulations to the authors: Aaron Erlich, Kevin Aslett, Sarah Graham, and Joshua Tucker! @aaronerlich.bsky.social @selisegraham.bsky.social @jatucker.bsky.social @kevinaslett.bsky.social
14.11.2025 21:20
👍 1
🔁 0
💬 0
📌 0
Taken together, the findings highlight that language itself can shape how people judge credibility in multilingual environments. Yet these effects are not uniform: they depend on which language a person prefers, and they don’t necessarily strengthen resilience against misinformation.
14.11.2025 21:20
👍 0
🔁 0
💬 1
📌 0
We also tested a popular media literacy intervention — “tips to spot false news” — that has been used by platforms like Facebook. While the intervention reduced belief in stories overall, it lowered belief in both true and false stories equally, producing no net gain in discernment. 👇👇
14.11.2025 21:20
👍 0
🔁 0
💬 1
📌 0
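A quick worked example of the "no net gain in discernment" point above, with invented numbers: if discernment is measured as belief in true stories minus belief in false stories, an intervention that lowers belief in both equally leaves discernment unchanged.

```python
# Illustrative arithmetic only; the numbers are invented, not the paper's estimates.
belief_true, belief_false = 0.70, 0.40
drop = 0.10  # the "tips" intervention lowers belief in all stories by the same amount

discernment_before = belief_true - belief_false                   # 0.70 - 0.40 = 0.30
discernment_after = (belief_true - drop) - (belief_false - drop)  # 0.60 - 0.30 = 0.30

print(discernment_before, discernment_after)  # equal: less belief overall, no better discernment
```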
But there was also a tradeoff. Reading in a less-preferred language reduced belief in true stories as well as false ones. In other words, language shifted credibility judgments, but it did not improve people’s ability to distinguish fact from misinformation.
14.11.2025 21:20
👍 0
🔁 0
💬 1
📌 0
The results were striking. Ukrainian-preferring respondents were less likely to believe both true and false stories when written in Russian. By contrast, Russian-preferring respondents sometimes showed greater belief in false stories when those same stories appeared in Ukrainian.
👇👇
14.11.2025 21:20
👍 0
🔁 0
💬 1
📌 0
Our goal was simple yet important: to test whether individuals are more or less susceptible to believing false news stories when those stories appear in their non-preferred language, and to determine whether language itself functions as a credibility cue.
14.11.2025 21:20
👍 0
🔁 0
💬 1
📌 0
Participants were randomly assigned to read stories in their preferred language or their less-preferred language, within days of publication. 👇👇
14.11.2025 21:20
👍 0
🔁 0
💬 1
📌 0
To study this, we asked bilingual Ukrainians to evaluate news articles in Ukrainian and Russian, rating each as true, false or misleading, or "can't tell."
14.11.2025 21:20
👍 0
🔁 0
💬 1
📌 0
This means people encounter true and false information in two linguistic environments, one of which is also used in active disinformation campaigns and is the language of the invader in the current war.
14.11.2025 21:20
👍 0
🔁 0
💬 1
📌 0
Ukraine is a crucial case: most citizens are bilingual in Ukrainian and Russian, regularly consuming news in both languages.
14.11.2025 21:20
👍 0
🔁 0
💬 1
📌 0
How Language Shapes Belief in Misinformation: A Study Among Multilinguals in Ukraine
When we think about false news stories, we usually focus on what is being said. But what if the language of a story shapes whether people believe it?
Our new paper in the Journal of Experimental Political Science explores this question in Ukraine.
www.cambridge.org/core/journal...
14.11.2025 21:20
👍 9
🔁 2
💬 1
📌 0
Computational Turing Test Reveals Systematic Differences Between Human and AI Language
LLMs are now widely used in social science as stand-ins for humans—assuming they can produce realistic, human-like text
But... can they? We don’t actually know.
In our new study, we develop a Computational Turing Test.
And our findings are striking:
LLMs may be far less human-like than we think.🧵
07.11.2025 11:13
👍 334
🔁 134
💬 14
📌 38
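The thread doesn't spell out the method, but the core idea of a detection-style "computational Turing test" can be sketched as follows: if a simple classifier can separate human text from LLM text well above chance, the two differ systematically. A minimal sketch under that reading, with placeholder texts standing in for real corpora; this is not the authors' pipeline.

```python
# Minimal sketch of a detection-style "computational Turing test".
# An illustration of the general idea, not the authors' method; the example
# texts are placeholders for real human and LLM corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

human_texts = ["ugh my train was late again", "anyone else watching the game??",
               "coffee first, opinions later", "lol that meeting could've been an email"]
llm_texts = ["I understand your frustration with the delayed train service.",
             "Many viewers are indeed enjoying the game this evening.",
             "Coffee is a popular way to start the day before forming opinions.",
             "Meetings can often be replaced by concise email communication."]

X = human_texts + llm_texts
y = [0] * len(human_texts) + [1] * len(llm_texts)  # 0 = human, 1 = LLM

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
acc = cross_val_score(clf, X, y, cv=2, scoring="accuracy").mean()
print(f"detection accuracy: {acc:.2f}")  # ~0.5 = indistinguishable; near 1.0 = systematic differences
```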
Survey Professionalism: New Evidence from Web Browsing Data
The paper is co-authored with Bernhard Von Clemm, @ericka.bric.digital
@jonathannagler.bsky.social @magdalenawojciesza.bsky.social
This is also one of the projects I started at @csmapnyu.org. Thanks to the entire lab involved!
The paper can be found here: www.cambridge.org/core/journal...
07.10.2025 18:49
👍 10
🔁 1
💬 0
📌 0
How common are “survey professionals” - people who take dozens of online surveys for pay - across online panels, and do they harm data quality?
Our paper, FirstView at @politicalanalysis.bsky.social, tackles this question using browsing data from three U.S. samples (Facebook, YouGov, and Lucid):
07.10.2025 18:49
👍 136
🔁 54
💬 4
📌 6
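As a hedged illustration of how "survey professionals" might be operationalized in browsing data: count each respondent's visits to known survey sites and flag heavy users. The domain list and the threshold below are invented assumptions, not the paper's definition.

```python
# Hypothetical sketch of flagging "survey professionals" in browsing logs.
# The domain list and the >= 30 visits threshold are illustrative assumptions,
# not the measure used in the paper.
SURVEY_DOMAINS = {"surveymonkey.com", "swagbucks.com", "prolific.com",
                  "mturk.com", "yougov.com"}

def survey_visits(urls):
    """Count visits to known survey domains in one respondent's log."""
    return sum(1 for u in urls for d in SURVEY_DOMAINS if d in u)

logs = {
    "r1": ["https://nytimes.com/a", "https://swagbucks.com/s1"] + ["https://prolific.com/t"] * 40,
    "r2": ["https://espn.com/x", "https://yougov.com/p"],
}

flags = {rid: survey_visits(urls) >= 30 for rid, urls in logs.items()}
print(flags)  # {'r1': True, 'r2': False}
```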