
Matt DeVerna

@matthewdeverna.com

Postdoc with Stanford's Tech Impact and Policy Center (@techimpactpolicy.bsky.social). Formerly IU / Observatory on Social Media. Computational social science, human-AI interaction, social media, trust and safety, etc. 🧨 matthewdeverna.com

8,909
Followers
817
Following
206
Posts
05.07.2023
Joined

Latest posts by Matt DeVerna @matthewdeverna.com

The train has left the station: Agentic AI and the future of social science research | Brookings
A new era of agentic AI agents has begun. What does it mean for social scientists? Solomon Messing and Joshua Tucker discuss.

Thoughtful piece from @jatucker.bsky.social and @solmg.bsky.social about agentic AI and social science.

03.03.2026 18:33 πŸ‘ 4 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
CySoc 2026 - International Workshop on Cyber Social Threats

πŸ“£CFP: 7th edition International Workshop on Cyber Social Threats (CySoc)

We welcome papers that examine a diverse range of issues related to online harmful communications.

πŸ“…Submission: March 22nd, 2026
πŸ“…Notification: April 8th, 2026

πŸ”— Details: cy-soc.github.io/2026/

02.03.2026 14:34 πŸ‘ 4 πŸ” 5 πŸ’¬ 0 πŸ“Œ 0

🚨🚨🚨

02.03.2026 14:22 πŸ‘ 1 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

Organized with love with @yang3kc.bsky.social @frapierri.bsky.social @yelenamejova.bsky.social @ugurkursuncu.bsky.social @mrjimmyblack.com

27.02.2026 18:16 πŸ‘ 2 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

We are looking forward to your amazing submissions to the CySoc workshop at ICWSM 2026!

Learn more here: cy-soc.github.io/2026/

Note: the previously circulated submission deadline has been shifted.

27.02.2026 18:16 πŸ‘ 5 πŸ” 4 πŸ’¬ 1 πŸ“Œ 0

Yikes...

27.02.2026 00:04 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

πŸ€¦β€β™‚οΈ

26.02.2026 21:41 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

πŸ‘€πŸ‘€

26.02.2026 20:55 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

- Set up the github command line tool gh
- Have Claude Code create something and create a pull request
- Leave inline comments with detailed instructions on GitHub
- Ask CC to pull them down and make a plan to address
- Rinse and repeat

Nice balance between automation and quality control, IMHO.
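The loop in this post can be sketched as shell commands. This is an illustrative sketch only: the `gh` subcommands shown are real GitHub CLI commands, but the branch name and commit details are placeholders, and the agent's edits happen between the steps.

```shell
# Illustrative sketch of the review loop described above.
# Assumes the GitHub CLI (`gh`) is installed; branch name is a placeholder.

gh auth login                    # one-time setup: authenticate the gh CLI

git checkout -b agent-feature    # agent works on its own branch
# ... coding agent (e.g., Claude Code) edits files and commits ...
gh pr create --fill              # open a pull request from the branch

# Reviewer leaves inline comments on GitHub, then the agent pulls them down:
gh pr view --comments            # read review comments from the terminal

# ... agent makes a plan, addresses the comments, pushes fixes ...
git push                         # rinse and repeat until ready to merge
```

The quality-control step lives in the middle: the human reviews on GitHub, and the agent only proceeds once it has read the inline comments back down.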

26.02.2026 19:26 πŸ‘ 1 πŸ” 0 πŸ’¬ 7 πŸ“Œ 0
Large Language Models Require Curated Context for Reliable Political Fact-Checking -- Even with Reasoning and Web Search
Large language models (LLMs) have raised hopes for automated end-to-end fact-checking, but prior studies report mixed results. As mainstream chatbots increasingly ship with reasoning capabilities and ...

Explore the preprint ⬇️

23.02.2026 17:04 πŸ‘ 2 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0
AI Chatbots Struggle at Fact-Checking, but Curated Evidence Can Help
Can AI chatbots reliably tell you whether a political claim is true or false? And if not, what would it take to make them trustworthy fact-checkers?

Matt and co-authors Kai-Cheng Yang, Harry Yaojun, and Filippo Menczer found that today's leading models perform poorly, even when equipped with advanced reasoning and web search capabilities.

πŸ‘‰ The key to better performance? Giving them access to high-quality, curated evidence.

Read the summary ⬇️

23.02.2026 17:04 πŸ‘ 2 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

Can #AI #chatbots reliably tell you whether a political claim is true or false? If not, what would it take to make them trustworthy fact-checkers?

A new study led by Matt DeVerna tackles these questions by evaluating 15 #LLMs on more than 6K claims fact-checked by PolitiFact over an 18-year period.

23.02.2026 17:04 πŸ‘ 6 πŸ” 2 πŸ’¬ 1 πŸ“Œ 0
Work With Us - NYU’s Center for Social Media, AI, and Politics

@csmapnyu.org is hiring two postdocs.

Amazing group, highly recommend applying.

24.02.2026 18:43 πŸ‘ 6 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0

Abstract submissions close on March 3rd!

We are also extending a ✨ call for mentored reviewers ✨: if you advise excellent graduate or postdoctoral researchers, you are welcome to recommend them to review for IC2S2 2026. Email IC2S2@uvm.edu to nominate mentored reviewers (or faculty colleagues).

23.02.2026 19:39 πŸ‘ 14 πŸ” 12 πŸ’¬ 1 πŸ“Œ 2

CySoc 2026 is back! Check out the CfP below.

We are also looking for PC members. Ping me if you are interested in joining!

23.02.2026 17:15 πŸ‘ 3 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0

🚨New WP "@Grok is this true?"
We analyze 1.6M fact-check requests on X (Grok & Perplexity)
πŸ“ŒUsage is polarized, Grok users more likely to be Reps
πŸ“ŒBUT Rep posts rated as false more oftenβ€”even by Grok
πŸ“ŒBot agreement with factchecks is OK but not great; APIs match fact-checkers
osf.io/preprints/ps...

03.02.2026 21:55 πŸ‘ 118 πŸ” 48 πŸ’¬ 2 πŸ“Œ 3
Inside the marketplace powering bespoke AI deepfakes of real women
New research details how Civitai lets users buy and sell tools to fine-tune deepfakes the company says are banned.

Neither Civitai nor a16z responded to requests for comment. Study led by @matthewdeverna.com and Shalmoli Ghosh. Full story in @technologyreview.com here www.technologyreview.com/2026/01/30/1...

30.01.2026 17:15 πŸ‘ 4 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0

Inside the marketplace powering bespoke AI deepfakes of real women 🧡

30.01.2026 17:15 πŸ‘ 18 πŸ” 12 πŸ’¬ 1 πŸ“Œ 0
Inside the marketplace powering bespoke AI deepfakes of real women
New research details how Civitai lets users buy and sell tools to fine-tune deepfakes the company says are banned.

Congrats to @matthewdeverna.com and @shalmoli-ghosh.bsky.social for coverage of their important research on deepfakes of real women:

www.technologyreview.com/2026/01/30/1...

30.01.2026 17:28 πŸ‘ 5 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0

Latest working paper πŸ§ͺ w/ @shalmoli-ghosh.bsky.social and @matthewdeverna.com shows that AI porn and NSFW deepfakes targeting women are being commoditized

A Marketplace for AI-Generated Adult Content and Deepfakes

Preprint: doi.org/10.48550/arX...

28.01.2026 23:21 πŸ‘ 9 πŸ” 5 πŸ’¬ 0 πŸ“Œ 0

Our latest paper in @science.org warns about malicious AI swarms, agents capable of adaptive influence campaigns at scale. We already observed some in the wild (picture). AI is a real threat to democracy.
#SciencePolicyForum #ScienceResearch πŸ§ͺ
Paper: doi.org/10.1126/scie...

26.01.2026 01:08 πŸ‘ 99 πŸ” 54 πŸ’¬ 2 πŸ“Œ 5
[Mature Content] From the videos community on Reddit: The Killing of Alex Pretti β€” A Step-by-step Analysis, Time-matched & Compared to Bovino's Statements Posted by Ratspeed - 755 votes and 25 comments
25.01.2026 03:53 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 1
Claude's Constitution
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.

www.anthropic.com/constitution

Control-f "Some of our views on Claude’s nature"

23.01.2026 21:52 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Great job available at @odissei.bsky.social: Chief Technology Officer. This position provides an opportunity to help create the future of computational social science: www.eur.nl/en/working-a...

23.01.2026 00:34 πŸ‘ 7 πŸ” 4 πŸ’¬ 0 πŸ“Œ 0

Consider submitting an ICWSM workshop proposal! It’s a great opportunity to create space for discussions around emerging research threads, new methods, or even old but exciting topics. Deadline: Jan 30.

23.01.2026 01:18 πŸ‘ 4 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0

Many thanks to my collaborators on this project: @shalmoli-ghosh.bsky.social and @fil.bsky.social ❀️❀️

23.01.2026 05:20 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Yeah. Civit bounties featured prominently in our last Crimes Against Children conference presentation. It’s quite the collection of requests.

23.01.2026 02:14 πŸ‘ 5 πŸ” 2 πŸ’¬ 1 πŸ“Œ 0

Absolutely.

Fun fact: Your talk for OSoMe in 2023 inspired this line of work. I left the presentation and immediately started looking into how to collect the data.

23.01.2026 02:30 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Overall, it looks like incentive-driven features on gen-AI platformsβ€”like Civitai's Bountiesβ€”can scale demand for NSFW content and deepfakes.

We think it deserves a lot more attention.

23.01.2026 02:06 πŸ‘ 2 πŸ” 0 πŸ’¬ 2 πŸ“Œ 0
Stacked horizontal bar chart showing the distribution of platform content marking for deepfake bounties classified as SFW versus NSFW. Each bar represents 100% of bounties within that category, divided into "Not marked" (light purple) and "Marked" (dark purple) segments. For SFW deepfake bounties, 85.6% were marked by the platform while 14.4% remained unmarked. In contrast, NSFW deepfake bounties show a more balanced distribution, with 58.3% marked and 41.7% not marked. The substantially higher marking rate for SFW deepfakes (85.6%) compared to NSFW deepfakes (58.3%) suggests that the platform's content moderation systems more effectively detect deepfakes when they violate deepfake policies in non-adult content contexts. The near-even split for NSFW deepfakes indicates potential detection challenges when deepfake content overlaps with adult content categories, where the presence of NSFW elements may obscure or complicate deepfake identification. This differential marking effectiveness has significant implications for platform governance, suggesting that moderation strategies may need category-specific approaches to achieve consistent enforcement across different content types.

We also find that platform governance is inconsistent. Civitai displays a 'real person likeness' notice on 86% of SFW deepfake bounties, but that figure drops to just 58% for NSFW bounties.

23.01.2026 02:06 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0