Thoughtful piece from @jatucker.bsky.social and @solmg.bsky.social about agentic AI and social science.
@matthewdeverna.com
Postdoc with Stanford's Tech Impact and Policy Center (@techimpactpolicy.bsky.social). Formerly IU / Observatory on Social Media. Computational social science, human-AI interaction, social media, trust and safety, etc. matthewdeverna.com
📣 CFP: 7th edition of the International Workshop on Cyber Social Threats (CySoc)
We welcome papers that examine a diverse range of issues related to online harmful communications.
Submission: March 22nd, 2026
Notification: April 8th, 2026
Details: cy-soc.github.io/2026/
Organized with love with @yang3kc.bsky.social @frapierri.bsky.social @yelenamejova.bsky.social @ugurkursuncu.bsky.social @mrjimmyblack.com
We are looking forward to your amazing submissions to the CySoc workshop at ICWSM 2026!
Learn more here: cy-soc.github.io/2026/
Note: the previously circulated submission deadline has been shifted.
Yikes...
- Set up the GitHub command-line tool, `gh`
- Have Claude Code build something and open a pull request
- Leave inline comments with detailed instructions on GitHub
- Ask CC to pull them down and make a plan to address them
- Rinse and repeat
Nice balance between automation and quality control, IMHO.
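The loop above can be sketched with the GitHub CLI. This is only an illustrative sketch, not the poster's exact setup: it assumes an authenticated `gh`, a checked-out feature branch, and placeholder values (the PR title, body, and the PR number 123 are all made up for the example).

```shell
# One-time setup: install and authenticate the GitHub CLI
gh auth login

# 1) Have Claude Code implement the change on a branch, then open a PR:
gh pr create --title "Draft: add feature" --body "Initial pass by Claude Code"

# 2) Review on GitHub, leaving inline comments with detailed instructions.

# 3) Ask Claude Code to pull the feedback back down, e.g. via:
gh pr view --comments                           # top-level discussion on the current branch's PR
gh api repos/{owner}/{repo}/pulls/123/comments  # inline (diff-anchored) review comments; 123 is a placeholder

# 4) Claude Code plans against the comments, addresses them, pushes; rinse and repeat.
```

`gh pr view --comments` covers conversation-level feedback, while the `gh api` call is needed for the inline, diff-anchored review comments, which live under a separate REST endpoint.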
Matt and co-authors Kai-Cheng Yang, Harry Yaojun, and Filippo Menczer found that today's leading models perform poorly, even when equipped with advanced reasoning and web search capabilities.
The key to better performance? Giving them access to high-quality, curated evidence.
Read the summary below.
Can #AI #chatbots reliably tell you whether a political claim is true or false? If not, what would it take to make them trustworthy fact-checkers?
A new study led by Matt DeVerna tackles these questions by evaluating 15 #LLMs on more than 6K claims fact-checked by PolitiFact over an 18-year period.
@csmapnyu.org is hiring two postdocs.
Amazing group, highly recommend applying.
Abstract submissions close on March 3rd!
We are also extending a ✨ call for mentored reviewers ✨: if you advise excellent graduate or postdoctoral researchers, you are welcome to recommend them to review for IC2S2 2026. Email IC2S2@uvm.edu to nominate mentored reviewers (or faculty colleagues).
CySoc 2026 is back! Check out the CfP below.
We are also looking for PC members. Ping me if you are interested in joining!
🚨 New WP: "@Grok is this true?"
We analyze 1.6M fact-check requests on X (Grok & Perplexity)
- Usage is polarized: Grok users are more likely to be Republicans
- BUT Republican posts are rated as false more often, even by Grok
- Bot agreement with fact-checks is OK but not great; APIs match fact-checkers
osf.io/preprints/ps...
Neither Civitai nor a16z responded to requests for comment. Study led by @matthewdeverna.com and Shalmoli Ghosh. Full story in @technologyreview.com here www.technologyreview.com/2026/01/30/1...
Inside the marketplace powering bespoke AI deepfakes of real women 🧵
Congrats to @matthewdeverna.com and @shalmoli-ghosh.bsky.social for coverage of their important research on deepfakes of real women:
www.technologyreview.com/2026/01/30/1...
Latest working paper 🧪 w/ @shalmoli-ghosh.bsky.social and @matthewdeverna.com shows that AI porn and NSFW deepfakes targeting women are being commoditized
A Marketplace for AI-Generated Adult Content and Deepfakes
Preprint: doi.org/10.48550/arX...
Our latest paper in @science.org warns about malicious AI swarms: agents capable of adaptive influence campaigns at scale. We have already observed some in the wild (see picture). AI is a real threat to democracy.
#SciencePolicyForum #ScienceResearch 🧪
Paper: doi.org/10.1126/scie...
www.anthropic.com/constitution
Control-F "Some of our views on Claude's nature"
Great job available at @odissei.bsky.social: Chief Technology Officer. This position provides an opportunity to help create the future of computational social science: www.eur.nl/en/working-a...
Consider submitting an ICWSM workshop proposal! It's a great opportunity to create space for discussions around emerging research threads, new methods, or even old but exciting topics. Deadline: Jan 30.
Many thanks to my collaborators on this project: @shalmoli-ghosh.bsky.social and @fil.bsky.social ❤️❤️
Yeah. Civit bounties featured prominently in our last Crimes Against Children conference presentation. It's quite the collection of requests.
Absolutely.
Fun fact: Your talk for OSoMe in 2023 inspired this line of work. I left the presentation and immediately started looking into how to collect the data.
Overall, it looks like incentive-driven features on gen-AI platformsβlike Civitai's Bountiesβcan scale demand for NSFW content and deepfakes.
We think it deserves a lot more attention.
Stacked horizontal bar chart showing the distribution of platform content marking for deepfake bounties classified as SFW versus NSFW. Each bar represents 100% of bounties within that category, divided into "Not marked" (light purple) and "Marked" (dark purple) segments. For SFW deepfake bounties, 85.6% were marked by the platform while 14.4% remained unmarked. In contrast, NSFW deepfake bounties show a more balanced distribution, with 58.3% marked and 41.7% not marked. The substantially higher marking rate for SFW deepfakes (85.6%) compared to NSFW deepfakes (58.3%) suggests that the platform's content moderation systems more effectively detect deepfakes when they violate deepfake policies in non-adult content contexts. The near-even split for NSFW deepfakes indicates potential detection challenges when deepfake content overlaps with adult content categories, where the presence of NSFW elements may obscure or complicate deepfake identification. This differential marking effectiveness has significant implications for platform governance, suggesting that moderation strategies may need category-specific approaches to achieve consistent enforcement across different content types.
We also find that platform governance is inconsistent. Civitai displays a 'real person likeness' notice on 86% of SFW deepfake bounties, but that figure drops to just 58% for NSFW bounties.