Looks very interesting, looking forward to digging in!
🚨 My first solo preprint is out!
I study Grok's fact-checking on X and show:
📌 Professional fact-checkers are a major source for Grok's fact-checks.
📌 Grok becomes more accurate as more articles from fact-checkers become available.
📌 ssrn.com/abstract=626...
Check out this recent @nature.com paper reporting a field experiment on X. It shows X's algorithm boosts conservative content and downranks traditional media, shifting users' views on key issues. Switching to a chronological feed doesn't reverse the effect. www.nature.com/articles/s41...
This is consistent with earlier psychometric work suggesting that 5-7 response options is optimal for rating scales, but good to see the finding holds up in contemporary research. Also good to see that labeling scale points, whether anchored or not, has little impact on findings. academic.oup.com/ijpor/articl...
New paper in Current Directions in Psych Science: journals.sagepub.com/doi/10.1177/...
After countless arguments about what tasks people should or should not offload to AI, we instead argue that genAI can be used to *augment* research protocols in novel ways. I.e., use AI to make better psych experiments!
Come join our academic family!
Honored (and genuinely, wildly grateful and -- even more than I am grateful -- surprised) to share that our paper "Durably reducing conspiracy beliefs through dialogues with AI" received the @aaas.org Newcomb-Cleveland Prize (for the "most outstanding" paper in @science.org last year).
New @scalinglaws.bsky.social episode: @noupside.bsky.social and I talk to @dgrand.bsky.social about his research showing AI chatbots can shift people's political beliefs. www.lawfaremedia.org/article/scal...
APE update: we retested recent frontier models on whether they still comply with requests to persuade on extreme harms (terrorism, sexual abuse). GPT-5.1 & Claude Opus 4.5: near-zero compliance. But Gemini 3 Pro complies 85% of the time, with no jailbreak needed. 🧵
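For anyone running a similar eval, the headline numbers reduce to a simple tally once each response has been judged as complying or refusing. A minimal sketch in Python, not the APE team's code; the file name and columns are assumptions for illustration:

```python
import pandas as pd

# Hypothetical judged responses: one row per (model, request) pair,
# with a binary "complied" label produced by a separate judging step.
df = pd.read_csv("judged_responses.csv")  # columns: model, request_id, complied (0/1)

# Mean of the 0/1 label = share of harmful-persuasion requests the model went along with.
rates = df.groupby("model")["complied"].mean()
print(rates.round(2))  # e.g. near 0.00 for some models, ~0.85 for others
```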
Interesting new paper in Political Psychology from @benmtappin.bsky.social and Ryan McKay investigating party cues
onlinelibrary.wiley.com/doi/10.1111/...
🚨 New WP: "@Grok is this true?"
We analyze 1.6M fact-check requests on X (Grok & Perplexity)
📌 Usage is polarized; Grok users more likely to be Reps
📌 BUT Rep posts are rated false more often, even by Grok
📌 Bot agreement with fact-checks is OK but not great; APIs match fact-checkers
osf.io/preprints/ps...
Does a motivation to persuade someone of a view we do not hold cause us to deceptively self-persuade to shift our view?
No, research by Zhang & @dgrand.bsky.social suggests; simple preferential exposure to information has the same effect:
buff.ly/vB18poi
Our open model proving out specialized RAG LMs over scientific literature has been published in Nature!
Congrats to our lead @akariasai.bsky.social & the team of students and Ai2 researchers/engineers
www.nature.com/articles/s41...
Grok fact-checks our paper on Grok fact-checking - and it approves!
Stay tuned for another paper digging deep into the fact-checking performance of a bunch of different API models
Grateful as always to amazing coauthors @thomasrenault.bsky.social @mmosleh.bsky.social
and you can check out other papers from my group on human-AI interaction here: docs.google.com/document/d/1...
SUMMARY:
📌 AI fact-checking on X is widespread
📌 Models are reasonably accurate, and likely to improve
📌 But usage and response are highly polarized
📌 First indication that AI is heading in the direction of other media: "different political tribes, different AI referees"
In a survey experiment (N=1,592 US adults), LLM fact-checks meaningfully shift beliefs in the direction of the fact-check - BUT responses to Grok fact-checks become polarized by partisanship when the model's identity is disclosed.
Similarly, trust in Grok is highly polarized
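A polarization result like this is typically tested with an interaction term: belief updating regressed on disclosure, partisanship, and their product. A minimal sketch (not the paper's actual analysis code; all variable and file names are assumptions):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")
# belief_shift: post - pre belief, in the direction of the fact-check
# disclosed:   1 if respondents were told the fact-check came from Grok
# republican:  1 if respondent identifies as Republican

# Polarization by partisanship shows up as a significant
# disclosed:republican interaction, not as main effects alone.
fit = smf.ols("belief_shift ~ disclosed * republican", data=df).fit()
print(fit.summary())
```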
Compared to professional fact-checkers on a 100-tweet sample:
Grok bot agrees 55%
Perplexity bot agrees 58%
Fact-checkers agree with each other 64%
So: signal, but not perfect
BUT the Grok-4 API agrees 64% - as good as inter-fact-checker agreement! Promising for AI fact-checking...
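For context, those percentages are raw pairwise agreement: the share of tweets where two raters give the same verdict. A toy sketch of the computation (made-up verdicts, with a chance-corrected kappa added for comparison):

```python
from sklearn.metrics import cohen_kappa_score

grok    = ["false", "true", "false", "true", "false"]  # bot verdicts per tweet
checker = ["false", "true", "true",  "true", "false"]  # professional verdicts

agreement = sum(g == c for g, c in zip(grok, checker)) / len(grok)
print(f"raw agreement: {agreement:.0%}")                         # 80% on this toy data
print(f"Cohen's kappa: {cohen_kappa_score(grok, checker):.2f}")  # corrects for chance agreement
```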
Usage is polarized: Reps are +59% more likely to use Grok, Dems +16% more likely to use Perplexity. BUT Reps are ~2x more likely to be targeted by fact-check requests, and Rep posts are rated false more often - even by Grok. This extends prior results on partisan asymmetry in misinformation
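To read the "+59%" figure: it is a relative rate, the ratio of the two groups' usage shares minus one. Illustrative arithmetic with made-up shares:

```python
# Made-up usage shares, chosen only to illustrate the arithmetic.
rep_share, dem_share = 0.159, 0.100
relative = rep_share / dem_share - 1
print(f"Reps {relative:+.0%} more likely to use Grok")  # +59%
```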
We examine *ALL* English tags of Grok + Perplexity on X, Feb-Sep 2025
First finding: fact-checking is not a niche use case - fact-check requests are ~7.6% of all direct interactions with these LLM bots on X. The primary focus is politics and current events
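A rough sketch of how a share like ~7.6% could be computed from a corpus of bot mentions; the regex and column names are assumptions for illustration, not the paper's actual pipeline:

```python
import pandas as pd

mentions = pd.read_csv("bot_mentions.csv")  # columns: text, bot ("grok"/"perplexity")

# Crude pattern for "is this true?"-style requests.
pattern = r"is (this|that|it) (true|real|accurate|legit)"
mentions["is_factcheck"] = mentions["text"].str.contains(pattern, case=False, regex=True)

# Share of direct bot interactions that are fact-check requests.
print(mentions["is_factcheck"].mean())
```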
Please contact Nina if you're interested in working with us! Much of this work is also with @dgrand.bsky.social & @tomcostello.bsky.social, and others! Very fun collaborative environment. And Nina is wonderful to work with!! (She is also the coolest among us, FWIW)
New on @indicator.media: "@grok is this true" was the single most frequent reply tagging X's AI chatbot in the six months following its launch.
If you tell an AI to convince someone of a true vs. false claim, does truth win? In our *new* working paper, we find...
"LLMs can effectively convince people to believe conspiracies"
But telling the AI not to lie might help.
Details in thread
These authors wanted to know whether people with physical disabilities face discrimination in hiring, even when they are equally qualified.
So they ran an experiment.
Recently accepted by #QJE, βMarginal Returns to Public Universities,β by Jack Mountjoy: doi.org/10.1093/qje/...
New paper out in @science.org! We unveil the online manipulation market with the Cambridge Online Trust & Safety Index (COTSI). We show in real time the cost of purchasing fake accounts across every social platform around the world - so platforms can be held accountable
www.science.org/doi/10.1126/...