Thomas Davidson

@thomasdavidson

Sociologist at Rutgers. Substantive interests include right-wing politics, populism, and hate speech on social media. I also write about computational methods and AI. https://www.thomasrdavidson.com/

874
Followers
677
Following
76
Posts
13.10.2023
Joined

Latest posts by Thomas Davidson @thomasdavidson

Call for Submissions: AI for Social Science Methodology (Yale)
• Keynote: @nachristakis.bsky.social
• Panel with editors of leading journals on publishing AI research
• Mentoring roundtables for early-career scholars
• Generous travel support
Discussion-driven, high-quality research.

06.03.2026 16:30 👍 7 🔁 9 💬 1 📌 0

Recently, van der Stigchel and colleagues posted a provocative commentary suggesting that we should be wary of bots in online behavioral data collection (🧵by @cstrauch.bsky.social here: bsky.app/profile/cstr...). But should we? Here is my response letter osf.io/preprints/ps.... 1/5

04.03.2026 12:51 👍 46 🔁 29 💬 5 📌 3

Great write-up on our recent studies:

“Back in the day, if you wanted to know what the Seattle General Strike was, you’d grab an encyclopedia or ... check Wikipedia,” Karell said. “Now, you just ask ChatGPT... Increasingly, the information we rely on is being packaged by tools built by companies.”

04.03.2026 16:54 👍 5 🔁 1 💬 0 📌 0

Our new paper is out today in @pnasnexus.org with colleagues at Yale (@matthewshu.com, Danny Karell, @keitarookura.bsky.social)

We wanted to understand how using AI-generated summaries to learn about history influenced attitudes compared to existing resources like Wikipedia. 1/4

03.03.2026 16:55 👍 16 🔁 7 💬 1 📌 1

Want to learn about computational social science *for free* and identify new research partners across academic fields? Apply to one of the 2026 Summer Institutes in Computational Social Science (described in yellow in the attached map) here: sicss.io/locations

03.03.2026 15:01 👍 31 🔁 30 💬 0 📌 0

Robustness checks showed that GPT-4o tended to generate liberal-leaning summaries across a wide range of historical events, highlighting the importance of default biases.

More work is needed to understand how these patterns might generalize to other topics and across different models. 4/4

03.03.2026 16:55 👍 0 🔁 0 💬 0 📌 0
Preview
How latent and prompting biases in AI-generated historical narratives influence opinions Abstract. Large language models (LLMs) can be used to persuade people on a range of issues, particularly through user-driven strategies such as personalizi

This helps distinguish between two pathways of AI influence: latent biases baked into models during training, and prompting biases introduced through deliberate user instructions. Both can shape opinions even when the content is factually accurate. 3/4

Paper: academic.oup.com/pnasnexus/ar...

03.03.2026 16:55 👍 0 🔁 0 💬 1 📌 0

We ran an experiment where people read GPT-4o summaries or Wikipedia.

Default summaries with no ideological slant and texts generated using a liberal persona both shifted readers toward liberal opinions relative to Wikipedia.

Conservative-framed texts only shifted among conservatives. 2/4

03.03.2026 16:55 👍 0 🔁 0 💬 1 📌 0

microgpt Musings of a Computer Scientist.

a gpt in a sublime 200 lines of pure Python — it is all there. Incredible for teaching students (and yourself)

karpathy.github.io/2026/02/12/m...

27.02.2026 14:39 👍 35 🔁 8 💬 1 📌 2

Come work w CSMaP!

We're hiring two postdocs: one focused on AI, the other with a general focus.

Let me know if you have questions about the roles. And please share widely.

apply.interfolio.com/181817

apply.interfolio.com/181820

20.02.2026 14:16 👍 6 🔁 4 💬 1 📌 0

Benchmarks of LLM common sense overwhelmingly rely on correct labels to report an accuracy score. But what if your "ground truth" genuinely differs from mine?

In a new @pnasnexus.org paper, @duncanjwatts.bsky.social, @whiting.me and I explore the implications of this intriguing question.

🧵⤵️

16.02.2026 22:39 👍 6 🔁 2 💬 1 📌 1
Preview
Postdoctoral researcher on applications of AI in sociological research Are you able to lead sociological research into the AI age?

📢WORK! At the Sociology department of @utrechtuniversity.bsky.social we are hiring a postdoc who will work on applications of AI in sociological research. Join our vibrant-yet-cohesive research community doing cutting-edge research. Please share or apply! www.uu.nl/en/organisat...

12.02.2026 11:11 👍 17 🔁 31 💬 0 📌 0

AI does not engage in motivated reasoning

While individuals processing information may be motivated to reach a certain conclusion, LLMs have no such motivation and operate on purely cognitive input. As such, they do not mimic humans in motivated reasoning tasks.

arxiv.org/pdf/2601.16130

12.02.2026 16:16 👍 15 🔁 3 💬 0 📌 0

LLMs are very good at extracting information from academic articles. They are much better than even highly trained humans (our grad RAs had hundreds of hours of practice). And of course they're ~1000x cheaper and faster.

11.02.2026 17:08 👍 25 🔁 8 💬 1 📌 2
Preview
From (almost) open to heavily restricted data access – The development of the Twitter/X developer policies - Luisa Golland, Oliver Watteler, Jonas Recker, Jan Schwalbach, Libby Bishop, 2026 Archiving data is a crucial practice, as it ensures reproducibility of research and aligns with the FAIR principles (Findable, Accessible, Interoperable, and Re...

Really interesting new paper from some of my former @gesis.org colleagues in @bigdatasoc.bsky.social: "From (almost) open to heavily restricted data access – The development of the Twitter/X developer policies"
doi.org/10.1177/2053...
#commsky #computationalsocialscience

10.02.2026 12:31 👍 16 🔁 8 💬 1 📌 0
Screen shot of title page of a preprint.
Title: Should generative AI be used in reflexive qualitative research?
Authors: Elida Izani Ibrahim, Laura K. Nelson, and Andrea Voyer

Recent publications arguing against the use of genAI in reflexive qual research inspired us (Elida Ibrahim and @andreavoyer.bsky.social) to write our own perspective. Not to convince anyone to use genAI but for those who might be interested and are looking for guidance.

osf.io/preprints/so...

09.02.2026 18:49 👍 52 🔁 21 💬 2 📌 0

Last week the story was that TikTok censored anti-Trump/ICE/Pretti videos after the U.S. ownership change. We investigated with a large set of US TikTok data and found some interesting results, short thread...

04.02.2026 17:52 👍 215 🔁 90 💬 6 📌 16

First 50 downloads are free if you use this link: www.tandfonline.com/eprint/AYC6H...

03.02.2026 15:08 👍 0 🔁 1 💬 0 📌 0
Preview
Structuring articulation: asylum applications and elite political communication on social media during the European refugee crisis How do political parties adapt their discourse when confronted with rapidly changing structural conditions? This study examines the relationship between asylum applications and elite political comm...

New paper examines the relationship between online political discourse and structural conditions.

European parties responded to national asylum numbers by increasing online attention, but content differed. Left-wing parties posted more in solidarity and right-wing parties shared more about crime.

03.02.2026 15:07 👍 2 🔁 3 💬 1 📌 0

Hey sociologists, I'm organizing an ASA Methodology session on AI! Submissions are due by 2/25. Looking forward to a timely cross-method convo on emerging research best practices and disciplinary norms and ethics in August.

02.02.2026 17:49 👍 14 🔁 7 💬 1 📌 0

Very interesting research paper showing that using AI for programming can significantly reduce mastery of topics. Perhaps unsurprising, but the lack of significant speed gains in this exercise is remarkable

www.anthropic.com/research/AI-...

31.01.2026 00:23 👍 177 🔁 58 💬 4 📌 6
Preview
Generative AI in Sociological Research: State of the Discipline Article: Generative AI in Sociological Research: State of the Discipline | Sociological Science | Posted January 20, 2026

Now out in Sociological Science

(How) do sociologists use GenAI for their research? Find out in our paper.

Written with @ajalvero.bsky.social @dustinstoltz.com and Marshall Taylor. Thank you to everyone who participated in the survey!!

20.01.2026 20:16 👍 42 🔁 17 💬 1 📌 1
Preview
How we built CoPE We just published the methodology behind CoPE. This is the model that powers Zentropi, and we think the approach might be useful for others working on policy-steerable classification systems. We had ...

We just published the methodology behind CoPE, our 9B parameter model that matches GPT-4o at content classification at 1% the size! The model is already open source, but now we're sharing our training technique. blog.zentropi.ai/how-we-built... 🧵 1/6

15.01.2026 18:51 👍 87 🔁 24 💬 2 📌 4
Preview
Measuring context sensitivity in artificial intelligence content moderation - Nature Human Behaviour Automated content moderation systems designed to detect prohibited content on social media often struggle to account for contextual information, which sometimes leads to the erroneous flagging of inno...

www.nature.com/articles/s41...

06.01.2026 21:06 👍 1 🔁 0 💬 0 📌 0

On the topic of AI and social science research, the Research Briefing on my Nature Human Behaviour paper is now online. It's an accessible summary of the research, implications, and some behind-the-scenes commentary.

Thanks @gligoric.bsky.social for providing an expert opinion!

06.01.2026 21:06 👍 5 🔁 1 💬 1 📌 0

Particularly if academics block each other for engaging in legitimate discussions about contested issues

06.01.2026 18:06 👍 2 🔁 0 💬 0 📌 0

On the consent front, I think the use of LLMs to create more bespoke, even "individualized" instruments raises new ethical questions that warrant discussion. Seeing how polarizing the topic has become, I expect we'll see a lot more acrimonious debate before any consensus emerges

06.01.2026 18:05 👍 2 🔁 0 💬 1 📌 0

Putting informed consent aside, there is a strong minimal risk argument for using LLMs to analyze publicly available documents (particularly open-access published research), given that such materials are already routinely ingested by LLMs, with and without researcher intervention.

06.01.2026 17:52 👍 5 🔁 0 💬 1 📌 0
Preview
Sage Journals: Discover world-class research Subscription and open access journals from Sage, the world's leading independent academic publisher.

New paper in Social Science Computer Review 🚨

We conducted two experiments to understand the effects of reading AI summaries, focusing on historical events 📜🤖👩‍💻

We found that AI improved factual recall, possibly due to post-training optimization

journals.sagepub.com/doi/10.1177/...

29.12.2025 20:45 👍 10 🔁 1 💬 0 📌 0