Demographic cues (e.g., names, dialect) are widely used to study how LLM behavior may change depending on user demographics. Such cues are often assumed to be interchangeable.
We show they are not: different cues yield different model behavior for the same group, and different conclusions about LLM bias.
27.01.2026 13:07
@pennldi.bsky.social @pennengineering.bsky.social @upenn.edu @pennchibe.bsky.social @pennmedicine.bsky.social @pminnovation.bsky.social
14.07.2025 14:06
We hope these findings help health systems design more effective & scalable outreach to close preventive care gaps.
Thoughts welcome!
w/
@manueltonneau.bsky.social, @alison-buttenheim.bsky.social, @sharathg.bsky.social + team
14.07.2025 14:06
Results:
Both AI formats significantly boosted stool-test intent (+13 points) over expert-written material.
For colonoscopy, there was no AI advantage over expert material.
Surprisingly, the single AI message performed on par with the chatbot, despite participants choosing to spend 3.5 minutes longer with the chatbot!
14.07.2025 14:06
In a randomized trial (n=915), we compared:
1️⃣ No intervention
2️⃣ Expert-written patient materials
3️⃣ Single AI message
4️⃣ AI chatbot using motivational interviewing techniques
Outcome: intent to screen (stool test & colonoscopy) over 12 months.
14.07.2025 14:06
Why it matters:
Colorectal cancer is the 2nd leading cause of cancer death in the US, yet ~1/3 of eligible adults aren't screened.
We need scalable, persuasive tools to close this gap. Can AI help?
14.07.2025 14:06
New study!
We tested whether AI-generated messages (single static messages vs. conversations) can boost intent to screen for colorectal cancer.
Turns out: short, tailored AI messages outperform expert-written materials and match conversations, at a fraction of the time!
14.07.2025 14:06
tagging some others who may be interested!
@chrisbail.bsky.social @hugoreasoning.bsky.social @tnfalpha.bsky.social @emollick.bsky.social @jennyallen.bsky.social @noelbrewer.bsky.social @julieleask.bsky.social @pminnovation.bsky.social @susanmichie.bsky.social
01.05.2025 13:30
@pennldi.bsky.social @pennengineering.bsky.social @upenn.edu @pennchibe.bsky.social @pennmedicine.bsky.social
30.04.2025 09:40
Shout-outs to inspiring work on AI persuasion by @gordpennycook.bsky.social @dgrand.bsky.social @tomcostello.bsky.social @jeffhancock.bsky.social @kobihackenburg.bsky.social, and to others pushing this field forward!
30.04.2025 09:40
Thoughts welcome!
w/ @manueltonneau.bsky.social, @sharathg.bsky.social, @alison-buttenheim.bsky.social
+team
30.04.2025 09:40
In a 15-day follow-up, gains from the reading arm persisted (+7 points), while chatbot effects faded to ~0. We also found no spillover to flu/COVID vaccines or general vaccine hesitancy.
30.04.2025 09:40
In an RCT with 930 parents (US/CA/UK, with kids old enough for the HPV vaccine), chatbots raised vaccine intent vs. no intervention, but neither variant beat simply reading official public-health materials, and the conversational chatbot did significantly worse.
30.04.2025 09:40
New preprint on AI persuasion and public health!
A 3-minute conversation with GPT-4o nudged HPV-vaccine-hesitant parents (who of course knew it was AI and consented!) toward greater vaccine intent, BUT reading standard public-health material still outperformed the chatbots in impact and longevity. Details below.
30.04.2025 09:40
LDI Senior Fellows Neil Sehgal, Anish Agarwal, Raina Merchant, Sharath Chandra Guntuku, and colleagues analyzed Yelp reviews of health care facilities to assess how patient sentiment toward these facilities changed before and after COVID-19.
03.03.2025 16:58