Educating the public about the realism of AI-generated voices, especially their ability to mimic regional accents, can significantly reduce vulnerability to voice-based scams. doi.org/hbq64f
I've developed a psychological countermeasure for AI voice deception… and it's surprisingly simple! 🗣️
Thread below 🧵
doi.org/10.1093/cybs...
Full paper (open access) 📄
doi.org/10.1093/cybs...
β’ AI voices generated using @elevenlabs.io
β’ Study designed and run on Gorilla Experiment Builder
β’ Participants recruited via @joinprolific.bsky.social
This work has just been published in the Journal of Cybersecurity and was possible thanks to a Think Big Leverage Fund award from @the-sipr.bsky.social
In practical terms, AI voice fraud prevention campaigns and public awareness messaging should focus on updating people's knowledge of what AI can do, instead of just telling them to be careful.
But the good news is, we can change their MINDSET to protect them!
Because technology has historically struggled to understand their speech, they don't expect it to sound like them either - so when it does, it seems all the more convincingly human. I call this the MINDSET bias (Minority, Indigenous, Non-standard, and Dialect-Shaped Expectations of Technology).
Why does this matter?
Speakers of regional, minority, and non-standard language varieties are at particular risk of AI voice-based fraud.
But warning them about the potential dangers of AI voices and explicitly encouraging them to be vigilant did nothing!
While my previous work showed that people are more likely to assume an AI voice is human when it speaks in a familiar local dialect, my new paper shows that simply telling people that AI can authentically use regional accents and dialects is enough to increase their vigilance.
[Image: a selfie of me wearing a patterned Christmas jumper]
'Tis the season
As synthetic voices become indistinguishable from real ones, DIGIT spoke with psychologist Dr Neil Kirk about how our instinct to trust our own accents could pave the way for deepfakes targeting not just individuals, but entire regions.
www.digit.fyi/psychology-b...
#deepfakes #AIvoices #AyeRobot
Here's a feature on my recent AI Voice work - really honoured to have been asked to talk about this! www.digit.fyi/psychology-b...
🧠 Is your mindset making you more vulnerable to AI voice-based deception?
I've got a new pre-print out: osf.io/preprints/ps...
Vigilance towards AI voices can be nudged through a change in MINDSET
Thread below 👇
#AI #Voice #Cybersecurity #Fraud #Psychology 1/11
Very grateful to @the-sipr.bsky.social for funding this important work. 11/11
💡 Why it matters: This could have real-world implications for designing public awareness campaigns and scam prevention messages. 10/11
📌 Take-Home Message: Simply telling people that AI voices can speak with a Scottish accent/dialect was far more effective than warning them to be vigilant. 9/11
However, an explicit vigilance-based nudge warning about the dangers of AI voices and urging listeners “if in doubt, think AI” had no effect, unless paired with the capability message about AI's linguistic abilities. 8/11
A positively framed nudge highlighting AI's capability to reproduce underrepresented accents and dialects significantly reduced this bias; in other words, changing their MINDSET made them more vigilant towards AI voices using these varieties. 7/11
In this manuscript, I investigate whether simple informational nudges can shift these assumptions and reduce the bias for responding “Human”. Across two experiments, participants categorised voices as either Human or AI. 6/11
Yet that assumption could be putting some language communities at greater risk of AI voice-based deception if they believe a voice speaking that way must be a real person. 5/11
In my new paper, I introduce the concept of MINDSET: Minority, Indigenous, Non-standard, and Dialect-Shaped Expectations of Technology. It reflects the idea that people assume AI can't convincingly reproduce underrepresented ways of speaking. 4/11
I also suspect this is not unique to Scotland, but part of a global pattern affecting communities whose voices have historically been excluded from these systems. 3/11
My previous work showed that listeners were more likely to believe an AI voice was a real human when it spoke in a local dialect. I think this happens because we're not used to speech technology understanding these varieties - never mind speaking them! 2/11
New achievement unlocked - delivering the oration for Brian Cox. I didn't faint, nor did he use any of his famous catchphrases on me. Success!
Read the paper here 👇🏻 bpspsychub.onlinelibrary.wiley.com/doi/10.1111/...
Made another wee video about one of our recently published papers…
… or take Monday off. (Yes, that was me).