
Kyra Wilson

@kyrawilson

PhD student at UW iSchool | ai fairness, evaluation, and decision-making | she/her | kyrawilson.github.io/me

86 Followers · 163 Following · 22 Posts · Joined 25.07.2023

Latest posts by Kyra Wilson @kyrawilson

AI's threat to individual autonomy in hiring decisions | Brookings Kyra Wilson and Aylin Caliskan discuss new research on AI's influence in decision-making and what this portends for policymakers.

We know AI is biased, but what does that do to people's hiring decisions? (Spoiler alert: nothing good.) To address this, @aylincaliskan.bsky.social and I wrote about legal implications and policy suggestions to protect people's rights and autonomy @brookings.edu

www.brookings.edu/articles/ais...

21.11.2025 20:43 👍 6 🔁 1 💬 0 📌 0
Conferencemaxxing: How to grow your profile and network as a scientist (YouTube video by Michael Saxon, NLP & Generative AI research)

And here is the presentation I gave on networking, self-promo, and how to make the most out of a conference. Hope this helps everyone at NeurIPS!

www.youtube.com/watch?v=B9hG...

19.11.2025 23:59 👍 14 🔁 6 💬 0 📌 1
A picture of Katie Wilson on election night at her campaign party delivering a speech to supporters. She holds her hand over her heart.

A picture of Katie Wilson at her election party on election night, speaking into a microphone to a crowd of supporters; there are blue balloons behind her on stage and gold and silver streamers.

A black and white photo of Katie Wilson knocking on the door of a single-family home while door knocking for her campaign.

A photo of Katie Wilson looking at her phone while standing by the buzzer box for an apartment building while door knocking for her campaign

We took on a powerful incumbent who was expected to coast to reelection.

We faced more corporate PAC money than has ever been spent attacking a candidate in a Seattle election.

We built a people-powered movement rooted in hope for our city's future.

And we won.

This is YOUR city!

13.11.2025 23:13 👍 1378 🔁 222 💬 45 📌 25

Thanks for the shout-out, Samer. I'm glad our work resonated with you!

14.11.2025 01:10 👍 1 🔁 0 💬 0 📌 0

"No Thoughts Just AI: Biased LLM Hiring Recommendations Alter Human Decision Making and Limit Human Autonomy"
by @kyrawilson.bsky.social et al

It shows how people trust AI hiring proposals blindly, even when the AI is being racist. This highlights the importance of tackling algorithmic bias.

13.11.2025 19:49 👍 1 🔁 1 💬 2 📌 0
People mirror AI systems' hiring biases, study finds In a new UW study, 528 participants worked with simulated AI systems to select job candidates. The researchers simulated different levels of racial biases for resumes from white, Black, Hispanic and.....

Thanks to @uwnews.uw.edu for covering my + @aylincaliskan.bsky.social's recent work published at AIES 2025! www.washington.edu/news/2025/11...

10.11.2025 19:28 👍 10 🔁 7 💬 0 📌 0

🚨 New paper: Reward Models (RMs) are used to align LLMs, but can they be steered toward user-specific value/style preferences?
With EVALUESTEER, we find even the best RMs we tested exhibit their own value/style biases, and are unable to align with a user >25% of the time. 🧵

14.10.2025 15:59 👍 12 🔁 7 💬 1 📌 0

If you've made it to the end of this thread, thanks for reading! I hope we can connect soon and chat about the role of all these works in making AI evaluation more valid, reliable, and actionable for the real world! 🥳

21.10.2025 11:38 👍 0 🔁 0 💬 0 📌 0
No Thoughts Just AI: Biased LLM Hiring Recommendations Alter Human Decision Making and Limit Human Autonomy In this study, we conduct a resume-screening experiment (N=528) where people collaborate with simulated AI models exhibiting race-based preferences (bias) to evaluate candidates for 16 high and low st...

I will present this tomorrow at 11:45am in Paper Session 6: Integrating AI into the Workplace. The preprint has more analyses of AI literacy's impact and discussion of the implications of this work for human autonomy in high-risk domains.
www.arxiv.org/abs/2509.04404

21.10.2025 11:38 👍 0 🔁 0 💬 1 📌 0
An image showing that people who completed an implicit association test before the resume-screening task were more likely to select Black and Hispanic candidates for high-status jobs and Asian candidates for low status jobs.

But a positive note is that when people took an implicit association test (commonly used for anti-bias training) before doing the resume-screening task, they increased their selection of stereotype-incongruent candidates by 12.7% regardless of how biased the AI model they interacted with was.

21.10.2025 11:38 👍 0 🔁 0 💬 1 📌 0
An image showing that people's decisions are very closely aligned with biased AI recommendations. Only in the most severe cases of AI bias do people make less biased decisions.

We showed people AI recommendations that had varying levels of racial bias and found that human oversight decreased bias in final outcomes by at most 15.2%, which is still far from the outcome bias rates when no AI or an unbiased AI was used.

21.10.2025 11:38 👍 0 🔁 0 💬 1 📌 0
Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval Artificial intelligence (AI) hiring tools have revolutionized resume screening, and large language models (LLMs) have the potential to do the same. However, given the biases which are embedded within ...

3️⃣ Last but not least, I'll be talking about a human-subjects experiment follow-up to my AIES 2024 paper with @aylincaliskan.bsky.social, also co-authored by Mattea Sim and Anna-Maria Gueorguieva!
arxiv.org/abs/2407.20371

21.10.2025 11:38 👍 2 🔁 0 💬 1 📌 0
Bias Amplification in Stable Diffusion's Representation of Stigma Through Skin Tones and Their Homogeneity Text-to-image generators (T2Is) are liable to produce images that perpetuate social stereotypes, especially in regards to race or skin tone. We use a comprehensive set of 93 stigmatized identities to ...

I'll be discussing this work tomorrow at 3:15pm during Poster Session 4! Our preprint has additional findings and analyses comparing generated images to human baselines and measuring dimensions of skin tone other than lightness/darkness.

arxiv.org/abs/2508.17465

21.10.2025 11:38 👍 0 🔁 0 💬 1 📌 0
An image showing that Stable Diffusion v2.1 generates skin tones which are more diverse for a stigmatized racial identity versus Stable Diffusion XL which generates darker, more homogenous skin tones.

We also found that depictions of racial identities are getting more homogenized with successive releases of SD, reinforcing harmful ideas about what people with stigmatized identities "should" look like.

21.10.2025 11:38 👍 0 🔁 0 💬 1 📌 0
An image showing that Stable Diffusion XL has the darkest skin tones on average and stigmatized identities have darker skin tones than non-stigmatized identities.

We found that the newest model (SD XL) tends to generate images with darker skin tones compared to SD v1.5 and v2.1, but it still over-represents dark skin tones for stigmatized identities compared to non-stigmatized identities.

21.10.2025 11:38 👍 0 🔁 0 💬 1 📌 0

2️⃣ In another work with Sourojit Ghosh and @aylincaliskan.bsky.social, we analyzed skin tones in images of 93 stigmatized identities using three versions of Stable Diffusion (SD).

21.10.2025 11:38 👍 0 🔁 0 💬 1 📌 0
Bias is a Math Problem, AI Bias is a Technical Problem: 10-year Literature Review of AI/LLM Bias Research Reveals Narrow [Gender-Centric] Conceptions of 'Bias', and Academia-Industry Gap The rapid development of AI tools and implementation of LLMs within downstream tasks has been paralleled by a surge in research exploring how the outputs of such AI/LLM systems embed biases, a researc...

Chat with me about this work at Poster Session 2 (today at 6:15pm) or read our preprint in full!

arxiv.org/abs/2508.11067

21.10.2025 11:38 👍 0 🔁 0 💬 1 📌 0
An image showing the Fact Sheet we developed, which includes questions for the topics High-level Facts, Limitations and Ethical Considerations, Intended Use Cases, Implementation-level Details, and Maintenance and Lifecycle.

We also find that 89.4% of papers don't provide detailed information about real-world implementation of their findings. Based on this, we made a Fact Sheet to guide researchers in communicating findings in ways that enable model developers or downstream users to implement them appropriately.

21.10.2025 11:38 👍 0 🔁 0 💬 1 📌 0

1️⃣ Sourojit Ghosh and I conducted a review of AI bias literature and found that 82% of papers do not provide an explicit definition of bias and 79.9% do not explore bias outside of binary gender bias. This means that only a particular subset of marginalized groups may benefit from AI bias research.

21.10.2025 11:38 👍 0 🔁 0 💬 1 📌 0

Happy to share that I'm presenting 3 research projects at AIES 2025 🎉

1️⃣ Gender bias over-representation in AI bias research 👫
2️⃣ Stable Diffusion's skin tone bias 🧑🏻🧑🏽🧑🏿
3️⃣ Limitations of human oversight in AI hiring 👀🤖

Let's chat if you're at AIES or read below/reach out for details!
#AIES25 #AcademicSky

21.10.2025 11:38 👍 9 🔁 2 💬 1 📌 0

Applying for a #PhD @ischool.uw.edu? Read 👇

Our student-run application feedback program will be open from October 20th through November 1st, 2025.

Everyone applying, especially those from historically underrepresented groups or who have faced barriers in higher ed, is highly encouraged to apply.

26.09.2025 23:00 👍 5 🔁 7 💬 1 📌 0

Tesla is being hit with $329m in damages for a crash in which the human driver says he knew Autopilot wasn't self-driving. This is super important, because it shows that Autopilot's design and marketing can induce inattention even when drivers consciously know they are supposed to pay attention.

01.08.2025 19:14 👍 1919 🔁 376 💬 27 📌 24
UC Berkeley Labor Center (@ucblaborcenter.bsky.social) The Labor Center conducts research and education on issues related to labor and employment. Our trainings serve to educate a diverse new generation of labor leaders. We also engage UC Berkeley student...

📣 Exceptional resource alert from @ucblaborcenter.bsky.social: a database of U.S. union bargaining provisions related to workplace tech. laborcenter.berkeley.edu/negotiating-...

Workers need a voice in how tech is utilized in the workplace, and here are 175 blueprints for what that looks like.

21.07.2025 21:43 👍 7 🔁 7 💬 0 📌 0

There's a lot of evil stuff. But these few sentences are a result of wild corruption, and would remove any guardrails against AI systems being used for massive civil and human rights abuses on an unimaginable scale. Call your congresspeople, and scream about it in the streets.

13.05.2025 02:44 👍 29 🔁 10 💬 0 📌 0

I'm in this article along with other @uaw4121.bsky.social members to explain what's happening to grad student researchers in the US right now.

It's not looking great for us, and it's looking even worse for America.

08.05.2025 01:24 👍 3 🔁 2 💬 0 📌 0

So: In 2017, Congress made this happen:

NIH: Proposed 22% cut --> 9% increase
NSF: Proposed 11% cut --> 4% increase
NOAA: Proposed 16% cut --> 4% increase

Obviously, 2025 is not 2017. A lot is diff now. But still:

💰 Congress, not WH, sets budgets.

📞 Public support & calls to Congress matter.

18.04.2025 18:59 👍 196 🔁 99 💬 2 📌 8
Tips on How to Connect at Academic Conferences I was a kinda awkward teenager. If you are a CS researcher reading this post, then chances are, you were too. How to navigate social situations and make friends is not always intuitive, and has to …

I wrote a post on how to connect with people (i.e., make friends) at CS conferences. These events can be intimidating, so here are some suggestions on how to navigate them.

I'm late for #ICLR2025 #NAACL2025, but in time for #AISTATS2025 #ICML2025! 1/3
kamathematics.wordpress.com/2025/05/01/t...

01.05.2025 12:57 👍 69 🔁 19 💬 3 📌 2

Excited to announce our #NAACL2025 Oral paper! 🎉✨

We carried out the largest systematic study so far to map the links between upstream choices, intrinsic bias, and downstream zero-shot performance across 131 CLIP Vision-language encoders, 26 datasets, and 55 architectures!

29.04.2025 19:11 👍 21 🔁 6 💬 1 📌 0
Hundreds of UW faculty members sign open letter opposing staff layoffs in College of Arts & Sciences Approximately 270 UW faculty members have signed an open letter condemning a proposed plan to lay off staff in the College of Arts & Sciences (CAS), calling the decision "deeply

Faculty at the University of Washington, please sign, and please help us spread the word!

Link to the letter is below.

28.04.2025 15:08 👍 2 🔁 3 💬 1 📌 0
Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval Artificial intelligence (AI) hiring tools have revolutionized resume screening, and large language models (LLMs) have the potential to do the same. However, given the biases which are embedded within ...

🤩 If you made it this far, thanks for reading! Be sure to also check out our research paper, which quantified the risks these systems pose to different gender, race, and intersectional groups! (6/6)

arxiv.org/abs/2407.203...

25.04.2025 16:58 👍 1 🔁 0 💬 0 📌 0