AI's threat to individual autonomy in hiring decisions | Brookings
Kyra Wilson and Aylin Caliskan discuss new research on AI's influence in decision-making and what this portends for policymakers.
We know AI is biased, but what does that do to people's hiring decisions? (Spoiler alert: nothing good.) To address this, @aylincaliskan.bsky.social and I wrote about legal implications and policy suggestions to protect people's rights and autonomy at @brookings.edu
www.brookings.edu/articles/ais...
21.11.2025 20:43
Likes: 6 · Reposts: 1 · Replies: 0 · Quotes: 0
Conferencemaxxing: How to grow your profile and network as a scientist
YouTube video by Michael Saxon (NLP & Generative AI research)
And here is the presentation I gave on networking, self-promotion, and how to make the most of a conference. Hope this helps everyone at NeurIPS!
www.youtube.com/watch?v=B9hG...
19.11.2025 23:59
Likes: 14 · Reposts: 6 · Replies: 0 · Quotes: 1
A picture of Katie Wilson on election night at her campaign party, delivering a speech to supporters. She holds her hand over her heart.
A picture of Katie Wilson at her election party on election night, speaking into a microphone to a crowd of supporters. There are blue balloons behind her on stage and gold and silver streamers.
A black-and-white photo of Katie Wilson knocking on the door of a single-family home while door-knocking for her campaign.
A photo of Katie Wilson looking at her phone while standing by the buzzer box of an apartment building while door-knocking for her campaign.
We took on a powerful incumbent who was expected to coast to reelection.
We faced more corporate PAC money than has ever been spent attacking a candidate in a Seattle election.
We built a people-powered movement rooted in hope for our city's future.
And we won.
This is YOUR city!
13.11.2025 23:13
Likes: 1378 · Reposts: 222 · Replies: 45 · Quotes: 25
Thanks for the shout-out, Samer! I'm glad our work resonated with you!
14.11.2025 01:10
Likes: 1 · Reposts: 0 · Replies: 0 · Quotes: 0
🚨 New paper: Reward Models (RMs) are used to align LLMs, but can they be steered toward user-specific value/style preferences?
With EVALUESTEER, we find that even the best RMs we tested exhibit their own value/style biases and are unable to align with a user more than 25% of the time. 🧵
14.10.2025 15:59
Likes: 12 · Reposts: 7 · Replies: 1 · Quotes: 0
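A minimal sketch of the kind of user-alignment measurement the thread describes: given a reward model's scores for pairs of responses and a user's stated preference for each pair, compute how often the RM's ranking agrees with the user. This is an invented illustration, not the EVALUESTEER code; all names and numbers are hypothetical.

```python
# Hypothetical sketch: how often does a reward model's preferred
# response match a user's stated preference? Data is invented.

def alignment_rate(rm_scores, user_choices):
    """rm_scores: list of (score_a, score_b) per comparison;
    user_choices: 'a' or 'b' per comparison (the user's pick)."""
    matches = 0
    for (score_a, score_b), choice in zip(rm_scores, user_choices):
        rm_pick = "a" if score_a >= score_b else "b"
        if rm_pick == choice:
            matches += 1
    return matches / len(user_choices)

# Toy example: the RM agrees with the user on 3 of 4 comparisons.
scores = [(0.9, 0.2), (0.1, 0.8), (0.7, 0.6), (0.3, 0.4)]
prefs = ["a", "b", "b", "b"]
print(alignment_rate(scores, prefs))  # 0.75
```

An RM that fails to align with a user more than 25% of the time would score below 0.75 on this kind of metric.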
If you've made it to the end of this thread, thanks for reading! I hope we can connect soon and chat about the role of all these works in making AI evaluation more valid, reliable, and actionable in the real world! 🥳
21.10.2025 11:38
Likes: 0 · Reposts: 0 · Replies: 0 · Quotes: 0
An image showing that people who completed an implicit association test before the resume-screening task were more likely to select Black and Hispanic candidates for high-status jobs and Asian candidates for low-status jobs.
On a positive note, when people took an implicit association test (commonly used in anti-bias training) before doing the resume-screening task, they increased their selection of stereotype-incongruent candidates by 12.7%, regardless of how biased the AI model they interacted with was.
21.10.2025 11:38
Likes: 0 · Reposts: 0 · Replies: 1 · Quotes: 0
An image showing that people's decisions are very closely aligned with biased AI recommendations. Only in the most severe cases of AI bias do people make less biased decisions.
We showed people AI recommendations with varying levels of racial bias and found that human oversight decreased bias in final outcomes by at most 15.2%, which is still far from the outcome bias rates observed when no AI or an unbiased AI was used.
21.10.2025 11:38
Likes: 0 · Reposts: 0 · Replies: 1 · Quotes: 0
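As a rough illustration of how outcome bias rates like those above can be quantified, here is a sketch that computes the gap in selection rates between two groups of candidates in screening decisions. This is a hypothetical example with invented data and group labels, not the paper's actual analysis.

```python
# Hypothetical sketch: quantify outcome bias as the absolute gap in
# selection rates between demographic groups in screening decisions.
# The decision records below are invented illustrative data.

def selection_rate(decisions, group):
    picks = [d["selected"] for d in decisions if d["group"] == group]
    return sum(picks) / len(picks)

def outcome_bias(decisions, group_a, group_b):
    # 0.0 means parity; larger values mean more biased outcomes.
    return abs(selection_rate(decisions, group_a)
               - selection_rate(decisions, group_b))

decisions = [
    {"group": "A", "selected": True},
    {"group": "A", "selected": True},
    {"group": "A", "selected": False},
    {"group": "A", "selected": False},
    {"group": "B", "selected": True},
    {"group": "B", "selected": False},
    {"group": "B", "selected": False},
    {"group": "B", "selected": False},
]
print(outcome_bias(decisions, "A", "B"))  # 0.25
```

Comparing this gap across conditions (no AI, unbiased AI, biased AI with human oversight) is one way to express findings like "oversight decreased bias by at most 15.2%."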
An image showing that Stable Diffusion v2.1 generates more diverse skin tones for a stigmatized racial identity, versus Stable Diffusion XL, which generates darker, more homogeneous skin tones.
We also found that depictions of racial identities are getting more homogenized with successive releases of SD, reinforcing harmful ideas about what people with stigmatized identities "should" look like.
21.10.2025 11:38
Likes: 0 · Reposts: 0 · Replies: 1 · Quotes: 0
An image showing that Stable Diffusion XL has the darkest skin tones on average, and that stigmatized identities have darker skin tones than non-stigmatized identities.
We found that the newest model (SD XL) tends to generate images with darker skin tones compared to SD v1.5 and v2.1, but it still over-represents dark skin tones for stigmatized identities compared to non-stigmatized identities.
21.10.2025 11:38
Likes: 0 · Reposts: 0 · Replies: 1 · Quotes: 0
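For a sense of how average skin tone might be compared across sets of generated images, here is a sketch using sRGB relative luminance as a lightness proxy. This is an invented illustration; the paper may use a different skin-tone measure, and the pixel values below are made up.

```python
# Hypothetical sketch: compare average lightness of sampled skin
# pixels across two image sets, using sRGB relative luminance as a
# simple proxy for skin tone. All pixel values are invented.

def relative_luminance(r, g, b):
    # Linearize 8-bit sRGB channels, then apply the Rec. 709 weights.
    def lin(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)

def mean_lightness(skin_pixels):
    return sum(relative_luminance(*p) for p in skin_pixels) / len(skin_pixels)

set_one = [(224, 172, 105), (230, 180, 120)]  # lighter sampled skin pixels
set_two = [(141, 85, 36), (120, 70, 30)]      # darker sampled skin pixels
print(mean_lightness(set_one) > mean_lightness(set_two))  # True
```

Aggregating a lightness measure like this per identity and per model version is one way to surface the homogenization trend described in the thread.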
2️⃣ In another work with Sourojit Ghosh and @aylincaliskan.bsky.social, we analyzed skin tones in images of 93 stigmatized identities using three versions of Stable Diffusion (SD).
21.10.2025 11:38
Likes: 0 · Reposts: 0 · Replies: 1 · Quotes: 0
An image showing the Fact Sheet we developed, which includes questions for the topics High-level Facts, Limitations and Ethical Considerations, Intended Use Cases, Implementation-level Details, and Maintenance and Lifecycle.
We also find that 89.4% of papers don't provide detailed information about real-world implementation of their findings. Based on this, we made a Fact Sheet to guide researchers in communicating findings in ways that enable model developers or downstream users to implement them appropriately.
21.10.2025 11:38
Likes: 0 · Reposts: 0 · Replies: 1 · Quotes: 0
1️⃣ Sourojit Ghosh and I conducted a review of AI bias literature and found that 82% of papers do not provide an explicit definition of bias and 79.9% do not explore bias beyond binary gender. This means that only a particular subset of marginalized groups may benefit from AI bias research.
21.10.2025 11:38
Likes: 0 · Reposts: 0 · Replies: 1 · Quotes: 0
Happy to share that I'm presenting 3 research projects at AIES 2025!
1️⃣ Gender bias over-representation in AI bias research
2️⃣ Stable Diffusion's skin tone bias
3️⃣ Limitations of human oversight in AI hiring
Let's chat if you're at AIES, or read below/reach out for details!
#AIES25 #AcademicSky
21.10.2025 11:38
Likes: 9 · Reposts: 2 · Replies: 1 · Quotes: 0
Applying for a #PhD @ischool.uw.edu? Read on!
Our student-run application feedback program will be open from October 20 through November 1, 2025.
Everyone applying is highly encouraged to participate, especially those from historically underrepresented groups or who have faced barriers in higher ed.
26.09.2025 23:00
Likes: 5 · Reposts: 7 · Replies: 1 · Quotes: 0
Tesla is being hit with $329m in damages for a crash in which the human driver says he knew Autopilot wasn't self-driving. This is super important, because it shows that Autopilot's design and marketing can induce inattention even when drivers consciously know they are supposed to pay attention.
01.08.2025 19:14
Likes: 1919 · Reposts: 376 · Replies: 27 · Quotes: 24
UC Berkeley Labor Center (@ucblaborcenter.bsky.social)
The Labor Center conducts research and education on issues related to labor and employment. Our trainings serve to educate a diverse new generation of labor leaders. We also engage UC Berkeley student...
📣 Exceptional resource alert from
@ucblaborcenter.bsky.social: a database of U.S. union bargaining provisions related to workplace tech. laborcenter.berkeley.edu/negotiating-...
Workers need a voice in how tech is utilized in the workplace, and here are 175 blueprints for what that looks like.
21.07.2025 21:43
Likes: 7 · Reposts: 7 · Replies: 0 · Quotes: 0
There's a lot of evil stuff. But these few sentences are the result of wild corruption, and would remove any guardrails against AI systems being used for massive civil and human rights abuses on an unimaginable scale. Call your congresspeople, and scream about it in the streets.
13.05.2025 02:44
Likes: 29 · Reposts: 10 · Replies: 0 · Quotes: 0
I'm in this article, along with other @uaw4121.bsky.social members, explaining what's happening to grad student researchers in the US right now.
It's not looking great for us, and it's looking even worse for America.
08.05.2025 01:24
Likes: 3 · Reposts: 2 · Replies: 0 · Quotes: 0
So: In 2017, Congress made this happen:
NIH: Proposed 22% cut --> 9% increase
NSF: Proposed 11% cut --> 4% increase
NOAA: Proposed 16% cut --> 4% increase
Obviously, 2025 is not 2017. A lot is different now. But still:
Congress, not the WH, sets budgets.
Public support & calls to Congress matter.
18.04.2025 18:59
Likes: 196 · Reposts: 99 · Replies: 2 · Quotes: 8
Tips on How to Connect at Academic Conferences
I was a kinda awkward teenager. If you are a CS researcher reading this post, then chances are, you were too. How to navigate social situations and make friends is not always intuitive, and has to …
I wrote a post on how to connect with people (i.e., make friends) at CS conferences. These events can be intimidating, so here are some suggestions on how to navigate them.
I'm late for #ICLR2025 #NAACL2025, but in time for #AISTATS2025 #ICML2025! 1/3
kamathematics.wordpress.com/2025/05/01/t...
01.05.2025 12:57
Likes: 69 · Reposts: 19 · Replies: 3 · Quotes: 2
Excited to announce our #NAACL2025 Oral paper! ✨
We carried out the largest systematic study so far mapping the links between upstream choices, intrinsic bias, and downstream zero-shot performance across 131 CLIP vision-language encoders, 26 datasets, and 55 architectures!
29.04.2025 19:11
Likes: 21 · Reposts: 6 · Replies: 1 · Quotes: 0