
Julian Skirzynski

@jskirzynski

PhD student in Computer Science @UCSD. Studying interpretable AI and RL to improve people's decision-making.

529 Followers · 161 Following · 48 Posts · Joined 22.11.2024

Latest posts by Julian Skirzynski @jskirzynski

Do We Make Better Decisions with AI? Human Bias & Interpretability (YouTube video by SAIL Media)

I recently gave a 15-min talk at #NeurIPS2025 on why "interpretable" AI doesn't automatically lead to better human decisions, and discussed my research on human-AI collaboration.

Watch here: www.youtube.com/watch?v=JTuU...

06.01.2026 13:57 👍 0 🔁 0 💬 0 📌 0
Preview
Computational Turing Test Reveals Systematic Differences Between Human and AI Language
Large language models (LLMs) are increasingly used in the social sciences to simulate human behavior, based on the assumption that they can generate realistic, human-like text.

LLMs are now widely used in social science as stand-ins for humans, on the assumption that they can produce realistic, human-like text.

But... can they? We don't actually know.

In our new study, we develop a Computational Turing Test.

And our findings are striking:
LLMs may be far less human-like than we think.🧡

07.11.2025 11:13 👍 334 🔁 134 💬 14 📌 38
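A minimal sketch of the idea behind a computational Turing test, framed as a detection problem: if a classifier can separate human text from LLM text well above chance, the two differ systematically. The corpora, TF-IDF features, and logistic-regression detector below are illustrative assumptions, not the paper's actual pipeline.

```python
# Sketch of a computational Turing test as a detection problem.
# If a classifier separates human text from LLM text well above
# chance, the two distributions differ systematically.
# The corpora below are placeholders, not the study's data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

human_texts = ["example human post 1", "example human post 2",
               "example human post 3", "example human post 4"]
llm_texts   = ["example llm reply 1", "example llm reply 2",
               "example llm reply 3", "example llm reply 4"]

texts  = human_texts + llm_texts
labels = [0] * len(human_texts) + [1] * len(llm_texts)

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=0, stratify=labels)

vec = TfidfVectorizer(ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(X_train), y_train)

acc = accuracy_score(y_test, clf.predict(vec.transform(X_test)))
print(f"detector accuracy: {acc:.2f} (0.50 = indistinguishable from humans)")
```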
Preview
Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence
Both the general public and academic communities have raised concerns about sycophancy, the phenomenon of artificial intelligence (AI) excessively agreeing with or flattering users.

Preliminary results show that the current framework of "AI" makes people less likely to help or seek help from other humans, or to try to soothe conflict, and that people actively prefer that framework to any other, literally making them more dependent on it.

05.10.2025 17:45 👍 462 🔁 218 💬 15 📌 44
Preview
NYAS Publications
Generative artificial intelligence (GenAI) applications, such as ChatGPT, are transforming how individuals access health information, offering conversational and highly personalized interactions.

New research out!🚨

In our new paper, we discuss how generative AI (GenAI) tools like ChatGPT can mediate confirmation bias in health information seeking.
As people turn to these tools for health-related queries, new risks emerge.
🧵👇
nyaspubs.onlinelibrary.wiley.com/doi/10.1111/...

28.07.2025 10:15 👍 15 🔁 9 💬 1 📌 1

Sure :)

24.06.2025 14:35 👍 0 🔁 0 💬 0 📌 0

We'll be presenting at @facct on 06.24 at 10:45 AM during the Evaluating Explainable AI session!

Come chat with us. We would love to discuss implications for AI policy, better auditing methods, and next steps for algorithmic fairness research.

#AIFairness #XAI

24.06.2025 06:14 👍 1 🔁 0 💬 0 📌 0

But if they are indeed used to dispute discrimination claims, we can expect multiple failed cases due to insufficient evidence and many undetected discriminatory decisions.

Current explanation-based auditing is, therefore, fundamentally flawed, and we need additional safeguards.

24.06.2025 06:14 👍 1 🔁 0 💬 1 📌 0

Despite their unreliability, explanations are suggested as anti-discrimination measures by a number of regulations.

GDPR ✓ Digital Services Act ✓ Algorithmic Accountability Act ✓ LGPD (Brazil) ✓

24.06.2025 06:14 👍 1 🔁 0 💬 1 📌 0

So why do explanations fail?

1️⃣ They target individuals, while discrimination operates on groups
2️⃣ Users' causal models are flawed
3️⃣ Users overestimate proxy strength and treat its presence in the explanation as discrimination
4️⃣ Feature-outcome relationships bias user claims

24.06.2025 06:14 👍 1 🔁 0 💬 1 📌 0

BADLY.

When participants flag discrimination, they are correct only ~50% of the time; they miss 55% of the discriminatory predictions and have a ~30% false-positive rate (FPR).

Additional knowledge (protected attributes, proxy strength) improves detection to roughly 60% without affecting the other measures.

24.06.2025 06:14 👍 2 🔁 0 💬 1 📌 0
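For concreteness, a small sketch of how such audit metrics fall out of participants' flags versus ground truth. The arrays are toy data chosen to land near the reported figures, not the study's actual results.

```python
# Sketch: computing audit metrics from binary flags vs. ground truth.
# y_true[i]  = 1 if prediction i was actually discriminatory;
# flagged[i] = 1 if the participant flagged it. Toy data only.
import numpy as np

y_true  = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 0])
flagged = np.array([1, 0, 1, 0, 1, 0, 0, 0, 1, 0])

tp = np.sum((flagged == 1) & (y_true == 1))
fp = np.sum((flagged == 1) & (y_true == 0))
fn = np.sum((flagged == 0) & (y_true == 1))
tn = np.sum((flagged == 0) & (y_true == 0))

precision = tp / (tp + fp)   # "correct when they flag": ~50% in the study
miss_rate = fn / (fn + tp)   # discriminatory predictions missed: ~55%
fpr       = fp / (fp + tn)   # fair predictions wrongly flagged: ~30%

print(f"precision={precision:.0%} miss rate={miss_rate:.0%} FPR={fpr:.0%}")
```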

Our setup assigns each robot a ground-truth discrimination outcome, letting us evaluate how well each participant could do under different information regimes.

So, how did they do?

24.06.2025 06:14 👍 1 🔁 0 💬 1 📌 0

We recruited participants, anchored their beliefs about discrimination, trained them to use explanations, and tested them to make sure they got it right.

We then saw how well they could flag unfair predictions based on counterfactual explanations and feature attribution scores.

24.06.2025 06:14 👍 1 🔁 0 💬 1 📌 0

Participants audit a model that predicts whether robots sent to Mars will break down. Some are built by "Company X," others by "Company S."

Our model predicts failure based on robot body parts. It can discriminate against Company X by predicting that robots without an antenna fail.

24.06.2025 06:14 👍 1 🔁 0 💬 1 📌 0
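A toy illustration of that proxy mechanism: the model never sees the company, but if antennas are rare on Company X robots, the rule "no antenna, predict failure" yields disparate predictions across companies. All robots below are invented.

```python
# Sketch: proxy discrimination in the robot task (toy data).
# The model only sees body parts; the company is never an input.
# But if Company X robots rarely have antennas, "no antenna -> fail"
# discriminates against Company X through the proxy.
robots = [
    {"company": "X", "antenna": False, "wheels": 4},
    {"company": "X", "antenna": False, "wheels": 6},
    {"company": "X", "antenna": True,  "wheels": 4},
    {"company": "S", "antenna": True,  "wheels": 4},
    {"company": "S", "antenna": True,  "wheels": 6},
    {"company": "S", "antenna": False, "wheels": 4},
]

def predicts_failure(robot):
    # Hypothetical model: failure is predicted iff the antenna is missing.
    return not robot["antenna"]

for company in ("X", "S"):
    group = [r for r in robots if r["company"] == company]
    rate = sum(predicts_failure(r) for r in group) / len(group)
    print(f"Company {company}: predicted failure rate {rate:.0%}")
# Company X: 67%, Company S: 33% -- disparate outcomes without the
# protected attribute ever entering the model.
```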

Given all these possible reasons, we cannot tell whether explanations work or not.

To tackle this challenge, we introduce a synthetic task where we:
- Teach users how to use explanations
- Control their beliefs
- Adapt the world to fit their beliefs
- Control the explanation content

24.06.2025 06:14 👍 1 🔁 0 💬 1 📌 0

Users may fail to detect discrimination through explanations due to:

- Proxies not being revealed by explanations
- Issues with interpreting explanations
- Wrong assumptions about proxy strength
- Unknown protected class
- Incorrect causal beliefs

24.06.2025 06:14 👍 2 🔁 0 💬 1 📌 0

Imagine a model that predicts loan approval based on credit history and salary.

Would a rejected female applicant get approved if she somehow applied as a man?

If yes, her prediction was discriminatory.

Fairness requires predictions to stay the same regardless of the protected class.

24.06.2025 06:14 👍 1 🔁 0 💬 1 📌 0
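A minimal sketch of that counterfactual test, using an invented scoring model: hold every feature fixed, flip the protected attribute, and check whether the decision flips. This is the simple attribute-flip check the post describes, not a full causal counterfactual.

```python
# Sketch: counterfactual fairness check on a hypothetical loan model.
# A prediction is discriminatory if flipping only the protected
# attribute, with everything else held fixed, flips the decision.
def approve_loan(applicant):
    # Invented model that (unfairly) uses gender directly.
    score = 0.4 * applicant["credit_history"] + 0.4 * applicant["salary_norm"]
    if applicant["gender"] == "male":
        score += 0.15  # the discriminatory component
    return score >= 0.5

applicant = {"gender": "female", "credit_history": 0.6, "salary_norm": 0.5}
counterfactual = dict(applicant, gender="male")

original = approve_loan(applicant)        # False: rejected as-is
flipped  = approve_loan(counterfactual)   # True: approved "as a man"
print("discriminatory prediction:", original != flipped)  # True
```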

"Right to explanation" laws assume explanations help people detect algorithmic discrimination.

But is there any evidence for that?

In our latest work with David Danks and @berkustun, we show that explanations fail to help people, even under optimal conditions.

PDF shorturl.at/yaRua

24.06.2025 06:14 👍 8 🔁 0 💬 1 📌 1

You're both in!

11.06.2025 07:32 👍 1 🔁 0 💬 0 📌 0

Denied a loan, an interview, or an insurance claim by machine learning models? You may be entitled to a list of reasons.

In our latest work with @anniewernerfelt.bsky.social, @berkustun.bsky.social, and @friedler.net, we show how existing explanation frameworks fail and present an alternative for recourse.

24.04.2025 06:19 👍 16 🔁 7 💬 1 📌 1

Welcome in :)

29.01.2025 12:53 👍 1 🔁 0 💬 0 📌 0

Oh yeah, welcome to the pack!

08.01.2025 16:01 👍 1 🔁 0 💬 0 📌 0

Of course!

21.12.2024 17:30 👍 1 🔁 0 💬 0 📌 0

Actually, I added you some time ago, so you're good :)

08.12.2024 15:29 👍 1 🔁 0 💬 0 📌 0

Let's have bioinformatics represented then :) Regarding the clubs, I have not heard of any; it might just be a coincidence :D

08.12.2024 15:28 👍 1 🔁 0 💬 1 📌 0

Added!

08.12.2024 15:27 👍 1 🔁 0 💬 0 📌 0

Sure Max!

06.12.2024 21:34 👍 1 🔁 0 💬 2 📌 0

Hey Lucas, consider it done :)

06.12.2024 13:41 👍 1 🔁 0 💬 0 📌 0

Welcome to the pack :)

03.12.2024 14:49 👍 0 🔁 0 💬 0 📌 0

Interesting stuff, welcome to the hood!

02.12.2024 12:45 👍 1 🔁 0 💬 0 📌 0

Of course, welcome in!

01.12.2024 17:06 👍 1 🔁 0 💬 1 📌 0