
Drew Prinster

@drewprinster

Trustworthy AI/ML in healthcare & high-stakes apps | My job is (mostly) error bars 🫡 (eg, conformal prediction) | CS PhD at Johns Hopkins. Prev at Yale. he/him https://drewprinster.github.io/

36 Followers · 40 Following · 16 Posts · Joined 22.11.2024

Latest posts by Drew Prinster @drewprinster

WATCH opens up many opportunities for future work in AI safety monitoring: eg, adaptive monitoring algorithms for other data-generating settings, extensions to monitoring generative models, LLMs, AI agents, and more!
7/

13.05.2025 19:17 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Takeaway 3 (Root-Cause Analysis): Beyond catching harmful shifts, monitoring should inform recovery. WATCH helps find the cause of degradation, eg by distinguishing covariate shifts in the inputs X from concept shifts in the conditional Y|X relation, to inform retraining.
6/

13.05.2025 19:17 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
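One generic way to make this covariate-vs-concept diagnosis concrete is sketched below. This is a minimal illustration under simplifying assumptions (a labeled reference batch and a labeled deployment batch, a regression model), not the WATCH procedure itself; all function names and the synthetic data are illustrative.

```python
# Minimal sketch (NOT the WATCH diagnosis procedure): separate a covariate
# shift in P(X) from a concept shift in P(Y|X), given a labeled reference
# batch and a labeled deployment batch.
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import cross_val_score

def covariate_shift_auc(X_ref, X_new):
    """AUC of a domain classifier telling reference X from deployment X:
    ~0.5 means no detectable covariate shift, >> 0.5 means P(X) moved."""
    X = np.vstack([X_ref, X_new])
    d = np.r_[np.zeros(len(X_ref)), np.ones(len(X_new))]
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, d, cv=5, scoring="roc_auc").mean()

def concept_shift_pvalue(model, X_ref, y_ref, X_new, y_new):
    """Rank-sum test on per-example squared errors: if P(Y|X) is unchanged,
    the model should err similarly on both batches."""
    err_ref = (model.predict(X_ref) - y_ref) ** 2
    err_new = (model.predict(X_new) - y_new) ** 2
    return stats.mannwhitneyu(err_ref, err_new).pvalue

# Toy demo: shift only the input mean (covariate shift, no concept shift).
rng = np.random.default_rng(0)
coef = np.array([1.0, -2.0, 0.5])
X_ref = rng.normal(0, 1, (500, 3)); y_ref = X_ref @ coef + rng.normal(0, 0.1, 500)
X_new = rng.normal(1, 1, (500, 3)); y_new = X_new @ coef + rng.normal(0, 0.1, 500)
model = LinearRegression().fit(X_ref, y_ref)
print("covariate-shift AUC:", covariate_shift_auc(X_ref, X_new))   # well above 0.5
print("concept-shift p-value:", concept_shift_pvalue(model, X_ref, y_ref, X_new, y_new))
```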

Takeaway 2 (Fast Detection): Empirically, WATCH quickly catches harmful shifts (those that degrade the safety or utility of AI outputs): WATCH tends to be much more efficient than directly tracking loss metrics, & comparable in detection speed to the standard conformal martingale baselines it generalizes.
5/

13.05.2025 19:17 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
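The "standard conformal martingale baselines" mentioned above can be sketched roughly as follows: under exchangeability, conformal p-values are i.i.d. uniform; a power martingale bets against that, and alarming when it exceeds 1/alpha bounds the false-alarm probability by alpha (Ville's inequality). This is a minimal generic sketch, not WATCH's generalization of it.

```python
# Minimal sketch of a (power) conformal test martingale -- the kind of
# baseline the post mentions, not the WATCH generalization itself.
import numpy as np

def conformal_pvalues(scores, rng):
    """Smoothed conformal p-values: the rank of each new nonconformity score
    among everything seen so far; i.i.d. uniform under exchangeability."""
    pvals = []
    for t in range(1, len(scores)):
        past, s = scores[:t], scores[t]
        gt = np.sum(past > s)              # strictly more nonconforming
        eq = np.sum(past == s) + 1         # ties, including the point itself
        pvals.append((gt + rng.uniform() * eq) / (t + 1))
    return np.array(pvals)

def log_power_martingale(pvals, eps=0.5):
    """log of M_t = prod_i eps * p_i**(eps - 1): a valid test martingale for
    exchangeability that grows when p-values concentrate near 0."""
    return np.cumsum(np.log(eps) + (eps - 1) * np.log(pvals))

rng = np.random.default_rng(1)
# Exchangeable nonconformity scores, then an upward shift at t = 300.
scores = np.r_[rng.normal(0, 1, 300), rng.normal(2, 1, 200)]
logM = log_power_martingale(conformal_pvalues(scores, rng))

alpha = 0.01                               # false-alarm budget
crossed = logM >= np.log(1 / alpha)        # Ville: P(ever crossing) <= alpha
print("alarm at t =", int(np.argmax(crossed)) if crossed.any() else None)
```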
Post image

Takeaway 1 (Adaptation): Prior monitoring methods use sequential hypothesis testing (eg, to detect departures from IID/exchangeability), but they often raise needless alarms even for benign shifts. Our methods adapt online to mild shifts to maintain safety & utility! 4/

13.05.2025 19:16 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
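The "adapt instead of alarm" idea can be illustrated with weighted conformal prediction in the style of Tibshirani et al. (2019): reweight calibration scores by an estimated likelihood ratio w(x) ≈ p_new(x)/p_cal(x), so prediction sets stay approximately valid under a mild covariate shift. A minimal sketch with placeholder weights, not the paper's actual online update:

```python
# Minimal sketch of weighted split conformal prediction under covariate
# shift. In practice the weights w(x) ~ p_new(x)/p_cal(x) would be estimated
# (eg, by a density-ratio model on the inputs); here they are placeholders.
import numpy as np

def weighted_quantile(values, weights, q):
    """Smallest value whose cumulative normalized weight reaches q."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = np.cumsum(w) / np.sum(w)
    return v[np.searchsorted(cdf, q)]

def weighted_conformal_interval(x, predict, cal_scores, cal_w, w_x, alpha=0.1):
    """predict(x) +/- q, with q a weighted (1 - alpha) quantile of calibration
    nonconformity scores |y - yhat|; per the weighted-conformal construction,
    the test point contributes its own weight attached to an infinite score."""
    scores = np.r_[cal_scores, np.inf]
    weights = np.r_[cal_w, w_x]
    q = weighted_quantile(scores, weights, 1 - alpha)
    yhat = predict(x)
    return yhat - q, yhat + q

# Toy usage with stand-in likelihood-ratio weights.
rng = np.random.default_rng(2)
cal_scores = np.abs(rng.normal(0, 1, 500))
cal_w = np.exp(0.5 * rng.normal(size=500))     # stand-in for p_new/p_cal
lo, hi = weighted_conformal_interval(1.0, lambda x: 2.0 * x,
                                     cal_scores, cal_w, w_x=1.0)
print(f"interval: [{lo:.2f}, {hi:.2f}]")
```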
Post image

…via methods based on weighted #ConformalPrediction (we construct novel martingales), w/ false-alarm control for continual (anytime-valid) & scheduled (set time horizon) settings.

Intuitively, we monitor the safety (coverage) & utility (sharpness) of an AI’s confidence sets.
3/

13.05.2025 19:16 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
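For interval-valued predictions, the two monitored quantities above are simple running statistics: coverage is the fraction of labels that fall inside their prediction sets, and sharpness is the average set width. A minimal sketch of just these statistics; the martingale machinery that turns them into anytime-valid alarms is the paper's contribution and is not shown here.

```python
# Minimal sketch of the two monitored quantities for interval predictions:
# coverage (safety: is y inside the set?) and sharpness (utility: width).
from dataclasses import dataclass

@dataclass
class CoverageSharpnessMonitor:
    n: int = 0
    covered: int = 0
    total_width: float = 0.0

    def update(self, lo: float, hi: float, y: float) -> None:
        """Record one deployment example and its prediction interval."""
        self.n += 1
        self.covered += int(lo <= y <= hi)
        self.total_width += hi - lo

    @property
    def coverage(self) -> float:    # running empirical coverage
        return self.covered / self.n

    @property
    def sharpness(self) -> float:   # running mean width (smaller = sharper)
        return self.total_width / self.n

m = CoverageSharpnessMonitor()
m.update(lo=0.0, hi=2.0, y=1.3)     # covered, width 2.0
m.update(lo=0.0, hi=2.0, y=2.7)     # missed, width 2.0
print(m.coverage, m.sharpness)      # 0.5 2.0
```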
Post image

In real-world #AI deployments, you need to prep for the worst: unexpected data shifts or black swan events (eg COVID-19 outbreak, new LLM jailbreaks) can harm performance. So, post-deployment system monitoring is crucial. Our WATCH approach addresses drawbacks of prior work…
2/

13.05.2025 19:15 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image

AI monitoring is key to responsible deployment. Our #ICML2025 paper develops approaches for 3 main goals:

1) *Adapting* to mild data shifts
2) *Quickly Detecting* harmful shifts
3) *Diagnosing* cause of degradation

🧡w/ Xing Han, Anqi Liu, Suchi Saria
arxiv.org/abs/2505.04608

13.05.2025 19:14 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Preview
Explain yourself: Designing AI for better human-machine teaming
A new study by Hopkins researchers finds that doctors’ diagnostic performance and trust in AI advice depend on how the AI assistant explains itself.

For #WorldHealthDay, Hopkins researchers including @drewprinster.bsky.social found that more specific #AI explanations increase physicians’ diagnostic accuracy and efficiencyβ€”but can also foster misplaced trust. Learn more:

07.04.2025 18:01 πŸ‘ 3 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0

In sum: How AI explains its advice impacts doctors’ diagnostic performance & trust in AI, even if they don’t know it.

Developers & clinical users: Keep this in mind!

Many Qs for future work…. Eg, can we dynamically select explanation types to optimize human-AI teaming? πŸ‘€

9/9

07.01.2025 19:52 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image

Takeaway 4: Doctors trust local AI explanations more than global ones, *regardless of whether the AI is correct.*

- For correct AI: This explains why local explanations improve diagnostic performance.
- For incorrect AI: Local explanations may worsen overreliance on AI errors; this needs further study!

8/

07.01.2025 19:51 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image

Takeaway 2: Local AI explanations are more efficient than global ones: doctors agree or disagree with the AI’s advice more quickly.

Takeaway 3: Doctors may not realize how AI explanations impact their diagnostic performance! (AI explanation types did not affect whether doctors viewed AI as useful.)

7/

07.01.2025 19:51 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image

Takeaway 1: AI explanations shape both the benefits of correct AI advice and the harms of incorrect advice!

- For correct AI: Local AI explanations improve diagnostic accuracy over global ones!
- Confident local explanations swayed task non-experts (for correct AI)
- (Inconclusive for incorrect AI, but the study was underpowered there)

6/

07.01.2025 19:50 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image

We simulated a real clinical X-ray diagnosis workflow for 220 practicing doctors. Along w/ AI explanations, we looked at:
- Correctness of AI advice: correct vs incorrect
- Confidence of AI advice: 65%–94%
- Physician task expertise: radiologist (task expert) vs internal/emergency medicine (task non-expert)

5/

07.01.2025 19:50 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image

So, we studied how doctors may be affected by two main categories of AI explanations in medical imaging:
- Local: Why this prediction on this input? (eg, highlighting key features)
- Global: How does the AI work in general? (eg, comparing to exemplar images of a class)

4/

07.01.2025 19:49 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image

Interpretability may be key to effective AI, but whether AI explanations *actually* provide transparency or instead add bias is highly debated.

Despite the many explainable AI (XAI) methods out there, we understand too little about when clinicians find XAI interpretable & useful in practice!

3/

07.01.2025 19:48 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Care to Explain? AI Explanation Types Differentially Impact Chest Radiograph Diagnostic Performance and Physician Trust in AI | Radiology
Background: It is unclear whether artificial intelligence (AI) explanations help or hurt radiologists and other physicians in AI-assisted radiologic diagnostic decision-making. Purpose: To test whether ...

⬇️ key takeaways from β€œCare to Explain? AI Explanation Types Differentially Impact Chest Radiograph Diagnostic Performance and Physician Trust in AI” pubs.rsna.org/doi/10.1148/... With Amama Mahmood, Suchi Saria, Jean Jeudy, Cheng Ting Lin, Paul Yi, & Chien-Ming Huang

2/

07.01.2025 19:48 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image

When do AI explanations actually help, & promote appropriate trust?

Spoiler, via prospective, multisite @radiology_rsna study of 220 doctors: *How* AI explains its advice has big impacts on doctors’ diagnostic performance and trust in AI--even if they *don’t realize it*!

🧡1/ #AI #Radiology

07.01.2025 19:47 πŸ‘ 3 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0