WATCH opens up many opportunities for future work in AI safety monitoring: e.g., adaptive monitoring algorithms for other data-generating settings, and extensions to monitoring generative models, LLMs, AI agents, and more!
7/
Takeaway 3 (Root-Cause Analysis): Beyond catching harmful shifts, monitoring should inform recovery. WATCH helps find the cause of degradation, i.e., by diagnosing between covariate shifts in the inputs X vs concept shifts in the conditional Y|X relation, to inform retraining.
6/
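To make the covariate-vs-concept distinction above concrete, here is a toy triage sketch (illustrative only, not WATCH's actual procedure; the function name and decision rule are assumptions): run one test on the inputs X alone and one on the Y|X relation, then label the likely cause from the two p-values.

```python
def diagnose_shift(input_pvalue, output_pvalue, alpha=0.05):
    """Toy root-cause triage for a detected performance drop.

    input_pvalue:  p-value from a test on the input distribution X alone
    output_pvalue: p-value from a test on the conditional Y|X relation
    """
    if input_pvalue < alpha and output_pvalue >= alpha:
        # Inputs changed but the input-output relation did not.
        return "covariate shift (inputs X changed)"
    if output_pvalue < alpha:
        # The Y|X relation itself changed, so retraining labels matter.
        return "concept shift (Y|X relation changed)"
    return "no significant shift detected"
```

For example, a low input p-value with an unremarkable output p-value points to covariate shift, suggesting reweighting or recalibration rather than full retraining.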
Takeaway 2 (Fast Detection): Empirically, WATCH quickly catches harmful shifts (that degrade safety or utility of AI outputs): WATCH tends to be much more efficient than directly tracking loss metrics (& similar to standard conformal martingale baselines that it generalizes).
5/
Takeaway 1 (Adaptation): Prior monitoring methods do sequential hypothesis testing (e.g., to detect changes from IID/exchangeability), but many raise unneeded alarms even for benign shifts. Our methods adapt online to mild shifts to maintain safety & utility! 4/
…via methods based on weighted #ConformalPrediction (we construct novel martingales), w/ false-alarm control for continual (anytime-valid) & scheduled (set time horizon) settings.
Intuitively, we monitor the safety (coverage) & utility (sharpness) of an AI's confidence sets.
3/
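As a rough illustration of the conformal test martingale idea (this is the standard "power martingale" baseline the thread mentions WATCH generalizes, not the paper's weighted construction; function names are made up): under exchangeability, conformal p-values are roughly uniform, so a betting product stays small, and by Ville's inequality alarming when it crosses 1/alpha controls the false-alarm rate at alpha.

```python
def power_martingale(p_values, eps=0.5):
    """Multiply betting factors eps * p**(eps - 1) over a stream of
    conformal p-values. Under exchangeability each factor has
    expectation 1, so the product rarely grows; under a harmful
    shift, small p-values make it grow quickly."""
    M = 1.0
    path = []
    for p in p_values:
        M *= eps * max(p, 1e-12) ** (eps - 1)  # guard against p == 0
        path.append(M)
    return path

def first_alarm(path, alpha=0.01):
    """Ville's inequality: P(sup_t M_t >= 1/alpha) <= alpha under the
    null, so this threshold gives anytime-valid false-alarm control."""
    for t, M in enumerate(path):
        if M >= 1.0 / alpha:
            return t
    return None
```

With eps=0.5, a run of p-values near 0.01 multiplies the martingale by 5 each step, so an alarm at level 0.01 fires within a few observations, while p-values near 0.5 shrink it and no alarm fires.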
In real-world #AI deployments, you need to prep for the worst: unexpected data shifts or black swan events (e.g., the COVID-19 outbreak, new LLM jailbreaks) can harm performance. So, post-deployment system monitoring is crucial. Our WATCH approach addresses drawbacks of prior work…
2/
AI monitoring is key to responsible deployment. Our #ICML2025 paper develops approaches for 3 main goals:
1) *Adapting* to mild data shifts
2) *Quickly Detecting* harmful shifts
3) *Diagnosing* cause of degradation
🧵 w/ Xing Han, Anqi Liu, Suchi Saria
arxiv.org/abs/2505.04608
For #WorldHealthDay, Hopkins researchers including @drewprinster.bsky.social found that more specific #AI explanations increase physicians' diagnostic accuracy and efficiency, but can also foster misplaced trust. Learn more:
In sum: How AI explains its advice impacts doctors' diagnostic performance & trust in AI, even if they don't know it.
Developers & clinical users: Keep this in mind!
Many Qs for future work… E.g., can we dynamically select explanation types to optimize human-AI teaming?
9/9
Takeaway 4: Doctors trust local AI explanations more than global ones, *regardless of whether the AI is correct.*
- For correct AI: Explains why local explanations improve diagnostic performance.
- For incorrect AI: Local may worsen overreliance on AI errors--this needs further study!
8/
Takeaway 2: Local AI explanations are more efficient than global: Doctors agree/disagree more quickly.
Takeaway 3: Doctors may not realize how AI explanations impact their diagnostic performance! (AI explanation types did not affect whether doctors viewed AI as useful.)
7/
Takeaway 1: AI explanations impact benefits/harms of correct/incorrect AI advice!
- For correct AI: Local AI explanations improve diagnostic accuracy over global!
- Confidently presented local explanations swayed task non-experts (for correct AI)
- (Inconclusive for incorrect AI, but underpowered)
6/
We simulated a real clinical X-ray diagnosis workflow for 220 practicing doctors. Along w/ AI explanations, we looked at:
Correctness of AI advice: +/-
Confidence of AI advice: 65%-94%
Physician task expertise: radiologist (expert) vs internal/emergency med (task non-expert)
5/
So, we studied how doctors may be affected by two main categories of AI explanations in medical imaging:
- Local: Why this prediction on this input? (e.g., highlighting key features)
- Global: How does the AI work in general? (e.g., comparing to exemplar images of a class)
4/
Interpretability may be key to effective AI, but whether AI explanations *actually* provide transparency or instead add bias is highly debated.
Despite so many explainable AI (XAI) methods, there's too little understanding of when clinicians find XAI interpretable & useful in practice!
3/
⬇️ key takeaways from "Care to Explain? AI Explanation Types Differentially Impact Chest Radiograph Diagnostic Performance and Physician Trust in AI" pubs.rsna.org/doi/10.1148/... With Amama Mahmood, Suchi Saria, Jean Jeudy, Cheng Ting Lin, Paul Yi, & Chien-Ming Huang
2/
When do AI explanations actually help, & promote appropriate trust?
Spoiler, via a prospective, multisite @radiology_rsna study of 220 doctors: *How* AI explains its advice has big impacts on doctors' diagnostic performance and trust in AI--even if they *don't realize it*!
🧵 1/ #AI #Radiology