In a new paper, I try to resolve the counterintuitive evidence of Meehl's "clinical vs statistical prediction" problems: Statistics only wins because the game is rigged.
When RAG systems hallucinate, is the LLM misusing available information or is the retrieved context insufficient? In our #ICLR2025 paper, we introduce "sufficient context" to disentangle these failure modes. Work w Jianyi Zhang, Chun-Sung Ferng, Da-Cheng Juan, Ankur Taly, @cyroid.bsky.social
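To make the distinction concrete, here is a minimal sketch of how one might bucket RAG outcomes by context sufficiency. It is not the paper's pipeline: `is_context_sufficient` (the sufficiency judge) and `records` (an evaluation set of query/context/answer/correctness tuples) are hypothetical stand-ins.

```python
# Hedged sketch: split RAG outcomes by whether the retrieved context was sufficient.
# `is_context_sufficient` is a hypothetical judge (e.g., an LLM prompt or a trained
# classifier); `records` is assumed to hold (query, context, answer, is_correct) tuples.
from collections import Counter

def bucket_failure_modes(records, is_context_sufficient):
    buckets = Counter()
    for query, context, answer, is_correct in records:
        sufficient = is_context_sufficient(query, context)
        if is_correct:
            buckets["correct, sufficient" if sufficient else "correct, insufficient"] += 1
        elif sufficient:
            buckets["wrong despite sufficient context (LLM misuses information)"] += 1
        else:
            buckets["wrong with insufficient context (retrieval fell short)"] += 1
    return buckets
```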
Hey AI folks - stop using SHAP! It won't help you debug [1], won't catch discrimination [2], and makes no sense for feature importance [3].
Plus - as we show - it also won't give recourse.
In a paper at #ICLR we introduce feature responsiveness scores... 1/
arxiv.org/pdf/2410.22598
Many ML models predict labels that don't reflect what we care about, e.g.:
- Diagnoses from unreliable tests
- Outcomes from noisy electronic health records
In a new paper w/@berkustun, we study how this subjects individuals to a lottery of mistakes.
Paper: bit.ly/3Y673uZ
🧵👇
We'll be @ ICLR!
Poster: Sat 26 Apr 10AM – 12:30PM SGT
Paper: tinyurl.com/2deek4wx
Code: tinyurl.com/2rb6zc28
We develop methods to compute responsiveness scores for any dataset and model (a rough sketch follows this list). Three main advantages:
1. Can be swapped in place of existing methods
2. Highlight responsive features
3. Flag instances where such features don't exist!
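A minimal sketch of what such a computation could look like, as an illustration rather than the paper's algorithm: it assumes a scikit-learn-style `model.predict`, a matrix `X` of denied applicants, and a hypothetical `action_sets` dict mapping each actionable feature index to the values an applicant could plausibly reach.

```python
import numpy as np

def has_responsive_change(model, x, j, reachable_values, desired=1):
    """True if some reachable value of feature j flips instance x to `desired`."""
    for v in reachable_values:
        x_new = np.array(x, dtype=float)
        x_new[j] = v
        if model.predict(x_new.reshape(1, -1))[0] == desired:
            return True
    return False

def responsiveness_report(model, X, action_sets, desired=1):
    X = np.asarray(X, dtype=float)
    scores = {j: 0.0 for j in action_sets}    # per-feature responsiveness scores
    no_recourse = []                           # instances with no responsive feature
    for i, x in enumerate(X):
        responsive = [j for j, vals in action_sets.items()
                      if has_responsive_change(model, x, j, vals, desired)]
        for j in responsive:
            scores[j] += 1.0 / len(X)
        if not responsive:
            no_recourse.append(i)              # advantage 3: flag "no recourse" cases
    return scores, no_recourse
```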
Current approaches are unable to inform consumers when:
1. features are not responsive
2. features are not monotonically responsive (e.g., can't increase income "too much")
3. features must change in counterintuitive ways (e.g., decrease income) to obtain the desired prediction (see the probe sketched below)
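As a hedged illustration, not the paper's method, a simple per-feature probe makes all three cases visible; the model, the feature index, and the `value_grid` of feasible values (say, for income) are assumptions here.

```python
import numpy as np

def probe_feature(model, x, j, value_grid, desired=1):
    """Return the feasible values of feature j that yield the desired prediction."""
    good_values = []
    for v in value_grid:
        x_new = np.array(x, dtype=float)
        x_new[j] = v
        if model.predict(x_new.reshape(1, -1))[0] == desired:
            good_values.append(v)
    return good_values

# Reading the result for, e.g., income:
#   - empty list                     -> the feature is not responsive (case 1)
#   - gaps within the feasible range -> not monotonically responsive, i.e. raising
#                                       income "too much" loses the approval (case 2)
#   - only values below the current  -> the required change is counterintuitive,
#     income                            i.e. the applicant must decrease income (case 3)
```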
But SHAP highlights features that are:
1. Immutable: HistoryOfLatePayment
2. Mutable but not actionable: Age, NumberOfDependents
3. Actionable but not responsive: CreditUtilization
Hence, we designed responsiveness scores to highlight features that are actionable and responsive (i.e., lead to the desired prediction when changed).
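For a concrete, entirely hypothetical illustration of these categories on the features named above: restrict probing to features with a non-empty action set and label each one. The action sets, the model, and the outcome labels are made up for this sketch.

```python
import numpy as np

FEATURES = ["HistoryOfLatePayment", "Age", "NumberOfDependents", "CreditUtilization"]
ACTION_SETS = {                                # hypothetical reachable values
    "HistoryOfLatePayment": None,              # immutable
    "Age": None,                               # mutable, but not actionable
    "NumberOfDependents": None,                # mutable, but not actionable
    "CreditUtilization": [0.5, 0.3, 0.1],      # actionable: pay down balances
}

def categorize(model, x, name):
    reachable = ACTION_SETS[name]
    if reachable is None:
        return "immutable / not actionable"
    j = FEATURES.index(name)
    for v in reachable:
        x_new = np.array(x, dtype=float)
        x_new[j] = v
        if model.predict(x_new.reshape(1, -1))[0] == 1:   # 1 = approved
            return "actionable and responsive"
    return "actionable but not responsive"
```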
Many countries seek to protect consumers in applications like lending and hiring by requiring explanations for adverse outcomes. But,
- Many of these rules give companies substantial flexibility
- The standard approach is to use methods like SHAP and LIME to highlight important features
Denied a loan, an interview, or an insurance claim by machine learning models? You may be entitled to a list of reasons.
In our latest w/ @anniewernerfelt.bsky.social @berkustun.bsky.social @friedler.net, we show how existing explanation frameworks fail and present an alternative that provides recourse.