
Shan Chen

@shan23chen

PhDing @AIM_Harvard @MassGenBrigham | PhD Fellow @Google | Previously @Bos_CHIP @BrandeisU. More robustness and explainability 🧐 for Health AI. shanchen.dev

1,433 Followers · 231 Following · 36 Posts · Joined 11.11.2024

Latest posts by Shan Chen @shan23chen

As I shared in the NYT, models often see the data but fail to weigh it like a physician, drifting toward generic "average patient" responses. Context window ≠ clinical reasoning.

www.nytimes.com/2025/12/03/w...

04.12.2025 14:10 👍 11 🔁 2 💬 1 📌 2

Check out our editorial in JCO CCI on Zazzetti et al. (2025)'s paper on synthetic data generation for breast cancer! Synthetic data could help close many gaps in clinical AI research, but challenges remain, especially (IMO) issues with out-of-domain generalization. @shan23chen.bsky.social

30.11.2025 17:37 👍 3 🔁 1 💬 0 📌 0

🤔💭 What even is reasoning? It's time to answer the hard questions!

We built the first unified taxonomy of 28 cognitive elements underlying reasoning.

Spoiler: LLMs commonly employ sequential reasoning, rarely self-awareness, and often fail to use correct reasoning structures 🧠

25.11.2025 18:25 👍 46 🔁 8 💬 2 📌 0

Super proud of @shan23chen.bsky.social for his podium presentation at #AMIA25 on his research into LLM sycophancy in the face of illogical medical queries!

Full paper: www.nature.com/articles/s41...

Also cited yesterday in the NYT! www.nytimes.com/2025/11/16/w...

17.11.2025 21:44 👍 6 🔁 2 💬 0 📌 1
When helpfulness backfires: LLMs and the risk of false medical information due to sycophantic behavior - npj Digital Medicine

LLMs tend to prioritize helpfulness over reasoning. We show that safety-aware, compute-efficient fine-tuning helps models reason more critically in the healthcare domain, and generalizes to improved safety alignment across other domains.
www.nature.com/articles/s41... @shan23chen.bsky.social

18.10.2025 14:18 👍 8 🔁 5 💬 0 📌 0
When helpfulness backfires: LLMs and the risk of false medical information due to sycophantic behavior - npj Digital Medicine

An overemphasis on helpfulness makes LLMs vulnerable.
Research shows models will comply with illogical medical requests, generating false information. This sycophantic tendency can be corrected with specific prompting and fine-tuning. #MedSky #MedAI #MLSky

17.10.2025 15:53 👍 7 🔁 4 💬 0 📌 0

[1/] 💡 New Paper
Large reasoning models (LRMs) are strong in English, but how well do they reason in your language?

Our latest work uncovers their limitations and a clear trade-off:
Controlling Thinking Trace Language Comes at the Cost of Accuracy

📄 Link: arxiv.org/abs/2505.22888

30.05.2025 13:08 👍 8 🔁 5 💬 1 📌 3

Agents are all the rage, and we need to track their abilities in the medical domain. Enter MedBrowseComp, the first benchmark to assess agents' abilities to reason, navigate the web, and search for verifiable medical info!

Preprint: arxiv.org/abs/2505.14963
Site: moreirap12.github.io/mbc-browse-a...

22.05.2025 16:27 👍 3 🔁 1 💬 1 📌 0
FaceAge, a deep learning system to estimate biological age from face photographs to improve prognostication: a model development and validation study
Our results suggest that a deep learning model can estimate biological age from face photographs and thereby enhance survival prediction in patients with cancer. Further research, including validation...

✨ What if your face could tell something about how old your body really is?

Excited to share our latest paper just published in The Lancet Digital Health (open access!)

👉 www.thelancet.com/journals/lan...

09.05.2025 15:06 👍 3 🔁 1 💬 2 📌 0

congrats!

27.03.2025 02:31 👍 2 🔁 0 💬 0 📌 0

CALL FOR REMOTE SPEAKERS: Science in the News Seminar Series, hosted by Harvard x Beacon Hill Seminars

Seeking scientists, engineers & doctors, from academic researchers to industry professionals! 🧑‍🔬🧑‍💻

Email the organizers at scienceinthenews.bhs@gmail.com to sign up for a date! (First-come-first-served)

07.03.2025 01:45 👍 3 🔁 0 💬 0 📌 0
https://www.reddit.com/r/OpenAI/comments/1ieonxv/comment/ma9f5me/

Source: t.co/mV27ZZg5MN

01.02.2025 04:01 👍 2 🔁 0 💬 0 📌 0
Post image

We have a NEW PAPER in @naturemedicine.bsky.social on reporting recommendations for addressing the unique challenges of #largelanguagemodels (LLMs) in biomedical applications

www.nature.com/articles/s41...

#MLSky #StatsSky #medSky #AISky #artificialintelligence #generativeAI #transparency

08.01.2025 10:24 👍 28 🔁 8 💬 1 📌 2

Yea… he does have a problem with portraying women in stereotypical ways; big criticism in China too

04.01.2025 23:06 👍 5 🔁 0 💬 0 📌 0

During the QA session, one attendee stood up to her regarding this issue, really respectfully, and her response was: "That was not based on my judgment. That was based on the student's quote saying that the school was not teaching it, which meant that it applied to a lot of people from there."

14.12.2024 18:10 👍 6 🔁 0 💬 1 📌 0

Most of the talk discussed bad practices, but only one slide mentioned a specific group of people.

14.12.2024 18:10 👍 1 🔁 0 💬 0 📌 0

Haha which one has more nowadays?

11.12.2024 05:26 👍 0 🔁 0 💬 1 📌 0

Haha transformers really transformed both.

However, I feel like the divide is even wider… currently, it seems like RL is taking over LM post-training, and many NLProc folks are working on new applications enabled by language models

11.12.2024 05:24 👍 0 🔁 0 💬 0 📌 0
Is It Time to Worry About Benzene in Personal Care Products? The carcinogen has been found in sunscreen, deodorants, acne creams and other personal care products. Here's what to know.

I am always worrying about Benzene (my cat)! www.nytimes.com/2024/12/05/w...

But please don't stop wearing sunscreen! Sun exposure is a known cancer risk, while benzene risks are unknown. This article has good tips if you want to minimize benzene exposure.

Obligatory Benzene (cat) pic ⬇️

06.12.2024 23:12 👍 2 🔁 1 💬 1 📌 0

Thanks!

06.12.2024 22:07 👍 1 🔁 0 💬 0 📌 0

Imagine a world where these will be positively correlated

06.12.2024 02:59 👍 0 🔁 0 💬 0 📌 0

Quite possible!

Here, we found some early evidence that SAE features trained on language models are still meaningful to LLaVA.

More details are in the post, and more will follow soon!

@JackGallifant

@oldbayes.bsky.social

@daniellebitterman.bsky.social

05.12.2024 20:16 👍 2 🔁 0 💬 0 📌 0
Are SAE features from the Base Model still meaningful to LLaVA? - LessWrong
Shan Chen, Jack Gallifant, Kuleen Sasse, Danielle Bitterman[1] Please read this as a work in progress where we are colleagues sharing this in a lab (…

Team @AnthropicAI & @thesubhashk @joshengels.bsky.social show SAE features can be good for classification.

Good evidence by @arthurconmy.bsky.social & @neelnanda.bsky.social that SAE features are transferable across base and IT models.

🧐 How about LLaVA?

tiny.cc/sae1

05.12.2024 20:16 👍 6 🔁 1 💬 1 📌 1

More on potential future reliance on LLM agents doing reviews and audits

27.11.2024 21:46 👍 1 🔁 0 💬 0 📌 0

I'm terrified by the massive OpenReview data. Potentially gonna bite us back 🥲😥

27.11.2024 17:50 👍 0 🔁 0 💬 1 📌 0

END/🧵 Thanks to all our awesome co-authors:
@jannahastings.bsky.social

@daniellebitterman.bsky.social

And all our awesome collaborators who are not on the right platform yet! 🦋

Happy Thanksgiving! 🍂

27.11.2024 15:17 👍 1 🔁 0 💬 0 📌 0
Cross-Care: Assessing the Healthcare Implications of Pre-training Data on Language Model Bias
Large language models (LLMs) are increasingly essential in processing natural languages, yet their application is frequently compromised by biases and inaccuracies originating in their training data. ...

5/🧵 Dive deeper into our methods, findings, and the implications of our research by checking out the full 📜 paper here: arxiv.org/abs/2405.05506
All our data can be downloaded from our website: crosscare.net

27.11.2024 15:14 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

4.5/🧵 For the arXiv pretraining dataset, we also have an overall trend based on entity mentions! Guess which two terms caused the big bump back in 2019

27.11.2024 15:13 👍 0 🔁 0 💬 1 📌 0

4/🧵 We've also developed a new data visualization tool, available at crosscare.net, to allow researchers and practitioners to explore these biases across different pretraining corpora and better understand their implications. Tools in progress! 🛠️📊

27.11.2024 15:13 👍 1 🔁 0 💬 1 📌 0

3.5/🧵 Moreover, alignment methods don't resolve inconsistencies in disease prevalence across languages (EN 🇺🇸, ES 🇪🇸, FR 🇫🇷, ZH 🇨🇳). And tuning on English usually only affects English-prompt outputs.

27.11.2024 15:12 👍 1 🔁 1 💬 1 📌 0