Stephanie Hyland

@hylandsl

machine learning for health at microsoft research, based in cambridge UK 🌻 she/her

2,308 Followers · 888 Following · 80 Posts · Joined 16.11.2024

Latest posts by Stephanie Hyland @hylandsl

the greatest joy of being a computational scientist is having the computer work for you while you do something else

15.01.2026 09:29 πŸ‘ 13 πŸ” 1 πŸ’¬ 0 πŸ“Œ 1

β€œInterpretability plays a special role in machine learning because instead of focusing on making the AI smarter, we focus on improving human insight. I think this is the most important category of interpretability research, and we do not do enough of it.”

😎😎😎

12.12.2025 08:04 πŸ‘ 4 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
A poster titled “a circular argument” which has been cut into a circular shape

It’s a CIRCULAR poster! #eurips presenters innovating in poster design / fine motor skills

04.12.2025 16:34 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
a hand-written poster on a poster board, featuring a hand-drawn QR code (the code does not work)

remember to always include a QR code on your poster. spotted at #eurips

04.12.2025 16:18 πŸ‘ 5 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

What coding with an LLM feels like sometimes.

03.12.2025 09:29 πŸ‘ 267 πŸ” 64 πŸ’¬ 10 πŸ“Œ 6

when I ask candidates whether they've worked with "real medical data" this is the kind of thing that I mean

23.11.2025 17:05 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

found a file from PhD days with the FORTY-EIGHT ways "ACE inhibitor" was encoded in the EHR system we were working with

23.11.2025 17:04 πŸ‘ 5 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

finally got around to booking my travel for #EurIPS2025! Looking forward to connecting with the European ML scene in Copenhagen

16.11.2025 17:17 πŸ‘ 4 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

uv is so good

21.09.2025 22:25 πŸ‘ 6 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Some papers really have a good intro

10.09.2025 21:26 πŸ‘ 16 πŸ” 1 πŸ’¬ 4 πŸ“Œ 0

The more rigorous peer review happens in conversations and reading groups after the paper is out, with reputational costs for publishing bad work

17.08.2025 16:12 πŸ‘ 49 πŸ” 5 πŸ’¬ 2 πŸ“Œ 3
Google's Gemini AI tells a Redditor it's 'cautiously optimistic' about fixing a coding bug, fails repeatedly, calls itself an embarrassment to 'all possible and impossible universes' before repeating 'I am a disgrace' 86 times in succession

I'll admit, I was skeptical when they said Gemini was just like a bunch of PhDs. But I gotta admit they nailed it.

17.08.2025 13:51 πŸ‘ 7255 πŸ” 1657 πŸ’¬ 71 πŸ“Œ 161

what is the purpose of VQA datasets where text-only models do better than random?

14.08.2025 14:08 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Zotero screenshot showing four different papers with titles beginning with "MedAgent"

lads can we stop

13.08.2025 13:34 πŸ‘ 4 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
diagram from Anthropic paper with an icon & label that says “subtract evil vector”

quick diagram of Bluesky’s architecture and why it’s nicer here

02.08.2025 23:19 πŸ‘ 72 πŸ” 5 πŸ’¬ 4 πŸ“Œ 1

Emojis and massive try/except blocks. GitHub Copilot (at least with Claude Sonnet 4) is very concerned about error handling.

03.08.2025 06:46 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

if openreview were a lot fancier you could dynamically reallocate/cancel remaining reviews once a paper meets that expected minimum.

ideally you would mark these remaining reviews as optional rather than fully cancelled, in case that reviewer has already done work

30.07.2025 16:26 πŸ‘ 3 πŸ” 0 πŸ’¬ 0 πŸ“Œ 1

it's frustrating how inefficient review assignments are: we target a minimum number of completed reviews per paper but in accounting for inevitable no-shows, some people end up doing technically unnecessary (if still beneficial) reviews

30.07.2025 16:23 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

How many AI researchers fold their own laundry?

29.07.2025 06:29 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I am in the UK so feel free to discard, but I recently noticed Discord asking for age verification for some channels:

25.07.2025 07:02 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
microsoft/maira-2-sae · Hugging Face

ALSO we have released the SAEs we trained, and the automated interp for all(!!)* features:
huggingface.co/microsoft/ma...

*all features for a subset of SAEs, we didn't run the full auto-interp pipeline on the widest SAE

18.07.2025 09:42 πŸ‘ 4 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

We also found that the majority of the SAE features remained "uninterpretable", indicating room for improvement in automated interpretability (we focused primarily on textual features!), but perhaps also reason to question the SAE training and modelling assumptions. More work to be done here ✌️

18.07.2025 09:40 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

... and in some cases we were able to steer MAIRA-2's generations, selectively introducing or removing concepts from its generated report.

But steering worked inconsistently! Sometimes it did nothing, or introduced off-target effects. We still don't fully understand when it will work.

18.07.2025 09:35 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
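A minimal numpy sketch of the kind of steering described above (not the team's actual implementation): add a scaled, unit-normalised SAE decoder direction to a batch of activations. All sizes, names, and the random decoder here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d_model, d_sae = 16, 64  # illustrative sizes, not MAIRA-2's real dimensions

# A randomly initialised SAE decoder; each row is one feature's direction.
W_dec = rng.normal(scale=0.1, size=(d_sae, d_model))

def steer(activations, feature_idx, alpha):
    """Add alpha times one (unit-normalised) decoder direction to activations."""
    direction = W_dec[feature_idx]
    direction = direction / np.linalg.norm(direction)
    return activations + alpha * direction

x = rng.normal(size=(4, d_model))             # a batch of model activations
x_up = steer(x, feature_idx=3, alpha=5.0)     # push the concept in
x_down = steer(x, feature_idx=3, alpha=-5.0)  # push it out
```

A positive alpha nudges generations toward the concept, a negative one away from it; as the post notes, in practice the effect can be inconsistent or off-target.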

We found interpretable and radiology-relevant concepts in MAIRA-2, like:
- "Aortic tortuosity or calcification"
- "Placement and position of PICC lines"
- "Presence of 'shortness of breath' in indication"
- "Describing findings without comparison to prior images"
- "Use of 'possible' or 'possibly'"

18.07.2025 09:34 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

We performed the full pipeline of SAE training, automated interpretation with LLMs, steering, and automated steering evaluation.

18.07.2025 09:32 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
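For readers unfamiliar with the first stage of that pipeline, here is a toy numpy sketch of a standard sparse autoencoder objective (ReLU encoder, L1 sparsity penalty on the features); the dimensions, initialisation, and coefficient are hypothetical, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 16, 64  # illustrative: SAE width exceeds activation width

# Randomly initialised SAE parameters: overcomplete encoder and decoder.
W_enc = rng.normal(scale=0.1, size=(d_model, d_sae))
b_enc = np.zeros(d_sae)
W_dec = rng.normal(scale=0.1, size=(d_sae, d_model))
b_dec = np.zeros(d_model)

def sae_forward(x):
    """Encode to sparse (ReLU) features, then reconstruct the input."""
    h = np.maximum(0.0, x @ W_enc + b_enc)
    x_hat = h @ W_dec + b_dec
    return h, x_hat

def sae_loss(x, l1_coeff=1e-3):
    """Mean squared reconstruction error plus an L1 sparsity penalty."""
    h, x_hat = sae_forward(x)
    recon = np.mean((x - x_hat) ** 2)
    sparsity = l1_coeff * np.abs(h).sum(axis=-1).mean()
    return recon + sparsity

x = rng.normal(size=(8, d_model))  # stand-in for captured model activations
loss = sae_loss(x)
```

Training minimises this loss over captured activations; the later pipeline stages (automated interpretation, steering, evaluation) then operate on the learned features.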
Insights into a radiology-specialised multimodal large language model with sparse autoencoders

New work from my team! arxiv.org/abs/2507.12950
Intersecting mechanistic interpretability and health AI 😎

We trained and interpreted sparse autoencoders on MAIRA-2, our radiology MLLM. We found a range of human-interpretable radiology reporting concepts, but also many uninterpretable SAE features.

18.07.2025 09:30 πŸ‘ 11 πŸ” 4 πŸ’¬ 1 πŸ“Œ 0

Mexico is an *official* NeurIPS event, it’s an additional location for the conference and is different to the endorsement of EurIPS.

17.07.2025 19:32 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

It’s an endorsed event but is not actually officially NeurIPS! Maybe if this experiment works well there will be more distributed (official) NeurIPS locations in future.

17.07.2025 14:26 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

We're excited to announce a second physical location for NeurIPS 2025, in Mexico City, which we hope will address concerns around skyrocketing attendance and the difficulties with travel visas that some attendees have experienced in previous years.

Read more in our blog:
blog.neurips.cc/2025/07/16/n...

16.07.2025 22:05 πŸ‘ 46 πŸ” 21 πŸ’¬ 1 πŸ“Œ 2

During the last couple of years, we have read a lot of papers on explainability and often felt that something was fundamentally missingπŸ€”

This led us to write a position paper (accepted at #ICML2025) that attempts to identify the problem and to propose a solution.

arxiv.org/abs/2402.02870
πŸ‘‡πŸ§΅

10.07.2025 17:58 πŸ‘ 12 πŸ” 5 πŸ’¬ 1 πŸ“Œ 1