
Chen (Cherise) Chen

@cherise-chen

Lecturer in Computer Vision at University of Sheffield | Robust & Trustworthy AI for healthcare https://cherise215.github.io/

77
Followers
43
Following
5
Posts
22.11.2024
Joined

Latest posts by Chen (Cherise) Chen @cherise-chen

Work done by my PhD student: Sijie Li, co-supervised with Prof. Jungong Han. @shefcmi.bsky.social

04.08.2025 18:41 πŸ‘ 2 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0

New paper: SimMLM: A Simple Framework for Multi-modal Learning with #MissingModality, to be presented at @iccv.bsky.social 2025.
βœ… Tested on BraTS 2018, UPMC Food-101, avMNIST
βœ… Robust to missing inputs at test time
βœ… Better uncertainty & interpretability
πŸ“„ Preprint: arxiv.org/pdf/2507.19264
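The "robust to missing inputs at test time" claim can be illustrated with a minimal sketch: fuse per-modality embeddings while masking out whatever is absent. This is a toy illustration, not SimMLM's actual mechanism; the function and mask convention here are assumptions, see the preprint for the real method.

```python
import numpy as np

def fuse_available(embeddings, present):
    """Fuse per-modality embeddings, ignoring modalities absent at test time.

    embeddings: (num_modalities, dim) array of modality features.
    present:    boolean mask, True where the modality was observed.
    """
    mask = np.asarray(present, dtype=float)[:, None]   # (M, 1)
    # Average only over the modalities that are actually present.
    return (embeddings * mask).sum(axis=0) / mask.sum()

# Example: 3 modalities, the third one unobserved at test time.
emb = np.array([[1.0, 2.0],
                [3.0, 4.0],
                [9.0, 9.0]])
fused = fuse_available(emb, [True, True, False])
# fused == [2.0, 3.0] -- the missing third modality does not contribute
```

Because the divisor is the count of present modalities, the fused feature stays on the same scale no matter how many inputs are dropped.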

04.08.2025 18:41 πŸ‘ 3 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

In particular, we'd like to thank our speakers: Prof Laoise Mcnamara, Dr Dawn Walker, Dr Xiancheng Yu, Prof John Rasmussen, Dr @elleneb.bsky.social, Prof Julia Weinstein, Dr Neil Stewart, Prof Gwendolen Reilly, @proflemaitre.bsky.social, Dr @cherise-chen.bsky.social & Prof Robin Purshouse.

07.07.2025 11:27 πŸ‘ 1 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
GitHub - cherise215/LLM-ECG-Dual-Attention

Our approach makes AI-driven ECG analysis more reliable for real-world applications. Let's work on robust, interpretable AI for better human-AI collaboration in healthcare together!
Code: github.com/cherise215/L...
Paper: doi.org/10.1109/tbda...

All open access!

13.02.2025 13:01 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

πŸ”Ή Key Insights:
βœ… Robust Dual Attention Design with Explainability – Captures cross-lead interactions and local temporal dynamics, improving accuracy and clinical relevance!
βœ… LLM-Informed Pretraining – Uses an LLM pre-trained on ECG reports to align text and waveform data, improving generalizability
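A dual attention over ECG data can be sketched as two pooling paths: one attending across leads, one across time steps. This toy NumPy version is an assumption-laden illustration of the general idea, not the paper's architecture; the scoring vectors `w_lead` and `w_time` stand in for learned parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dual_attention(ecg, w_lead, w_time):
    """Toy dual attention over a (leads, time) ECG signal.

    ecg:    (L, T) array, one row per ECG lead.
    w_lead: (T,) scoring vector for cross-lead attention.
    w_time: (L,) scoring vector for temporal attention.
    """
    # Cross-lead path: weight each lead by how its waveform scores,
    # then pool leads into one (T,) summary waveform.
    lead_scores = softmax(ecg @ w_lead)      # (L,)
    lead_summary = lead_scores @ ecg         # (T,)
    # Temporal path: weight each time step across all leads,
    # then pool time into one (L,) summary per lead.
    time_scores = softmax(w_time @ ecg)      # (T,)
    time_summary = ecg @ time_scores         # (L,)
    return lead_summary, time_summary
```

The attention weights themselves (`lead_scores`, `time_scores`) are what make such a design inspectable: they indicate which leads and which time windows drove the prediction.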

13.02.2025 13:01 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Can AI predict future heart failure risk using low-cost ECGs for global health? πŸ«€The challenge is: public datasets like UK Biobank have very few recorded HF events, making it hard for AI to learn meaningful patterns. We are excited to introduce our recently published multi-modal learning approach.

13.02.2025 13:01 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

MCML Director Prof. Daniel RΓΌckert has been awarded the Gottfried Wilhelm Leibniz Prize 2025!

The Prize is endowed with 2.5 million euros by the Deutsche Forschungsgemeinschaft (DFG) - German Research Foundation.

Congratulations!

@danielrueckert.bsky.social
πŸ“Έ Juli Eberle

11.12.2024 15:00 πŸ‘ 26 πŸ” 4 πŸ’¬ 0 πŸ“Œ 0

πŸš€ Exciting news! #MIDL2025 paper call! Submit work on AI for diagnosis and prognosis, multimodal learning & more! πŸ©ΊπŸ“Š
πŸ‘‰ Registration Deadline: Jan 17, 2025
πŸ‘‰ Full Paper Submission Deadline: Jan 24, 2025
πŸ‘‰ Short Paper Submission Deadline: April 11, 2025
🌐 Details: https://2025.midl.io/call-for-papers

26.11.2024 09:29 πŸ‘ 8 πŸ” 1 πŸ’¬ 0 πŸ“Œ 2