
Nikita Makarov

@nikitamakarov

LLMs & Digital Twins for Cancer | PhD student at Roche pRED & Helmholtz Munich | Opinions are my own

557 Followers · 719 Following · 27 Posts · Joined 14.11.2024

Latest posts by Nikita Makarov @nikitamakarov

Large language models forecast patient health trajectories enabling digital twins - npj Digital Medicine

Paper here: www.nature.com/articles/s41...

“Large Language Models forecast Patient Health Trajectories enabling Digital Twins”

07.10.2025 07:39 👍 0 🔁 0 💬 0 📌 0
Post image

Overall, DT-GPT shows that LLMs have the potential to become human digital twins. We hope that, in the future, LLM-based digital twins will revolutionize the way we run clinical trials & patient care (10/10).

07.10.2025 07:39 👍 0 🔁 0 💬 1 📌 0

Check out the paper and appendix for many more results, including explorations of zero-shot forecasting, various input parameters, latent clinical knowledge, and technical details (9/10)

07.10.2025 07:39 👍 0 🔁 0 💬 1 📌 0
Post image

In zero-shot forecasting, DT-GPT outperformed a fully trained model on 13 variables. These variables were typically biologically linked to the target variables used during training (8/10)

07.10.2025 07:38 👍 0 🔁 0 💬 1 📌 0
Post image

We show that key variables (e.g. therapy, ECOG) can drive differences in both predictions and real data. DT-GPT can even offer preliminary explainability & perform zero-shot forecasting on variables that it did not see during training. (7/10)

07.10.2025 07:38 👍 0 🔁 0 💬 1 📌 0
Post image

DT-GPT is robust: it achieves competitive performance after ~5,000 training patients, and it handles a 20% increase in missingness and up to 25 misspellings per sample without significant performance degradation (6/10)
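The misspelling stress test above can be sketched roughly as follows. This is an assumed perturbation scheme (random adjacent-character swaps), not the authors' exact protocol:

```python
import random

def inject_misspellings(text, n, seed=0):
    """Perturb a text-encoded clinical sample by swapping n random
    pairs of adjacent characters (one simple misspelling model)."""
    rng = random.Random(seed)
    chars = list(text)
    for _ in range(n):
        i = rng.randrange(len(chars) - 1)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

# Hypothetical text-encoded patient record with 5 injected swaps;
# length and character multiset are preserved, only order is perturbed.
sample = "hemoglobin 12.4 g/dL, ECOG 1"
print(inject_misspellings(sample, 5))
```

The perturbed samples would then be fed through the trained model to check how much the forecast error degrades.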

07.10.2025 07:38 👍 0 🔁 0 💬 1 📌 0
Post image

Taking a step back, we see that DT-GPT also preserves the overall distribution of the outputs better than the other baselines, as quantified by the KS distance (5/10)
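For reference, the two-sample Kolmogorov-Smirnov statistic mentioned above is the maximum gap between two empirical CDFs. A minimal pure-Python sketch (in practice one would use a library routine such as `scipy.stats.ks_2samp`):

```python
def ks_distance(a, b):
    """Two-sample KS statistic: max absolute gap between the
    empirical CDFs of samples a and b (0 = identical, 1 = disjoint)."""
    a, b = sorted(a), sorted(b)

    def ecdf(xs, x):
        # Fraction of sample xs at or below x.
        return sum(1 for v in xs if v <= x) / len(xs)

    points = sorted(set(a) | set(b))
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in points)

print(ks_distance([1, 2, 3, 4], [1, 2, 3, 4]))  # 0.0 for identical samples
print(ks_distance([1, 2], [10, 20]))            # 1.0 for disjoint samples
```

A lower KS distance between predicted and observed lab-value distributions indicates better distributional fidelity.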

07.10.2025 07:37 👍 0 🔁 0 💬 1 📌 0
Post image

Digging deeper, DT-GPT generally outperforms the second-best model longitudinally. In many cases, high-error predictions occur because our forecasts are based on aggregations of multiple trajectories, even if some individual trajectories are closer to the ground truth (4/10)
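The aggregation step described above can be sketched as a per-timestep median over multiple sampled trajectories. This is an illustrative assumption about the aggregation function, not the paper's exact implementation:

```python
from statistics import median

def aggregate_trajectories(samples):
    """Collapse multiple sampled trajectories for one variable into a
    single point forecast via the per-timestep median."""
    return [median(vals) for vals in zip(*samples)]

# Three hypothetical sampled trajectories of one lab value over 4 visits.
samples = [
    [10.0, 10.5, 11.0, 11.5],
    [ 9.8, 10.2, 10.9, 11.2],
    [10.4, 10.9, 11.3, 12.0],
]
print(aggregate_trajectories(samples))  # [10.0, 10.5, 11.0, 11.5]
```

This is also why an individual sampled trajectory can sit closer to the ground truth than the aggregated forecast.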

07.10.2025 07:37 👍 0 🔁 0 💬 1 📌 0
Post image

Our method, DT-GPT, outperforms the SOTA baselines in most cases or otherwise achieves very competitive performance. Here you see the mean absolute error (MAE) across 12 variables in 3 different indications (3/10)
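The MAE metric used above is standard; a minimal sketch for one forecast variable (toy values, not from the paper):

```python
def mae(y_true, y_pred):
    """Mean absolute error over one clinical variable's forecasts."""
    assert len(y_true) == len(y_pred)
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical hemoglobin forecasts in g/dL.
print(round(mae([13.1, 12.4, 11.8], [12.9, 12.0, 12.2]), 3))  # 0.333
```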

07.10.2025 07:37 👍 0 🔁 0 💬 1 📌 0
Post image

We fine-tune biomedical LLMs on patient clinical data, exploring the method on both a long-term lung cancer dataset and a short-term ICU dataset. A few adjustments are required for full performance, especially trajectory aggregation & instruction masking (2/10)
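Instruction masking, as mentioned above, typically means computing the loss only on the forecast tokens while the instruction/history prompt is ignored. A minimal sketch assuming the common `-100` ignore-index convention (function and token ids here are hypothetical, not the authors' code):

```python
IGNORE_INDEX = -100  # conventionally skipped by cross-entropy loss

def mask_instruction(prompt_ids, target_ids):
    """Build an input/label pair where only the forecast (target) tokens
    contribute to the fine-tuning loss; prompt tokens are masked out."""
    input_ids = list(prompt_ids) + list(target_ids)
    labels = [IGNORE_INDEX] * len(prompt_ids) + list(target_ids)
    return input_ids, labels

# Hypothetical token ids: patient-history prompt then forecast tokens.
inp, lab = mask_instruction([101, 7, 42], [9, 13])
print(inp)  # [101, 7, 42, 9, 13]
print(lab)  # [-100, -100, -100, 9, 13]
```

This keeps the model from spending capacity on reproducing the prompt and focuses training signal on the trajectory forecast itself.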

07.10.2025 07:37 👍 0 🔁 0 💬 1 📌 0
Post image

DT-GPT: showing that LLMs can forecast patient trajectories (1/10)

Now in npj Digital Medicine www.nature.com/articles/s41...
Also in Doctor Penguin!

Big thanks to Maria Bordukova, @raulrod.bsky.social Papichaya Quengdaeng Daniel Garger @fschmich.bsky.social Michael Menden Helmholtz Munich Roche

07.10.2025 07:36 👍 2 🔁 1 💬 2 📌 0
Large language models forecast patient health trajectories enabling digital twins - npj Digital Medicine

A new model, DT-GPT, uses LLMs to forecast patient health trajectories, enabling "digital twins." By processing raw EHR data, it outperformed state-of-the-art methods in cancer, ICU, and Alzheimer's cohorts and can even forecast untrained variables.
#MedSky #MedAI #MLSky

02.10.2025 14:47 👍 5 🔁 3 💬 0 📌 0
Advancing Responsible Healthcare AI with Longitudinal EHR Datasets Current evaluations of AI models in healthcare rely on limited datasets like MIMIC, lacking complete patient trajectories. New benchmark datasets offer an alternative.

[1/4] 🎉 We're thrilled to announce the general release of three de-identified, longitudinal EHR datasets from Stanford Medicine—now freely available for non-commercial research use worldwide! 🚀
Learn more on our HAI blog:
hai.stanford.edu/news/advanci...

13.02.2025 01:38 👍 7 🔁 3 💬 1 📌 1

Want to push the limits of LLMs in drug development?

🚀Apply now for our 2 summer internships for 2025:

1️⃣ Multimodal - www.linkedin.com/jobs/view/40...
2️⃣ Preclinical - www.linkedin.com/jobs/view/40...

DM me if you have any questions or know anybody who would be interested in this.

16.12.2024 12:19 👍 1 🔁 1 💬 0 📌 0
Medical AI: Join the conversation

For my fellow researchers in the AI, medical, and healthcare domains: here is the Medical AI starter pack if you are new here
go.bsky.app/PddA2uy

27.11.2024 09:13 👍 15 🔁 4 💬 4 📌 0

Thanks!

27.11.2024 09:21 👍 0 🔁 0 💬 0 📌 0

This is great! Would it be possible to add me to this? 🙏

27.11.2024 09:18 👍 0 🔁 0 💬 1 📌 0

I created a starter pack for Health AI and Informatics. Mix of folks (reporters and researchers) that I think you should follow.

I've got room to include more, so please tag anyone you think I should add! 🧪🩺 🤖 🛟

25.10.2024 15:06 👍 24 🔁 5 💬 7 📌 0

Thank you!

22.11.2024 13:21 👍 1 🔁 0 💬 0 📌 0

This is fantastic! Would it be possible to add me in as well?

22.11.2024 07:05 👍 1 🔁 0 💬 1 📌 0

Thanks!!

20.11.2024 10:38 👍 0 🔁 0 💬 0 📌 0
Large Language Models forecast Patient Health Trajectories enabling Digital Twins Background Generative artificial intelligence (AI) accelerates the development of digital twins, which enable virtual representations of real patients to explore, predict and simulate patient health t...

Pre-print here: medrxiv.org/content/10.1...

“Large Language Models forecast Patient Health Trajectories enabling Digital Twins”

Reposted from X to have some content here :)

20.11.2024 10:07 👍 1 🔁 0 💬 0 📌 0

Overall, DT-GPT shows that LLMs have the potential to become human digital twins. We hope that, in the future, LLM-based digital twins will revolutionize the way we run clinical trials & patient care (8/8).

20.11.2024 10:07 👍 3 🔁 0 💬 1 📌 0
Post image

In zero-shot forecasting, DT-GPT outperformed a fully trained model on 13 variables. These variables were typically biologically linked to the target variables used during training (7/8)

20.11.2024 10:07 👍 1 🔁 0 💬 1 📌 0
Post image

DT-GPT can offer preliminary explainability & perform zero-shot forecasting on variables that it did not see during training. We show that key variables (e.g. therapy, ECOG) can drive differences in both predictions and real data (6/8)

20.11.2024 10:06 👍 2 🔁 0 💬 1 📌 0
Post image

DT-GPT is robust: it achieves competitive performance after ~5,000 training patients, and it handles a 20% increase in missingness and up to 25 misspellings per sample without significant performance degradation (5/8)

20.11.2024 10:06 👍 2 🔁 0 💬 1 📌 0
Post image

Digging deeper, DT-GPT generally outperforms the second-best model longitudinally. In many cases, high-error predictions occur because our forecasts are based on aggregations of multiple trajectories, even if some individual trajectories are closer to the ground truth (4/8)

20.11.2024 10:06 👍 2 🔁 0 💬 1 📌 0
Post image

Our method, DT-GPT, outperforms the SOTA baselines in most cases or otherwise achieves very competitive performance. Here you see the mean absolute error (MAE) across 9 variables (3/8)

20.11.2024 10:06 👍 1 🔁 1 💬 1 📌 0
Post image

We fine-tune biomedical LLMs on patient clinical data, exploring the method on both a long-term lung cancer dataset and a short-term ICU dataset. A few adjustments are required for full performance, especially trajectory aggregation & instruction masking (2/8)

20.11.2024 10:05 👍 3 🔁 1 💬 1 📌 0
Post image

Introducing DT-GPT: showing that LLMs can forecast patient trajectories (1/8)

Pre-print here 👉 medrxiv.org/content/10.1...

Big thanks to Maria Bordukova, @raulrod.bsky.social, Fabian Schmich, Michael Menden, UniMelb, HelmholtzMunich, Roche

20.11.2024 10:04 👍 13 🔁 0 💬 1 📌 1