Here are the ten papers that captured my attention in February. I hope the insights (and the ideas they spark) are useful to you. www.linkedin.com/pulse/intere...
We talk about AI safety. We rarely talk about AI status loss. The political one stalls more projects.
That's 1 of 95 Hard Truths About AI in Healthcare.
Just published the March 2026 update. A lot changed since June.
Read it. Tell me which ones are wrong.
hardtruth.carrd.co
Vienna bound for ECR.
Ready for a few packed days of ideas, familiar faces, new connections, and probably too much coffee.
Looking forward to catching up with partners, customers and friends. If we're already working together, or should be, let's try to find a slot.
Will I see you at ECR in Vienna this week?
As AI models commoditize, the gap between vendors in the healthcare industry won't be in the architecture. It'll be in the three seconds between a finding and the next click.
Is it funny cause it's true?
Matt Shumer just described what happened to his job in the last couple of months, and it's the most honest thing I've read about where this is actually heading.
If you've been dismissing AI based on your experience from 2023, this will change your mind.
shumer.dev/something-bi...
AI has reached a tipping point, moving from narrow tools toward scalable clinical infrastructure in defined use cases.
Multimodal and agentic AI can help enable more proactive, connected care, while keeping the human at the center.
Thanks TEDx New River for having me.
Excited to share this perspective today at TEDx New River.
If I can explain it to AI, I understand it.
If not, I keep going.
Confusion isn't failure.
It's incomplete thinking.
This is a love story. A drama. A science fiction novel that sometimes feels like fantasy. Or at least, that's how it feels when you've spent most of your professional life where artificial intelligence meets healthcare ... janbeger.substack.com/p/healthcare...
Stop asking whether AI can beat a radiologist on hard cases. Ask whether it can stop a radiologist from missing an obvious, catastrophic finding on a terrible night.
Biggest lesson AI taught me wasn't about tech. It was about communication.
Every bad prompt is a mirror. It shows you exactly where your thinking is fuzzy.
AI doesn't just make you smarter. It shows you where you're not.
Chaos. German style.
An AI habit more people should use: force disagreement. When a decision matters, I don't ask one AI. I set up a few. The yes voice. The no voice. The numbers brain. The context brain. One AI is good at echoing your thinking. A small group that's built to push back is how you get better answers.
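The panel habit described above can be sketched in a few lines of Python. This is a minimal illustration, not a real integration: the persona prompts are paraphrased from the post, and `ask_model` is a hypothetical stub standing in for whatever LLM API you actually use.

```python
# Four personas primed to disagree, so one agreeable assistant
# can't just echo your own thinking back at you.
PERSONAS = {
    "yes_voice": "Argue the strongest case FOR the proposal.",
    "no_voice": "Argue the strongest case AGAINST the proposal.",
    "numbers_brain": "Judge the proposal strictly on quantitative evidence.",
    "context_brain": "Judge the proposal against workflow and regulatory context.",
}

def ask_model(system_prompt: str, question: str) -> str:
    """Hypothetical stub: replace with a real call to your LLM provider."""
    return f"[{system_prompt}] answer to: {question}"

def panel(question: str) -> dict[str, str]:
    """Ask every persona the same question; disagreements show up side by side."""
    return {name: ask_model(prompt, question) for name, prompt in PERSONAS.items()}

if __name__ == "__main__":
    for name, answer in panel("Should we deploy the triage model hospital-wide?").items():
        print(f"{name}: {answer}")
```

The point of the design is that the pushback is structural, not requested: each persona's system prompt commits it to a different angle before it ever sees your question.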
The hardest part of AI transformation is not choosing the right tools. It is redesigning governance systems built for a slower, more static healthcare era.
I don't use AI to get answers. I use it to argue with my ideas. I'll toss out something half-baked and say, poke holes in this. What's weak here? I'm not trying to win the argument. I'm trying to see what survives.
When AI enters healthcare, nothing stays where it was. How work flows, how data matters, and how responsibility is carried all shift. Delays are not inefficiency. They are renegotiation.
At this stage, healthcare AI involves unavoidable tradeoffs.
So here's the uncomfortable question:
Which is worse, an alert that shouldn't have fired, or a case the system never flagged?
AI won't decide our future.
Our habits around it will.
Who experiments.
Who stays curious.
Who waits.
Most outcomes won't come from a single breakthrough.
They'll come from daily use, or daily avoidance.
In medicine, keeping a human in the loop is vital, but where does that loop begin and end? As AI becomes a second opinion, mastering it may be part of competent care. The future standard: empathy from humans, data from machines. Ignoring AI? That might become negligence.
You can't master a new paradigm while staring at a deadline. If you want your team to actually leverage AI, give them the breathing room to fail with it first.
What would happen if you gave your team one hour this week to explore AI with no deliverables attached?
AI systems can now flag people as "high risk" for diseases long before they feel any symptoms. How does that shift who we treat as "sick," and what does that do to someone's identity, stigma, and life choices before they're actually ill?
This study exposes the 'autopilot trap,' but also the way out. While passive reliance degrades skill, active collaboration sharpens it. Doctors remain in the driver's seat by treating AI as a coach rather than a crutch. The future isn't about being tethered by tech; it's about being amplified by it.
We must design AI that artificially re-introduces friction for learners, or we will inherit a generation of doctors who are merely liability sponges for algorithms.
If AI cheapens specialized medical knowledge (doctor's value), but elevates non-automatable human skills (nurse's value)... Will nurses soon earn more than doctors?
This older paper shows that matching human accuracy is not enough for AI in healthcare. AI systems face higher trust expectations, and their mistakes often trigger more concern than human errors. This is due to AI's lack of transparency, adaptability, and social presence.
AI-assisted tasks completed using Claude show an estimated 80% reduction in task time, which could potentially double US labor productivity growth over the next decade if widely adopted.