What does the 2025-26 Federal Budget mean for Australia's investment in AI? | Ai Health Alliance
What does the 2025-26 Federal Budget mean for Australia's investment in healthcare AI?
aihealthalliance.org/2025/03/28/w...
28.03.2025 01:18
Trump hits NIH with “devastating” freezes on meetings, travel, communications, and hiring
US NIH grant review panels have been suspended, and a freeze imposed on travel, communication with the public, and hiring …
www.science.org/content/arti...
23.01.2025 02:05
The Challenges of Establishing Assurance Labs for Health Artificial Intelligence (AI) - Journal of Medical Systems
There is a growing US push for clinical AI safety to be certified by academic assurance labs rather than the FDA.
But there are many challenges: conflicts of interest via industry funding to universities, scalability, and suitability for post-market monitoring.
link.springer.com/article/10.1...
09.01.2025 05:35
TRIPOD-LLM is out! Check out our consensus guidelines for reporting #LLM research in biomedicine. TRIPOD-LLM is intended to be a living guideline to keep up with the rapid advances in LLMs. Kudos to lead author
Dr. Jack Gallifant
08.01.2025 20:13
It's remarkably easy to inject new medical misinformation into LLMs
Replacing just 0.001% of the training data with misinformation makes the model less accurate.
“While the study doesn't identify a lower bound, it does show that by the time misinformation accounts for 0.001 percent of the training data, the resulting LLM is compromised.”
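To get a sense of scale, here is a back-of-the-envelope sketch of what 0.001 percent means in practice. The corpus size and average document length below are illustrative assumptions, not figures from the paper:

```python
# How much text is 0.001% of a training corpus?
# Both constants below are assumptions chosen for illustration.
corpus_tokens = 1_000_000_000_000      # assume a 1-trillion-token training corpus
poison_fraction = 0.001 / 100          # 0.001 percent, as in the quoted finding

poison_tokens = int(corpus_tokens * poison_fraction)
print(poison_tokens)                   # → 10000000 (ten million tokens)

avg_doc_tokens = 500                   # assume ~500 tokens per web document
print(poison_tokens // avg_doc_tokens) # → 20000 documents
```

Even under these rough assumptions, a few tens of thousands of planted documents in a trillion-token corpus would be enough to cross the threshold the study reports.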
09.01.2025 00:54
[New digital scribe paper] Expert evaluation of large language models for clinical dialogue summarization
www.nature.com/articles/s41...
In this study with @dafraile.bsky.social, ChatGPT's ability to summarise primary care consultations was impressive but not yet at human skill level.
09.01.2025 00:28
The government can't ensure artificial intelligence is safe. This man says he can.
Brian Anderson is ready to shape the future of AI in health care – if Donald Trump will let him.
Who should certify that health AI is safe? There is a push in the US for it not to be the FDA but instead the industry that manufactures the technology.
Do we have good examples of high-risk technology applications where there has been effective self-regulation?
www.politico.com/news/2025/01...
02.01.2025 23:21
How to get a PhD in 20 Tweets (Part 2)
Happy Holidays!
23.12.2024 23:25
How to get a PhD in 20 Tweets (Part 1)
Source: blogs.bmj.com/bmj/2012/02/...
23.12.2024 23:23
"The important thing is that paper makes it very clear that nobody should ever take LLMs at their word. They can easily tell you one thing and (especially if hooked up as agents) do another – possibly quite contrary to what they have alleged they are doing." - Gary Marcus, from the linked substack.
12.12.2024 23:01
Always a hostage to fortune when making such predictions! But we now have digital scribes and I reckon we will be close to this world by 2030, if not there. I believe that everything described is now technically possible, except maybe for the curator agents which are a few years away.
12.12.2024 09:43
And for fun, here is one from the vault circa 2004: "Four rules for the reinvention of health care"
www.bmj.com/content/328/...
12.12.2024 06:45
Indeed. One research challenge is to take the massive data sets potentially generated in a smart environment and find ways to make them clinically useful. “Old-fashioned” notes maybe said too little, but smart environments will likely say too much. Solvable, but currently unsolved.
10.12.2024 21:51
The digital scribe - npj Digital Medicine
The 4 stages of digital scribes
1. Human led documentation
2. Mixed-initiative documentation
3. Computer-led documentation
4. Intelligent clinical environment
How long until we work in smart, sensor dense, clinical spaces where documentation disappears as a human task?
www.nature.com/articles/s41...
10.12.2024 00:05
IOS Press Ebooks - Assessing Technology Success and Failure Using Information Value Chain Theory
Sometimes digital health evaluations focus on hard outcomes when process benefits are more likely. It is hard to demonstrate morbidity and mortality changes due to an EHR because so many other things also need to go right, e.g. changes in human decisions and processes.
ebooks.iospress.nl/volumearticl...
05.12.2024 00:18
Interesting discussions on today's CIEHF webinar launching the new human-centred healthcare AI guidance. Questions around relying on AI vs monitoring the outputs critically. And what users need to know - and who's going to support them. ergonomics.org.uk/resource/int...
03.12.2024 18:14
I suspect the causes are multiple. Yes, it could be journal acceptance practices leading to a distortion in the pool of published values; it could also be researchers "p-hacking", i.e. looking for analyses which produce 'favourable' outcomes. It could even be how common tools calculate AUCs!
02.12.2024 03:49
Histogram of AUC mean values
New research from @aidybarnett.bsky.social shows published AUC values for some clinical prediction models are over-inflated, with excesses just above 0.7, 0.8 and 0.9 and shortfalls just below these thresholds, risking sub-optimal decisions
bmcmedicine.biomedcentral.com/articles/10....
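One simple way to probe for this kind of threshold excess is a "caliper"-style count of reported values just below versus just above each cut-off. A minimal sketch, using made-up AUC values rather than the paper's data:

```python
# Caliper check: compare counts of values just below vs just above a threshold.
# An excess just above (and deficit just below) suggests inflation toward the cut-off.
auc_values = [0.68, 0.71, 0.72, 0.705, 0.69, 0.74, 0.81, 0.79, 0.82, 0.805]  # illustrative only

def caliper_counts(values, threshold, width=0.025):
    """Count values in the narrow bands [threshold - width, threshold) and [threshold, threshold + width)."""
    below = sum(threshold - width <= v < threshold for v in values)
    above = sum(threshold <= v < threshold + width for v in values)
    return below, above

for t in (0.7, 0.8, 0.9):
    below, above = caliper_counts(auc_values, t)
    print(f"AUC near {t}: {below} just below, {above} just above")
```

In real use the band width, and a formal test of the below/above imbalance, would need care; this only illustrates the shape of the check.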
01.12.2024 23:46
Three models of conformance service: (A) universal conformance: all agents have access to the same global standard (Mx); (B) mediated conformance: adaptors provide an externally situated conformance service to interoperating agents; (C) localized conformance: autonomous adaptive agents internalize their conformance functions. Standards are mandated in (A), incompletely adhered to in (B), and potentially helpful but not necessary in (C).
“What would things look like in a zero standards world? … From the perspective of an autonomous and adaptive entity, we would see standards for what they are – a workaround when entities cannot adapt.”
academic.oup.com/jamia/articl...
29.11.2024 06:10
Today in Australia *all* of the responsibility for use of an AI scribe rests with the clinician. “Once you accept the scribe note it becomes your note.” Manufacturer, integrator, educator and accreditor are unburdened with responsibility. So the question is whether that is fair and reasonable.
28.11.2024 00:25
When the clinical use of AI leads to patient harms, medico-legal responsibility should not just fall on the shoulders of doctors but on all those who can manage or mitigate risk – including software developers. (Tracey Pickett, Avant) #aicare24
27.11.2024 06:05
Kicking things off with @ecoiera.bsky.social at today's health AI conference in Melbourne. Very useful scorecard and update on progress in AI for health in the last year. (Spoiler – it's a lot of activity!) #digitalhealth
26.11.2024 22:22
Covers of Science and Nature journals in the past 2 weeks denoting remarkable progress of life science with AI tools
A vertical takeoff of life science with #AI LLMs.
Publication of 10 new foundation models of proteins, DNA, RNA, methylation, cells, interactions, evolution, and design in the past couple of weeks!
Unprecedented progress, reviewed in the new Ground Truths
erictopol.substack.com/p/learning-t...
24.11.2024 18:12