everything is Apache 2.0.
head to the @hf.co collection, pick a model, and try it out.
share feedback, and tell me your clinical/biomedical NER needs, your use cases will guide the roadmap.
huggingface.co/collections/...
coverage includes:
BC4CHEMD,
BC5CDR (chem + disease),
BC2GM,
JNLPBA,
BioNLP 2013 CG,
GELLUS,
FSU,
CLL,
Anatomy (AnatEM),
Linnaeus,
Species-800,
NCBI-Disease.
pick a size to match latency/accuracy needs and your deployment constraints.
🧵 (5/6)
what's in the release: 91 models across sizes (~60M → ~770M; Tiny → XLarge).
domain-adapted for clinical/biomedical text while keeping flexible zero-shot behavior.
seamless with GLiNER and the @hf.co ecosystem.
🧵 (4/6)
why zero-shot?
define labels at inference, no retraining.
go from "disease" to "gene mutation" to "device" by changing the label list.
perfect for shifting schemas across hospitals, projects,
and ontologies without new annotation cycles.
🧵 (3/6)
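To make the label-swapping concrete, here's a minimal sketch using the GLiNER Python package. The checkpoint id is a placeholder (swap in any model from the collection), and `extract_entities` is a helper name invented for this sketch; the model download is kept behind `main()` so the helper stands alone.

```python
# Hypothetical sketch of zero-shot NER label swapping with the GLiNER package.
# The checkpoint id below is a placeholder; substitute a model from the collection.

def extract_entities(model, text, labels, threshold=0.5):
    """Labels are plain strings chosen at inference time, no retraining."""
    return model.predict_entities(text, labels, threshold=threshold)

def main():
    from gliner import GLiNER  # pip install gliner
    model = GLiNER.from_pretrained("urchade/gliner_small-v2.1")  # placeholder checkpoint
    text = ("Imatinib inhibits BCR-ABL kinase and is used to treat "
            "chronic myeloid leukemia.")
    # Schema A: one set of labels...
    print(extract_entities(model, text, ["drug", "gene", "disease"]))
    # Schema B: ...a different schema, just by changing the strings.
    print(extract_entities(model, text, ["chemical", "gene mutation", "device"]))

if __name__ == "__main__":
    main()
```

Switching from one annotation schema to another is just a different list of strings; nothing is retrained or re-exported.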
performance snapshot:
across 91 base/fine-tuned pairs,
average F1 jumps from 0.519 → 0.809 (+0.290 absolute, ~56% relative).
consistent gains for chemicals, diseases, anatomy, genes/proteins, and oncology corpora.
🧵 (2/6)
Introducing 90+ open-source, state-of-the-art biomedical and clinical zero-shot NER models on @hf.co by OpenMed
Apache 2.0 licensed and ready to use
Built on GLiNER and covering 12+ biomedical datasets
🧵 (1/6)
it's being assembled all by hand?
welcome GPT-5-Codex
Unlocking Healthcare AI: I'm Releasing State-of-the-Art Medical Models for Free. Forever.
huggingface.co/blog/Maziyar...
Explore the models, build something amazing, and join the OpenMed community. Let's make healthcare smarter together.
Check out the blog post: huggingface.co/blog/Maziyar...
These 380+ models aren't just free: they're top-tier, matching or beating paid options. With sizes from 109M to 568M parameters, they're ready for real-world use.
Plus, they integrate seamlessly with Hugging Face and PyTorch.
Healthcare AI has been locked behind paywalls for too long. Costly licenses and limited access have slowed innovation. OpenMed changes that by making advanced models freely available to everyone. No more barriers, just progress!
🚀 Big news in healthcare AI! I'm thrilled to announce the launch of OpenMed on @hf.co, releasing 380+ state-of-the-art medical NER models for free under Apache 2.0.
And this is just the beginning! 🧵
huggingface.co/blog/Maziyar...
Yeah, not joking! Bye until you stop treating Devs as your customers instead of partners.
After 14 years, I'm canceling my Apple Developer membership. I've always believed Apple should pay developers to build apps, not charge them.
iPhone is useless without developers' work. Stop taking money from developers; they already ensure your overpriced devices sell!
RL in LLM training is like adding the right spices to a dish - it just makes everything better!
medium.com/%40ardyadipt...
RLHF is changing the game, making AI more human-like by learning from our feedback, but it's not all easy - getting good, unbiased feedback is tough.
www.lebigdata.fr/tout-savoir-...
Check out Reward-Robust RLHF - it's tackling reward hacking and making LLMs more reliable by focusing on both performance and stability.
medium.com/%40TheDataSc...
New DeepSeek-R1 method boosts LLM reasoning with a cool multi-stage training setup, making AI smarter at problem-solving.
medium.com/%40danushidk...
Now you can use DeepSeek R1 on Azure AI Foundry and GitHub, making top-notch AI tech more accessible to everyone.
Sam Altman admits OpenAI was wrong about open-source AI. Guess they're playing catch-up now!
www.businessinsider.com/sam-altman-o...
Oh, very good questions! I do two things:
- I use personas to get different responses
- I vary temperature and seed in combination
I also create 2-3 responses so I can rate them later and only keep the better one.
- huggingface.co/datasets/pro...
- huggingface.co/datasets/arg...
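The "2-3 responses, keep the better one" workflow above can be sketched like this. `generate` and `rate` are hypothetical caller-supplied stand-ins for an LLM call and a preference rater, not any real API:

```python
import random

def best_of_n(prompt, generate, rate, n=3, temperatures=(0.7, 0.9, 1.1)):
    """Sample n candidates with varied temperature and seed; keep the top-rated.

    generate(prompt, temperature=..., seed=...) -> str   (hypothetical LLM call)
    rate(text) -> comparable score                        (hypothetical rater)
    """
    candidates = []
    for i in range(n):
        # reproducible but distinct seed per candidate
        seed = random.Random(i).randrange(2**31)
        text = generate(prompt,
                        temperature=temperatures[i % len(temperatures)],
                        seed=seed)
        candidates.append(text)
    return max(candidates, key=rate)
```

In practice `generate` would also rotate personas in the system prompt, as described above, adding a second axis of diversity beyond temperature and seed.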
weird: a bunch of spammers in my mentions that are all obviously LLM replies
all from accounts less than 24 hours old, with only replies and no original posts, and oddly fixated on social issues
Building reasoning models is no easy feat!
🚀 DeepSeek-R1's journey highlights key challenges with PRM (Process Reward Models) and MCTS (Monte Carlo Tree Search). From annotation hurdles to scaling limits, the path to scalable AI reasoning is full of learnings.
#AI #ReinforcementLearning #rl
You don't have to though! Hide it in the ceiling!
Thank you! ☺️ love a strong dollar!
The next season of Selling Sunset is going to be interesting!
"control the text generation process itself by directly modifying the probability distribution? That's where logit processing comes into play."
Read more on @hf.co :
huggingface.co/blog/logits-...
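As a toy illustration of the idea in that quote (not the blog's actual code), here's a minimal logits processor in plain Python: it rescales by temperature and masks banned token ids before the softmax, which is the same hook point generation libraries expose. Function names are my own:

```python
import math

def process_logits(logits, banned_ids=(), temperature=1.0):
    """Minimal logits processor: rescale by temperature, then set banned
    token ids to -inf so they get zero probability after softmax."""
    out = [x / temperature for x in logits]
    for i in banned_ids:
        out[i] = float("-inf")
    return out

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # exp(-inf) == 0.0
    total = sum(exps)
    return [e / total for e in exps]
```

For example, `softmax(process_logits([2.0, 1.0, 0.0], banned_ids=[0]))` makes token 0 impossible while the remaining probabilities renormalize, and a temperature below 1.0 sharpens the distribution toward the top token.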