And many more additions, improvements, and fixes that wouldn't have been possible without the community's support and contributions!
github.com/huggingface/...
- NVIDIA Blackwell support, ready for next-gen GPUs such as B200, GB200, or the RTX 50 series.
- Add bidirectional attention support for 3, enabling newer embedding models such as Voyage AI by MongoDB.
- Add support for Meta Llama 2 and 3 architectures with Flash Attention, enabling embedding models such as NVIDIA Llama Embed Nemotron.
- Add support for Microsoft DeBERTa V2 and V3, for both feature-extraction (and sentence-similarity) and text-classification, enabling models such as Meta Llama Prompt Guard.
More embedding models and an even more reliable inference engine are what you get with @hf.co Text Embeddings Inference v1.9.0
More in the thread!
github.com/alvarobartt/...
`hf-mem` is all you need to estimate the VRAM required for inference of any model on @huggingface, based on its Safetensors metadata.
- Written in Python
- Lightweight, only depends on `httpx`
- Runs w/ `uvx` as `uvx hf-mem ...`
- Works with any Safetensors repository
- Output inspired by usgraphics.com
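For the curious, the trick is that a Safetensors file starts with an 8-byte little-endian header length followed by a JSON header describing every tensor's dtype and shape, so the total weight size can be computed without downloading the weights. Below is a minimal sketch of that parsing on a synthetic header; the `estimate_bytes` helper and the demo tensor are illustrative, not `hf-mem`'s actual code:

```python
import json
import struct
from math import prod

# Bytes per element for a subset of Safetensors dtypes
DTYPE_BYTES = {"F32": 4, "F16": 2, "BF16": 2, "F8_E4M3": 1, "I64": 8}

def estimate_bytes(raw: bytes) -> int:
    """Estimate weight memory from the start of a Safetensors file.

    The format begins with an 8-byte little-endian header length,
    followed by a JSON header mapping tensor names to dtype/shape/offsets.
    """
    (header_len,) = struct.unpack("<Q", raw[:8])
    header = json.loads(raw[8 : 8 + header_len])
    total = 0
    for name, info in header.items():
        if name == "__metadata__":  # optional metadata entry, no tensor data
            continue
        total += prod(info["shape"]) * DTYPE_BYTES[info["dtype"]]
    return total

# Build a tiny synthetic header to demo the parsing (no network involved)
demo_header = json.dumps({
    "wte.weight": {"dtype": "BF16", "shape": [1024, 768], "data_offsets": [0, 1572864]},
    "__metadata__": {"format": "pt"},
}).encode()
demo = struct.pack("<Q", len(demo_header)) + demo_header
print(estimate_bytes(demo))  # 1024 * 768 * 2 bytes
```

In practice the header can be fetched with a single HTTP range request, which is why the estimate is so cheap.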
I built something with #Zig!
`tokeni.zig` is a std-only implementation of the Byte Pair Encoding (BPE) algorithm in Zig for tokenizing text sequences, the algorithm used by OpenAI (among many others) to tokenize text when pretraining their large language models!
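The core of BPE training is simple: repeatedly find the most frequent adjacent pair of tokens and merge it into a new token. A toy sketch of that loop (in Python rather than Zig, and not `tokeni.zig`'s actual code; the sample string is made up):

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Count adjacent pairs and return the most frequent one."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get)

def merge(tokens, pair, new_token):
    """Replace every occurrence of `pair` with `new_token`."""
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            out.append(new_token)
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

# Start from raw bytes and learn a handful of merges
tokens = list("aaabdaaabac".encode())
for _ in range(3):
    pair = most_frequent_pair(tokens)
    tokens = merge(tokens, pair, pair)  # reuse the pair tuple as the new token
print(tokens)
```

Real tokenizers assign integer ids to merged tokens and store the merge ranks so encoding can replay them in order; this sketch keeps nested tuples just to show the loop.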
github.com/alvarobartt/...
https://alvarobartt.me/how-to-read-and-parse-json-with-zig-0-13
For anyone interested in Zig I wrote a small post titled "How to read and parse JSON with Zig 0.13" that explains how to read JSON from a file with keys with different value types and how to access those values.
love this quote "working smarter helps, but the real superpower is resting smarter"
a highly recommended read!
Right, the point is that in Rust you end up "refactoring" a lot (at least I do), but it seems easier to handle, whilst in Zig I don't feel it's as easy; not especially complex either, just more cumbersome
Here's a simple script that calculates the required VRAM for serving DeepSeek-R1 from the @huggingface Hub Safetensors metadata!
P.S. The result of the script above is: "model_id='deepseek-ai/DeepSeek-R1' requires memory=756.716GB"
hmm refactoring in zig is not as easy as it is in rust, even though it seems fairly common too, right? or is it just me?
stuff that matters takes time
Last moments of closed-source AI:
Hugging Face is openly reproducing the pipeline of DeepSeek-R1. Open data, open training, open models, open collaboration.
Let's go!
github.com/huggingface/...
Check out the DeepSeek-R1 collection on the Hugging Face Hub, with not just DeepSeek-R1 and DeepSeek-R1-Zero, but also smaller models fine-tuned from their distilled reasoning patterns!
huggingface.co/collections/...
DeepSeek is not on the @hf.co Hub to take part, they are there to take over!
Amazing stuff from the DeepSeek team, ICYMI they recently released some reasoning models (DeepSeek-R1 and DeepSeek-R1-Zero), fully open-source; their performance is on par with OpenAI o1, and they're MIT licensed!
wow, you can find so much gold in github gists! i wasn't a big fan because the discoverability doesn't seem great, but i've been exploring them lately and there's so much good stuff in there!
in case anyone missed it, we're running a certified course on ai agents at hugging face starting on feb 2nd; the course is on how to build your own ai agents for different cool use cases, built on top of open source!
you can sign up at the link below, don't miss it!
bit.ly/hf-learn-age...
ok, here we go again
because it's my native language; it was just an idea anyway, not sure I'll do it
Not quite sure yet about who's following me here, but I may consider not just x-posting but also eventually posting more random thoughts + content in Spanish, is that something you'd be interested in?
awesome
how do I get in there?
here we go again!
i work at hugging face and here you can expect posts about machine learning (llms mainly), some rust, some nvim nerdy stuff, and anything related to hugging face
posting is not easy for me, but i'll try to do better from now on, support is highly appreciated!
Read more about the Serverless Inference API in the documentation!
https://huggingface.co/docs/api-inference
Finally, if you want to get started quickly and experiment with LLMs, feel free to give the recently released Inference Playground a try!
https://huggingface.co/playground
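Under the hood, the Serverless Inference API is a plain HTTP POST to `https://api-inference.huggingface.co/models/<model_id>` with a bearer token and an `inputs` payload. A minimal sketch that just assembles such a request without sending it; `build_request`, the model id, and the parameters are illustrative placeholders:

```python
def build_request(model_id: str, prompt: str, token: str):
    """Assemble a Serverless Inference API request (without sending it)."""
    url = f"https://api-inference.huggingface.co/models/{model_id}"
    headers = {"Authorization": f"Bearer {token}"}
    payload = {"inputs": prompt, "parameters": {"max_new_tokens": 64}}
    return url, headers, payload

url, headers, payload = build_request(
    "meta-llama/Llama-3.1-8B-Instruct", "Hello!", "hf_xxx"  # placeholder token
)
# Send it with e.g. `httpx.post(url, headers=headers, json=payload)`
print(url)
```

The `huggingface_hub` Python client wraps this same endpoint if you'd rather not build requests by hand.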