
Daniël de Kok

@danieldk.eu

Machine Learning, Natural Language Processing, LLM, transformers, macOS, NixOS, Rust, C++, Python, Cycling. Working on inference at Hugging Face 🤗. Open source ML 🚀.

1,517
Followers
161
Following
48
Posts
10.10.2023
Joined

Latest posts by Daniël de Kok @danieldk.eu


More embedding models and an even more reliable inference engine are what you get with @hf.co Text Embeddings Inference v1.9.0 💥

More in the thread 🧵

17.02.2026 16:05 👍 3 🔁 3 💬 1 📌 0
Custom Kernels for All from Codex and Claude We're on a journey to advance and democratize artificial intelligence through open source and open science.

Kernels now has an agent skill to write custom Hub kernels: huggingface.co/blog/custom-...

Awesome work by @benburtenshaw.bsky.social and Sayak Paul! 🔥

13.02.2026 21:22 👍 0 🔁 0 💬 0 📌 0

And degoogle your phone.

11.02.2026 16:29 👍 7 🔁 0 💬 0 📌 0
Release v0.12.0 · huggingface/kernels New features Merge of kernels and kernel-builder repositories kernel-builder has been merged into the kernels repository. This makes it easier for us to coordinate changes that affect both the kern...

kernels 0.12 is out! 🎉

Changes:

* Support for kernel version branches to gracefully roll out kernel API changes.
* Support for PyTorch 2.10.
* kernel-builder is now merged into the kernels repo.
* Initial support for standardized kernel benchmarks.

github.com/huggingface/...

27.01.2026 19:02 👍 1 🔁 0 💬 0 📌 0

Zed has been great for me: it's very fast and has a single 'turn all AI off' toggle.

25.01.2026 14:07 👍 2 🔁 0 💬 1 📌 0
One Year Since the "DeepSeek Moment" A Blog post by Hugging Face on Hugging Face

DeepSeek R1 dropped one year ago 🐳 and a lot has changed.

With Irene Solaiman, we're launching a blog series on @hf.co about how that moment reshaped AI + open source in 2025, starting with strategic shifts and the explosion of new open models in China!

huggingface.co/blog/hugging...

20.01.2026 22:49 👍 21 🔁 4 💬 0 📌 0

🔥 I am super excited for the official release of an open-source library we've been working on for about a year!

🪄 interpreto is an interpretability toolbox for HF language models 🤗. It works for both generation and classification!

Why do you need it, and for what?

1/8 (links at the end)

20.01.2026 16:03 👍 20 🔁 9 💬 1 📌 3

T-Head; it uses a fork of the 0.7 draft of the RISC-V Vector extension.

19.12.2025 18:19 👍 0 🔁 0 💬 0 📌 0
Natural Language Processing How do you build Large Language Models? How do humans experience Natural Language Processing (NLP) applications in their daily lives? And how can we...

👀 Look what 🎅 has brought just before Christmas 🎁: a brand new Research Master in Natural Language Processing at @facultyofartsug.bsky.social @rug.nl

Program: www.rug.nl/masters/natu...

Applications (2026/2027) are open! Come and study with us (you will also learn why we have a 🐮 in our logo)

18.12.2025 11:28 👍 25 🔁 15 💬 0 📌 0

We are currently doing a reading group on RISC-V and its vector extension. I actually got to implement the fast inverse square root, because the T-Head board that we use does not have the vfrsqrt7.v instruction. So, full circle, I guess.

github.com/danieldk/low...
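For illustration, the Quake-style fast inverse square root mentioned above can be sketched in plain Python (a sketch of the classic float32 bit trick, not the RISC-V vector code from the linked repo):

```python
import struct

def fast_inv_sqrt(x: float) -> float:
    """Approximate 1/sqrt(x) with the Kahan/Walsh (Quake 3) bit trick."""
    # Reinterpret the float32 bits of x as an unsigned 32-bit integer.
    i = struct.unpack("<I", struct.pack("<f", x))[0]
    # The magic constant turns the integer halving into a rough 1/sqrt guess.
    i = 0x5F3759DF - (i >> 1)
    y = struct.unpack("<f", struct.pack("<I", i))[0]
    # One Newton-Raphson iteration sharpens the estimate to well under 1% error.
    return y * (1.5 - 0.5 * x * y * y)
```

Hardware instructions like vfrsqrt7.v play the same role as the bit hack: they produce a low-precision reciprocal square root seed that is then refined with Newton-Raphson steps.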

13.12.2025 11:16 👍 1 🔁 0 💬 1 📌 0

It started out as a joke with @kadarakos.bsky.social in 2022, when we worked at @explosion.ai, that we should make an activation function using the fast inverse sqrt of Kahan/Walsh, famously used in Quake 3.

13.12.2025 11:14 👍 0 🔁 0 💬 2 📌 0
Benchmarks comparing RISC-V vectorized activation functions on a Milk-V Duo 256M. Dish is the fastest with 110M elements per second, followed by Swish with 57M elements per second and the slowest is the Cook GELU approximation coming in at 39M elements per second.


I finally made a page on my Dish activation function, replacing my deleted Tweet: danieldk.eu/Dish-Activat...

It's a non-monotonic function similar to GELU/SiLU, but does not require elementary functions, making it faster on various hardware.

I'll leave the empirical evaluation to someone else 😁.
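The actual Dish definition is on the linked page; purely to illustrate the idea of a smooth, non-monotonic GELU/SiLU-style gate built from an inverse square root instead of exp/erf/tanh, here is a hypothetical rsqrt-only activation (this is not the Dish formula, just an assumed example of the technique):

```python
import math

def rsqrt_gate_activation(x: float) -> float:
    """Hypothetical SiLU-like activation: x * 0.5 * (1 + x / sqrt(1 + x^2)).

    The gate 0.5 * (1 + x / sqrt(1 + x^2)) needs only multiplies and an
    inverse square root, so it maps well onto hardware rsqrt instructions
    (or the fast inverse sqrt bit trick) -- no elementary functions.
    """
    gate = 0.5 * (1.0 + x / math.sqrt(1.0 + x * x))
    return x * gate
```

Like SiLU, this dips below zero for moderately negative inputs and decays back toward zero as x goes to negative infinity, which is the non-monotonic shape the post describes.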

13.12.2025 11:09 👍 1 🔁 0 💬 1 📌 0

Training LLMs end to end is hard. But way more people should, and will, be doing it in the future.

The @hf.co Research team is excited to share their new e-book that covers the full pipeline:
· pre-training,
· post-training,
· infra.

200+ pages of what worked and what didn't. ⤵️

02.11.2025 15:17 👍 141 🔁 25 💬 4 📌 1
Graph showing the conversion of Hugging Face repositories from LFS storage to Xet storage.


The Hub is 100% on Xet. 🚀

A little over a year ago, @hf.co acquired XetHub to unlock the next phase of growth in models and datasets. huggingface.co/blog/xethub-...

In April, there were 1,000 Hugging Face repos on Xet. Now every repo (over 6M) on the Hub is on Xet.

03.10.2025 15:16 👍 12 🔁 5 💬 2 📌 0
From Zero to GPU: A Guide to Building and Scaling Production-Ready CUDA Kernels We're on a journey to advance and democratize artificial intelligence through open source and open science.

We made a blog post on how you can use kernel-builder to develop and build compute kernels for the @hf.co Kernel Hub:

huggingface.co/blog/kernel-...

19.08.2025 11:06 👍 3 🔁 1 💬 0 📌 0
kernels-community (kernels-community) Org profile for kernels-community on Hugging Face, the AI community building the future.

Also a huge shout-out to @nixos-org.bsky.social! All the kernels in huggingface.co/kernels-comm... are built using kernel-builder, which uses Nix under the hood to build ABI3 kernels for all the supported Torch configurations (various CUDA/ROCm versions, Metal):

github.com/huggingface/...

06.08.2025 20:42 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0
Welcome GPT OSS, the new open-source model family from OpenAI! We're on a journey to advance and democratize artificial intelligence through open source and open science.

Yesterday we released support for GPT OSS (the new OpenAI open weight model) across the @hf.co ecosystem. The latest Transformers now integrates support for the kernels package and uses kernels from the HF Kernel Hub to run models like GPT OSS as fast as possible. 🚀

huggingface.co/blog/welcome...

06.08.2025 20:42 ๐Ÿ‘ 2 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0
Hugging Face Kernel Builder Walkthrough | Image to Grayscale CUDA Kernel YouTube video by David Holtz

David Holtz made an introduction video showing how to make your own kernels with kernel-builder:

www.youtube.com/watch?v=HS5P...

26.07.2025 11:37 👍 0 🔁 0 💬 0 📌 0

The kernel ecosystem is completely open: you can make your own kernels with kernel-builder, upload them to the Hub, and register a mapping with the kernels package so that transformers picks them up.

github.com/huggingface/...
github.com/huggingface/...

26.07.2025 11:37 👍 0 🔁 0 💬 1 📌 0
Release v4.54.0: Kernels, Transformers Serve, Ernie, Voxtral, LFM2, DeepSeek v2, ModernBERT Decoder... · huggingface/transformers Important news! In order to become the source of truth, we recognize that we need to address two common and long-heard critiques about transformers: transformers is bloated transformers is slow O...

Transformers 4.54.0 is out! This release adds support for compute kernels hosted on the Hub. When enabled, transformers can replace PyTorch layer implementations with fast, specialized kernels from the Hub.

github.com/huggingface/...

26.07.2025 11:34 👍 7 🔁 2 💬 1 📌 0
GitHub - koaning/mktestdocs: Run pytest against markdown files/docstrings. Run pytest against markdown files/docstrings. Contribute to koaning/mktestdocs development by creating an account on GitHub.

Just released a new version of mktestdocs. It now also supports huggingface docstrings!

github.com/koaning/mkt...

26.07.2025 10:00 👍 4 🔁 1 💬 0 📌 0

Some of the ModernBERT team is back with new encoder models: Ettin, in sizes ranging from 17M to 1B parameters: 17M, 32M, 68M, 150M, 400M & 1B. They also trained decoder models & checked if decoders could classify & if encoders could generate.

Details in 🧵:

17.07.2025 15:23 👍 7 🔁 1 💬 1 📌 0
Your open-source companion - Reachy Mini YouTube video by Pollen Robotics

So excited to finally release our first robot today: Reachy Mini

A dream come true: cute and low priced, hackable yet easy to use, powered by open-source and the infinite community.

Read more and order now at huggingface.co/blog/reachy-...

09.07.2025 10:09 👍 82 🔁 17 💬 2 📌 7
SUSE Refines, Releases Open-Source LLM to Fuel Community Collaboration Today, SUSE has released a new fine-tuned version of the language model, Cavil-Qwen3-4B, as open source on openSUSE's Hugging Face in order to make legal com...

SUSE has released Cavil-Qwen3-4B, a fine-tuned, #opensource #LLM on #HuggingFace. Built to detect #legal text like license declarations, it empowers #devs to stay #compliant. #fast #efficiently. #openSUSE #AI #Licenses news.opensuse.org/2025/06/24/s...

24.06.2025 13:59 👍 11 🔁 2 💬 1 📌 0
Learn the Hugging Face Kernel Hub in 5 Minutes We're on a journey to advance and democratize artificial intelligence through open source and open science.

Over the past few months, we have worked on the @hf.co Kernel Hub. Kernel Hub allows you to get cutting-edge compute kernels directly from the hub in a few lines of code.

David Holtz made a great writeup of how you can use kernels in your projects: huggingface.co/blog/hello-h...

17.06.2025 07:47 👍 9 🔁 2 💬 0 📌 0

Hi Berlin people! @hugobowne.bsky.social is in town & we're celebrating by hosting a meetup together 🎉 This one is all about building with AI & we'll also open the floor for lightning talks. If you're around, come hang out with us!

📆 June 16, 18:00
📍 Native Instruments (Kreuzberg)
🎟️ lu.ma/d53y9p2u

02.06.2025 07:48 👍 9 🔁 4 💬 0 📌 1
Release v3.3.1 · huggingface/text-generation-inference This release updates TGI to Torch 2.7 and CUDA 12.8. What's Changed change HPU warmup logic: seq length should be with exponential growth by @kaixuanliu in #3217 adjust the round_up_seq logic to a...

TGI v3.3.1 is released! This version switches to Torch 2.7 and CUDA 12.8. This should improve support for GPUs with compute capabilities 10.0 (B200) and 12.0 (RTX50x0 and NVIDIA RTX PRO Blackwell GPUs).

github.com/huggingface/...

22.05.2025 13:40 👍 0 🔁 0 💬 0 📌 0

@aob.nl nice timeline of the strikes in the education magazine, but the 18 March strike at @rug.nl was forgotten, a bit of a shame!

17.05.2025 11:51 👍 1 🔁 1 💬 1 📌 0
Release v3.3.0 · huggingface/text-generation-inference Notable changes Prefill chunking for VLMs. What's Changed Fixing Qwen 2.5 VL (32B). by @Narsil in #3157 Fixing tokenization like https://github.com/huggingface/text-embeddin… by @Narsil in #3156...

We just released text-generation-inference 3.3.0. This release adds prefill chunking for VLMs 🚀. We have also made Gemma 3 faster and reduced its VRAM usage by switching to flashinfer for prefills with images.

github.com/huggingface/...

09.05.2025 15:39 👍 2 🔁 1 💬 0 📌 0