I had an amazing experience attending @fastcompany.com Most Innovative Companies Summit. Proud to represent Red Hat as one of the most innovative companies with my colleague @terrytangyuan.xyz
Check out the new episode of Technically Speaking w/ Chris Wright - Scaling AI inference with open source ft. Brian Stevens red.ht/4dJiBLc
FP8-quantized version of Llama 4 Maverick can be downloaded from HuggingFace: huggingface.co/collections/...
The official release by Meta includes an FP8-quantized version of Llama 4 Maverick 128E, supported by Red Hat's LLM Compressor library. Quantization lets the 128-expert model fit on a single 8xH100 NVIDIA node, delivering higher performance at lower cost.
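For a rough sense of why FP8 matters here, some back-of-the-envelope memory math (the ~400B total parameter count for Maverick 128E and the 2x/1x bytes-per-parameter figures are approximations for illustration, not official sizing):

```python
# Back-of-the-envelope memory math for Llama 4 Maverick 128E.
# Assumption: roughly 400B total parameters (128 experts, ~17B active).
TOTAL_PARAMS_B = 400           # billions of parameters (approximate)
NODE_HBM_GB = 8 * 80           # one 8xH100 node: 8 GPUs x 80 GB HBM = 640 GB

bf16_weights_gb = TOTAL_PARAMS_B * 2  # BF16: 2 bytes per parameter
fp8_weights_gb = TOTAL_PARAMS_B * 1   # FP8:  1 byte per parameter

# BF16 weights (~800 GB) overflow a single node's 640 GB of HBM,
# while FP8 weights (~400 GB) fit with headroom left for KV cache.
print(f"BF16: {bf16_weights_gb} GB, FP8: {fp8_weights_gb} GB, node: {NODE_HBM_GB} GB")
```

The halving of weight memory is what moves the model from "needs multiple nodes" to "fits on one 8xH100 node."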
Thanks to the Meta AI team for their close collaboration with the vLLM community, enabling developers to experiment with Llama 4 immediately. Our blog shares more details on the Llama 4 release and how to get started with inference in vLLM today: developers.redhat.com/articles/202...
This is really nice! Thank you @stu.bsky.social