
Taneem

@taneem-ibrahim

Tinkering with vLLM @RedHat

110 Followers · 96 Following · 5 Posts · Joined 21.11.2024

Latest posts by Taneem @taneem-ibrahim

I had an amazing experience attending @fastcompany.com's Most Innovative Companies Summit. Proud to represent Red Hat as one of the most innovative companies alongside my colleague @terrytangyuan.xyz

06.06.2025 05:17 πŸ‘ 4 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0
Technically Speaking | Scaling AI inference with open source Explore the critical role of production-quality AI inference, the power of open source projects like vLLM, and the future of the enterprise AI stack.

Check out the new episode of Technically Speaking w/ Chris Wright, "Scaling AI inference with open source," ft. Brian Stevens: red.ht/4dJiBLc

06.06.2025 01:10 πŸ‘ 1 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0
Llama 4 - a meta-llama Collection Llama 4 release

The FP8-quantized version of Llama 4 Maverick can be downloaded from Hugging Face: huggingface.co/collections/...

05.04.2025 20:22 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

The official release by Meta includes an FP8-quantized version of Llama 4 Maverick 128E, supported by Red Hat's LLM Compressor library. Quantization enables the 128-expert model to fit on a single NVIDIA 8xH100 node, delivering higher performance at lower cost.

05.04.2025 20:20 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Llama 4 herd is here with Day 0 inference support in vLLM | Red Hat Developer Discover the new Llama 4 Scout and Llama 4 Maverick models from Meta, with mixture of experts architecture, early fusion multimodality, and Day 0 model support.

Thanks to the Meta AI team for close collaboration with the vLLM community, enabling developers to experiment with Llama 4 immediately. Our blog shares more details on the Llama 4 release and how to get started with inference in vLLM today: developers.redhat.com/articles/202...

05.04.2025 20:19 πŸ‘ 0 πŸ” 0 πŸ’¬ 2 πŸ“Œ 0

This is really nice! Thank you @stu.bsky.social

22.11.2024 05:47 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0