#DiffusionLLM
Posts tagged #DiffusionLLM on Bluesky

⚡ Inception Labs: its diffusion LLM is 10 times faster than Claude, ChatGPT, and Gemini

Mercury 2, a diffusion-based language model, promises revolutionary speed.

thenewstack.io/inception-labs-mercury-2...

#DiffusionLLM #LargeLanguageModels #AI #RoxsRoss

Inception Ships Mercury 2 - A Diffusion LLM That Hits 1,009 Tokens Per Second

Inception Labs launches Mercury 2, the first diffusion-based reasoning language model, generating over 1,000 tokens per second on Blackwell GPUs at a fraction of the cost of conventional autoregressive models.


awesomeagents.ai/news/inception-mercury-2...

#InceptionLabs #Mercury2 #DiffusionLlm

Post image

This small chart shows why #Mercury2 by #Inception is a big deal: an 11x leap over #Claude-Haiku4.5 and a 14x leap over #GPT5-mini in real-world testing comparisons, without hardware upgrades.
#ChatGPT #Anthropic #dLLM #LLM #AI #ML #DiffusionLLM

Hidden Semi-Autoregressive Experts Enhance Diffusion LLM Inference


HEX, a training‑free method that ensembles multiple generation schedules, boosts diffusion LLM accuracy on GSM8K from 24.72% to 88.10%, and improves TruthfulQA to 57.46%. Read more: getnews.me/hidden-semi-autoregressi... #hex #diffusionllm #reasoning
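As a loose illustration of the idea (not HEX's actual mechanism, which ensembles semi-autoregressive block schedules inside the model), a training-free ensemble can run one decode per generation schedule and majority-vote the final answers. The `decode` callable and schedule names here are hypothetical stand-ins:

```python
from collections import Counter

def ensemble_vote(decode, schedules, prompt):
    """Run one decoding pass per generation schedule and return the
    majority answer. `decode` is a hypothetical (prompt, schedule) -> str
    callable standing in for a diffusion LLM under a given schedule."""
    answers = [decode(prompt, s) for s in schedules]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

# Toy stand-in decoder: each schedule deterministically emits one answer.
fake = {"left-to-right": "42", "blockwise-4": "42", "blockwise-8": "17"}
print(ensemble_vote(lambda p, s: fake[s], list(fake), "What is 6*7?"))  # → 42
```

Two of the three schedules agree, so the vote recovers the correct answer even though one schedule fails.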

ParallelBench Reveals Limits of Parallel Decoding in Diffusion LLMs


ParallelBench, the first benchmark designed specifically for parallel decoding in diffusion LLMs, evaluates tasks like arithmetic and list sorting, showing quality drops despite speed gains. Read more: getnews.me/parallelbench-reveals-li... #parallelbench #diffusionllm
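The failure mode behind the quality drop can be shown with a toy joint distribution (hypothetical numbers, not from the paper): filling several masked positions in parallel from their independent marginals can stitch together a token pair that is never the most likely sequence, while conditioning the second token on the first recovers it.

```python
# Toy joint distribution over two adjacent tokens (hypothetical numbers).
joint = {("New", "York"): 0.40, ("Los", "Angeles"): 0.35, ("New", "Angeles"): 0.25}

def marginal(pos, word):
    """Probability of `word` at position `pos`, summed over the other token."""
    return sum(p for pair, p in joint.items() if pair[pos] == word)

# Parallel decoding: argmax each position's marginal independently.
t1 = max({a for a, _ in joint}, key=lambda w: marginal(0, w))
t2 = max({b for _, b in joint}, key=lambda w: marginal(1, w))
parallel = (t1, t2)    # ("New", "Angeles") — probability 0.25, not the best pair

# Sequential decoding: condition the second token on the first.
s1 = t1
s2 = max({b for a, b in joint if a == s1}, key=lambda b: joint.get((s1, b), 0.0))
sequential = (s1, s2)  # ("New", "York") — the highest-probability pair
```

The marginals favor "New" (0.65) and "Angeles" (0.60), so independent decoding emits an incoherent pair; the dependency between positions is exactly what parallel decoding discards.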

New RL Algorithm Boosts Reasoning in Diffusion Language Models


AGRPO, an on‑policy RL method for diffusion LLMs, improved GSM8K accuracy by up to 7.6% over LLaDA‑8B‑Instruct and gave a 3.8× boost on the Countdown benchmark. Read more: getnews.me/new-rl-algorithm-boosts-... #diffusionllm #agrpo

Rainbow Padding Improves Length Robustness in Diffusion LLMs


Rainbow Padding cycles through seven distinct padding tokens to curb early termination in diffusion LLMs, achieving length robustness after a single epoch of LoRA fine‑tuning. Read more: getnews.me/rainbow-padding-improves... #rainbowpadding #diffusionllm
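A minimal sketch of the padding scheme, assuming string tokens and hypothetical `<pad_i>` token names (the real method operates on vocabulary IDs and is paired with fine-tuning):

```python
from itertools import cycle, islice

# Hypothetical pad vocabulary: seven distinct padding tokens, as in the post.
RAINBOW_PADS = [f"<pad_{i}>" for i in range(7)]

def rainbow_pad(tokens, target_len, pads=RAINBOW_PADS):
    """Pad `tokens` to `target_len`, cycling through distinct pad tokens
    rather than repeating a single <pad>. The varied tail gives the model
    an unambiguous positional signal of where real content ends."""
    n_pad = max(0, target_len - len(tokens))
    return tokens + list(islice(cycle(pads), n_pad))

print(rainbow_pad(["The", "answer", "is", "42"], 10))
# ['The', 'answer', 'is', '42', '<pad_0>', '<pad_1>', '<pad_2>', '<pad_3>', '<pad_4>', '<pad_5>']
```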

Quant-dLLM: 2‑Bit Post‑Training Compression for Diffusion LLMs


Quant-dLLM introduces a framework for 2‑bit post‑training quantization of diffusion large language models, preserving performance. The code and pretrained models will be open‑sourced on GitHub. getnews.me/quant-dllm-2-bit-post-tr... #quantdllm #diffusionllm
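For intuition, here is a generic uniform 2-bit post-training quantization baseline with a per-group scale and zero point — a standard PTQ building block, not Quant-dLLM's actual algorithm, which adds diffusion-specific machinery on top:

```python
import numpy as np

def quantize_2bit(w, group_size=4):
    """Uniform asymmetric 2-bit round-to-nearest quantization.
    Each group of `group_size` weights gets its own scale and zero point;
    2 bits give 4 representable levels (codes 0..3)."""
    g = w.reshape(-1, group_size)
    lo = g.min(axis=1, keepdims=True)                  # per-group zero point
    hi = g.max(axis=1, keepdims=True)
    scale = np.maximum(hi - lo, 1e-8) / 3.0            # span mapped onto 4 levels
    codes = np.round((g - lo) / scale).astype(np.uint8)  # stored 2-bit codes
    dequant = (codes * scale + lo).reshape(w.shape)    # values the model computes with
    return codes, dequant

w = np.array([0.10, -0.32, 0.25, 0.07, 1.20, 0.95, -0.40, 0.00])
codes, approx = quantize_2bit(w)
```

Each weight is reconstructed to within half a quantization step of its group, which is why per-group (rather than per-tensor) scales matter so much at 2 bits.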

Step-Aware Policy Optimization Improves Reasoning in Diffusion LLMs


SAPO adds step‑level rewards to diffusion language models, aligning each denoising iteration with a hierarchical reasoning plan and boosting benchmark performance. Read more: getnews.me/step-aware-policy-optimi... #diffusionllm #sapoinnovation

AdaBlock-dLLM: Adaptive Block Sizing Boosts Diffusion LLM Speed


AdaBlock-dLLM, a training-free scheduler that adjusts diffusion LLM block sizes on the fly, improves accuracy by up to 5.3% while maintaining the same throughput. getnews.me/adablock-dllm-adaptive-b... #adablockdllm #diffusionllm #semiautoregressive

Freedave Enables Lossless Parallel Decoding for Diffusion LLMs


Freedave enables lossless parallel decoding for diffusion LLMs, delivering up to 2.8× higher throughput without accuracy loss, per a paper posted 30 Sep 2025. Read more: getnews.me/freedave-enables-lossles... #freedave #diffusionllm #ai

Learn2PD Accelerates Diffusion LLMs with Adaptive Parallel Decoding


Learn2PD, a lightweight post‑training filter for diffusion LLMs, boosts decoding speed up to 22.58× (57.51× with KV‑Cache) without quality loss. Read more: getnews.me/learn2pd-accelerates-dif... #learn2pd #diffusionllm #paralleldecoding

Spiffy Speculative Decoding Boosts Diffusion LLM Speed by Up to 7.9×


Spiffy speculative decoding speeds up diffusion language models by 2.8–3.1×, and by up to 7.9× when combined with other optimizations, while preserving the output distribution, according to researchers. Read more: getnews.me/spiffy-speculative-decod... #spiffy #diffusionllm

Inpainting-Guided Policy Optimization Boosts Diffusion LLM Performance


IGPO adds brief verified reasoning fragments to diffusion language models. Tested on GSM8K, Math500 and AMC, it achieved new state-of-the-art accuracy on these math benchmarks. getnews.me/inpainting-guided-policy... #inpainting #diffusionllm
