Finally, this project was made possible by the INCITE program of the DoE, who sponsored our compute on the OLCF Frontier supercomputer. Without them, we could not have done open research at this scale!
Thank you to all of my collaborators, @sean-mcleish.bsky.social, Neel Jain, jwkirchenbauer.bsky.social, Siddharth Singh, Brian Bartoldson, Bhavya Kailkhura, Abhinav Bhatele, and especially Tom Goldstein, for making this happen.
This really was a long project for us, with initial starts in Summer '23!
You can find the model here: huggingface.co/tomg-group-u...
The code here: github.com/seal-rg/recu...
and the tech report here: www.arxiv.org/abs/2502.05171
What is it doing when it thinks longer?
We find evidence for fairly advanced structures in latent space, such as a tendency to use orbits (see picture) when computing arithmetic and when reasoning about sentence structure.
So, this model really is rotating shapes in a high-dimensional space?
What is pretty exciting is that simply by training with our architecture and objective, a separation emerges at scale: the model's latents converge more quickly for some tokens in a sentence than for others.
In this figure, the model takes more time to think about the key parts of the text:
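One way to make that per-token separation concrete: iterate the recurrent core on a token's embedding and count how many steps the latent state takes to settle. This is a toy sketch with made-up weights and a simple Euclidean stopping rule, not the paper's actual exit criterion (the report discusses criteria such as KL divergence between successive steps):

```python
import numpy as np

rng = np.random.default_rng(1)
D = 16  # toy latent width

# Hypothetical stand-in for the shared recurrent core block.
W = rng.standard_normal((2 * D, D)) / np.sqrt(2 * D)

def steps_to_converge(e, tol=1e-3, max_steps=256):
    """Iterate the latent state on embedding `e` until it moves less than
    `tol` between steps, and return how many steps that took."""
    s = np.zeros(D)  # toy init; the real model samples a random initial state
    for step in range(1, max_steps + 1):
        s_next = np.tanh(np.concatenate([s, e]) @ W)
        if np.linalg.norm(s_next - s) < tol:  # latent state has settled
            return step
        s = s_next
    return max_steps

# Different token embeddings may settle after different numbers of steps,
# which is the per-token separation described above.
easy = steps_to_converge(rng.standard_normal(D) * 0.1)
hard = steps_to_converge(rng.standard_normal(D) * 3.0)
```

In the real model this kind of convergence test can serve as a zero-shot signal for how long to keep thinking about each token.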
We had enough compute for only a single shot to train at scale (and that is the model we've published).
On reasoning tasks like GSM8k, the model is quite competitive with other pretrained open-source models, even though we have done no post- or mid-training...
First, the model (with 3.5B params), even though trained semi-optimally and for only 800B tokens, is competitive with 7B open-source models trained for 2-3T tokens (OLMo-v1) - but we can't beat the new OLMo data recipe (yet)
This is pretty exciting for our first large-scale run.
The tech report has something for everyone: a new model architecture, optimizer details, AMD training (we trained on 4096 AMD GPUs), our data pipeline, and lots of analysis!
Here are a few of my highlights:
Ok, so I can finally talk about this!
We spent the last year (actually a bit longer) training an LLM with recurrent depth at scale.
The model has an internal latent space in which it can adaptively spend more compute to think longer.
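The recurrent-depth idea can be sketched in a few lines: a prelude embeds the input, a core block is iterated a variable number of times in latent space, and a coda decodes the final state. This is a minimal toy with hypothetical weights and names, not the actual Huginn code:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # toy latent width (the real model is far larger)

# Hypothetical weights standing in for the prelude, core block, and coda.
W_in = rng.standard_normal((D, D)) / np.sqrt(D)
W_core = rng.standard_normal((2 * D, D)) / np.sqrt(2 * D)
W_out = rng.standard_normal((D, D)) / np.sqrt(D)

def forward(x, num_steps):
    """Run the recurrent core `num_steps` times on one token embedding x."""
    e = np.tanh(x @ W_in)  # prelude: embed the input into latent space
    s = np.zeros(D)        # latent state (toy init; the real model samples it randomly)
    for _ in range(num_steps):
        # The core block sees both the current latent state and the embedded
        # input, so the input is re-injected at every recurrence step.
        s = np.tanh(np.concatenate([s, e]) @ W_core)
    return s @ W_out       # coda: decode the latent state back out

x = rng.standard_normal(D)
out_fast = forward(x, num_steps=4)   # little test-time compute
out_slow = forward(x, num_steps=32)  # more test-time compute, same weights
```

The key point the sketch shows: the same weights can be unrolled for more or fewer steps at test time, so "thinking longer" costs no extra parameters and emits no extra tokens.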
I think the tech report ... 🐦‍⬛
New open source reasoning model!
Huginn-3.5B reasons implicitly in latent space 🧠
Unlike O1 and R1, latent reasoning doesn't need special chain-of-thought training data, and doesn't produce extra CoT tokens at test time.
We trained on 800B tokens
Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach
arxiv.org/abs/2502.05171
I'm at NeurIPS in Vancouver right now! Feel free to reach out to talk about anything in LLM safety or efficiency research.
Also, our new ELLIS Institute Tübingen is hiring new faculty, the deadline is next week - reach out to us in person and at our booth for more info 🇪🇺🇪🇺🇪🇺