refresh button for address bar visiting openreview.net
meet me at this button friends
LMAO, openreview down at 9pm UTC when ICLR is supposed to be releasing. Coincidence?
Introducing Thoughtbubbles: a *fully unsupervised* LM for input-adaptive parallel latent reasoning
✅ Learn yourself a reasoning model with normal pretraining
✅ Better perplexity compared to fixed thinking tokens
No fancy loss, no chain-of-thought labels
I'm really excited about this. Because this model is trained with literally nothing but LM loss, it helps create a new reasoning paradigm where reasoning capabilities are baked right in at pretraining, unifying train and test time behaviors.
Look ma, no distribution shift!
Better yet, without us teaching the model to do this at all, it learned to allocate more compute to tokens of higher entropy (even as measured by an independently trained model of the same architecture), and to use less compute where there's either too little or too much entropy.
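To make the allocation pattern concrete, here's a toy sketch (my own illustration, not the paper's code; all names and thresholds are hypothetical): compute each token's predictive entropy from LM logits, then grant extra compute only to tokens in the mid-entropy band, skipping both near-certain and near-uniform positions.

```python
import numpy as np

def token_entropy(logits):
    """Shannon entropy (nats) of the next-token distribution per position."""
    z = logits - logits.max(axis=-1, keepdims=True)  # stabilize softmax
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def compute_budget(entropy, low=0.5, high=2.0, max_extra=3):
    """Toy allocation: extra compute only for mid-range-entropy tokens."""
    mid = (entropy > low) & (entropy < high)
    return np.where(mid, max_extra, 0)

logits = np.array([
    [10.0, 0, 0, 0, 0, 0, 0, 0],  # near-certain next token: low entropy
    [2.0, 1, 1, 0, 0, 0, 0, 0],   # genuinely uncertain: mid entropy
    [0.0, 0, 0, 0, 0, 0, 0, 0],   # uniform over vocab: very high entropy
])
H = token_entropy(logits)
print(compute_budget(H))  # → [0 3 0]: extra compute only for the mid-entropy token
```

The thresholds here are arbitrary; the point is just that the budget is non-monotonic in entropy, matching the "too little or too much" behavior described above.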
By just using our approach, you don't have to do any extra work to get pretraining gains! We show, across scales AND in compute-matched comparisons, that our approach achieves better pretraining perplexity than both regular transformers and manually inserted non-adaptive thinking tokens.
We design a transformer variant that uses a score-attenuated "forking" mechanism to clone useful residual streams the model wants to update and attend to, creating a bubble of latent computation for those highly informative tokens.
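A minimal sketch of the forking idea (my illustration, not the authors' code; the function, threshold, and copy count are all hypothetical): given per-token fork scores, duplicate the residual vectors of high-scoring tokens, attenuated by the score, so those tokens get extra parallel slots of computation.

```python
import numpy as np

def fork_residuals(resid, scores, threshold=0.5, copies=2):
    """Toy 'bubble' forking: duplicate the residual vectors of tokens whose
    fork score exceeds the threshold, scaled by the score itself.

    resid:  (seq_len, d_model) residual stream
    scores: (seq_len,) fork scores in [0, 1]
    Returns the expanded stream and a map from each slot to its source token.
    """
    out, src = [], []
    for i, (vec, s) in enumerate(zip(resid, scores)):
        out.append(vec)              # every token keeps its original slot
        src.append(i)
        if s > threshold:            # high-score tokens get extra forks
            for _ in range(copies):
                out.append(s * vec)  # score-attenuated clone
                src.append(i)
    return np.stack(out), src

resid = np.ones((3, 4))
scores = np.array([0.1, 0.9, 0.3])
expanded, src = fork_residuals(resid, scores)
print(expanded.shape, src)  # (5, 4) [0, 1, 1, 1, 2]: token 1 forked twice
```

The extra slots would then participate in attention like ordinary positions; the input-adaptive part is simply that how many slots exist depends on the scores.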
Current approaches to scaling inference-time compute require supervision with explicit chain-of-thought data, which limits thoughts to being sequential and expressed only in human language.
Wouldn't it be nice if you could do normal pretraining and somehow get latent thinking for free?
Joint work with my wonderful collaborators @shikharmurty.bsky.social, @robertcsordas.bsky.social, and @chrmanning.bsky.social.
Paper: arxiv.org/abs/2510.00219.
Code and Package: github.com/stanfordnlp/....
New Paper Day! For EMNLP Findings: in LM red-teaming, we show you have to optimize for **both** perplexity and toxicity to get high-probability, hard-to-filter, natural attacks!
Thanks to @schmidtsciences.bsky.social and Lambda Labs for generously supporting our work :)
Think this is all too much? No worries, we are also dropping a **PACKAGE** to do this for you. Check it out: github.com/sisl/astra-rl
And so: you should optimize for **BOTH** attack success and perplexity to get the most effective attacks!
Even across baseline methods, low-perplexity prompts result in more effective attacks, but optimizing for attack success alone results in high-perplexity prompts.
In fact, our method allows us to discover a Pareto tradeoff between attack success and prompt likelihood; tuning a single parameter in our method travels along the Pareto-optimal front.
Using the Adaptive Stress Testing (AST) framework as a reward signal for online DPO-based optimization, we present a method that discovers prompts that are **both** high-probability and successful as attacks.
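A toy sketch of what such a combined objective looks like (my own illustration, not the paper's implementation; names and weights are hypothetical): score candidate prompts by attack success minus a perplexity penalty, then rank pairs under that score for a DPO-style preference update.

```python
import math

def combined_reward(attack_success, log_ppl, lam=0.5):
    """Toy reward trading off attack success against prompt perplexity.
    Sweeping lam moves along the success/likelihood Pareto front."""
    return attack_success - lam * log_ppl

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Standard DPO loss for one preference pair: -log sigmoid of the
    beta-scaled log-ratio margin against a frozen reference model."""
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Rank two candidate attack prompts: one toxic but unnatural, one fluent.
r_a = combined_reward(attack_success=0.9, log_ppl=8.0)  # gibberish, high ppl
r_b = combined_reward(attack_success=0.8, log_ppl=2.0)  # fluent, low ppl
chosen = "b" if r_b > r_a else "a"
print(chosen)  # → b: the fluent prompt wins under the combined reward
```

Under success-only reward, prompt "a" would win despite being easy to filter; the perplexity term flips the preference toward the natural attack.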
Most gradient-based red-teaming approaches produce very low-probability prompts, which previous work has shown are both easier to filter and poor negative examples for downstream hardening.
Done at the Stanford Intelligent Systems Laboratory with my joint first author Amelia Hardy, along with our wonderful collaborators Allie Griffith, @bernardlange.bsky.social, Duncan Eddy, and Mykel Kochenderfer.
Paper:
arxiv.org/pdf/2407.09447
Python package to do this for yourself:
github.com/sisl/astra-rl
The list of accepted papers at #FOCS2025 is up!
focs.computer.org/2025/accepte...
You're not too dumb for Haskell, you just need a reason to practice. :)
Just published in JOSS: 'Turftopic: Topic Modelling with Contextual Representations from Sentence Transformers' https://doi.org/10.21105/joss.08183
OCaml @ocaml.org is in The Economist!
Weβre proud to announce three new tenure-track assistant professors joining TTIC in Fall 2026: Yossi Gandelsman, Will Merrill, and Nick Tomlin (@nickatomlin.bsky.social). Meet them here: buff.ly/JH1DFtT
New paper on the generalization of Flow Matching www.arxiv.org/abs/2506.03719
Why does flow matching generalize? Did you know that the flow matching target you're trying to learn *can only generate training points*?
w/ @quentinbertrand.bsky.social @annegnx.bsky.social @remiemonet.bsky.social
New Paper Day! For ACL 2025 Findings:
You should **drop dropout** when you are training your LMs AND MLMs!
I think there's a very cool training-dynamics story here: if you are pretraining on webtext and driving loss toward 0, dispersed representations matter a LOT less, since your training corpus already regularizes decently.
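For concreteness, here's a generic inverted-dropout sketch (not the paper's code): dropout zeroes random units and rescales the survivors, which is exactly the noise that spreads information across many units; at rate 0 the layer is the identity, which is all "dropping dropout" amounts to.

```python
import numpy as np

def dropout(x, rate, rng, training=True):
    """Inverted dropout: zero a fraction `rate` of units and rescale the
    survivors by 1/(1-rate) so activations keep the same expected value."""
    if not training or rate == 0.0:
        return x                      # rate 0: exact identity, i.e. no dropout
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

rng = np.random.default_rng(0)
x = np.ones((4, 8))
y = dropout(x, 0.1, rng)              # noisy: some units zeroed, rest scaled up
print(np.array_equal(dropout(x, 0.0, rng), x))  # → True: no-op at rate 0
```

Because each unit may vanish on any step, no single unit can be relied on, pushing the model toward the dispersed representations discussed above.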
Through a pinch of interp, we show that model-editing success is degraded by pretraining with dropout.
Dispersed representations built by dropout => less consistent representation of the world => worse models.
BERTs and encoder models are not spared either: MLM and SQuAD performance are degraded by turning on just 10% dropout.