Check out our recent work offering a principled way to perform parallel prediction (few-step generation) in Diffusion LLMs with minimal performance degradation!
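The post doesn't spell out the mechanism, but for context, here is a minimal sketch of confidence-based parallel decoding for a masked diffusion LM (in the spirit of MaskGIT-style samplers), where each step predicts every masked position at once and commits only the most confident tokens. The `model(ids)` interface returning per-position logits is an assumption for illustration; this is a generic baseline, not the paper's principled method.

```python
# Generic sketch of confidence-based parallel decoding for a masked diffusion LM.
# Assumes model(ids) -> (seq_len, vocab_size) logits; NOT the paper's method.
import torch

@torch.no_grad()
def parallel_decode(model, ids: torch.LongTensor, mask_id: int, n_steps: int):
    for step in range(n_steps):
        masked = ids == mask_id
        if not masked.any():
            break
        probs = model(ids).softmax(-1)       # predict all positions in parallel
        conf, pred = probs.max(-1)           # per-position confidence and argmax
        conf[~masked] = -1.0                 # only masked slots are candidates
        # commit a fraction of the remaining masked tokens each step
        k = max(1, int(masked.sum()) // (n_steps - step))
        topk = conf.topk(k).indices
        ids[topk] = pred[topk]
    return ids
```

Committing a fraction of positions per step is what makes generation few-step: the model runs `n_steps` forward passes instead of one per token.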
Looking for PhD students, postdocs & interns!
I'm recruiting for my new lab at NUS School of Computing, focusing on generative modeling, reasoning, and tractable inference.
Interested? Learn more here: liuanji.github.io
PhD application deadline: June 15, 2025
What happens if we tokenize cat as [ca, t] rather than [cat]?
LLMs see only the canonical tokenization of each string during training, yet they still understand alternative tokenizations. We show that this can be exploited to bypass safety filters without changing the text itself.
#AI #LLMs #tokenization #alignment
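To see the effect concretely, here is a minimal sketch using GPT-2's BPE via `tiktoken` (an assumption purely for illustration; the paper's models and tokenizers may differ), showing that a non-canonical segmentation decodes to the exact same surface text:

```python
# Minimal sketch: a non-canonical tokenization yields the same surface string.
# Uses GPT-2's BPE via tiktoken purely for illustration.
import tiktoken

enc = tiktoken.get_encoding("gpt2")

canonical = enc.encode("cat")                     # the tokenizer's preferred split
alternative = enc.encode("ca") + enc.encode("t")  # [ca, t]: valid but non-canonical

print(canonical, repr(enc.decode(canonical)))
print(alternative, repr(enc.decode(alternative)))
assert enc.decode(canonical) == enc.decode(alternative)  # same text, different ids
```

Because BPE decoding is pure byte concatenation, both id sequences map to the string "cat", yet the model receives very different inputs, which is exactly what a tokenization-based filter bypass exploits.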
Want to turn your state-of-the-art diffusion models into ultra-fast few-step generators?
Learn how to optimize your time discretization strategy, in just ~10 minutes!
Check out how it's done in our Oral paper at ICLR 2025.
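The paper's optimization procedure isn't reproduced here, but as a reference point for what "time discretization strategy" means, the widely used rho-schedule of Karras et al. (2022) is a standard tunable discretization; its exponent `rho` and noise endpoints are exactly the kind of knobs such a strategy adjusts:

```python
# Karras et al. (2022) rho-schedule: a standard, tunable time discretization
# for few-step diffusion sampling. A generic example, not the paper's method.
import numpy as np

def karras_sigmas(n_steps: int, sigma_min: float = 0.002,
                  sigma_max: float = 80.0, rho: float = 7.0) -> np.ndarray:
    """Return n_steps noise levels from sigma_max down to sigma_min,
    spaced by the rho power rule (requires n_steps >= 2)."""
    ramp = np.linspace(0.0, 1.0, n_steps)
    inv_rho = 1.0 / rho
    return (sigma_max**inv_rho + ramp * (sigma_min**inv_rho - sigma_max**inv_rho)) ** rho

print(karras_sigmas(5))  # five noise levels for a 5-step sampler
```

Larger `rho` concentrates steps near low noise levels, which is typically where few-step samplers lose the most quality, so tuning the schedule directly trades off speed against fidelity.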
Fully funded PhD in Neurosymbolic AI for Finance
Work at Imperial College London on cutting-edge AI for financial applications!
[1/3]
Oh sorry, I just realized!
Could you please add me?