Another intern opening on our team, for a project I'll be involved in (deadline soon!)
Last month I co-taught a class on diffusion models at MIT during the IAP term: www.practical-diffusion.org
In the lectures, we first introduced diffusion models from a practitioner's perspective, showing how to build a simple but powerful implementation from the ground up (L1).
(1/4)
Our main results study when projective composition is achieved by linearly combining scores.
We prove that it suffices for certain independence properties to hold in pixel space. Importantly, some results extend to independence in feature space... but new complexities also arise (see the paper!) 5/5
We formalize this idea with a definition called Projective Composition, based on projection functions that extract the "key features" for each distribution to be composed. 4/
What does it mean for composition to "work" in these diverse settings? We need to specify which aspects of each distribution we care about, i.e. the "key features" that characterize a hat, dog, horse, or object-at-a-location. The "correct" composition should have all the features at once. 3/
Part of the challenge is that we may want compositions to be OOD w.r.t. the distributions being composed. For example, in this CLEVR experiment, we trained diffusion models on images of a *single* object conditioned on location, and composed them to generate images of *multiple* objects. 2/
Paper 🧵 (cross-posted at X): When does composition of diffusion models "work"? Intuitively, the reason dog+hat works and dog+horse doesn't has something to do with independence between the concepts being composed. The tricky part is to formalize exactly what this means. 1/
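The thread above studies when linearly combining scores composes concepts correctly. A minimal sketch of that combination rule, using the common heuristic s_comp = s_uncond + Σ_i w_i (s_i − s_uncond); the function name, argument names, and uniform default weights are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def composed_score(scores, uncond_score, weights=None):
    """Linearly combine per-concept diffusion scores at a shared (x_t, t).

    scores: list of arrays, each the score from one conditional model.
    uncond_score: score from an unconditional model at the same point.
    Returns s_uncond + sum_i w_i * (s_i - s_uncond).
    """
    if weights is None:
        weights = [1.0] * len(scores)  # equal weight per concept
    uncond_score = np.asarray(uncond_score, dtype=float)
    out = uncond_score.copy()
    for w, s in zip(weights, scores):
        out += w * (np.asarray(s, dtype=float) - uncond_score)
    return out
```

Whether sampling with this combined score actually yields the "correct" composition (in the projective sense above) is exactly what the paper's independence conditions characterize.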
finally managed to sneak my dog into a paper: arxiv.org/abs/2502.04549
nice idea actually lol: "Periodic cooking of eggs": www.nature.com/articles/s44...
Reminder of a great dictum in research, one of three drilled into us by my PhD supervisor: "Don't believe anything obtained only one way." The actionable corollary: "immediately do a second, independent test of anything that looks interesting before betting on it in any way." It's a great practice!
I've been in major denial about how powerful LLMs are, mainly bc I know of no good reason for it to be true. I imagine this was how deep learning felt to theorists the first time around
Last year, we funded 250 authors and other contributors to attend #ICLR2024 in Vienna as part of this program. If you or your organization want to directly support contributors this year, please get in touch! Hope to see you in Singapore at #ICLR2025!
Happy for you Peli!!
The thing about "AI progress is hitting a wall" is that AI progress (like most scientific research) is a maze, and the way you solve a maze is by constantly hitting walls and changing directions.
for example I never trust an experiment in a paper unless (a) I know the authors well or (b) I've reproduced the results myself
imo most academics are skeptical of papers? It's well-known that many accepted papers are overclaimed or just wrong; there are only a few papers people really pay attention to, despite the volume
Thrilled to share the latest work from our team at
@Apple
where we achieve interpretable and fine-grained control of LLMs and diffusion models via Activation Transport!
arxiv.org/abs/2410.23054
github.com/apple/ml-act
0/9 🧵
My team at Meta (including Yaron Lipman and Ricky Chen) is hiring a postdoctoral researcher to help us build the next generation of flow, transport, and diffusion models! Please apply here and message me:
www.metacareers.com/jobs/1459691...
Giving a short talk at JMM soon, which might finally be the push I needed to learn Lean…
This optimal denoiser has a closed form for finite train sets, and notably does not reproduce its train set; it can sort of "compose consistent patches." Good exercise for the reader: work out the details to explain Figure 3.
Just read this, neat paper! I really enjoyed Figure 3 illustrating the basic idea: Suppose you train a diffusion model where the denoiser is restricted to be "local" (each pixel i only depends on its 3x3 neighborhood N(i)). The optimal local denoiser for pixel i is E[ x_0[i] | x_t[ N(i) ] ]...cont
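That per-pixel posterior mean has a simple closed form over a finite train set. A minimal sketch below; the function name, the additive-noise parameterization x_t = x_0 + σ·ε, and edge padding at the image border are my assumptions for illustration, not details from the paper:

```python
import numpy as np

def local_optimal_denoiser(x_t, train_imgs, sigma):
    """Per-pixel posterior mean E[x_0[i] | x_t[N(i)]] over a finite train set,
    where N(i) is the 3x3 neighborhood of pixel i.

    Under x_t = x_0 + sigma * noise with x_0 uniform over train_imgs, the
    posterior over which train image generated patch N(i) is a softmax of
    negative squared distances; the denoised pixel is the posterior-weighted
    average of the clean center pixels.
    """
    H, W = x_t.shape
    pad_t = np.pad(x_t, 1, mode="edge")
    pad_tr = [np.pad(img, 1, mode="edge") for img in train_imgs]
    out = np.zeros((H, W), dtype=float)
    for r in range(H):
        for c in range(W):
            patch = pad_t[r:r + 3, c:c + 3]
            # log-likelihood that each train image's patch produced this noisy patch
            logits = np.array([
                -np.sum((patch - p[r:r + 3, c:c + 3]) ** 2) / (2 * sigma ** 2)
                for p in pad_tr
            ])
            w = np.exp(logits - logits.max())  # stable softmax
            w /= w.sum()
            out[r, c] = sum(wi * img[r, c] for wi, img in zip(w, train_imgs))
    return out
```

Because each pixel's posterior is computed from its own 3x3 window, different regions can "vote" for different train images, which is how locality lets the denoiser compose consistent patches rather than memorize whole images.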
Neat, I'll take a closer look! (I think I saw an earlier talk you gave on this as well)
LLMs don't have motives, goals or intents, and so they won't lie or deceive in order to obtain them. but they are fantastic at replicating human culture, and there, goals, intents and deceit abound. so yes, we should also care about such "behaviors" (outputs) in deployed systems.
One #postdoc position is still available at the National University of Singapore (NUS) to work on sampling, high-dimensional data-assimilation, and diffusion/flow models. Applications are open until the end of January. Details:
alexxthiery.github.io/jobs/2024_di...
"Should you still get a PhD given o3" feels like a weird category error. Yes, obviously you should still have fun and learn things in a world with capable AI. What else are you going to do, sit around on your hands?
sites.google.com/view/m3l-202...
Catch our talk about CFG at the M3L workshop Saturday morning @ NeurIPS! I'll also be at the morning poster session, happy to chat
Found slides by Ankur Moitra (presented at a TCS For All event) on "How to do theoretical research." Full of great advice!
My favourite: "Find the easiest problem you can't solve. The more embarrassing, the better!"
Slides: drive.google.com/file/d/15VaT...
TCS For all: sigact.org/tcsforall/
It's located near the west entrance to the west side of the conference center, on the first floor, in case that helps!
When a bunch of diffusers sit down and talk shop, their flow cannot be matched
It's time for the #NeurIPS2024 diffusion circle!
Join us at 3PM on Friday, December 13. We'll meet near this thing, and venture out from there to find a good spot to sit. Tell your friends!