Another year, another Thesis Fast Forward, fantastic set of work! Thanks Ruben for keeping it going!
@vdeschaintre
Doing research at Adobe in Computer Graphics/Vision/ML on appearance & content authoring and generation. I also like photography and baking, but I try to keep it under control! https://valentin.deschaintre.fr
The #SIGGRAPH Thesis Fast Forward 2026 is here!
Learn about the future of CG in 9 PhD theses: from Monte Carlo PDE solvers and photorealistic 3D avatars to fantasy-based rehab games and modular umbrella meshes.
youtu.be/FlrTZEJeEXs
I asked my husband if there is a French equivalent of "sticks and stones will break my bones" and he gave me "the toad's drool doesn't reach the white dove" and "the train of your insults is rolling on the rails of my indifference" and now I would like both of these on a t-shirt
The implementation of our paper "An Evaluation of SVBRDF Prediction from Generative Image Models for Appearance Modeling of 3D Scenes" is now available at:
github.com/graphdeco-in...
Project Page:
repo-sam.inria.fr/nerphys/svbr...
Julia Guerrero Viu @juliagviu.bsky.social, a researcher from Aragon, receives the SCIE-Zonta-Sngular 2025 award, which promotes the visibility of women in computer science: https://shre.ink/oeUz
Congratulations! Proud of our #genteunizar
I was at #AdobeMAX this week to present #projectSurfaceSwap, our surface selection and replacement tech!
Check it out here: youtu.be/Xg4n60hYfhA?...
What a blast the Sneaks were!
The #SIGGRAPH Thesis Fast Forward 2026 submissions are now open! Deadline: Nov 7th 2025. Let the world know what you've been working on in your PhD! More info at research.siggraph.org/thesisff/ and check out last year's iteration youtube.com/watch?v=8esF...
Who's Adam?
Sneak peek into the live event. That's the most important part of the FF; the other 2.5 hours pale in comparison.
Can we meet for beer/coffee and talk about field trends, open scientific questions, the future of our species (this may get mildly depressing), and the meaning of life instead?
If you're confused, I uploaded the presentation as well www.youtube.com/watch?v=jv7o...
Our fast forward for SIGGRAPH 2025. Keep SIGGRAPH weird! www.youtube.com/watch?v=sYel...
I will be presenting in the 10:30 session in room 208-209. I will share thoughts about implicit/generative and explicit appearance representations. See you there if that sounds interesting!
Come check out what the newest generation has been up to!
It's in Vancouver ;)
I have the pleasure to present at the Best of Eurographics session at #SIGGRAPH2025 next week! I'll be looking back at our work on appearance authoring, editing & understanding. Come by and say hi! s2025.conference-schedule.org/presentation...
Mark your calendars for the #SIGGRAPH 2025 Thesis Fast Forward in-person event. Recent PhD graduates will present their thesis work and answer questions on Monday 11 August, 10:30-11:30am: s2025.conference-schedule.org/presentation...
Folks in the #SIGGRAPH community:
You may or may not be aware of the controversy around the next #SIGGRAPHAsia location, summarized here www.cs.toronto.edu/~jacobson/we...
If you're concerned consider signing this letter docs.google.com/document/d/1...
via this form
docs.google.com/forms/d/e/1F...
Attending @cvprconference.bsky.social and looking for a PhD or postdoc position in the area of 3d reconstruction (Gaussian splatting, nerfs, scene understanding, etc.)? Find me or drop me an email ;)
Editing appearance, geometry, lighting with precision is easy with a 3D scene representation. But it's so much more difficult with just an image or photograph.
Enter IntrinsicEdit: Precise generative image manipulation in intrinsic space (SIGGRAPH 2025)!
intrinsic-edit.github.io
Image editing without paired data, delivering on the promise of RGB<->X! Edit in intrinsic space, get back your image with the desired modification!
The method is quite fun too, conditional token optimization and noise inversion.
I feel like "j'veux dire..." would be a pretty solid equivalent
Lots of ads for this in the Old Street tube stop. Who is this ad for? Is the goal negative buzz?
Thanks! And congrats on the paper award!
Thanks for diving in the crazy worlds of materials with me!
Thanks Ana, hope to see you in Vancouver?
Thanks Vova!
Brilliant insights from @michael-j-black.bsky.social on the importance of data and 3D+ for 4D foundation models that understand humans, and the future of embodied intelligence in the last keynote talk of #Eurographics2025!
See you next year in Aachen :)
Awesome look into the future of humanoid robots and what we can learn from character animation from Karen Liu's keynote at #Eurographics2025!
Amazing keynote by Alyosha Efros on the role of data in visual computing at #Eurographics2025!
Thought-provoking insights from generative models to 3D perception :)