
Antoine Guédon

@antoine-guedon

Postdoctoral researcher in computer vision at Ecole polytechnique. I'm interested in 3D Reconstruction, Radiance Fields, Gaussian splatting, 3D Scene Rendering, 3D Scene Understanding, etc. Webpage: https://anttwo.github.io/

503 Followers · 379 Following · 35 Posts · Joined 25.11.2024

Latest posts by Antoine Guédon @antoine-guedon

I’ll be at #SIGGRAPHAsia2025 next week presenting our paper MILo! Join the Neural Fields and Surface Reconstruction session on Tuesday, December 16.

If you’ll be in Hong Kong and would like to discuss research or grab a coffee ☕️, feel free to reach out.

14.12.2025 06:37 👍 7 🔁 3 💬 1 📌 0

We introduce MIRO: a new paradigm for T2I model alignment integrating reward conditioning into pretraining, eliminating the need for separate fine-tuning/RL stages. This single-stage approach offers unprecedented efficiency and control.

- 19x faster convergence ⚡
- 370x fewer FLOPs than FLUX-dev 📉

31.10.2025 11:24 👍 61 🔁 14 💬 3 📌 5
2025 ICCV Program Committee

Familiar names among #ICCV2025 Outstanding Reviewers from our team 😇
Antoine Guédon @antoine-guedon.bsky.social
Sinisa Stekovic
Renaud Marlet
👏
@iccv.bsky.social
iccv.thecvf.com/Conferences/...

04.10.2025 15:12 👍 22 🔁 5 💬 0 📌 0
MILo Project page for MILo: Mesh-In-the-Loop Gaussian Splatting for Detailed and Efficient Surface Reconstruction

11/11 📚 Resources:
📄 Paper: arxiv.org/abs/2506.24096
💻 Code: github.com/Anttwo/MILo
🌐 Project Page: anttwo.github.io/milo/

Huge thanks to my amazing co-authors and the supporting institutions! 🙏

08.09.2025 11:35 👍 1 🔁 0 💬 0 📌 0
MILo: Mesh-In-the-Loop Gaussian Splatting for Detailed and Efficient Surface Reconstruction (YouTube video by Antoine Guédon)

10/n📺Video:
See MILo in action!
Our presentation video showcases the differentiable pipeline and reconstruction results across various scenes.

🔗 YouTube video: www.youtube.com/watch?v=rOBs...

08.09.2025 11:35 👍 1 🔁 0 💬 1 📌 0

9/n🎮Interactive Gallery:
Check out our interactive, online 3D viewer with both mesh and Gaussian representations!

🔗 Gallery: anttwo.github.io/milo/#galler...

08.09.2025 11:35 👍 0 🔁 0 💬 1 📌 0
[Image comparison: without vs. with the depth order loss]

8/n📈Optional depth-order regularization:
For even cleaner backgrounds, we propose an optional loss using DepthAnythingV2 that enforces depth ordering consistency.

This drastically improves background geometry quality!
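One way such a pairwise depth-order penalty could look, as a minimal NumPy sketch (hypothetical function name and sampling scheme — the actual MILo loss is defined in the paper and code): only the *ordering* of the monocular depths is used, so the estimates may be multi-view inconsistent.

```python
import numpy as np

def depth_order_loss(rendered, mono, pairs, margin=0.0):
    """Hinge penalty when the rendered depth ordering of a pixel pair
    disagrees with the ordering given by a monocular depth estimate."""
    i, j = pairs[:, 0], pairs[:, 1]
    # +1 if the monocular estimate says pixel i is farther than pixel j
    sign = np.sign(mono[i] - mono[j])
    # penalize rendered pairs whose ordering contradicts that sign
    violation = np.maximum(0.0, margin - sign * (rendered[i] - rendered[j]))
    return violation.mean()
```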

08.09.2025 11:35 👍 0 🔁 0 💬 1 📌 0

7/n🎨Animation & Editing:
Since Gaussians align with the extracted mesh surface, any mesh modification can easily be propagated to the Gaussians!

We include in the code a Blender addon for easy editing and animation - no coding required.
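A toy sketch of how mesh edits can carry Gaussians along (hypothetical helper names; MILo's actual binding is in the released code): store each Gaussian's barycentric coordinates in its triangle, then re-evaluate them on the deformed triangle.

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of point p in triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - v - w, v, w])

def propagate(center, tri_before, tri_after):
    """Move a Gaussian center with its triangle: compute barycentric
    coordinates in the undeformed triangle, re-evaluate on the
    deformed one."""
    bary = barycentric(center, *tri_before)
    return bary @ np.stack(tri_after)
```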

08.09.2025 11:35 👍 0 🔁 0 💬 1 📌 0

6/n🔧Plug-and-play design:
MILo can be integrated into any Gaussian Splatting pipeline!

We provide simple differentiable functions that take Gaussian parameters as input and return meshes.

Perfect for adding differentiable surface processing to your 3DGS projects!
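As a toy illustration of what "differentiable surface processing" on the returned mesh might look like (hypothetical regularizer, not part of MILo): any loss computed on the mesh vertices can be backpropagated to the Gaussian parameters through the differentiable extraction.

```python
import numpy as np

def mesh_smoothness(vertices, edges):
    """Toy mesh-based regularizer: mean squared edge length. In a real
    3DGS pipeline this would be computed on the mesh returned by a
    differentiable gaussians-to-mesh function, so gradients flow back
    to the Gaussian parameters."""
    d = vertices[edges[:, 0]] - vertices[edges[:, 1]]
    return (d ** 2).sum(axis=1).mean()
```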

08.09.2025 11:35 👍 0 🔁 0 💬 1 📌 0

5/n🎯Scalability advantage:
MILo reconstructs full scenes including all background elements, not just foregrounds.

To achieve this efficiency, we select only surface-likely Gaussians by repurposing the importance sampling from Mini-Splatting2.
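The selection step amounts to keeping only the highest-scoring Gaussians under an importance measure (a minimal sketch with a hypothetical function name; the actual scores come from Mini-Splatting2's importance sampling):

```python
import numpy as np

def select_surface_gaussians(importance, keep_ratio=0.3):
    """Keep only the Gaussians with the highest importance scores,
    e.g. accumulated alpha-blending weights over the training views."""
    k = max(1, int(len(importance) * keep_ratio))
    order = np.argsort(importance)[::-1]  # descending by importance
    return np.sort(order[:k])             # indices of kept Gaussians
```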

08.09.2025 11:35 👍 1 🔁 0 💬 1 📌 0

4/n📊Results:
✅ Higher quality meshes with significantly fewer vertices
✅ 60-350MB mesh sizes (vs GBs in other methods)
✅ Complete scene reconstruction (including backgrounds)
✅ Better performance on benchmarks

Efficiency meets quality!

08.09.2025 11:35 👍 0 🔁 0 💬 1 📌 0

3/n🏗️How MILo works:
1️⃣ Each Gaussian spawns pivots
2️⃣ Delaunay triangulation connects pivots
3️⃣ SDF values assigned to pivots
4️⃣ Differentiable Marching Tetrahedra extracts mesh

The pipeline is differentiable, enabling mesh supervision to improve Gaussian configurations!
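Steps 1️⃣ and 4️⃣ could be sketched roughly as follows (a toy NumPy illustration with hypothetical names, not the actual MILo implementation; Delaunay connectivity and SDF assignment are omitted):

```python
import numpy as np

def spawn_pivots(mean, scale, rotation):
    """Step 1: spawn pivot points for one Gaussian -- its center plus
    offsets along its rotated, scaled principal axes (7 pivots here)."""
    offsets = rotation @ np.diag(scale)  # columns = scaled principal axes
    return np.concatenate([mean[None], mean + offsets.T, mean - offsets.T])

def marching_tet_vertex(p0, p1, s0, s1):
    """Step 4 (core op): place a mesh vertex at the SDF zero crossing
    along a tetrahedron edge by linear interpolation. This is
    differentiable w.r.t. both pivot positions and SDF values."""
    t = s0 / (s0 - s1)
    return (1 - t) * p0 + t * p1
```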

08.09.2025 11:35 👍 0 🔁 0 💬 1 📌 0

2/n🔗Key innovation: differentiable mesh extraction at every training iteration

Unlike previous methods, MILo extracts vertex locations and connectivity purely from Gaussian parameters, allowing gradient flow from mesh back to Gaussians. This creates a powerful feedback loop!

08.09.2025 11:35 👍 1 🔁 0 💬 1 📌 0

1/n🚀Gaussians → Differentiable function → Mesh?
Check out our new work: MILo: Mesh-In-the-Loop Gaussian Splatting!

🎉Accepted to SIGGRAPH Asia 2025 (TOG)
MILo is a novel differentiable framework that extracts meshes directly from Gaussian parameters during training.

🧵👇

08.09.2025 11:35 👍 23 🔁 7 💬 3 📌 1

I'm at #CVPR2025 to present our paper 🍵MAtCha Gaussians🍵, today Friday afternoon, Hall D, Poster 53!

If you're in Nashville and want to discuss detailed 3D mesh reconstruction from sparse or dense RGB images, let's connect!

@kyotovision.bsky.social

13.06.2025 13:46 👍 8 🔁 3 💬 0 📌 0

Behind every great conference is a team of dedicated reviewers. Congratulations to this year’s #CVPR2025 Outstanding Reviewers!

cvpr.thecvf.com/Conferences/...

10.05.2025 13:59 👍 48 🔁 12 💬 0 📌 19

#CVPR2025 Fri June 13 (PM) ✨ Highlight
🍵 MAtCha Gaussians: Atlas of Charts for High-Quality Geometry and Photorealism From Sparse Views
@antoine-guedon.bsky.social @kyotovision.bsky.social
📄 pdf: arxiv.org/abs/2412.06767
🌐 webpage: anttwo.github.io/matcha/

30.04.2025 13:04 👍 9 🔁 2 💬 1 📌 0

I actually saw him dancing on a bench 😱
anttwo.github.io/frosting/

03.04.2025 15:58 👍 2 🔁 0 💬 0 📌 0

And the fact that this pipeline makes it possible to get sharp meshes from sparse unposed images means two things:

1. MASt3R-SfM is so good, it's crazy... I love it.

2. The regularization we introduce seems to really help the representation stabilize, even though the constraints are very sparse.

03.04.2025 14:13 👍 1 🔁 0 💬 0 📌 0

You're entirely right!
And actually MASt3R-SfM does the tougher part of the job, clearly 😁
I just meant that both can be used in a unified pipeline for getting sharp meshes from unposed images.

03.04.2025 14:13 👍 2 🔁 0 💬 1 📌 0
MAtCha Project page for MAtCha Gaussians: Atlas of Charts for High-Quality Geometry and Photorealism From Sparse Views

This work was done in collaboration with Tomoki Ichikawa, Kohei Yamashita and Professor Ko Nishino from @kyotovision.bsky.social as well as @imagineenpc.bsky.social

🌐Website: anttwo.github.io/matcha/

💻Code: github.com/Anttwo/MAtCha

03.04.2025 10:33 👍 0 🔁 0 💬 0 📌 0

🔑 Key point #2: Inspired by Gaussian Opacity Fields, we developed a new mesh extraction method for 2DGS.

It properly handles both foreground and background geometry while being lightweight if needed (only 150-350MB).

No post-processing mesh decimation is required!

03.04.2025 10:33 👍 0 🔁 0 💬 1 📌 0

MAtCha introduces a novel surface representation that reconstructs high-quality 3D meshes with photorealistic rendering from just a handful of images.

💡Our key idea: model scene geometry as an Atlas of Charts and refine it with 2D Gaussian surfels.
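A chart, in this sense, is just a mapping from a 2D parameter grid to a 3D surface patch; an atlas is a collection of such patches covering the scene. A toy stand-in (a fixed height field instead of MAtCha's learned per-image charts; all names here are illustrative):

```python
import numpy as np

def chart(uv, height):
    """One chart: maps 2D parameters (u, v) to 3D points on a surface
    patch. An atlas is a set of such charts covering the scene."""
    u, v = uv[:, 0], uv[:, 1]
    return np.stack([u, v, height(u, v)], axis=1)

# a toy atlas with a single paraboloid patch sampled on a 4x4 grid
u, v = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4))
uv = np.stack([u.ravel(), v.ravel()], axis=1)
points = chart(uv, lambda u, v: u**2 + v**2)
```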

03.04.2025 10:33 👍 0 🔁 0 💬 1 📌 0

💻We've released the code for our #CVPR2025 paper MAtCha!

🍵MAtCha reconstructs sharp, accurate and scalable meshes of both foreground AND background from just a few unposed images (e.g., 3 to 10 images)...

...While also working with dense-view datasets (hundreds of images)!

03.04.2025 10:33 👍 39 🔁 16 💬 4 📌 1

🔑 Key point #3: We also introduce a novel “depth-order” regularization that leverages depth maps estimated with a monodepth estimator.

The depth maps can be multi-view inconsistent, no problem!

MAtCha still gets smooth, detailed background while preserving foreground details.

03.04.2025 10:33 👍 1 🔁 0 💬 1 📌 0

🔑 Key point #1: Our novel optimization pipeline is robust to sparse-view inputs (as few as 3 to 10 images) but also scales to dense-view scenarios (hundreds of views).

No more choosing between sparse or dense methods!

03.04.2025 10:33 👍 0 🔁 0 💬 1 📌 0

🔥🔥🔥 CV Folks, I have some news! We're organizing a 1-day meeting in central Paris on June 6th before CVPR, called CVPR@Paris (similar to NeurIPS@Paris) 🥐🍾🥖🍷

Registration is open (it's free) with priority given to authors of accepted papers: cvprinparis.github.io/CVPR2025InPa...

Big 🧵👇 with details!

21.03.2025 06:43 👍 136 🔁 51 💬 7 📌 11

Starter pack including some of the lab members: go.bsky.app/QK8j87w

14.03.2025 10:34 👍 24 🔁 11 💬 0 📌 1

1/13 🐊 Introducing our latest work on improving relative camera pose regression with a novel pre-training approach, Alligat0R (arxiv.org/abs/2503.07561)!
@gbourmaud.bsky.social @vincentlepetit.bsky.social

11.03.2025 10:51 👍 20 🔁 5 💬 4 📌 0

MASt3R-SfM is really an awesome follow-up to DUSt3R!
What I find truly impressive is how well it performs with (1) sparse views as well as (2) captures with no camera translations but only camera rotations (this is unthinkable with COLMAP).

20.12.2024 19:35 👍 6 🔁 0 💬 0 📌 0