Martin Trapp - Assistant Professor in Machine Learning at KTH Royal Institute of Technology.
Want to work on Trustworthy AI?
I'm seeking exceptional candidates to apply for the Digital Futures Postdoctoral Fellowship to work with me on Uncertainty Quantification, Bayesian Deep Learning, and Reliability of ML Systems.
The position will be co-advised by Hossein Azizpour or Henrik Boström.
02.10.2025 14:46
DeSplat: Decomposed Gaussian Splatting for Distractor-Free Rendering
Gaussian splatting enables fast novel view synthesis in static 3D environments. However, reconstructing real-world environments remains challenging as distractors or occluders break the multi-view con...
Paper, videos, and code (nerfstudio) are available!
arxiv.org/abs/2411.19756
aaltoml.github.io/desplat/
Big ups to Yihao Wang, @maturk.bsky.social, Shuzhe Wang, Juho Kannala, and @arnosolin.bsky.social for making this possible during my time at @aalto.fi!
#AaltoUniversity #CVPR2025
[8/8]
13.06.2025 08:04
DeSplat has the same FPS and training time as vanilla 3DGS, with some additional overhead for storing distractor Gaussians. Extending it with MLPs or other models is also possible. Adapting DeSplat to video remains to be explored, as distractors that barely move across images can be mistaken for static content. [7/8]
13.06.2025 08:01
This decomposed splatting (DeSplat) approach explicitly separates distractors from static parts. Earlier methods (e.g. SpotlessSplats, WildGaussians) use loss masking of detected distractors to avoid overfitting, while DeSplat instead jointly reconstructs distractor elements.
[6/8]
13.06.2025 07:59
Knowing how 3DGS treats distractors, we initialize a set of Gaussians close to every camera view for reconstructing view-specific distractors. The Gaussians initialized from the point cloud should reconstruct static stuff. These separately rendered images are alpha-blended during training.
[5/8]
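The compositing step above can be sketched with standard alpha compositing. This is a minimal illustrative sketch, assuming premultiplied colors; the function and variable names are my own, not from the DeSplat codebase. The view-specific distractor render is placed over the static render with the Porter-Duff "over" operator:

```python
import numpy as np

def over(fg_rgb, fg_alpha, bg_rgb, bg_alpha):
    # Porter-Duff "over": foreground (distractor render) composited on top
    # of the background (static render), with premultiplied alpha.
    rgb = fg_rgb + (1.0 - fg_alpha) * bg_rgb
    alpha = fg_alpha + (1.0 - fg_alpha) * bg_alpha
    return rgb, alpha

# Toy 1-pixel "renders": a half-opaque red distractor over an opaque green scene.
fg = np.array([0.5, 0.0, 0.0]); fa = 0.5   # premultiplied red, alpha 0.5
bg = np.array([0.0, 1.0, 0.0]); ba = 1.0   # opaque green static render
rgb, alpha = over(fg, fa, bg, ba)
# rgb -> [0.5, 0.5, 0.0], alpha -> 1.0
```

During training, the composited image is compared against the input photo, so the distractor Gaussians are free to absorb view-specific content while the static Gaussians explain what is consistent across views.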
13.06.2025 07:58
In a viewer, you can see that these spurious artefacts are thin and located close to the camera. For the scene-overfitting approach of 3DGS this makes sense: an object appearing in only one view must be placed close enough to that camera that no other camera view can see it.
[4/8]
13.06.2025 07:56
This BabyYoda scene from RobustNeRF is similar to a crowdsourced scenario: a set of static toys appears together with toys that are placed inconsistently between frames.
Vanilla 3DGS is quite robust here, but some views end up being rendered with spurious artefacts (right image).
[3/8]
13.06.2025 07:55
Our goal is to learn a scene representation from images that include non-static objects, which we refer to as distractors. An example is crowdsourced images in which different people appear at different locations in the scene, creating multi-view inconsistencies between frames.
[2/8]
13.06.2025 07:53
Interested in Gaussian splatting and removing dynamic content from images?
Our DeSplat is presented today at #CVPR2025 at Poster Session 1, ExHall D Poster #52.
Yihao will be there to present our fully splatting-based method for separating static and dynamic stuff in images.
🧵 [1/8]
13.06.2025 07:52
You woke up early, jet-lagged, and are having a hard time deciding on a workshop today at @cvprconference.bsky.social?
Here's a reliable choice for you: our workshop on Uncertainty Quantification for Computer Vision!
Day: Wed, Jun 11
Room: 102 B
#CVPR2025 #UNCV2025
11.06.2025 11:33
KTH | Postdoc in robotics with specialization in visual domain adaptation
KTH jobs is where you search for jobs at www.kth.se.
KTH is looking for a *Postdoc* to work on visual domain adaptation for mobile robot perception in a joint project with Ericsson in Stockholm.
Apply by May 15 if you are interested in working with computer vision applied to real robots!
More info: www.kth.se/lediga-jobb/...
23.04.2025 10:08
UNCV Workshop @ CVPR 2025
CVPR 2025 Workshop on Uncertainty Quantification for Computer Vision.
Submission deadline is extended to March 20 for submitting your paper to our #CVPR2025 workshop on Uncertainty Quantification for Computer Vision.
Looking forward to seeing your submissions on recognizing failure scenarios and enabling robust vision systems!
More info: uncertainty-cv.github.io/2025/
17.03.2025 17:29
There is still time to submit your papers to our #CVPR2025 workshop on Uncertainty Quantification for Computer Vision, which is part of the workshop lineup at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) in Nashville, Tennessee.
08.03.2025 15:08
Our Workshop on Uncertainty Quantification for Computer Vision goes to @cvprconference.bsky.social this year!
We have a super line-up of speakers and a call for papers.
This is a chance for your paper to shine at #CVPR2025
Submission deadline: 14 March
Page: uncertainty-cv.github.io/2025/
28.02.2025 07:28
I will present two BDU workshop papers @ NeurIPS: one by Rui Li (looking for internships) and one by Anton Baumann.
Links to extended versions:
1. "How can we make predictions in BDL efficiently?" arxiv.org/abs/2411.18425
2. "How can we do probabilistic active learning in VLMs?" arxiv.org/abs/2412.06014
10.12.2024 15:18