
Marcus Klasson

@marcusklasson

Perception Researcher at Ericsson, Sweden. https://marcusklasson.github.io/

212 Followers · 211 Following · 10 Posts · Joined 04.12.2024

Latest posts by Marcus Klasson @marcusklasson

Martin Trapp - Assistant Professor in Machine Learning at KTH Royal Institute of Technology.

Want to work on Trustworthy AI? πŸš€

I'm seeking exceptional candidates to apply for the Digital Futures Postdoctoral Fellowship to work with me on Uncertainty Quantification, Bayesian Deep Learning, and Reliability of ML Systems.

The position will be co-advised by Hossein Azizpour or Henrik BostrΓΆm.

02.10.2025 14:46 πŸ‘ 11 πŸ” 4 πŸ’¬ 1 πŸ“Œ 0
DeSplat: Decomposed Gaussian Splatting for Distractor-Free Rendering Gaussian splatting enables fast novel view synthesis in static 3D environments. However, reconstructing real-world environments remains challenging as distractors or occluders break the multi-view con...

Paper, videos, and code (nerfstudio) are available!
πŸ“„ arxiv.org/abs/2411.19756
🎈 aaltoml.github.io/desplat/

Big ups to Yihao Wang, @maturk.bsky.social, Shuzhe Wang, Juho Kannala, and @arnosolin.bsky.social for making this possible during my time at @aalto.fi πŸ’™πŸ€

#AaltoUniversity #CVPR2025
[8/8]

13.06.2025 08:04 πŸ‘ 3 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

DeSplat has the same FPS and training time as vanilla 3DGS, with some additional overhead for storing the distractor Gaussians. Extending it with MLPs or other models is also possible. Adapting DeSplat to video remains to be explored, as distractors that barely move across frames can be mistaken for static content. [7/8]

13.06.2025 08:01 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

This decomposed splatting (DeSplat) approach explicitly separates distractors from static parts. Earlier methods (e.g. SpotlessSplats, WildGaussians) use loss masking of detected distractors to avoid overfitting, while DeSplat instead jointly reconstructs distractor elements.
[6/8]

13.06.2025 07:59 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
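Not from the paper's code, but a minimal numpy sketch of the contrast in the post above: loss masking drops detected distractor pixels from the photometric loss, while joint reconstruction asks the composited render (static plus per-view distractor layer) to explain every pixel. Function names and the L1 choice are illustrative assumptions.

```python
import numpy as np

def masked_l1(render, gt, distractor_mask):
    """Loss masking (SpotlessSplats/WildGaussians style): pixels flagged
    as distractors are excluded, so the static model never has to
    explain them."""
    keep = 1.0 - distractor_mask
    return float((keep * np.abs(render - gt)).sum() / max(keep.sum(), 1.0))

def joint_l1(composite_render, gt):
    """Joint reconstruction (DeSplat-style): the composited render must
    account for all pixels, including the distractor ones."""
    return float(np.abs(composite_render - gt).mean())

# toy 2x2 frame: a distractor darkens the bottom-right pixel
gt = np.array([[1.0, 1.0],
               [1.0, 0.0]])
static_render = np.ones((2, 2))          # static model predicts the clean scene
mask = np.array([[0.0, 0.0],
                 [0.0, 1.0]])            # detector flags the distractor pixel
loss = masked_l1(static_render, gt, mask)  # distractor pixel is ignored
```

With masking, the static model pays no penalty at the flagged pixel; with joint reconstruction, a distractor layer has to reproduce it instead.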

Knowing how 3DGS treats distractors, we initialize a set of Gaussians close to every camera view for reconstructing view-specific distractors. The Gaussians initialized from the point cloud should reconstruct static stuff. These separately rendered images are alpha-blended during training.

[5/8]

13.06.2025 07:58 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
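A minimal numpy sketch of the alpha-blending step described above, assuming a standard front-to-back "over" composite with the per-view distractor layer in front (matching the observation that view-specific content sits close to the camera). The compositing order and function name are assumptions for illustration.

```python
import numpy as np

def composite(static_rgb, static_alpha, distractor_rgb, distractor_alpha):
    """Alpha-blend the per-view distractor render over the static render.

    rgb inputs are HxWx3, alpha inputs HxWx1, all in [0, 1]. The
    distractor layer occludes the static layer where its alpha is high.
    """
    rgb = distractor_alpha * distractor_rgb \
        + (1.0 - distractor_alpha) * static_alpha * static_rgb
    alpha = distractor_alpha + (1.0 - distractor_alpha) * static_alpha
    return rgb, alpha

# toy 2x2 example: half-transparent red distractor over an opaque green scene
H, W = 2, 2
static_rgb = np.tile(np.array([0.0, 1.0, 0.0]), (H, W, 1))
static_alpha = np.ones((H, W, 1))
dist_rgb = np.tile(np.array([1.0, 0.0, 0.0]), (H, W, 1))
dist_alpha = np.full((H, W, 1), 0.5)

rgb, alpha = composite(static_rgb, static_alpha, dist_rgb, dist_alpha)
# each pixel blends to [0.5, 0.5, 0.0] with full coverage (alpha 1.0)
```

During training the composite is compared against the ground-truth image, so gradients flow to whichever layer best explains each pixel.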

In a viewer, you can see that these spurious artefacts are thin and located close to the camera. For the scene-overfitting approach in 3DGS, this makes sense: an object appearing in only one view must be placed so close to that camera that no other camera view can see it.

[4/8]

13.06.2025 07:56 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

This BabyYoda scene from RobustNeRF is similar to a crowdsourced scenario, where a set of static toys appear together with inconsistently-placed toys between the frames.

Vanilla 3DGS is quite robust here, but some views end up being rendered with spurious artefacts (right image).
[3/8]

13.06.2025 07:55 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Our goal is to learn a scene representation from images that include non-static objects we refer to as distractors. An example is crowdsourced images where different people appear at different locations in the scene, which creates multi-view inconsistencies between the frames.
[2/8]

13.06.2025 07:53 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

πŸ‘‹Interested in Gaussian splatting and removing dynamic content from images?

Our DeSplat is presented today at #CVPR2025 at Poster Session 1, ExHall D Poster #52.

Yihao will be there to present our fully splatting-based method for separating static and dynamic stuff in images.

🧡[1/8]

13.06.2025 07:52 πŸ‘ 6 πŸ” 1 πŸ’¬ 1 πŸ“Œ 1

You woke up early in the morning jet-lagged and are having a hard time deciding on a workshop today @cvprconference.bsky.social ?

Here's a reliable choice for you: our workshop on πŸ›Ÿ Uncertainty Quantification for Computer Vision!

πŸ—“οΈ Day: Wed, Jun 11
πŸ“Room: 102 B
#CVPR2025 #UNCV2025

11.06.2025 11:33 πŸ‘ 9 πŸ” 3 πŸ’¬ 0 πŸ“Œ 0
KTH | Postdoc in robotics with specialization in visual domain adaptation

KTH is looking for a *Postdoc* to work on visual domain adaptation for mobile robot perception in a joint project with Ericsson in Stockholm.

Apply by May 15 if you are interested in working with computer vision applied to real robots!

More info: www.kth.se/lediga-jobb/...

23.04.2025 10:08 πŸ‘ 2 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0
UNCV Workshop @ CVPR 2025 CVPR 2025 Workshop on Uncertainty Quantification for Computer Vision.

Submission deadline is extended to March 20 for submitting your paper to our #CVPR2025 workshop on Uncertainty Quantification for Computer Vision.

Looking forward to seeing your submissions on recognizing failure scenarios and enabling robust vision systems!

More info: uncertainty-cv.github.io/2025/

17.03.2025 17:29 πŸ‘ 11 πŸ” 5 πŸ’¬ 0 πŸ“Œ 0

There is still time to submit your papers to our #CVPR2025 workshop on Uncertainty Quantification for Computer Vision, which is part of the workshop lineup at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) in Nashville, Tennessee.

08.03.2025 15:08 πŸ‘ 13 πŸ” 6 πŸ’¬ 2 πŸ“Œ 2

Our Workshop on Uncertainty Quantification for Computer Vision goes to @cvprconference.bsky.social this year!
We have a super line-up of speakers and a call for papers.
This is a chance for your paper to shine at #CVPR2025

⏲️ Submission deadline: 14 March
πŸ’» Page: uncertainty-cv.github.io/2025/

28.02.2025 07:28 πŸ‘ 33 πŸ” 7 πŸ’¬ 0 πŸ“Œ 0

I will present ✌️ BDU workshop papers @ NeurIPS: one by Rui Li (looking for internships) and one by Anton Baumann.

πŸ”— to extended versions:

1. πŸ™‹ "How can we make predictions in BDL efficiently?" πŸ‘‰ arxiv.org/abs/2411.18425

2. πŸ™‹ "How can we do prob. active learning in VLMs" πŸ‘‰ arxiv.org/abs/2412.06014

10.12.2024 15:18 πŸ‘ 18 πŸ” 4 πŸ’¬ 1 πŸ“Œ 1