I've added the ply export, hopefully it works!
We haven't been running it without viz apart from evaluation, so we didn't think about this too much haha.
I'll add ply export or something soon.
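For anyone who wants something in the meantime, here's a minimal sketch of dumping a point cloud to an ASCII PLY file. This is just an illustration, not the repo's actual export code; the `export_ply` name and the points/colors array layout are assumptions.

```python
import numpy as np

def export_ply(path, points, colors=None):
    """Write an N x 3 point cloud (optionally with N x 3 uint8 RGB colors)
    to an ASCII PLY file readable by MeshLab, Open3D, etc."""
    points = np.asarray(points, dtype=np.float32)
    header = [
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
    ]
    if colors is not None:
        colors = np.asarray(colors, dtype=np.uint8)
        header += ["property uchar red", "property uchar green", "property uchar blue"]
    header.append("end_header")
    with open(path, "w") as f:
        f.write("\n".join(header) + "\n")
        for i, p in enumerate(points):
            line = f"{p[0]} {p[1]} {p[2]}"
            if colors is not None:
                c = colors[i]
                line += f" {c[0]} {c[1]} {c[2]}"
            f.write(line + "\n")
```

Running it headless, you'd call this once at the end with the accumulated pointmap and per-point colours.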
We've had fun testing the limits of MASt3R-SLAM on in-the-wild videos. Here's the drone video of a Minnesota bowling alley that we've always wanted to reconstruct! Different scene scales, dynamic objects, specular surfaces, and fast motion.
MASt3R-SLAM code release!
github.com/rmurai0610/M...
Try it out on videos or with a live camera
Work with
@ericdexheimer.bsky.social*,
@ajdavison.bsky.social (*Equal Contribution)
Thank you! And yes, same as pointmap matching (Fig. 2, theta): it's minimising alpha, not beta (since rays are normalised, we can use Eq. 3).
Thank you, and thanks for catching the typo!
Thank you! Wouldn't have been possible without MASt3R/MASt3R-SfM.
This new paradigm has been inspiring!
Thanks! We're planning on releasing the code early next year
For more please visit:
Website: edexheim.github.io/mast3r-slam/
Video: youtu.be/wozt71NBFTQ
For robustness, MASt3R-SLAM performs relocalisation allowing it to handle the kidnapped robot problem.
As a purely monocular SLAM system, it loses track when the camera's view is obstructed, but as soon as the view is unblocked, it immediately relocalises and resumes mapping.
We use MASt3R's two-view prior as our only network with no fine-tuning.
By leveraging this 3D prior and making minimal assumptions on the camera model, we can handle dynamically changing zoom.
Efficient test-time optimisation and loop closure enable large-scale consistency.
Introducing MASt3R-SLAM, the first real-time monocular dense SLAM with MASt3R as a foundation.
Easy to use like DUSt3R/MASt3R: from an uncalibrated RGB video, it recovers accurate, globally consistent poses & a dense map.
With @ericdexheimer.bsky.social* @ajdavison.bsky.social (*Equal Contribution)