SECOND CALL: SHREC'26 Challenge on 3D Reconstruction
Our dataset features intricate geometries, ideal for benchmarking high-frequency detail recovery.
All participants will co-author a joint paper submitted to Computers & Graphics.
Track details: shapevision.dcc.uchile.cl/cllull-shrec2026
27.02.2026 16:53
Likes: 0 · Reposts: 1 · Replies: 0 · Quotes: 0
#CVPR2026 reviews are slowly being dispatched over email. Good luck!
22.01.2026 18:33
Likes: 8 · Reposts: 1 · Replies: 0 · Quotes: 1
Screenshot of a paper discussion page titled “mHC: Manifold-Constrained Hyper-Connections”. At the top is a card showing the paper title, a “View on arXiv” link, and indicators for 7 posts and 7 researchers. Below are social-style posts referencing the paper: one from “NT 5.2 Pyongyang Official” linking to arXiv with the caption “THE WHALE IS BACK BABYYYY” and an arXiv preview image, and another from “Hacker News” linking to the same arXiv paper. The interface resembles a research discussion or social feed layout.
new year and @mariaa.bsky.social and I have some fun new things cooking for the atproto ecosystem...
01.01.2026 15:58
Likes: 116 · Reposts: 14 · Replies: 10 · Quotes: 4
Hey this worked! Import all of your old Twitter posts over to Bluesky (for a few bucks, depending on how irrepressible you were). Now I look like I've been posting on this platform longer than it has existed.
22.10.2024 13:42
Likes: 114 · Reposts: 24 · Replies: 11 · Quotes: 8
Choosing the right colormap is tricky: too often, colormaps hide subtle details or distort the data. Our new method transforms colormaps to boost local contrast and reveal just-noticeable differences, all while keeping the visualization perceptually accurate and accessible.
dl.acm.org/doi/10.1145/...
15.08.2025 15:44
Likes: 46 · Reposts: 9 · Replies: 1 · Quotes: 1
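As a rough illustration of the local-contrast idea (not the paper's perceptual method), remapping data through its empirical CDF before the colormap lookup spends the colormap's dynamic range where the data actually varies:

```python
import numpy as np

def equalize(values, bins=256):
    # Histogram-equalize scalar data so a colormap's colors are
    # spent where the data actually varies (illustrative only).
    hist, edges = np.histogram(values, bins=bins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                        # normalized CDF in (0, 1]
    return np.interp(values, edges[1:], cdf)

# A tight cluster plus two outliers: a linear colormap would waste
# almost its entire range on the empty gaps around the cluster.
data = np.concatenate([np.random.default_rng(0).normal(0.5, 0.01, 1000),
                       np.array([0.0, 1.0])])
eq = equalize(data)   # the cluster now spans most of [0, 1]
```

The real method additionally constrains the transform to stay perceptually accurate; this sketch only shows why reallocating the colormap's range helps.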
1/ Can open-data models beat DINOv2? Today we release Franca, a fully open-source vision foundation model. Franca with a ViT-G backbone matches (and often beats) proprietary models like SigLIPv2, CLIP, and DINOv2 on various benchmarks, setting a new standard for open-source research.
21.07.2025 14:47
Likes: 83 · Reposts: 21 · Replies: 2 · Quotes: 3
How can one reconstruct the complete 3D interior of a wood block using only photos of its surfaces? 🪵
At SIGGRAPH'25 (Thursday!), Maria Larsson will present *Mokume*: a dataset of 190 diverse wood samples and a pipeline that solves this inverse texturing challenge. 🧵
08.08.2025 11:53
Likes: 76 · Reposts: 15 · Replies: 2 · Quotes: 1
New 3D foundation model dropped.
Note: it seems they may have mixed up their image-matching metrics (reporting accuracy rather than AUC), but it should be at least as good as MASt3R.
24.07.2025 22:50
Likes: 11 · Reposts: 2 · Replies: 2 · Quotes: 0
Turns out that, by default, Hugging Face models run on the CPU...
20.07.2025 12:10
Likes: 1 · Reposts: 0 · Replies: 0 · Quotes: 1
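For anyone bitten by the same default, a minimal PyTorch sketch of the fix, using a stand-in `nn.Linear` where a `from_pretrained(...)` model would go:

```python
import torch
from torch import nn

# Models start on the CPU; both the weights and the inputs must be
# moved to the GPU explicitly. The same .to(device) pattern applies
# to a Hugging Face model loaded via AutoModel.from_pretrained(...).
device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Linear(8, 2).to(device)   # stand-in for a pretrained model
x = torch.randn(4, 8).to(device)     # inputs must be on the same device
out = model(x)                       # runs on the GPU if one is available
```

Forgetting either `.to(device)` call raises a device-mismatch error or silently keeps everything on the CPU.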
Awesome initiative!
This leaves me wondering, though: why do authors attending #EurIPS still have to register for the main #NeurIPS (in the Americas) for their paper to count as accepted?
You stopped so short of actually allowing ML researchers to fly less!
17.07.2025 14:12
Likes: 32 · Reposts: 5 · Replies: 5 · Quotes: 2
A meme where Anakin and Padme discuss the logic of allowing a NeurIPS event in Europe while forcing authors to also present in the US for publication
So far it doesn't look good: neurips.cc/FAQ/AuthorRe...
“At least one author of each accepted paper must register for the main conference. A ‘Virtual Only Pass’ is not sufficient.”
17.07.2025 07:32
Likes: 7 · Reposts: 2 · Replies: 1 · Quotes: 0
WeTransfer just changed their ToS to grant themselves permission to train AI on any content you transfer and to produce derivative works from that content, which they may monetize while you receive no payment.
Stop using WeTransfer.
14.07.2025 23:05
Likes: 7590 · Reposts: 5277 · Replies: 128 · Quotes: 463
The code for our #CVPR2025 paper, PRaDA: Projective Radial Distortion Averaging, is now out!
Turns out distortion calibration from multi-view 2D correspondences can be fully decoupled from 3D reconstruction, greatly simplifying the problem.
arxiv.org/abs/2504.16499
github.com/DaniilSinits...
09.07.2025 13:54
Likes: 12 · Reposts: 5 · Replies: 1 · Quotes: 0
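For context, a sketch of one standard radial distortion parametrization, the one-parameter division model (the paper's exact model may differ; `LAMBDA` is an assumed value for illustration):

```python
import numpy as np

LAMBDA = -0.2  # assumed distortion coefficient, for illustration

def undistort(pd, lam=LAMBDA):
    # Division model: p_u = p_d / (1 + lam * ||p_d||^2)
    r2 = np.sum(pd**2, axis=-1, keepdims=True)
    return pd / (1.0 + lam * r2)

def distort(pu, lam=LAMBDA):
    # Invert the division model by solving lam*ru*rd^2 - rd + ru = 0
    # for rd, using the root that is continuous at lam -> 0.
    ru = np.linalg.norm(pu, axis=-1, keepdims=True)
    rd = 2.0 * ru / (1.0 + np.sqrt(1.0 - 4.0 * lam * ru**2))
    scale = np.divide(rd, ru, out=np.ones_like(ru), where=ru > 1e-12)
    return pu * scale

pts = np.array([[0.3, 0.1], [0.0, 0.0], [-0.5, 0.4]])
roundtrip = undistort(distort(pts))   # recovers pts
```

Estimating `lam` purely from 2D correspondences, with no 3D points involved, is the decoupling the post refers to.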
We present “Feed-Forward SceneDINO for Unsupervised Semantic Scene Completion”. #ICCV2025
Project: visinf.github.io/scenedino/
Paper: arxiv.org/abs/2507.06230
Demo: huggingface.co/spaces/jev-a...
@jev-aleks.bsky.social @fwimbauer.bsky.social @olvrhhn.bsky.social @stefanroth.bsky.social @dcremers.bsky.social
09.07.2025 13:17
Likes: 24 · Reposts: 10 · Replies: 1 · Quotes: 1
We just released COLMAP v3.12, which adds long-awaited, end-to-end support for multi-camera rigs and 360° panoramas! COLMAP just got better at handling your robotics, AR/VR, or 360° data - try it yourself and let us know! github.com/colmap/colma... Kudos to Johannes & team for this great work!
01.07.2025 16:33
Likes: 22 · Reposts: 6 · Replies: 1 · Quotes: 0
Dense Match Summarization for Faster Two-view Estimation
Jonathan Astermark, Anders Heyden, Viktor Larsson
tl;dr: use clustering to reduce RANSAC time when using dense methods like RoMa.
Kudos for eval on WxBS.
P.S. now the same, but for BA?
arxiv.org/abs/2506.028...
24.06.2025 12:22
Likes: 12 · Reposts: 2 · Replies: 2 · Quotes: 1
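A rough sketch of the clustering idea (plain k-means over 4D correspondences; the paper's summarization scheme may differ): replace thousands of dense matches with a few weighted representatives before running RANSAC.

```python
import numpy as np

def summarize_matches(matches, k=64, iters=10, seed=0):
    # Summarize dense correspondences (N, 4) = [x1, y1, x2, y2] into
    # k cluster centers, with cluster sizes usable as RANSAC weights.
    rng = np.random.default_rng(seed)
    centers = matches[rng.choice(len(matches), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(matches[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)          # assign each match to a center
        for j in range(k):
            members = matches[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    counts = np.bincount(labels, minlength=k)
    return centers, counts

dense = np.random.default_rng(1).random((5000, 4))   # dummy dense matches
reps, weights = summarize_matches(dense)
# RANSAC now scores 64 summarized matches instead of 5000
```

Each RANSAC iteration then touches k points instead of N, which is where the speedup comes from.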
I'm excited to share our recent work: TwoSquared: 4D Reconstruction from 2D Image Pairs.
Our method produces geometrically consistent, texture-consistent, and physically plausible 4D reconstructions.
Check our project page: sangluisme.github.io/TwoSquared/
❤️ @ricmarin.bsky.social @dcremers.bsky.social
23.04.2025 16:48
Likes: 9 · Reposts: 3 · Replies: 0 · Quotes: 1
Can we match vision and language representations without any supervision or paired data?
Surprisingly, yes!
Our #CVPR2025 paper with @neekans.bsky.social and @dcremers.bsky.social shows that the pairwise distances in both modalities are often enough to find correspondences.
⬇️ 1/4
03.06.2025 09:27
Likes: 27 · Reposts: 12 · Replies: 1 · Quotes: 0
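A toy, brute-force illustration of the distance-only matching idea (not the paper's algorithm): if two modalities preserve pairwise distances, the correspondence is recoverable from the distance matrices alone, with no paired supervision.

```python
import numpy as np
from itertools import permutations

def match_by_distances(X, Y):
    # Brute-force search for the permutation p minimizing
    # || D_X - D_Y[p, p] ||_1, using only intra-modal distances.
    Dx = np.linalg.norm(X[:, None] - X[None], axis=-1)
    Dy = np.linalg.norm(Y[:, None] - Y[None], axis=-1)
    best, best_cost = None, np.inf
    for p in permutations(range(len(X))):
        cost = np.abs(Dx - Dy[np.ix_(p, p)]).sum()
        if cost < best_cost:
            best, best_cost = p, cost
    return best

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))           # "image" embeddings
perm = rng.permutation(6)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
Y = X[perm] @ Q                       # "text" side: rotated and shuffled
recovered = match_by_distances(X, Y)  # recovered[i]: row of Y matching X[i]
```

Real embeddings only approximately preserve distances and n is large, so the paper's setting needs an approximate solver rather than this factorial search.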
Can you train a model for pose estimation directly on casual videos without supervision?
Turns out you can!
In our #CVPR2025 paper AnyCam, we directly train on YouTube videos and achieve SOTA results by using an uncertainty-based flow loss and monocular priors!
⬇️
13.05.2025 08:11
Likes: 25 · Reposts: 10 · Replies: 1 · Quotes: 1
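A generic sketch of what an uncertainty-weighted flow loss can look like (a Laplace negative log-likelihood; AnyCam's exact objective may differ), written in NumPy for brevity:

```python
import numpy as np

def uncertainty_flow_loss(pred, target, log_b):
    # Laplace NLL: residuals are down-weighted where the predicted
    # uncertainty scale b = exp(log_b) is large; the +log_b term
    # keeps the network from inflating b everywhere.
    resid = np.abs(pred - target)
    return (resid / np.exp(log_b) + log_b).mean()

pred = np.zeros((2, 8, 8))            # predicted flow (dummy)
target = np.full((2, 8, 8), 2.0)      # observed flow (dummy)
confident = uncertainty_flow_loss(pred, target, np.zeros_like(pred))
hedged = uncertainty_flow_loss(pred, target, np.full_like(pred, 0.5))
# admitting uncertainty (log_b = 0.5) lowers the loss on bad residuals
```

This lets training tolerate flow errors from dynamic objects in casual videos instead of forcing the pose network to explain them.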
We also found that this allows the CTM to decide to spend less time thinking on simpler images, thus saving energy. When identifying a gorilla, for example, the CTMβs attention moves from eyes to nose to mouth in a pattern remarkably similar to human visual attention.
12.05.2025 02:42
Likes: 18 · Reposts: 2 · Replies: 1 · Quotes: 0
New paper at CVPR '25!
Can meshes capture fuzzy geometry? Volumetric Surfaces uses adaptive textured shells to model hair and fur without the splatting / volume overhead. It's fast, looks great, and runs in real time even on budget phones.
Project: autonomousvision.github.io/volsurfs/
Paper: arxiv.org/pdf/2409.02482
05.05.2025 13:00
Likes: 32 · Reposts: 21 · Replies: 1 · Quotes: 1
ZurichCV #9 | ZurichAI
The next ZurichCV meetup is on the 29th of April. We have two fantastic speakers: Linus Scheibenreif (ETH Zurich) will talk about self-supervised learning for satellite imagery, and Pascal Chang (ETH Zurich / Disney Research) will give a preview of his soon-to-be-published work.
RSVP: www.zurichai.ch/events/zuric...
20.04.2025 06:31
Likes: 17 · Reposts: 3 · Replies: 0 · Quotes: 0
No meal has ever sustained me for more than a few hours, a mere blip on the timeline of my life, 0.001% of my expected lifespan. Therefore, I'll no longer be paying at restaurants.
17.04.2025 11:53
Likes: 74 · Reposts: 24 · Replies: 2 · Quotes: 0
The Visual Recognition Group at CTU in Prague organizes the 49th Pattern Recognition and Computer Vision Colloquium with D. Karatzas, M. Masana, T. Tommasi, P. Mettes @pascalmettes.bsky.social , E. Brachmann @ericbrachmann.bsky.social and V. Stojnic @stojnicv.xyz
cmp.felk.cvut.cz/colloquium/#...
07.04.2025 13:57
Likes: 34 · Reposts: 10 · Replies: 2 · Quotes: 2
3D Gaussian splatting relies on depth-sorting of splats, which is costly and prone to artifacts (e.g., "popping"). In our latest work, "StochasticSplats", we replace sorted alpha blending with stochastic transparency, an unbiased Monte Carlo estimator from the real-time rendering literature.
07.04.2025 07:56
Likes: 52 · Reposts: 13 · Replies: 2 · Quotes: 2
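The core idea can be sketched in a few lines (a toy sketch, not the paper's renderer): each fragment survives a stochastic test with probability alpha, the nearest survivor wins, and the Monte Carlo average matches depth-sorted alpha blending in expectation.

```python
import numpy as np

def sorted_alpha_blend(colors, alphas):
    # Reference: front-to-back alpha compositing, which needs
    # the fragments depth-sorted.
    out, T = 0.0, 1.0
    for c, a in zip(colors, alphas):
        out += T * a * c
        T *= 1.0 - a
    return out

def stochastic_transparency(colors, alphas, n_samples=20000, seed=0):
    # Each fragment independently survives with probability alpha and
    # the nearest survivor wins. On a GPU the depth test picks that
    # survivor without any sorting; here the arrays happen to be
    # depth-ordered so argmax plays that role. Unbiased w.r.t. sorted
    # blending over a black background.
    rng = np.random.default_rng(seed)
    colors, alphas = np.asarray(colors), np.asarray(alphas)
    keep = rng.random((n_samples, len(alphas))) < alphas
    nearest = keep.argmax(axis=1)    # first surviving fragment per sample
    hit = keep.any(axis=1)           # samples where any fragment survived
    return (colors[nearest] * hit).mean()

colors = np.array([1.0, 0.5, 0.2])   # grayscale fragments, front to back
alphas = np.array([0.3, 0.5, 0.9])
ref = sorted_alpha_blend(colors, alphas)
mc = stochastic_transparency(colors, alphas)   # converges to ref
```

The estimator trades sorting cost for variance, which is what makes it attractive for real-time splatting.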
MCML Blog: Robots & self-driving cars rely on scene understanding, but AI models for understanding these scenes need costly human annotations. Daniel Cremers & his team introduce 🥤🥤 CUPS: a scene-centric unsupervised panoptic segmentation approach that reduces this dependency. mcml.ai/news/2025-04...
03.04.2025 09:45
Likes: 6 · Reposts: 1 · Replies: 0 · Quotes: 1