With only a few seconds of data, the resulting geometry and dynamics predictions are more accurate than vision-only methods, including deep shape reconstruction. Videos on the website, and details in the paper.
arxiv.org/abs/2504.18719
This paper, led by Bibit Bianchini and Minghan Zhu, has been a labor of love, and we're excited to share it at @roboticsscisys.bsky.social RSS 2025.
We combine BundleSDF (vision) with our prior work on contact-rich learning (PLL), where each component feeds useful insights back to the other.
Vision and contact dynamics are both heavily influenced by geometry, so why do we treat them as separate problems? "Vysics" combines vision with physics so that each informs the other, generating accurate shape reconstructions despite major visual occlusions.
vysics-vision-and-physics.github.io
Congratulations to Dr. Michael Posa @michaelposa.bsky.social on receiving the 2024 Best Paper Award at the IEEE RAS TC on Model-based Optimization for Robotics!
More info here!
www.tcoptrob.org/news/2025-04...
#GRASP #GRASPLab #BestPaperAward #IEEE2024
I'm recruiting multiple Ph.D. students across all departments (ME, EE, or CS). Growing research projects focus on dexterous manipulation in novel settings, combining control and learning with visuotactile sensing and non-smooth dynamics.
Deadline: December 16
gradadm.seas.upenn.edu/doctoral/