"Hearing the Slide: Acoustic-Guided Constraint Learning for Fast Non-Prehensile Transport"
By: @yuemin-mao.bsky.social, @bardienus.bsky.social, Moonyoung Lee, @jeff-ichnowski.bsky.social
arXiv: arxiv.org/abs/2506.09169
Website + videos: fast-non-prehensile.github.io
🧵7/7
12.06.2025 14:11
Results:
We test the method on a UR5e robot with 12 different object configurations and compare it to the standard Coulomb friction model.
Our method reduces object displacement by up to 86%.
🧵6/7
12.06.2025 14:11
We train a friction model that maps tray speed + acceleration → a real-world "friction coefficient," learned from observed object slipping.
This becomes a dynamic constraint in a time-optimized motion planner. Now the robot knows how fast it can move without losing the object.
🧵5/7
12.06.2025 14:11
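The idea in the post above can be sketched as a toy constraint check. The linear model, its coefficients, and the names `mu_effective` and `accel_limit` are illustrative assumptions, not the paper's learned model:

```python
# Toy sketch: a speed/acceleration-dependent "friction coefficient"
# used as a dynamic constraint. The linear model and coefficients are
# assumptions for illustration, not the learned model from the paper.
G = 9.81  # gravitational acceleration, m/s^2

def mu_effective(speed: float, accel: float) -> float:
    """Effective friction coefficient that shrinks as the tray moves faster."""
    return max(0.05, 0.4 - 0.03 * speed - 0.01 * abs(accel))

def accel_limit(speed: float) -> float:
    """Largest tray acceleration a satisfying a <= mu_effective(speed, a) * G,
    found by fixed-point iteration (the map is a strong contraction here)."""
    a = mu_effective(speed, 0.0) * G
    for _ in range(50):
        a = mu_effective(speed, a) * G
    return a

# A time-optimized planner would use accel_limit(v) as its constraint
# instead of the constant Coulomb bound mu * G.
print(accel_limit(0.5))  # lower than the static bound 0.4 * G
```

The faster the tray moves, the smaller the acceleration the planner is allowed to command — which is the qualitative behavior the learned constraint captures.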
Sliding makes noise. When an object slips, it vibrates the tray, and we can hear it. By attaching a contact mic, we capture these signals at high frequency, low latency, and low cost. From acoustic data, we learn how robot speed and acceleration affect friction.
🧵4/7
12.06.2025 14:11
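A minimal illustration of hearing slip from a contact mic: frame the signal and threshold its short-time RMS energy. The frame size, threshold, and synthetic signals below are assumptions for the sketch, not the paper's actual pipeline:

```python
import math
import random

def slip_frames(signal, frame=256, thresh=0.05):
    """Flag each frame whose short-time RMS energy exceeds a threshold."""
    flags = []
    for i in range(0, len(signal) - frame + 1, frame):
        chunk = signal[i:i + frame]
        rms = math.sqrt(sum(x * x for x in chunk) / frame)
        flags.append(rms > thresh)
    return flags

random.seed(0)
quiet = [random.gauss(0, 0.01) for _ in range(1024)]  # object at rest
noisy = [random.gauss(0, 0.2) for _ in range(1024)]   # object vibrating the tray
flags = slip_frames(quiet + noisy)
print(flags)  # quiet frames flagged False, noisy frames flagged True
```

Because frames are short, detection latency stays low — one reason a cheap contact mic is an attractive slip sensor.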
Most planners rely on the standard Coulomb friction model to prevent object sliding.
That works in theory… but in practice? At high speeds, robot vibrations and subtle dynamics cause objects to move well before the model predicts.
🧵3/7
12.06.2025 14:11
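For reference, the standard Coulomb bound this post criticizes: on a flat tray, the model predicts no slip while the commanded horizontal acceleration satisfies |a| ≤ μg. A minimal check (the μ value is an assumed example):

```python
G = 9.81  # gravitational acceleration, m/s^2

def coulomb_slip(accel: float, mu: float = 0.3) -> bool:
    """True if the Coulomb model predicts the object slips on a flat tray."""
    return abs(accel) > mu * G

print(coulomb_slip(2.0))  # below mu * G (about 2.94 m/s^2) -> False
print(coulomb_slip(4.0))  # above the friction limit -> True
```

The thread's point is that real objects start moving well before this static bound at high speeds, which motivates the learned, speed-dependent constraint.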
Using a tray as the robot's end-effector opens the door to:
✅ Moving multiple objects at once
✅ Handling large, fragile, or oddly-shaped items
✅ Lower cost (no gripper = fewer moving parts)
🧵2/7
12.06.2025 14:11
Want to move many items FAST with your robot? Use a tray. But at high speeds, objects may fall off.
Introducing our new method: it hears sliding, learns dynamic friction, and plans time-optimized motions to transport objects.
fast-non-prehensile.github.io/
🧵1/7
12.06.2025 14:11
Imagine if robots could fill in the blanks in cluttered scenes.
✨ Enter RaySt3R: a single masked RGB-D image in, complete 3D out.
It infers depth, object masks, and confidence for novel views, and merges the predictions into a single point cloud. rayst3r.github.io
06.06.2025 13:51
Excited to share that our paper was a finalist for best paper at #HRI2025! We introduce MOE-Hair, a soft robot system for hair care that uses mechanical compliance and visual force sensing for safe, comfortable interaction. Check our work: moehair.github.io @cmurobotics.bsky.social 🧵1/7
17.03.2025 16:02
🙋🏻‍♀️
25.11.2024 19:15
Thanks for creating the list! Can you add me please :) I work on dynamic manipulation.
23.11.2024 04:26
Proud to be part of this small but mighty group - let me know if I missed you or any women to add!
go.bsky.app/BuDHCYF
22.11.2024 05:54