2HandedAfforder
Marvin Heidinger*, Snehal Jauhri*, Vignesh Prasad, and Georgia Chalvatzaki
PEARL Lab, TU Darmstadt, Germany
* Equal contribution
International Conference on Computer Vision (ICCV) 2025
More details and results are in the paper; stay tuned for the 2HANDS dataset & code release!
Paper: arxiv.org/abs/2503.09320
Website: sites.google.com/view/2handedafforder
Work done with Marvin Heidinger, Vignesh Prasad & @georgiachal.bsky.social
See you in Hawaii at #ICCV2025!
14.07.2025 04:03
We can then use our high-quality dataset to train or fine-tune a VLM that takes the activity/task text prompt as input and predicts bimanual affordance masks, one each for a left and a right robot hand (interface sketched below)
4/5
14.07.2025 04:03
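A minimal PyTorch sketch of that input/output interface, assuming a placeholder vision backbone and a precomputed text embedding. All module names and sizes are illustrative stand-ins, not the paper's actual architecture:

```python
# Hypothetical sketch: image + task-prompt embedding in, two affordance
# masks out (channel 0: left hand, channel 1: right hand).
import torch
import torch.nn as nn

class BimanualAffordanceHead(nn.Module):
    def __init__(self, img_dim=256, text_dim=256):
        super().__init__()
        # Stand-in for a VLM vision backbone (patchify + project).
        self.img_enc = nn.Sequential(
            nn.Conv2d(3, img_dim, kernel_size=16, stride=16),
            nn.ReLU(),
        )
        self.text_proj = nn.Linear(text_dim, img_dim)
        # One decoder producing two mask channels, one per hand.
        self.decoder = nn.Sequential(
            nn.Conv2d(img_dim, 64, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 2, 1),
            nn.Upsample(scale_factor=16, mode="bilinear", align_corners=False),
        )

    def forward(self, image, text_emb):
        feat = self.img_enc(image)                       # (B, C, H/16, W/16)
        cond = self.text_proj(text_emb)[..., None, None] # broadcast over space
        logits = self.decoder(feat + cond)               # (B, 2, H, W)
        return logits.sigmoid()                          # per-pixel mask probs

# Dummy usage:
model = BimanualAffordanceHead()
masks = model(torch.randn(1, 3, 224, 224), torch.randn(1, 256))
print(masks.shape)  # torch.Size([1, 2, 224, 224])
```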
We extract bimanual affordance masks from egocentric RGB video datasets using video-based hand inpainting and object reconstruction.
No manual labeling is required. The narrations from egocentric datasets also provide free-form text supervision (e.g. "pour milk into bowl"); see the sketch below.
3/5
14.07.2025 04:03
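A minimal numpy sketch of the resulting mask extraction idea. The actual pipeline uses video-based hand inpainting and object reconstruction to recover these masks; here both are assumed given, and the affordance region is taken as the part of the reconstructed object the hand covered while interacting:

```python
# Hypothetical sketch: intersect the hand-coverage mask with the
# inpainted/reconstructed object mask to get the actionable region.
import numpy as np

def affordance_mask(hand_mask: np.ndarray, object_mask: np.ndarray) -> np.ndarray:
    """Pixels belonging to the reconstructed object AND occluded by the
    hand during interaction, i.e. the contact (affordance) region."""
    return np.logical_and(hand_mask, object_mask)

# Dummy 6x6 frame where the hand overlaps the object's handle:
hand = np.zeros((6, 6), dtype=bool); hand[2:5, 1:3] = True
obj = np.zeros((6, 6), dtype=bool);  obj[1:5, 2:5] = True
print(affordance_mask(hand, obj).astype(int))
```

Paired with the narration text, each such mask yields a (prompt, left/right mask) training sample without any manual annotation.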
The Problem:
Most affordance detection methods just segment object parts & do not predict actionable regions for robots!
Our solution?
Use egocentric bimanual human videos to extract precise affordance regions considering object relationships, context, & hand coordination!
2/5
14.07.2025 04:03
PSA for the robotics community:
Stop labeling affordances or distilling them from VLMs.
Extract affordances from bimanual human videos instead!
Excited to share 2HandedAfforder: Learning Precise Actionable Bimanual Affordances from Human Videos, accepted at #ICCV2025!
1/5
14.07.2025 04:03
Thank you to all the speakers & attendees for making the EgoAct workshop a great success!
Congratulations to the winners of the Best Paper Awards: EgoDex & DexWild!
The full recording is available at: youtu.be/64yLApbBZ7I
Some highlights:
23.06.2025 01:03
Web home for EgoAct: 1st Workshop on Egocentric Perception and Action for Robot Learning @ RSS2025
Learn more at the workshop website: egoact.github.io/rss2025
Happy to be organizing this with @georgiachal.bsky.social, Yu Xiang, @danfei.bsky.social and @galasso.bsky.social!
06.04.2025 21:24
Call for Contributions:
We're inviting contributions in the form of:
Full papers OR
4-page extended abstracts
Submission Deadline: April 30, 2025
Best Paper Award, sponsored by Meta!
06.04.2025 21:24
Core workshop topics include:
Egocentric interfaces for robot learning
High-level action & scene understanding
Human-to-robot transfer
Foundation models from human activity datasets
Egocentric world models for high-level planning & low-level manipulation
06.04.2025 21:24
Excited to announce EgoAct: the 1st Workshop on Egocentric Perception and Action for Robot Learning at #RSS2025 in LA!
We're bringing together researchers exploring how egocentric perception can drive next-gen robot learning!
Full info: egoact.github.io/rss2025
@roboticsscisys.bsky.social
06.04.2025 21:24
I'm working on robot learning and perception :)
23.11.2024 07:36