
Snehal Jauhri

@snehaljauhri

Robot Perception & Learning | Research Intern @AllenAI | PhD candidate, PEARL lab, TU Darmstadt with Georgia Chalvatzaki | https://pearl-lab.com/people/snehal-jauhri

720 Followers · 119 Following · 11 Posts · Joined 18.11.2024

Latest posts by Snehal Jauhri @snehaljauhri

2HandedAfforder
Marvin Heidinger*, Snehal Jauhri*, Vignesh Prasad, and Georgia Chalvatzaki (*equal contribution)
PEARL Lab, TU Darmstadt, Germany
International Conference on Computer Vision (ICCV) 2025

More details and results are in the paper. Stay tuned for the 2HANDS dataset & code release!

πŸ“„Paper: arxiv.org/abs/2503.09320
🌐 Website: sites.google.com/view/2handedafforder

Work done with Marvin Heidinger, Vignesh Prasad & @georgiachal.bsky.social

See you in Hawaii at #ICCV2025! 🌴

14.07.2025 04:03 πŸ‘ 2 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

We can then use our high-quality dataset to train or fine-tune a VLM that takes the activity/task text prompt as input and predicts bimanual affordance masks (one for the left and one for the right robot hand).

4/5
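A minimal sketch of what that fine-tuning step could look like, assuming an off-the-shelf text-conditioned segmentation VLM (CLIPSeg here, purely as a stand-in) and hypothetical "left hand:" / "right hand:" prompts; the actual model, prompt format, and training details in the paper may differ:

```python
# Hedged sketch: one fine-tuning step of a text-conditioned segmentation VLM
# that maps (image, task text) -> per-hand affordance masks.
import torch
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
bce = torch.nn.BCEWithLogitsLoss()

def train_step(image, task_text, left_mask, right_mask):
    """One optimization step on a single (image, task) sample.
    left_mask / right_mask: float tensors of shape (352, 352), values in [0, 1]."""
    # Condition the same image on one prompt per hand (assumed prompt format).
    prompts = [f"left hand: {task_text}", f"right hand: {task_text}"]
    inputs = processor(text=prompts, images=[image, image],
                       padding=True, return_tensors="pt")
    logits = model(**inputs).logits            # (2, 352, 352): left, right
    loss = bce(logits, torch.stack([left_mask, right_mask]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```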

14.07.2025 04:03 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

We extract bimanual affordance masks from egocentric RGB video datasets using video-based hand inpainting and object reconstruction.

No manual labeling is required. The narrations from egocentric datasets also provide free-form text supervision! (e.g., "pour milk into bowl")

3/5
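As a rough illustration of the extraction idea (a hypothetical sketch; the actual pipeline uses dedicated video-based hand inpainting and object reconstruction models), the affordance region can be read off as the object pixels each hand was covering:

```python
# Hedged sketch: once the hands are inpainted out of a frame, the object
# pixels that newly become visible are the regions each hand was
# contacting/occluding, i.e. the bimanual affordance masks.
import numpy as np

def bimanual_affordance_masks(hand_masks, object_mask_inpainted, object_mask_original):
    """hand_masks: {'left': bool array, 'right': bool array} per-hand segmentations.
    object_mask_inpainted: object segmentation after hand inpainting.
    object_mask_original: object segmentation in the original (occluded) frame.
    Returns a boolean affordance mask per hand."""
    # Object pixels hidden in the original frame but recovered after inpainting.
    occluded = object_mask_inpainted & ~object_mask_original
    # Attribute each occluded region to the hand that was covering it.
    return {hand: occluded & mask for hand, mask in hand_masks.items()}
```

Each extracted mask pair, paired with the narration text, would then form one training sample for the VLM fine-tuning sketch above.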

14.07.2025 04:03 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

The Problem:
Most affordance detection methods just segment object parts & do not predict actionable regions for robots!

Our solution?
Use egocentric bimanual human videos to extract precise affordance regions considering object relationships, context, & hand coordination!

2/5

14.07.2025 04:03 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

πŸ“’ PSA for the robotics community:
Stop labeling affordances or distilling them from VLMs.
Extract affordances from bimanual human videos instead!

Excited to share 2HandedAfforder: Learning Precise Actionable Bimanual Affordances from Human Videos, accepted at #ICCV2025! πŸŽ‰

🧡1/5

14.07.2025 04:03 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Thank you to all the speakers & attendees for making the EgoAct workshop a great success!

Congratulations to the winners of the Best Paper Awards: EgoDex & DexWild!

The full recording is available at: youtu.be/64yLApbBZ7I

Some highlights:

23.06.2025 01:03 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Web home for EgoAct: 1st Workshop on Egocentric Perception and Action for Robot Learning @ RSS2025

Learn more at the workshop website: egoact.github.io/rss2025

Happy to be organizing this with @georgiachal.bsky.social, Yu Xiang, @danfei.bsky.social and @galasso.bsky.social!

06.04.2025 21:24 πŸ‘ 3 πŸ” 2 πŸ’¬ 0 πŸ“Œ 1

Call for Contributions:
We’re inviting contributions in the form of:
πŸ“ Full papers OR
πŸ“ 4-page extended abstracts
πŸ—“οΈ Submission Deadline: April 30, 2025
πŸ† Best Paper Award, sponsored by Meta!

06.04.2025 21:24 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Core workshop topics include:
πŸ₯½ Egocentric interfaces for robot learning
🧠 High-level action & scene understanding
🀝 Human-to-robot transfer
🧱 Foundation models from human activity datasets
πŸ› οΈ Egocentric world models for high-level planning & low-level manipulation

06.04.2025 21:24 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

πŸ“’ Excited to announce EgoAct πŸ₯½πŸ€–: the 1st Workshop on Egocentric Perception and Action for Robot Learning at #RSS2025 in LA!

We’re bringing together researchers exploring how egocentric perception can drive next-gen robot learning!

πŸ”— Full info: egoact.github.io/rss2025

@roboticsscisys.bsky.social

06.04.2025 21:24 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 1

I'm working on robot learning and perception : )

23.11.2024 07:36 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0