Teaser figure: five panels showing a shift from block-by-block to abstract tower descriptions. Panel 1 shows an L-shaped tower made from three LEGO blocks (blue base, two red blocks stacked); a speech bubble says "Put a blue block on the front side of the grid," with a hand precisely placing an imaginary block on a 2×2 grid. Panel 2 shows a speech bubble saying "a red block on top of the blue, on the left side," with a hand holding an imaginary block vertically above the previous position. Panel 3 shows a speech bubble saying "then another red block on top of that," with the right hand stacking another imaginary block. Panel 4 shows a speech bubble saying "like an L shape," with two hands depicting an L-shape gesture that conveys tower shape without position or orientation. The final panel shows the same tower in a different position and orientation, with a speech bubble reading "Put a backward L-shape tower on the back of the grid" and a hand indicating the back row of the grid.
If you caught @judithfan.bsky.social presenting our poster at #CogSci2025, you'll be glad to know the full paper will appear at #CHI2026:
“Gesturing Toward Abstraction: Multimodal Convention Formation in Collaborative Physical Tasks”
🔗 multimodal-conventions.github.io
📄 arxiv.org/pdf/2602.08914
@princetonhci.bsky.social
4/4
19.02.2026 20:43
👍 5
🔁 0
💬 0
📌 2
🤖 Our findings suggest strategies for convention-aware multimodal agents:
(1) learn users’ chunked conventions as they emerge
(2) shift to abstract-first instructions over time
(3) adapt modality to evolving user preferences
(4) use redundancy to highlight changes from prior interactions
3/4
19.02.2026 20:43
👍 3
🔁 1
💬 1
📌 0
We study how multimodal communication evolves in repeated physical collaboration. We use #AR to isolate speech and gestures during communication.
We extend the Rational Speech Act (RSA) framework to multimodal settings, building a computational model that simulates the behaviors we observe.
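For readers new to RSA, here is a toy sketch of the standard single-modality recursion that such a model builds on. The two-referent world, utterance names, and parameter values are illustrative only, not from our paper; the multimodal extension would treat each "utterance" as a (speech, gesture) pair with its own production cost.

```python
import numpy as np

# Toy RSA recursion (illustrative, not the paper's actual model).
states = ["L-tower", "C-tower"]           # possible referents
utterances = ["'L shape'", "'C shape'"]   # possible messages
lexicon = np.array([[1.0, 0.0],           # lexicon[u, s]: does utterance u
                    [0.0, 1.0]])          # literally apply to state s?
prior = np.array([0.5, 0.5])              # P(s): prior over states
alpha = 1.0                               # speaker rationality
cost = np.zeros(len(utterances))          # production cost per utterance

def normalize(m, axis):
    return m / m.sum(axis=axis, keepdims=True)

# Literal listener: L0(s|u) ∝ lexicon[u, s] * P(s)
L0 = normalize(lexicon * prior, axis=1)

# Pragmatic speaker: S1(u|s) ∝ exp(alpha * (log L0(s|u) - cost(u)))
with np.errstate(divide="ignore"):        # log(0) -> -inf is intended
    utility = np.log(L0.T) - cost         # utility[s, u]
S1 = normalize(np.exp(alpha * utility), axis=1)

# Pragmatic listener: L1(s|u) ∝ S1(u|s) * P(s)
L1 = normalize(S1.T * prior, axis=1)
print(L1)  # rows: utterances; columns: P(state | utterance)
```

In a multimodal setting, the cost term is where speech and gesture trade off: cheap, abstract gestures can carry shape while speech carries position, which is the kind of division of labor we observe emerging over repeated rounds.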
2/4
19.02.2026 20:43
👍 3
🔁 1
💬 1
📌 0
Top: Modality shift in block instruction. R1: "Take the green block and put it on the left side of the grid." A hand holds an imaginary piece toward the left column of a 2×2 grid; label reads Redundant position and orientation. R4: "the green block pointing this way." A hand points near the bottom-left cell with an arrow showing movement toward the top-left cell; label reads Complementary position and orientation. Target tower: a three-block green and red C-shape tower on a 2×2 grid. Bottom: Modality shift in tower instruction. R1: "they are going to form a C-shape." A C-shape hand pose, formed with the index finger and thumb, is shown far from the grid; label reads No information about position or orientation. R4: "Put the C on the left side, facing away from you." The right hand shows the C shape facing away, and the left hand, palm open, indicates placement on the left side; labels read Redundant position and orientation.
People form ad hoc conventions, establish linguistic & gestural abstractions, and shift information across speech and gesture to communicate more efficiently over time.
We study this in our #CHI2026 paper, led by Kiyosu Maeda with @judithfan.bsky.social @rdhawkins.bsky.social and team
🧵👇
1/4
19.02.2026 20:43
👍 21
🔁 3
💬 1
📌 0
Thanks @pedrolopes.org!! Hope to see you at HRI or CHI soon!
13.02.2026 00:33
👍 1
🔁 0
💬 0
📌 0
Check out “Explainable OOHRI: Communicating Robot Capabilities and Limitations as AR Affordances” at #HRI2026 for more details!
🔗 Project page: xoohri.github.io
📄 Paper: arxiv.org/abs/2601.14587
Led by Lauren Wang, in collaboration with Mo Kari at Princeton HCI.
#HCI #AR #Robotics #HRI
12.02.2026 20:23
👍 2
🔁 1
💬 0
📌 0
Beyond pick-and-place, X-OOHRI exposes abstract robot actions via a radial menu after the user selects a real-world object. Users then manipulate virtual twins to specify missing spatial parameters.
This can also support remote teleoperation 🎮
12.02.2026 20:23
👍 1
🔁 0
💬 1
📌 0
X-OOHRI is a mixed-initiative UI!
Users manipulate life-size, colocated virtual twins of objects in AR to issue precise robot instructions 🪄
When problems arise, robots simulate resolutions or suggest alternatives, and users can help through virtual interactions or physical actions ✨
12.02.2026 20:23
👍 1
🔁 0
💬 1
📌 0
A fundamental challenge in human-robot interaction is that capabilities and limitations are often opaque to users.
In our upcoming #HRI2026 paper, X-OOHRI, led by the brilliant Lauren Wang, we use AR to make robot capabilities and limits visible during object-oriented interactions 🤖
12.02.2026 20:23
👍 7
🔁 2
💬 2
📌 0
Thanks! So glad you liked it!!
12.02.2026 20:11
👍 1
🔁 0
💬 0
📌 0
Princeton Robotics Seminar - Parastoo Abtahi
YouTube video by Princeton University Robotics
First was an amazing talk by @parastooabtahi.bsky.social on haptic illusions in VR and object-oriented interactions in AR at Princeton Robotics. Highly recommend www.youtube.com/watch?v=hlo9... (2/8)
14.01.2026 03:53
👍 2
🔁 1
💬 2
📌 0
📣 I’m recruiting 1–2 #HCI PhD students interested in spatial computing, #AR, and #HRI. More information: parastooabtahi.com/applicants
If you’re interested, apply to Princeton CS by Dec 15 and mention my name in your application.
12.12.2025 19:20
👍 5
🔁 2
💬 0
📌 0
Princeton HCI at UIST 2025. Three featured works: Paper ‘Reality Promises’ on virtual-physical decoupling illusions with invisible robots (Wed, 11:24 AM, Sydney room, best paper); Paper ‘Capybara’ on block-based programming in AR and GenAI-assisted creation (Tue, 4:42 PM, Miami room); Poster ‘Ghost Objects’ on real-world lasso and co-located virtual twin manipulation for robot instruction (Tue, 6:30 PM, Ballroom Lobby). Princeton HCI recruitment session for PhD or Postdoc applicants (Wed, 1:30 PM, Paradise Hotel Garden).
28.09.2025 13:28
👍 6
🔁 0
💬 0
📌 0
Thrilled that Reality Promises received a best paper award at #UIST2025.
Come see Mo Kari’s talk on the last day of the conference!
📍Wed, at 11:00 AM, in the Sydney room
27.09.2025 07:29
👍 3
🔁 0
💬 0
📌 0
Makeability Lab - How to Figures
How to figures makeabilitylab.cs.uw.edu
With the CHI deadline fast approaching, I'm resharing our lab's resource on making figures for HCI papers: docs.google.com/presentation...
New content suggestions always appreciated. Don't be shy to promote your own work!
02.09.2025 20:10
👍 13
🔁 6
💬 1
📌 0
Researchers made a robot that can make deliveries to VR. They call it Skynet.
Details here: www.uploadvr.com/invisible-mo...
23.08.2025 05:36
👍 13
🔁 5
💬 2
📌 3
“Reality Promises: Virtual-Physical Decoupling Illusions in Mixed Reality via Invisible Mobile Robots”
Paper: hci.princeton.edu/wp-content/u...
Full Video: youtu.be/SdDXvIB79j0
Project Page: mkari.de/reality-prom...
See you in Busan! 🇰🇷
#HCI #HRI
21.08.2025 13:45
👍 2
🔁 0
💬 0
📌 0
In #AR, using real-time on-device 3D Gaussian splatting, we create the illusion that physical changes occur instantaneously, while a hidden robot fulfills the “reality promise” moments later, updating the physical world to match what users already perceive visually. 🤖
21.08.2025 13:45
👍 3
🔁 3
💬 1
📌 0
Even virtual agents’ actions can have physical effects, with motion paths that divert attention from the hidden robot. 🐝
21.08.2025 13:45
👍 0
🔁 0
💬 1
📌 0
Beyond materializing physical objects (seemingly out of thin air), users can manipulate out-of-reach objects via RealityGoGo — creating the illusion of telekinesis. 🪴
21.08.2025 13:45
👍 0
🔁 0
💬 1
📌 0
In #VR, users can experience “magical” interactions, such as moving distant virtual objects with the Go-Go technique. How might we similarly extend people’s abilities in the physical world? 🪄
Excited to share Reality Promises, our #UIST2025 paper, led by the amazing Mo Kari ✨
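For readers unfamiliar with Go-Go: it maps real hand distance to virtual hand distance 1:1 up to a threshold, then nonlinearly beyond it, so small arm extensions reach far objects. A minimal sketch of the classic mapping from Poupyrev et al. (1996); the threshold D and gain k values below are illustrative, not from any paper:

```python
def gogo_reach(r_real: float, D: float = 0.4, k: float = 6.0) -> float:
    """Map real hand distance (meters from the torso) to virtual hand distance.

    Within D the mapping is 1:1; beyond D the virtual hand extends
    quadratically, letting users grab distant objects.
    """
    if r_real < D:
        return r_real
    return r_real + k * (r_real - D) ** 2

# e.g. a hand 0.7 m out reaches a virtual 1.24 m with these toy constants
print(gogo_reach(0.7))
```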
21.08.2025 13:45
👍 2
🔁 1
💬 1
📌 1
Check out Lauren Wang’s #UIST2025 poster on GhostObjects: life-size, world-aligned virtual twins for fast and precise robot instruction, with real-world lasso selection, multi-object manipulation, and snap-to-default placement.
This is the first piece in her ongoing work on #AR for #HRI 🤖👓
19.08.2025 16:11
👍 0
🔁 1
💬 0
📌 0
Poster for the Cognitive Tools Lab at CogSci 2025, scheduled for Thursday, July 31. The poster is titled “Using gesture and language to establish multimodal conventions in collaborative physical tasks.” It features an image of a hand pointing to a 2×2 grid, with an arrow indicating movement from the bottom-left square to the top-left square. A quote reads, “... the green block pointing this way,” and the gesture is labeled “Complementary position & orientation.” Headshots of the four authors, Maeda, Tsai, Fan, and Abtahi, appear at the bottom. The session is listed as Poster Session 1 at 1:00 pm.
📢 Find Judy Fan (@judithfan.bsky.social) at #CogSci2025 during Poster Session 1 (⏰Tomorrow, 1–2:15 PM | 📍Salon 8) to learn about our work on understanding multimodal communication and how people form linguistic and gestural abstractions in collaborative physical tasks.
30.07.2025 22:33
👍 8
🔁 1
💬 0
📌 0
Sunnie standing in front of her presentation celebrating the successful defense 🎉
Vera, Andrés, Sunnie, Olga, and Jenn (on Sunnie’s laptop screen) celebrating
Group photo of everyone who joined Sunnie’s dissertation defense
Lauren, Sunnie, and Jeff (photo taken at CHI 2025)
📢 I successfully defended my PhD dissertation! Huge thanks to my committee (Olga @andresmh.com @jennwv.bsky.social @qveraliao.bsky.social @parastooabtahi.bsky.social) & everyone who supported me ❤️
📢 Next I'll join Apple as a research scientist in the Responsible AI team led by @jeffreybigham.com!
07.05.2025 20:46
👍 60
🔁 5
💬 6
📌 1
Tue April 29: I'll be cheering on Indu Panigrahi as she presents our LBW on interactive AI explanations (w/ Amna, Rohan, Olga, Ruth, @parastooabtahi.bsky.social) in the 10:30-11:10am and 3:40-4:20pm poster sessions (North 1F)
🧵 bsky.app/profile/para...
📌 programs.sigchi.org/chi/2025/pro...
25.04.2025 00:09
👍 3
🔁 2
💬 1
📌 0
In collaboration with @sunniesuhyoung.bsky.social, Amna Liaqat, Rohan Jinturkar, Olga Russakovsky, and Ruth Fong.
Excited to share that Indu will be starting as a PhD student at UIUC this fall! 🎉
18.04.2025 21:14
👍 3
🔁 0
💬 0
📌 0
A 3×4 grid showing bird images with visual explanations for Static, Filtering, Overlays, and Counterfactuals across three types: Heatmap, Concept, and Prototype.
Heatmap row:
Color heatmaps over birds with labels “More Important” and “Less Important.” Filtering separates “Most Important Areas” and “Least Important Areas” with a “Show More” slider. Overlays add a tooltip: “The bird part that you are hovering near is: grey bill.” Counterfactuals include prediction text—“‘Heermann’s gull’”—and editable attributes like “Back Pattern” and “Bill Color.”
Concept row:
Bar charts show the importance of features like “black bill” and “white tail.” Filtering splits “Positive” and “Negative Concepts” with sliders. Overlays label parts like “spotted belly” and “grey wing.” Counterfactuals show the prediction “pine grosbeak” with concept bars and edit options like “Tail Color.”
Prototype row:
Birds are overlaid with patches showing similarity scores (e.g., “0.98 similar”). Filtering compares “Prototypes” and “Criticisms.” Overlays highlight areas with tooltips like “grey crown.” Counterfactuals include the label “Eastern towhee” and editable features like “Belly Color” and “Wing Color.”
This is a qualitative study of how simple interactive mechanisms—filtering, overlaid annotations, and counterfactual image edits—might address existing challenges with static CV explanations, such as information overload, semantic-pixel gap, and limited opportunities for exploration.
18.04.2025 21:14
👍 2
🔁 0
💬 1
📌 0
Title: “Interactivity x Explainability: Toward Understanding How Interactivity Can Improve Computer Vision Explanations.” Authors: Indu Panigrahi, Sunnie S. Y. Kim*, Amna Liaqat*, Rohan Jinturkar, Olga Russakovsky, Ruth Fong, Parastoo Abtahi. Logos: Princeton University, NSF, OpenPhil, Princeton HCI, Open Glass Lab, and Princeton Visual AI Lab. CHI 2025, April 26–May 1, 2025, Yokohama, Japan, including illustrations of Yokohama’s skyline, ferris wheel, and a pink sailboat labeled “CHI.”
Check out Indu Panigrahi’s LBW at #CHI2025: “Interactivity x Explainability: Toward Understanding How Interactivity Can Improve Computer Vision Explanations.”
🔗 Project Page: ind1010.github.io/interactive_XAI
📄 Extended Abstract: arxiv.org/abs/2504.10745
18.04.2025 21:14
👍 8
🔁 3
💬 1
📌 1
Boosting this up for a last chance to join us at #CHI2025 as associate chairs (ACs) for the @chi.acm.org Late Breaking Work program! Please forward to anyone you know who might be interested.
30.11.2024 22:29
👍 11
🔁 5
💬 0
📌 0
HCI researchers starter pack. Lets you follow a bunch of HCI people at once (which the HCI list didn't let you do).
Again, ask to be added if I missed you.
go.bsky.app/p3TLwt
13.11.2024 11:05
👍 31
🔁 11
💬 27
📌 1