
IEEE Transactions on Robotics (T-RO)

@ieeetro

The IEEE Transactions on Robotics (T-RO) publishes major advances in the state-of-the-art in all areas of robotics including theory, design, experimental studies, analysis, algorithms, and integration and application case studies.

121 Followers · 43 Following · 86 Posts · Joined 03.02.2025

Latest posts by IEEE Transactions on Robotics (T-RO) @ieeetro

Vehicle configurations in flight. Video is available online. (a) Quadrotor. (b) Hexarotor. (c) 6DOF Hexarotor. (d) Tetrahedron Quadrotor. (e) Tetrahedron Decarotor. (f) Tetrahedron Hexadecarotor.

The #Dodecacopter: a modular UAV made of regular dodecahedron modules that can assemble into 3D, fully actuated configurations beyond flat drone arrays. A prototype flies in multiple shapes, showing versatility and adaptability for #AerialRobotics.

https://ieeexplore.ieee.org/document/11265804

05.03.2026 08:04 👍 0 🔁 0 💬 0 📌 0
Robot waving and text that reads Thank you!

T-RO is delighted to welcome our many new editorial board members. We thank you for your commitment and dedication to the journal. T-RO would not be the journal it is without the incredible leadership and expertise of our entire editorial board.

www.ieee-ras.org/publications...
#IEEEras #Robotics

03.03.2026 23:29 👍 0 🔁 1 💬 0 📌 0
A cube with graphics showing the initial condition (left) and final steady state at t=3 s (right).

#Irrotational Contact Fields, a framework that generates convex, physically accurate approximations of complex contact & enables differentiable, artifact-free simulation in Drake, supporting robust sim-to-real transfer for contact-rich robotics tasks
https://ieeexplore.ieee.org/document/11203247

19.02.2026 08:04 👍 0 🔁 0 💬 0 📌 0
Graphical overview of the article's main part.

Physics-Informed #NeuralNetworks used to build generalizable, fast surrogate models of articulated #SoftRobot dynamics with accuracy across domains while speeding up prediction by ~466× versus first-principles models, for real-time MPC in hardware
https://ieeexplore.ieee.org/document/11242009


18.02.2026 09:50 👍 1 🔁 0 💬 0 📌 0
Robot Assisted Medical Imaging special collection submissions window closes February 15.

FINAL CALL: Robot Assisted Medical Imaging (RAMI) Special Collection. Submissions close February 15

For information: https://www.ieee-ras.org/publications/t-ro/special-issues/robot-assisted-medical-imaging/

#RoboticCT #SurgicalRobotics #SurgicalSoftRobotics #RoboticLaparoscopy #RoboticImaging

12.02.2026 09:05 👍 0 🔁 0 💬 0 📌 0
Experiments with Unitree Go1 and Go2 on risky gap terrains: (a) a single-plank bridge, with a narrowest traversable width of 18 cm, validating center-of-gravity control under narrow support; (b)–(c) balance beams, with a narrowest beam width of 9 cm, testing the robot's stability under height variations, inclination changes, and edge perception; (d)–(e) large gaps demonstrating traversal of gaps of varying widths (up to 65 cm in the real-world experiment).

MARG, a #DRL controller that combines terrain maps and proprioception from a single #LiDAR to traverse risky gap terrains (65 cm wide, narrow planks) with zero-shot sim-to-real transfer, boosting stability & foothold choice without extra sensors
https://ieeexplore.ieee.org/document/11196002

11.02.2026 08:04 👍 0 🔁 0 💬 0 📌 0
3-D reconstruction from a run of OKVIS2-X on the Spagna sequence of the VBR dataset [1]. Reconstruction with a LiDAR sensor (top) or with a depth network (bottom) showcases the versatility of the presented system across sensor modalities. The estimated trajectory is visualized in black, and each submap is shown in a different color.

Authors introduce OKVIS2-X, a real-time multi-sensor #SLAM system that tightly fuses visual, inertial, #GNSS, depth or #LiDAR measurements into dense volumetric maps that scale from city to natural environments with high accuracy and robustness.
https://ieeexplore.ieee.org/document/11196039




05.02.2026 08:05 👍 1 🔁 1 💬 0 📌 0
Our heavy vehicles are equipped with multiple LiDARs to avoid self-occlusion. (a) shows an example placement with six LiDARs. The point colors in (b)–(c) correspond to the LiDAR from which the points are captured. (b) illustrates the distortion of static structure due to a fast-moving ego vehicle: Raw shows the raw data, and w. ego-motion comp. shows the ego-motion compensation results. (c) demonstrates distortion caused by the motion of other objects, which depends on the velocity of those objects. In such cases, ego-motion compensation alone (w. ego-motion comp.) is insufficient. In comparison, our HiMo pipeline (w. HiMo motion comp.) successfully undistorts the point clouds completely, resulting in an accurate representation of the objects. (a) LiDAR placement illustration. (b) Static structure. (c) Dynamic agents.

HiMo: a pipeline that compensates for #MotionDistortions caused by other moving vehicles in #LiDAR scans by repurposing scene flow estimation to correct non-ego motion, improving geometric consistency and boosting downstream 3D detection & segmentation
https://ieeexplore.ieee.org/document/11196030

04.02.2026 08:04 👍 0 🔁 0 💬 0 📌 0
Ad for Robot Assisted Medical Imaging Special Collection. Submission window closes February 15.

Final call-for-papers for the Robot Assisted Medical Imaging special collection. Submissions close February 15.

https://www.ieee-ras.org/publications/t-ro/special-issues/robot-assisted-medical-imaging/

#RoboticCT #SurgicalRobotics #SurgicalSoftRobotics #RoboticLaparoscopy #RoboticImaging

29.01.2026 08:04 👍 0 🔁 0 💬 0 📌 0
Video thumbnail

T-RO authors present a learning-based low-level #quadcopter controller which is trained entirely in simulation, but generalizes across different dynamics and even adapts to real-world disturbances
https://ieeexplore.ieee.org/document/11025148

#Quadrotors #AutonomousAerialVehicles


28.01.2026 08:04 👍 0 🔁 0 💬 0 📌 0
Snapshots of a robotic simulation using our multicontact solver. Top: bolt-nut assembly. Bottom: dish piling. Although intensive contact formation and stiff interactions make these scenarios challenging to simulate, our solvers successfully complete the simulations within a time budget of less than a millisecond per step.

Introducing CANAL & SubADMM, new multi-contact solvers based on augmented Lagrangian: CANAL for high-precision contact resolution; SubADMM for massively parallel hardware. They improve simulation accuracy & speed
https://ieeexplore.ieee.org/document/11027548

#RobotKinematics #HeuristicAlgorithms

22.01.2026 08:04 👍 0 🔁 0 💬 0 📌 0
Configuration of OceanVoy (left) and the onboard energy diagram (right), power distribution marked in red.

T-RO authors propose #EeLsT, an energy-efficient long-short-term observer framework that adaptively balances control decisions for sailboat #actuators under environmental disturbances (waves, currents). Saves ~30% energy in sim & ~27% in real #sailing
https://ieeexplore.ieee.org/document/11024557


21.01.2026 11:03 👍 0 🔁 0 💬 0 📌 0
Conceptual illustration exemplifying the dynamic scaling of physical and cognitive-grounded safety zones based on human awareness.

PRO-MIND, a human-in-the-loop framework that tunes #RobotMotion based on human attention, stress & safety. It adapts safety zones & paths using B-splines & multi-objective optimization to balance comfort, execution time, & smoothness
https://ieeexplore.ieee.org/document/10912779
#CollaborativeRobots

15.01.2026 08:04 👍 0 🔁 0 💬 0 📌 0
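For readers unfamiliar with B-spline paths like those PRO-MIND adapts, the standard uniform cubic B-spline basis is easy to evaluate directly. This is a minimal illustrative sketch of one segment, not the paper's parameterization:

```python
# Uniform cubic B-spline segment evaluation -- the standard machinery behind
# B-spline robot paths. Illustrative only; the paper's formulation may differ.
def cubic_bspline_point(p0, p1, p2, p3, t):
    """Evaluate one uniform cubic B-spline segment at t in [0, 1].

    p0..p3 are consecutive control points (scalars here; works per-axis
    for 2-D/3-D paths). The four basis functions sum to 1 for any t,
    so the curve stays inside the control points' convex hull.
    """
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6.0
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
    b3 = t**3 / 6.0
    return b0 * p0 + b1 * p1 + b2 * p2 + b3 * p3

# With control points 0, 1, 2, 3 the segment runs from ~1.0 (t=0) to ~2.0 (t=1):
start = cubic_bspline_point(0.0, 1.0, 2.0, 3.0, 0.0)  # approximately 1.0
end = cubic_bspline_point(0.0, 1.0, 2.0, 3.0, 1.0)    # approximately 2.0
```

Because the basis sums to 1, moving a single control point (e.g., to enlarge a safety zone) deforms the path locally and smoothly, which is what makes B-splines convenient for online motion adaptation.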
Intricate, stable structure generated by our object placement planner. The surface is shaded according to robustness to perturbation by external forces; we use this measure to inform our proposed planner.

A planner that reverses the typical pose-sampling approach: picking robust #ContactPoints, then finding a placement pose that satisfies them. The method runs ~20Γ— faster thanks to a stability #heuristic, and works well even in cluttered robot scenes.
https://ieeexplore.ieee.org/document/11027417


14.01.2026 08:04 👍 2 🔁 0 💬 0 📌 0
Globally consistent 3-D maps reconstructed by CURL-SLAM. Point cloud maps with different resolutions are continuously reconstructed using the same CURL map, which is ultracompact (0.26% of the 3.2-GB raw point clouds).

CURL-SLAM, a #LiDAR #SLAM system that builds ultra-compact implicit maps using spherical harmonics and the CURL representation, handling loop closures & bundle adjustment in real time (10 Hz) on a laptop, with maps just ~0.26% of the original point-cloud size.

https://ieeexplore.ieee.org/document/11078155

08.01.2026 08:04 👍 2 🔁 0 💬 0 📌 0
Seven images of green rope upright at different angles. Caption: MSRA motion for (a) 40°, (b) 50°, (c) 60°, (d) 0° in position and orientation control. MSRA motion comparison for the task (e) 50° a20°, (f) 50°, and (g) 50° c20°.

Authors propose a planning + control framework for modular #SoftRobot arms that uses biLSTMs and only coarse internal sensing feedback. It handles position & orientation control, obstacle avoidance & online interaction.
ieeexplore.ieee.org/document/110...

#NeuralNetwork

07.01.2026 22:51 👍 2 🔁 0 💬 0 📌 0
MSRA motion for (a) 40°, (b) 50°, (c) 60°, (d) 0° in position and orientation control. MSRA motion comparison for the task (e) 50° a20°, (f) 50°, and (g) 50° c20°.

Authors propose a planning + control framework for modular #SoftRobot arms that uses biLSTMs and only coarse internal sensing feedback. It handles position & orientation control, obstacle avoidance & online interaction.
https://ieeexplore.ieee.org/document/11049035

#NeuralNetwork

06.01.2026 08:04 👍 1 🔁 0 💬 0 📌 0
Post image

A model-simplification framework for efficient deformable-object manipulation: task-conditioned action-space reduction plus simplified #dynamics enables faster planning for cloth folding & rope shaping.
ieeexplore.ieee.org/document/110...

#ComputationalModeling #PathPlanning #Robots

30.12.2025 20:50 👍 0 🔁 0 💬 0 📌 0
A graphic overview of the iterative model simplification and motion planning framework for a cloth side folding task, with closed-loop robot execution in the real world. Initially, a simplified geometric model is identified and used to extract key picking points in the reduced action space. A simplified dynamics model is then built and utilized to plan a trajectory in a significantly shorter time. The trajectory is executed on the original model, and if the goal is not reached, the loop iterates, refining the simplified model until a satisfactory trajectory is found. Once a valid trajectory is identified, it is executed on the robot, with the perception system continuously tracking the deformation during manipulation.

A model-simplification framework for efficient deformable-object manipulation: task-conditioned action-space reduction plus simplified #dynamics enables faster planning for cloth folding & rope shaping.
ieeexplore.ieee.org/document/110...

#ComputationalModeling #PathPlanning #Robots

23.12.2025 12:58 👍 2 🔁 1 💬 0 📌 0
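The iterate-until-valid structure in the overview (plan cheaply on a simplified model, validate on the full model, refine on failure) can be sketched in a few lines. All names below are illustrative stand-ins, not the authors' code:

```python
# Toy sketch of a plan-on-simplified-model loop: try increasingly refined
# simplified models until the full model accepts the planned trajectory.
# Hypothetical names throughout; not the paper's API.
def plan_iteratively(full_model_ok, planners):
    """planners: (name, plan_fn) pairs ordered coarse -> fine; each plan_fn
    plans cheaply on its own simplified model. Return the first plan the
    full model validates, or None if every refinement level fails."""
    for name, plan_fn in planners:
        trajectory = plan_fn()
        if full_model_ok(trajectory):
            return name, trajectory
    return None

# Stand-in example: "trajectories" are numbers; the full model accepts >= 3.
result = plan_iteratively(lambda t: t >= 3,
                          [("coarse", lambda: 1), ("refined", lambda: 3)])
print(result)  # ('refined', 3)
```

The point of the pattern is that most iterations run only the cheap simplified planner; the expensive full model is used solely as a validator.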
Post image
16.12.2025 19:51 👍 0 🔁 0 💬 0 📌 0
Text ad that reads, "Call for papers. Robot-Assisted Medical Imaging, a special collection".

A T-RO call-for-papers: Robot-Assisted Medical Imaging special collection. Submissions due by February 15.
www.ieee-ras.org/publications...

#RoboticCT #SurgicalRobotics #SurgicalSoftRobotics #RoboticLaparoscopy #RoboticImaging

05.12.2025 21:46 👍 1 🔁 0 💬 0 📌 0
Post image

Call for Papers: the T-RO Special Collection on Foundation Models deadline has been extended to December 12.

Read more: www.ieee-ras.org/publications...

#FoundationModelsforRobotics #TactileSensing #RobotEmbodiments

05.12.2025 21:44 👍 1 🔁 0 💬 0 📌 0
Snapshots and visualization results as the quadrotor flies through the 3-D tunnel of case 2 shown in Fig. 1(e). The markers are the same as in previous figures, and the quadrotor positions in the snapshots are labeled in the visualization. (a) Visualization. (b) Flying into the entrance. (c) Flying down the vertical section. (d) Flying upward along the slope. (e) Traversing the dark rectangular tunnel section. (f) Flying out of the exit.

This system lets drones fly autonomously through tunnels as narrow as 0.5 m in diameter, combining virtual omnidirectional perception with a motion planner that handles low light, sparse visual features, and airflow disturbances, even beating human pilots.
ieeexplore.ieee.org/document/109...

#Quadrotors

25.11.2025 17:55 👍 1 🔁 0 💬 0 📌 0
Schematic diagram of the closed-loop motion control experiment based on flow perception.

FlowSight: a fish-inspired #ArtificialLateralLine that lets #UnderwaterRobots 'feel' flow in real time. A vision system watches a #biomimetic tentacle deform, and AI estimates the flow vector, enabling closed-loop control and opening new avenues.
ieeexplore.ieee.org/document/109...

21.11.2025 17:02 👍 0 🔁 0 💬 0 📌 0
Generalized setup of transformations to express various sensor modalities such as vectors for the magnetic field or the velocity and transformations for 3-DoF and 6-DoF sensor measurements and calibrations. This work aims to process a gray-box sensor signal together with a reliable system state to identify a corresponding sensor model and its properties.

Authors introduce a method that automatically selects sensor models and relevant state variables from runtime data, with no prior knowledge needed. It integrates into localization frameworks with built-in false-positive checks
ieeexplore.ieee.org/document/110...

#RobotSensingSystems

07.11.2025 23:01 👍 1 🔁 0 💬 0 📌 0
Overview of the robot planning framework. The robot adapts its task planning based on task goals (e.g., completion time), and its estimates of the human's leading/following preferences and performance. The framework continuously observes human actions and updates its estimates accordingly, allowing the robot to dynamically adjust its role and task allocation decisions.

Authors introduce a #TaskPlanning system that adapts to both a human partner’s preferences (whether they like to lead or follow) and performance; jointly handling task allocation + scheduling
ieeexplore.ieee.org/document/110...

#HumanRobotInteraction

07.11.2025 23:00 👍 1 🔁 0 💬 0 📌 0
A group of about 35 people smiling.

A table top of beautiful food.

A group of eight people smiling at a dinner table.

Thank you to the T-RO editorial board for gathering either virtually or in-person in Hangzhou, China last week for our bi-annual board meeting. Our time together included great discussions, new initiatives, and for those in-person a memorably fun dinner at the Hangzhou Cuisine Museum.

#IEEETRO

30.10.2025 09:53 👍 1 🔁 0 💬 0 📌 0
Post image

RAG, a resource-aware greedy algorithm for #robot #MeshNetworks that only uses neighbor info for decision making. RAG scales linearly with network size, cuts communication/computation loads & yields near-optimal coordination in info gathering

ieeexplore.ieee.org/document/109...

13.10.2025 04:16 👍 4 🔁 1 💬 0 📌 0
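For context on the kind of coordination the RAG post describes, the textbook baseline is greedy selection by marginal gain. This toy sketch is NOT the paper's RAG algorithm (which restricts itself to neighbor information); the names and the coverage example are illustrative:

```python
# Generic greedy selection by marginal information gain -- the classic baseline
# for info-gathering coordination. Illustrative only; not the paper's RAG.
def greedy_select(candidates, k, marginal_gain):
    """Pick up to k items, each time taking the largest marginal gain."""
    chosen = []
    for _ in range(k):
        remaining = [c for c in candidates if c not in chosen]
        if not remaining:
            break
        chosen.append(max(remaining, key=lambda c: marginal_gain(chosen, c)))
    return chosen

# Toy coverage instance: each robot position covers a set of grid cells.
cover = {"a": {1, 2}, "b": {2, 3}, "c": {4}}

def gain(chosen, c):
    covered = set().union(*(cover[s] for s in chosen)) if chosen else set()
    return len(cover[c] - covered)  # newly covered cells only

print(greedy_select(list(cover), 2, gain))  # ['a', 'b']
```

A resource-aware variant like RAG keeps this greedy flavor but, per the post, lets each robot evaluate gains using only its neighbors' choices, which is what makes the communication and computation cost scale linearly with network size.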
Experimental results are presented for the robot's response to external forces applied along both the negative x- and y-directions.

Authors present a unified #ModelPredictiveControl framework for #humanoid balance, integrating ankle, hip, and stepping strategies, with variable angular momentum weighting, optimized step timing, and an HQP whole-body controller.
ieeexplore.ieee.org/document/109...

#LeggedLocomotion

08.10.2025 19:17 👍 3 🔁 1 💬 0 📌 0
Graphical illustration of the informal problem definition for designing an AV for urban driving tasks, based on a catalog of hardware and software components with an emphasis on minimizing resources.

Authors propose a task-driven co-design framework that optimizes sensors, planners & compute under safety, cost, energy & weight constraints, demonstrated to show how task complexity affects autonomy stack choices
ieeexplore.ieee.org/document/109...

#RobotSensingSystem #AutonomousVehicle

08.10.2025 19:07 👍 1 🔁 0 💬 0 📌 0