[ICRA26] Vi-TacMan: Articulated Object Manipulation via Vision and Touch

Abstract

Autonomous manipulation of articulated objects remains a fundamental challenge for robots in human environments. Vision-based methods can infer hidden kinematics but often yield imprecise estimates on unfamiliar objects. Tactile approaches achieve robust control through contact feedback but require accurate initialization. This suggests a natural synergy: vision for global guidance, touch for local precision. Yet no framework systematically exploits this complementarity for generalized articulated manipulation. Here we present Vi-TacMan, which uses vision to propose grasps and coarse directions that seed a tactile controller for precise execution. By incorporating surface normals as geometric priors and modeling directions via von Mises-Fisher distributions, our approach achieves significant gains over baselines (all p<0.0001). Critically, manipulation succeeds without explicit kinematic models; the tactile controller refines coarse visual estimates through real-time contact regulation. Tests on more than 50,000 simulated and diverse real-world objects confirm robust cross-category generalization. This work establishes that coarse visual cues suffice for reliable manipulation when coupled with tactile feedback, offering a scalable paradigm for autonomous systems in unstructured environments.
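
For readers unfamiliar with the von Mises-Fisher distribution mentioned above, the sketch below shows one plausible way a coarse visual direction estimate could be represented and sampled. The variable names, the example values, and the use of SciPy's `vonmises_fisher` are illustrative assumptions for exposition, not the paper's implementation.

```python
# Illustrative sketch only; requires SciPy >= 1.11 for scipy.stats.vonmises_fisher.
import numpy as np
from scipy.stats import vonmises_fisher

# Coarse manipulation direction proposed by a vision module (hypothetical value),
# normalized to a unit vector on the sphere S^2.
mu = np.array([0.0, 0.7, 0.7])
mu /= np.linalg.norm(mu)

# Concentration parameter: larger kappa means more confidence in the visual estimate.
kappa = 20.0

vmf = vonmises_fisher(mu, kappa)

# Sample candidate directions around the coarse estimate; a downstream tactile
# controller could then refine the executed direction from contact feedback.
candidates = vmf.rvs(size=16, random_state=0)
print(candidates.shape)  # (16, 3)
print(vmf.pdf(mu))       # density at the mean direction
```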

Publication
In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2026
Leiyao Cui
Ph.D. '24, co-advised with Prof. Zhi Han

My research interests include scene understanding and robotics.

Zihang Zhao
Ph.D. '22

My research interests include robotics, mechatronics, and tactility-related robot cognition.

Sirui Xie
Undergrad '21

My research interests include computer vision, robot manipulation, and task and motion planning.

Yixin Zhu
Assistant Professor

I build humanlike AI.
