¹ Center for Vision, Cognition, Learning, and Autonomy, UCLA
² Massachusetts Institute of Technology
³ UCLA Department of Psychology
⁴ Beijing Institute for General Artificial Intelligence (BIGAI)
Aligning humans' assessment of what a robot can do with its true capability is crucial for establishing common ground between human and robot partners when they collaborate on a joint task. In this work, we propose an approach to calibrate humans' estimate of a robot's reachable workspace through a small number of demonstrations before collaboration. We develop a novel motion planning method, REMP (Reachability-Expressive Motion Planning), which jointly optimizes the physical cost and the expressiveness of robot motion to reveal the robot's motion capability to a human observer. Our experiments with human participants demonstrate that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground truth. We show that this calibration procedure not only yields a more accurate user perception of the robot's capability, but also promotes more efficient human-robot collaboration in a subsequent joint task.
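To make the joint objective concrete, below is a minimal Python sketch (not the authors' implementation) of the kind of trade-off REMP describes: scoring a candidate demonstration trajectory by its physical cost minus a weighted expressiveness term that measures how much of the reachable workspace the trajectory reveals to an observer. All function names, the distance-based expressiveness proxy, and the 0.1 m threshold are illustrative assumptions.

```python
import numpy as np

def physical_cost(trajectory):
    """Placeholder physical cost: total Cartesian path length of the
    end-effector trajectory (array of shape (N, 3))."""
    diffs = np.diff(trajectory, axis=0)
    return float(np.sum(np.linalg.norm(diffs, axis=1)))

def expressiveness(trajectory, workspace_samples, is_reachable, radius=0.1):
    """Placeholder expressiveness proxy: fraction of reachable workspace
    samples that the trajectory passes close to (within `radius` meters),
    i.e., points whose reachability the demonstration 'reveals'."""
    revealed = 0
    for point in workspace_samples:
        near = np.min(np.linalg.norm(trajectory - point, axis=1)) < radius
        if is_reachable(point) and near:
            revealed += 1
    return revealed / len(workspace_samples)

def remp_objective(trajectory, workspace_samples, is_reachable, lam=1.0):
    """Lower is better: physical cost traded off against expressiveness."""
    return physical_cost(trajectory) - lam * expressiveness(
        trajectory, workspace_samples, is_reachable)

# Example usage: pick the best of several candidate demonstrations.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    workspace_samples = rng.uniform(-0.5, 0.5, size=(200, 3))
    is_reachable = lambda p: np.linalg.norm(p) < 0.4  # toy reachability model
    candidates = [rng.uniform(-0.5, 0.5, size=(20, 3)) for _ in range(5)]
    best = min(candidates,
               key=lambda t: remp_objective(t, workspace_samples, is_reachable))
```

In the paper this trade-off is solved with a dedicated motion planner rather than by ranking random candidates; the sketch only illustrates the structure of the objective.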
Gao, X., Yuan, L., Shu, T., Lu, H., & Zhu, S. C. (2022). Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration. IEEE Robotics and Automation Letters, 7(2), 2644-2651.
@article{gao2022show,
  title={Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration},
  author={Gao, Xiaofeng and Yuan, Luyao and Shu, Tianmin and Lu, Hongjing and Zhu, Song-Chun},
  journal={IEEE Robotics and Automation Letters},
  volume={7},
  number={2},
  pages={2644--2651},
  year={2022},
  publisher={IEEE}
}