Prof. Yu Xiang
Assistant Professor, University of Texas at Dallas
One of the core problems in robot manipulation is enabling robots to manipulate objects and use them to perform tasks. Compared to end-to-end approaches that map sensory input directly to control commands, I believe that enabling robots to understand objects from sensory input, and conditioning planning and control on object perception, can achieve generalizable robot manipulation. In this talk, I will highlight our efforts on object-centric perception for robot manipulation. I will start with methods that enable robots to segment unseen objects in input images, and then illustrate how robots can leverage interactions with objects to improve object segmentation. Since segmentation separates objects from each other and from the background, it enables learning object-centric representations for object perception. I will then present few-shot learning approaches that enable robots to recognize segmented objects, and an implicit representation of objects that incorporates grasps from multiple robotic hands. These object perception techniques facilitate robotic grasping and grasp transfer among robotic grippers and humans.
Yu Xiang has been an Assistant Professor in the Department of Computer Science at the University of Texas at Dallas since Fall 2021. Before joining UT Dallas, he was a Senior Research Scientist at NVIDIA. He received his Ph.D. in Electrical Engineering from the University of Michigan, Ann Arbor in 2016. He was a Postdoctoral Researcher at Stanford University and at the University of Washington from 2016 to 2017, and was a visiting student researcher in the Stanford Artificial Intelligence Laboratory from 2013 to 2016. He received his M.S. degree in Computer Science from Fudan University in 2010 and his B.S. degree in Computer Science from Fudan University in 2007.

Yu's research focuses on robotics and computer vision. He is interested in studying how robots can acquire various skills in perception, planning, and control through learning, and integrate these skills in a systematic way in order to conduct tasks in human environments autonomously. He regularly publishes in top-tier robotics and computer vision journals and conferences, including T-RO, RA-L, RSS, ICRA, CoRL, IJCV, CVPR, ICCV, and ECCV. He serves as an associate editor for RA-L, and served as an area chair for CoRL 2022-2023 and an associate editor for IROS 2022-2023.