Representation learning for long-term collaborative autonomy
Han, Fei
Date Issued
2018
Embargo Expires
2018-11-09
Abstract
Autonomy has attracted substantial research attention over the past few decades, since it is the key capability of all autonomous systems, including unmanned aerial vehicles (UAVs), unmanned ground vehicles (UGVs), unmanned surface vehicles (USVs), and humanoid robots. These fully or partially autonomous systems are transforming the way people work, live, and communicate, e.g., automated air-conditioning systems in buildings and robot arms manufacturing cars in factories. On the other hand, robots and intelligent agents usually do not work alone; assistance robots, coaching robots, and self-driving cars all need to observe, learn from, and interact with human beings. When robots are able to interact and collaborate with humans autonomously, we call this capability collaborative autonomy. Collaborative autonomy is a very challenging problem, requiring robots to have both strong perception and strong decision-making capabilities. It becomes even more challenging when collaborative autonomy must be sustained over a long period, since the environment undergoes strong appearance variations, such as changes in illumination, weather, and vegetation across months or even seasons. Humans can easily identify the same object or place at different times of day, in different months, and in different seasons; this long-term perception capability, however, remains very challenging for real-world robots, even though it is key to enabling long-term autonomy. This research investigates perception problems for long-term collaborative autonomy. In this dissertation, several representation learning approaches are introduced to improve the real-time perception performance of robots over long periods. First, I introduce a 3D human skeletal representation learning approach that enables real-time robot awareness of human behaviors and is invariant to viewpoint, human body scale, and motion speed.
Then, multiple representation learning approaches are presented for the long-term place recognition problem, enabling lifelong relocalization of robots with a single camera. Finally, I demonstrate that the representations learned with the approaches proposed in this dissertation can be integrated into an online robotic decision-making system and enable long-term collaborative autonomy.
Rights
Copyright of the original work is retained by the author.