Representation learning for human-robot teaming with multi-robot systems
Abstract
Human-robot teaming is a critical capability, enabling humans and robots to work alongside each other as teammates. Robots perform a variety of tasks alongside humans, and seamless collaboration allows robots to increase the efficiency, productivity, and safety of humans across a wide spectrum of jobs and lifestyles, while allowing humans to rely on robots to augment their work and improve their lives. Due to the complexities of multi-robot systems and the difficulty of identifying human intentions, effective and natural human-robot teaming is a challenging problem. When deploying a multi-robot system to explore and monitor a complex environment, a human operator may struggle to properly account for the various communication links, capabilities, and current spatial distribution of the multi-robot system. When a robot attempts to aid its human teammates with a task, it may struggle to properly account for the context given by the environment and other teammates. To address scenarios like these, representations are needed that allow humans to understand their robot teammates and robots to understand their human teammates. This research addresses this challenge by learning representations of humans and multi-robot systems, primarily through regularized optimization. The introduced representations allow humans to both understand and effectively control multi-robot systems, while also enabling robots to understand and interact with their human teammates. First, I introduce approaches to learn representations of multi-robot structure that incorporate multiple relationships within the system, allowing humans to divide or distribute the multi-robot system in an environment. Next, I propose multiple representation learning approaches to enable control of multi-robot systems. These representations, such as weighted utilities or strategic games, enable multi-robot systems to lead followers, navigate to goals, and collaboratively perceive objects without detailed human intervention. Finally, I introduce representations of individual human activities and team intents, enabling robots to incorporate context from the environment and the entire team to interact more effectively with human teammates. These proposed representation learning approaches not only address specific tasks such as sensor coverage and human behavior understanding, and application scenarios such as search and rescue, disaster response, and homeland security, but also demonstrate the value of representations that can encode complicated team structures and multisensory observations.
Rights
Copyright of the original work is retained by the author.
Related items
Showing items related by title, author, creator and subject.
- Weederbot - fusion, characterization, and evaluation of low-cost sensors for mobile robot navigation. Steele, John P. H.; Baird, Kegan J. (Colorado School of Mines. Arthur Lakes Library, 2008)
- Influence of robotic welding parameters on the fatigue strength of cast iron to steel weldments. Jones, Jerald E.; Flinn, Brian D., 1961- (Colorado School of Mines. Arthur Lakes Library, 1986)