    Representation learning for human-robot teaming with multi-robot systems

    Files
    Reily_mines_0052E_12135.pdf (PDF, 9.395 MB)
    supplemental.zip (unknown format, 305.9 KB)
    Author: Reily, Brian J.
    Advisor: Zhang, Hao
    Date issued: 2021
    Keywords: multi-agent; representation learning; human-robot teaming; robotics; multi-robot
    URI: https://hdl.handle.net/11124/176439
    Abstract
    Human-robot teaming is a critical capability, enabling humans and robots to work alongside each other as teammates. Robots perform a variety of tasks alongside humans, and seamless collaboration enables robots to increase the efficiency, productivity, and safety of humans across a wide spectrum of jobs and lifestyles, while allowing humans to rely on robots to augment their work and improve their lives. Due to the complexities of multi-robot systems and the difficulty in identifying human intentions, effective and natural human-robot teaming is a challenging problem. When deploying a multi-robot system to explore and monitor a complex environment, a human operator may struggle to properly account for the various communication links, capabilities, and current spatial distribution of the multi-robot system. When a robot attempts to aid its human teammates with a task, it may struggle to properly account for the context given by the environment and other teammates. To address scenarios like these, representations are needed that allow humans to understand their robot teammates and robots to understand their human teammates. This research addresses this challenge by learning representations of humans and multi-robot systems, primarily through regularized optimization. The introduced representations allow humans to both understand and effectively control multi-robot systems, while also enabling robots to understand and interact with their human teammates. First, I introduce approaches to learn representations of multi-robot structure, incorporating multiple relationships within the system in order to allow humans to divide or distribute the multi-robot system in an environment. Next, I propose multiple representation learning approaches to enable control of multi-robot systems. These representations, such as weighted utilities or strategic games, enable multi-robot systems to lead followers, navigate to goals, and collaboratively perceive objects, without detailed human intervention. Finally, I introduce representations of individual human activities or team intents, enabling robots to incorporate context from the environment and the entire team to more effectively interact with human teammates. These proposed representation learning approaches not only address specific tasks such as sensor coverage and human behavior understanding, and application scenarios such as search and rescue, disaster response, and homeland security, but also conclusively show the value of representations that can encode complicated team structures and multisensory observations.
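    The record itself contains no code, but as a rough illustration of what "regularized optimization" for representation learning can look like in practice, the sketch below learns a linear mapping W from robot features X to a low-dimensional team representation Y by minimizing a least-squares loss plus an L2,1 norm that zeroes out uninformative feature rows. Every detail here (the specific objective, the names learn_representation and prox_l21, the toy data) is an illustrative assumption, not the dissertation's actual formulation.

        import numpy as np

        def prox_l21(W, t):
            # Proximal operator of t * ||W||_{2,1}: shrink each row of W toward zero.
            norms = np.maximum(np.linalg.norm(W, axis=1, keepdims=True), 1e-12)
            return W * np.maximum(0.0, 1.0 - t / norms)

        def learn_representation(X, Y, lam=0.1, n_iters=500):
            # Proximal gradient descent on ||X W - Y||_F^2 + lam * ||W||_{2,1}.
            W = np.zeros((X.shape[1], Y.shape[1]))
            step = 1.0 / (2.0 * np.linalg.norm(X, 2) ** 2)  # 1/L for the smooth term
            for _ in range(n_iters):
                grad = 2.0 * X.T @ (X @ W - Y)
                W = prox_l21(W - step * grad, step * lam)
            return W

        # Toy usage: 20 robots with 8 features each, mapped to a 3-D representation.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(20, 8))
        Y = rng.normal(size=(20, 3))
        W = learn_representation(X, Y, lam=0.5)
        print("informative feature rows:", np.flatnonzero(np.linalg.norm(W, axis=1) > 1e-6))

    The L2,1 regularizer is shown only because it is a common choice for this kind of problem: it selects features jointly across all output dimensions, which suits structured, multisensory team data; the dissertation may well use different objectives.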
    Rights: Copyright of the original work is retained by the author.
    Collections: 2021 - Mines Theses & Dissertations


    Related items

    Showing items related by title, author, creator and subject.

    • A cognitive comprehension framework for human-centered situation learning and adaptation in robotics

      Zhang, Xiaoli; Liu, Rui; Zhang, Hao; King, Jeffrey C.; Stebner, Aaron P. (Colorado School of Mines. Arthur Lakes Library, 2018)
      Human-centered environments, which are defined by robots, humans, and environmental conditions, are a key part of robot task execution. Accurate understanding of the human-centered environment is a precondition of successful robot execution in real-world situations. However, practical situations involve many environmental uncertainties, such as task execution dynamics, tool and user variety, temporal and spatial limitations, and unstructured scenarios. Robot task execution performance is largely undermined when execution moves from controlled lab environments to uncontrolled practical environments. To improve robot performance in practical human-centered environments, this dissertation designs a three-layer cognitive framework to support comprehensive robot understanding for dealing with environmental uncertainties, making the robot “think” like a human instead of merely “act” like a human. With the cognitive comprehension framework, three main contributions are made: (1) by abstracting low-level executions and real-world observations of human behaviors, robot behaviors, and environmental conditions, high-level cognitive understanding is generated from a human perspective, endowing robots with abstract understanding of human-centered situations; (2) by flexibly decomposing a high-level abstract goal into low-level execution details, robots are able to make and revise plans according to human requirements and environmental constraints; and (3) the three-layer cognitive framework is updated and evolved as more robot commonsense knowledge is learned. In this dissertation research, the framework is combined with efficient robot knowledge learning methods, such as web-mining-supported knowledge collection and learning from demonstration, supporting adaptive robot execution with different domain knowledge.
    • Novel intuitive human-robot interaction using 3D gaze

      Zhang, Xiaoli; Li, Songpo; Zhang, Hao; Hoff, William A.; Steele, John P. H. (Colorado School of Mines. Arthur Lakes Library, 2017)
      Human-centered robotics has become a new trend in robotic research in which robots closely work around humans or even directly/indirectly make contact with humans. Human-centered robotics not only requires the robot to successfully and safely accomplish the given task, but also requires it to establish a rapport with humans by considering human factors. Human-robot interaction (HRI) has been an essential component in human-centered robotics due to the fundamental information exchange between the human and the robot, which plays an essential role in the task success and rapport establishment. In this dissertation, human gaze, which indicates where a person is looking, is scientifically studied as an intuitive and effective HRI modality. The gaze modality is natural and effortless to utilize, and from gaze modality, rich information about a user's mental state can be revealed. Despite the promise of gaze modality, applying gaze as an interaction modality is significantly limited by the virtual gaze tracking technology available and low-level gaze interpretation. Three-dimensional (3D) gaze tracking in real environments, which measures the 3D Cartesian location of where a person is looking, is highly desirable for intuitive and effective HRI in human-centered robotics. Employing 3D gaze as an interaction modality not only indicates the manipulation target, but also reports the location of the target and suggests how to perform the manipulation on it. The goal of this dissertation is to achieve the novel 3D-gaze-based HRI modality, with which a user can intuitively express what tasks he/she wants the robot to do by directly looking at the object of interest in the real world. In working toward this goal, the investigation concentrates on 1) the technology to accurately sense where a person is looking in real environments and 2) the method to interpret the human gaze and convert it into an effective interaction modality. This new interaction modality is expected to benefit users who have impaired mobility in their daily living as well as able-bodied users who need an additional hand in general working scenarios.
    • Effects of proactive explanations by autonomous systems on human-robot trust

      Williams, Thomas; Zhu, Lixiao; Zhang, Hao; Mehta, Dinesh P. (Colorado School of Mines. Arthur Lakes Library, 2020)
      Human-Robot Interaction (HRI) seeks to understand, design, and evaluate robots for human-robot teams. Previous research has indicated that the performance of human-robot teams depends on human-robot trust, which in turn depends on appropriate robot-to-human transparency. In this thesis, we explore one strategy for improving robot transparency, proactive explanations, and its effect on human-robot trust. We also introduce a resource management testbed, in which human participants engage in a resource management exercise while a robot teammate performs an assistive task. Our results suggest that there is a positive relationship between providing proactive explanations and human-robot trust.