Novel intuitive human-robot interaction using 3D gaze
dc.contributor.advisor | Zhang, Xiaoli | |
dc.contributor.author | Li, Songpo | |
dc.date.accessioned | 2017-06-08T16:04:37Z | |
dc.date.accessioned | 2022-02-03T12:59:41Z | |
dc.date.available | 2017-06-08T16:04:37Z | |
dc.date.available | 2022-02-03T12:59:41Z | |
dc.date.issued | 2017 | |
dc.identifier | T 8268 | |
dc.identifier.uri | https://hdl.handle.net/11124/170993 | |
dc.description | Includes bibliographical references. | |
dc.description | 2017 Spring. | |
dc.description.abstract | Human-centered robotics has become a new trend in robotics research, in which robots work closely around humans or even make direct or indirect contact with them. Human-centered robotics requires the robot not only to accomplish the given task successfully and safely, but also to establish a rapport with humans by considering human factors. Human-robot interaction (HRI) is an essential component of human-centered robotics because the fundamental information exchange between the human and the robot plays a central role in both task success and rapport establishment. In this dissertation, human gaze, which indicates where a person is looking, is scientifically studied as an intuitive and effective HRI modality. The gaze modality is natural and effortless to use, and it reveals rich information about a user's mental state. Despite this promise, applying gaze as an interaction modality has been significantly limited by existing gaze tracking technology, which is confined to virtual (on-screen) settings, and by low-level gaze interpretation. Three-dimensional (3D) gaze tracking in real environments, which measures the 3D Cartesian location of where a person is looking, is highly desirable for intuitive and effective HRI in human-centered robotics. Employing 3D gaze as an interaction modality not only indicates the manipulation target, but also reports the target's location and suggests how to perform the manipulation on it. The goal of this dissertation is to achieve a novel 3D-gaze-based HRI modality with which a user can intuitively express what tasks he or she wants the robot to do by directly looking at the object of interest in the real world. In working toward this goal, the investigation concentrates on 1) the technology to accurately sense where a person is looking in real environments and 2) the method to interpret the human gaze and convert it into an effective interaction modality. This new interaction modality is expected to benefit users with impaired mobility in their daily living, as well as able-bodied users who need an additional hand in general working scenarios. | |
dc.format.medium | born digital | |
dc.format.medium | doctoral dissertations | |
dc.language | English | |
dc.language.iso | eng | |
dc.publisher | Colorado School of Mines. Arthur Lakes Library | |
dc.relation.ispartof | 2010-2019 - Mines Theses & Dissertations | |
dc.rights | Copyright of the original work is retained by the author. | |
dc.subject | assistive robot | |
dc.subject | human-robot interaction | |
dc.subject | 3D gaze | |
dc.subject | intention awareness | |
dc.subject | attention awareness | |
dc.title | Novel intuitive human-robot interaction using 3D gaze | |
dc.type | Text | |
dc.contributor.committeemember | Zhang, Hao | |
dc.contributor.committeemember | Hoff, William A. | |
dc.contributor.committeemember | Steele, John P. H. | |
thesis.degree.name | Doctor of Philosophy (Ph.D.) | |
thesis.degree.level | Doctoral | |
thesis.degree.discipline | Mechanical Engineering | |
thesis.degree.grantor | Colorado School of Mines |