Commit

a cleaned up intro
ekarulf committed May 8, 2011
1 parent 475c388 commit f3ed84b
Showing 2 changed files with 22 additions and 8 deletions.
karulf-thesis/thesis-conclusion.tex (4 changes: 2 additions & 2 deletions)
@@ -7,7 +7,7 @@ \section{Future Work}
\label{section:futurework}
Observing how the user interface was used during the user studies revealed several weak points in our implementation. We believe that small enhancements to RIDE will greatly benefit the end-user experience.

In Section~\ref{sub:info_exchange} we introduced Nielsen's prior work designing ecological interfaces for robotics. While the paper was a large influence on our design work, we feel our interface would benefit from additional visual cues. In their proposed UI, Nielsen extruded the known elements from the coverage map vertically to create an approximate three dimensional representation. \cite{Nielsen_Teleoperation} In our original design, we rejected this design element as we believed it misrepresented the third dimension to the user; However, during user studies users often mentioned it was difficult to position the camera for teleoperation while keeping the map visible.
In Section~\ref{sub:info_exchange} we introduced Nielsen's prior work designing ecological interfaces for robotics. While that paper was a large influence on our design work, we feel our interface would benefit from additional visual cues. In their proposed UI, Nielsen extruded the known elements of the coverage map vertically to create an approximate three-dimensional representation \cite{Nielsen_Teleoperation}. In our original design we rejected this element because we believed it misrepresented the third dimension to the user; however, during the user studies users often mentioned that it was difficult to position the camera for teleoperation while keeping the map visible.
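To make the extrusion concrete: starting from a 2D occupancy grid, every known-occupied cell can be raised into a fixed-height box, giving the operator approximate depth cues. The sketch below is our own minimal illustration of the idea, not Nielsen's implementation; the cell size and wall height are arbitrary assumptions.

```python
import numpy as np

def extrude_occupancy_grid(grid, cell_size=0.1, wall_height=1.0):
    """Extrude occupied cells of a 2D occupancy grid into 3D boxes.

    grid: 2D array; values > 0.5 mark occupied (known) cells.
    Returns axis-aligned boxes (xmin, ymin, zmin, xmax, ymax, zmax)
    that a renderer could draw as walls in an approximate 3D view.
    """
    boxes = []
    for row, col in zip(*np.nonzero(grid > 0.5)):
        x, y = col * cell_size, row * cell_size
        boxes.append((x, y, 0.0, x + cell_size, y + cell_size, wall_height))
    return boxes
```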

During our user studies we also found that many users, especially those without prior robotics experience, found the robots to behave in a counterintuitive manner. As an example, the path planning algorithm used in our study weights its costs in a way that often chooses a longer, more circuitous route that avoids obstacles over a more direct route that passes near them. We believe that exposing more behavioral information will reduce the cognitive load on users. Future versions of the RIDE application would benefit from adapting prior work on visualizing autonomous navigation systems to the field of robotics.
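This behavior is a natural consequence of any planner whose edge costs penalize proximity to obstacles. As an illustration (a minimal sketch of the general technique, not the planner actually used in our study), consider Dijkstra's algorithm on a 4-connected grid where each step adjacent to an obstacle incurs an extra penalty; with a large enough penalty, a long route through open space becomes cheaper than a short route that hugs a wall.

```python
import heapq

def plan(grid, start, goal, proximity_penalty=5.0):
    """Dijkstra's algorithm on a 4-connected grid; grid[r][c] == 1 is an obstacle.

    Each step costs 1 plus a penalty per adjacent obstacle, so the cheapest
    path may be a longer, more circuitous route that stays clear of walls.
    """
    rows, cols = len(grid), len(grid[0])
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))

    def step_cost(r, c):
        # Count obstacles adjacent to (r, c) and charge extra for each one.
        near = sum(grid[r + dr][c + dc] for dr, dc in moves
                   if 0 <= r + dr < rows and 0 <= c + dc < cols)
        return 1.0 + proximity_penalty * near

    frontier, seen = [(0.0, start, [start])], set()
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return cost, path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                heapq.heappush(frontier, (cost + step_cost(nr, nc),
                                          (nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable
```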

@@ -21,7 +21,7 @@ \section{Future User Studies}

In order to evaluate the sliding autonomy approach, we will run a randomized three-condition within-subject experiment. Instead of using notifications, we will use the control modes as our independent variable. This gives us three conditions to test: direct control only, supervisory control only, and blended control. This design would allow us to directly compare the given modes and to record metrics on the blended control interface, such as the time spent in each mode and the number of mode switches.
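The per-mode metrics for the blended condition can be recovered from a timestamped log of mode switches. A sketch of the bookkeeping we have in mind (hypothetical instrumentation, not RIDE's actual logging code):

```python
from collections import defaultdict

def mode_metrics(events, end_time):
    """events: time-sorted (timestamp, mode) entries, one per mode entry.

    Returns (seconds spent in each mode, number of mode switches).
    Hypothetical log format; assumes the first event marks the session start.
    """
    time_in_mode = defaultdict(float)
    for (t0, mode), (t1, _) in zip(events, events[1:] + [(end_time, None)]):
        time_in_mode[mode] += t1 - t0
    return dict(time_in_mode), len(events) - 1  # first event is not a switch

# mode_metrics([(0, "supervisory"), (42, "direct"), (60, "supervisory")], 90)
# -> ({"supervisory": 72.0, "direct": 18.0}, 2)
```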

As mentioned above while we can correlate successful task completion with improved usability, we would like to utilize a more direct metric. Researchers at the United States National Aeronautics and Space Administration (NASA) developed a metric for describing a human's workload while accomplishing a task. The metric, known as the ``Task Load Index'', is computed from a user's perceived workload on six sub-scales: mental demand, physical demand, temporal demand, performance, effort, and frustration. \cite{NASA_TLX} Our revised user study would ask subjects to rate their workload on each of the Task Load Index's six sub-scales after each experiment. At the conclusion of the user study, we would ask users to answer the fifteen questions to determine the weighting of the sub-scales when computing the task load index \cite{NASA_TLX20}.
As mentioned above, while we can correlate successful task completion with improved usability, we would like to use a more direct metric. Researchers at the United States National Aeronautics and Space Administration (NASA) developed a metric for describing a human's workload while accomplishing a task. The metric, known as the ``Task Load Index'', is computed from a user's perceived workload on six sub-scales: mental demand, physical demand, temporal demand, performance, effort, and frustration \cite{NASA_TLX}. Our revised user study would ask subjects to rate their workload on each of the Task Load Index's six sub-scales after each experiment. At the conclusion of the user study, we would ask users to answer the fifteen questions used to determine the weighting of the sub-scales when computing the Task Load Index \cite{NASA_TLX20}.
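For concreteness, the arithmetic behind the weighted score (a sketch following the standard published TLX procedure; names are ours): each sub-scale's weight is the number of times it is selected across the fifteen pairwise comparisons, and the overall index is the weighted mean of the six ratings.

```python
SCALES = ("mental", "physical", "temporal", "performance", "effort", "frustration")

def tlx_score(ratings, pairwise_choices):
    """ratings: a 0-100 rating per sub-scale, keyed by name.

    pairwise_choices: the sub-scale judged more relevant in each of the
    fifteen pairwise comparisons. Weights sum to 15, so the result is 0-100.
    """
    assert len(pairwise_choices) == 15
    weights = {s: pairwise_choices.count(s) for s in SCALES}
    return sum(ratings[s] * weights[s] for s in SCALES) / 15.0
```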

Given the context of testing the control modes using the NASA Task Load Index, we decided to modify the subjects' tasks to more accurately represent our use cases. These new, more challenging tasks would require a larger simulated world, more robots, and a different scenario. A weakness of the preliminary user study's scenario is that the robots were able to complete their task immediately. This model does not match real-world use, where most actions require a non-zero amount of time to complete. We believe such a study will show the benefits of mixed-mode interfaces across the whole population when even experienced users cannot realistically monitor all robots at once.

karulf-thesis/thesis-introduction.tex (26 changes: 20 additions & 6 deletions)
@@ -1,11 +1,25 @@
%!TEX root = karulf-thesis.tex
\chapter{Introduction}
Steve Cousins, CEO of the robotics company \emph{Willow Garage}, suggests that ``in the next ten to twenty years our culture will experience a personal robotics revolution much like the personal computing revolution of the 1980's and 1990's.'' \cite{Cousins} While we agree with Dr. Cousins, we believe the advancement of robotics in our society is limited by control interfaces. The emphasis of robot control software focuses on teleoperation of a single robot by a single human. In this paper, we explore the limitations of existing robot human-robot interfaces, and we present a new paradigm for human-robot interaction.
Steve Cousins, CEO of the robotics company \emph{Willow Garage}, suggests that ``in the next ten to twenty years our culture will experience a personal robotics revolution much like the personal computing revolution of the 1980's and 1990's'' \cite{Cousins}. While we agree with Dr. Cousins, we believe the advancement of robotics in our society is limited by control interfaces. There is a growing need for robot control interfaces that allow a single user to effectively control more than one remote robot. The increasing levels of autonomy demonstrated by our robots allow them to be controlled at increasingly high levels of abstraction. Such systems are not perfect, however, and human intervention is often required when unanticipated circumstances arise or when an object beyond the capability of the perception systems must be evaluated.

%% 3-4 years of history w/ sliding autonomy in HRI
We believe there is a growing need for robot control interfaces that allows a single user to control multiple robots simultaneously. In Section~\ref{sub:hri_prior_work}, we describe several existing interfaces designed for a single human to control one robot. We believe these existing interfaces will not scale to support more complex, real-world tasks. We propose a paradigm for human-robot interaction known as sliding autonomy. Sliding autonomy grants the human operator a high-level abstraction for providing tasks to a robot's artificial intelligence while retaining the flexibility to directly teleoperate a robot. This dynamic level of autonomy grants users the ability to control multiple robots concurrently. While the concept of sliding autonomy is new to human-robot interaction, it is well established within other disciplines.
The requirement for occasional intervention suggests that our interfaces should be capable of both high-level (task-based) and low-level (direct teleoperation) control of remote robots, with the ability to switch easily between these modes as the situation requires. The goal of a single operator controlling many robots also suggests that a system that allows an individual robot to alert the operator when help is needed would be useful. Without such a notification system, if there are too many robots for the operator to attend to at once, there is a distinct possibility that a robot may sit idle for a prolonged period of time while waiting for help.
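One plausible shape for such a notification system (a hypothetical sketch, not a description of our implementation): each robot files a help request when its autonomy stalls, and the interface always surfaces the longest-waiting request so that no robot idles unnoticed.

```python
import heapq
import time

class HelpQueue:
    """Robots file help requests; the UI pops the longest-waiting one.

    A hypothetical sketch of the notification flow described above.
    """

    def __init__(self):
        self._requests = []  # min-heap ordered by request time

    def request_help(self, robot_id, reason):
        """Called by a robot when it needs operator intervention."""
        heapq.heappush(self._requests, (time.time(), robot_id, reason))

    def next_notification(self):
        """Return (timestamp, robot_id, reason) for the robot waiting
        longest, or None if no robot currently needs help."""
        return heapq.heappop(self._requests) if self._requests else None
```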

We argue that interfaces, based on elements from computer video games, are effective tools for the control of large robot teams. In Section~\ref{sec:video_game_interfaces} we introduce several genres of video games. We describe how to apply elements from these various video game genres in robot control interfaces. We present RIDE, the Robot Interactive Display Environment, as an example of such an interface. RIDE contains two different modes: a supervisory control mode and a direct control mode; these modes represent the two extremes of autonomy. The supervisory mode borrows display and control elements from real-time strategy games to allow task-oriented control of many robots. The direct mode borrows visual elements from racing games for a more fine-grained control of a single robot. Additional details of RIDE's design - including the transition between the two modes - is described in Section~\ref{sec:ride_user_interface}.
Many of the problems of controlling remote robots are similar to the problems of controlling characters in video games. Real-time strategy games involve controlling many tens or hundreds of heterogeneous units at once. First- and third-person games involve the detailed direct control of a single character. We believe that the interfaces used for these games are ideal candidates for mobile robot control interfaces.

%% TODO 2011 : What are the conditions?
We conducted a formal user study to evaluate the effectiveness of the RIDE interface, and specifically the notification system. Our two-condition experiment asked subjects to perform a search task using two simulated robots in a small house. We instrumented the application to record the user's behavior and timing information. The results of this data, along with the pre-experiment and post-experiment questionnaires, are presented in Section~\ref{sec:study-results}.
Our motivation for drawing from video games is simple. Video games with easy-to-use, effective interfaces will be played more often, and will sell better, than games with poor interfaces. Almost four decades of market forces have refined the interfaces for many genres of games, resulting in systems that are intuitive, easy to use, and effective. Additionally, since many people play these games, they are already familiar with the interfaces. We hypothesize that these facts will make robot control interfaces based on computer game interfaces highly effective tools.


% The emphasis of robot control software focuses on teleoperation of a single robot by a single human. In this paper, we explore the limitations of existing robot human-robot interfaces, and we present a new paradigm for human-robot interaction. We believe there is a growing need for robot control interfaces that allows a single user to control multiple robots simultaneously. In Section~\ref{sub:hri_prior_work}, we describe several existing interfaces designed for a single human to control one robot. We believe these existing interfaces will not scale to support more complex, real-world tasks.


% We propose an existing paradigm of human-robot interaction known as sliding autonomy. Sliding autonomy grants the human operator a high-level abstraction for providing tasks to a robot's artificial intelligence while retaining the flexibility to directly teleoperate a robot. This dynamic level of autonomy grants users the ability to control multiple robots concurrently. While the concept of sliding autonomy is new to human-robot interaction, it is well established within other disciplines.

% We argue that interfaces, based on elements from computer video games, are effective tools for the control of large robot teams. In Section~\ref{sec:video_game_interfaces} we introduce several genres of video games. We describe how to apply elements from these various video game genres in robot control interfaces.

VERSION 1:
We present RIDE, the Robot Interactive Display Environment, a control interface for robots that draws heavily on computer game interfaces for inspiration. RIDE integrates several types of interface into one adaptive interface: a supervisory control mode and a direct control mode; these modes represent the two extremes of autonomy. The supervisory mode borrows display and control elements from real-time strategy games to allow task-oriented control of many robots. The direct mode borrows visual elements from racing games for more fine-grained control of a single robot. Additional details of RIDE's design, including the transition between the two modes, are described in Section~\ref{sec:ride_user_interface}.

VERSION 2:
We present RIDE, the Robot Interactive Display Environment, a control interface for robots that draws heavily on computer game interfaces for inspiration. RIDE combines aspects of a number of computer game genres, allowing the operator to switch between direct and supervisory control as the situation requires. Unlike existing interfaces in the literature, RIDE allows both the user and the robot to negotiate the level of autonomy. RIDE achieves this negotiable level of autonomy by integrating several types of interface: a supervisory control mode and a direct control mode. The supervisory mode borrows display and control elements from real-time strategy games to allow task-oriented control of many robots. The direct mode borrows visual elements from racing games for more fine-grained control of a single robot. Additional details of RIDE's design, including the transition between the two modes, are described in Section~\ref{sec:ride_user_interface}.
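The mode negotiation can be pictured as a small state machine in which either party may propose a transition but the operator has the final say. The sketch below is our own reading of the idea; the class and method names are hypothetical, not RIDE's code.

```python
from enum import Enum

class Mode(Enum):
    SUPERVISORY = "supervisory"  # RTS-style, task-oriented control of many robots
    DIRECT = "direct"            # racing-game-style teleoperation of one robot

class AutonomyNegotiator:
    """Either party may propose a mode switch; operator requests always win,
    while a robot's proposal takes effect only if the operator does not veto."""

    def __init__(self):
        self.mode = Mode.SUPERVISORY

    def operator_requests(self, mode):
        # The human operator's request is always honored.
        self.mode = mode
        return self.mode

    def robot_requests(self, mode, operator_veto=False):
        # A robot may ask to hand control back (or request direct help);
        # the switch happens only if the operator does not object.
        if not operator_veto:
            self.mode = mode
        return self.mode
```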

To evaluate the effectiveness of the RIDE interface, and specifically the notification system, we conducted a formal user study. Our experiment asked subjects to perform a search task using two simulated robots in a small house. The experiment had two conditions: one with notifications displayed, and one with notifications hidden. We instrumented the application to record the user's behavior and timing information. These results, along with the pre-experiment and post-experiment questionnaires, are presented in Section~\ref{sec:study-results}. In addition to discussing video game interfaces and presenting RIDE, we present the results of our initial user studies with the interface, which suggest that it is well suited to search tasks with multiple robots.
