Human-Systems Interfaces for Space Robotics

The ultimate efficacy of space systems depends greatly on the interfaces that humans use to operate them. The current state of the art in human-system interfaces is summarized below, along with some of the advances expected in the next 25 years. Human operation of most systems today follows a simple pattern reminiscent of the classic “Sense – Plan – Act” control paradigm for robotics and remotely operated systems: the human observes the state of the system and its environment, forms a mental plan for its future action, and then commands the robot or machine to execute that plan. Most recent work in this field focuses on providing tools to more effectively communicate state to the human and capture commands for the robot, each of which is discussed in more detail below.
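
This cycle can be made concrete with a minimal sketch. The names below (Robot, operator_plan, control_cycle) are hypothetical placeholders standing in for a real teleoperation stack, not any specific flight-system API:

```python
"""Minimal sketch of the human-in-the-loop "Sense - Plan - Act" cycle.
All class and function names are illustrative placeholders."""

import random
import time


class Robot:
    """Stand-in for a remote robot exposing sensed state and a command hook."""

    def sense(self) -> dict:
        # A real system would aggregate telemetry and sensor data here.
        return {"position": random.uniform(0, 10), "battery": random.uniform(0.5, 1.0)}

    def execute(self, command: str) -> None:
        print(f"robot executing: {command}")


def operator_plan(state: dict) -> str:
    """Stand-in for the human's mental planning step."""
    return "drive_forward" if state["battery"] > 0.6 else "hold_position"


def control_cycle(robot: Robot, cycles: int = 3, period_s: float = 0.1) -> None:
    for _ in range(cycles):
        state = robot.sense()           # Sense: observe system and environment
        command = operator_plan(state)  # Plan: human decides the next action
        robot.execute(command)          # Act: command the robot to carry it out
        time.sleep(period_s)


if __name__ == "__main__":
    control_cycle(Robot())
```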

Current human-system interfaces typically include software applications that communicate internal system state via abstract gauges and readouts reminiscent of aircraft cockpits, or via overlays on realistic illustrations of the physical plant and its components. Information from sensors is available both in its native form (for instance, a single image from a camera) and aggregated into a navigable model of the environment that may combine data from multiple measurements and sensors. Some interfaces are adapted to immersive displays or mobile devices, or allow multiple distributed operators to monitor the remote system simultaneously.
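
As a rough illustration of that aggregation step, the sketch below fuses readings from multiple sensors into a single browsable model while keeping each reading retrievable in its native form. The Measurement and EnvironmentModel types and the simple averaging scheme are assumptions for illustration, not any particular ground-data system:

```python
"""Sketch of fusing per-sensor readings into one navigable environment model."""

from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class Measurement:
    sensor_id: str   # e.g. "navcam_left", "lidar" (illustrative names)
    position: tuple  # (x, y) cell the reading refers to
    value: float     # scalar reading, e.g. estimated elevation
    raw: bytes = b""  # native-form payload (e.g. the original image)


@dataclass
class EnvironmentModel:
    """Aggregates measurements from multiple sensors into one map the
    operator can browse, while keeping each native reading retrievable."""

    cells: dict = field(default_factory=lambda: defaultdict(list))

    def ingest(self, m: Measurement) -> None:
        self.cells[m.position].append(m)

    def fused_value(self, position: tuple) -> float:
        """Naive fusion: average all sensors' readings for one cell."""
        readings = self.cells[position]
        return sum(m.value for m in readings) / len(readings) if readings else float("nan")

    def native(self, position: tuple) -> list:
        """Native-form data (e.g. single camera frames) for one cell."""
        return [m.raw for m in self.cells[position]]


model = EnvironmentModel()
model.ingest(Measurement("navcam_left", (3, 4), 1.2))
model.ingest(Measurement("lidar", (3, 4), 1.4))
print(model.fused_value((3, 4)))  # averaged reading from both sensors (~1.3)
```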

Future interfaces will communicate state through increased use of immersive displays, creating “Holodeck”-like virtual environments that the human operator can explore naturally, with “Avatar”-like telepresence. These interfaces will also more fully engage the operator's aural and tactile senses to convey more information about the state of the robot and its surroundings. As robots grow increasingly autonomous, improved techniques for communicating the “mental state” of robots will be introduced, along with mechanisms for understanding the dynamic state of reconfigurable robots and the complex sensor data produced by swarms.
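
One way such a “mental state” report might look is sketched below: a periodic self-report of the robot's goal, remaining plan, and confidence, rendered for a display or aural channel. The MentalState fields and report format are purely illustrative assumptions:

```python
"""Sketch of surfacing a robot's "mental state" to an operator."""

from dataclasses import dataclass


@dataclass
class MentalState:
    current_goal: str         # what the robot believes it is trying to do
    plan_steps: list          # remaining actions in its current plan
    confidence: float         # 0..1 self-assessed likelihood of success
    blocking_issue: str = ""  # why the robot is stuck, if it is


def render_for_operator(state: MentalState) -> str:
    """Formats the robot's self-report for a display or aural channel."""
    lines = [
        f"goal: {state.current_goal} (confidence {state.confidence:.0%})",
        "plan: " + " -> ".join(state.plan_steps),
    ]
    if state.blocking_issue:
        lines.append(f"blocked: {state.blocking_issue}")
    return "\n".join(lines)


print(render_for_operator(MentalState(
    current_goal="reach sample site B",
    plan_steps=["climb ridge", "cross gully", "approach site"],
    confidence=0.7,
)))
```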

Current human-robot interfaces typically allow for two types of commands. The first type consists of simple, brief directives, sometimes sent via specialized control devices such as joysticks, which interrupt existing commands and immediately affect the state of the robot. A few interfaces allow these commands to be issued through speech and gestures.
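
The interrupting behavior of these directives can be sketched as a simple preemptive executor. The Directive and CommandExecutor names and the queue-based design are illustrative assumptions rather than a real flight command system:

```python
"""Sketch of joystick-style directives that preempt whatever is running."""

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Directive:
    name: str
    immediate: bool = False  # True for joystick-style interrupting commands


class CommandExecutor:
    """Runs one directive at a time; immediate directives preempt."""

    def __init__(self) -> None:
        self.current: Optional[Directive] = None
        self.queue: List[Directive] = []

    def submit(self, d: Directive) -> None:
        if d.immediate:
            # Interrupting directive: drop the current activity right away.
            if self.current is not None:
                print(f"interrupting {self.current.name}")
            self.current = d
        elif self.current is None:
            self.current = d
        else:
            self.queue.append(d)  # wait behind the current directive

    def finish_current(self) -> None:
        """Called when the robot reports the current directive complete."""
        self.current = self.queue.pop(0) if self.queue else None


ex = CommandExecutor()
ex.submit(Directive("drive_to_waypoint"))     # starts immediately
ex.submit(Directive("image_target"))          # queued behind the drive
ex.submit(Directive("stop", immediate=True))  # interrupts the drive now
print(ex.current.name)                        # -> stop
```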
