The robot’s dynamic model is required in the implementation of most advanced model-based control schemes. The dynamic model is crucial because it can be used to linearize the non-linear system in both joint space and task space. Since the dynamic parameters of industrial manipulators are normally not available, proper procedures should be carried out to identify these parameters.
One way to identify the dynamic parameters is to dismantle the robot and measure it link by link. However, this approach is obviously not always feasible in practice. Another problem with the dismantling approach is that it does not account for the effects of joint friction.
In order to account for joint friction, several methods have been proposed. These methods can be roughly divided into two groups: those that identify joint friction and rigid-body dynamics separately, and those that identify them simultaneously. The former first identifies the friction parameters joint by joint and then identifies the rigid-body dynamic parameters using the identified friction parameters. Since the friction parameters are identified joint by joint, non-linear dynamic friction models such as Stribeck and/or hysteresis effects can be considered.
The main drawback of this approach comes from the fact that friction can be highly time-varying. Moreover, friction forces/torques are always coupled with the inertial forces/torques; thus, neither can be precisely identified without the other.
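As a rough sketch of the simultaneous approach, the rigid-body and friction torques can be stacked into a regressor form τ = Y(q, q̇, q̈)θ that is linear in the unknown parameters θ, which can then be estimated by least squares from data logged along an exciting trajectory. The snippet below only illustrates this numerical step under that standard assumption; the regressor Y_stack and the measured torques tau_stack are placeholders for values produced by the user's own robot model and data acquisition, and the synthetic data at the end exists only to show the call pattern.

```python
import numpy as np

def identify_parameters(Y_stack: np.ndarray, tau_stack: np.ndarray) -> np.ndarray:
    """Estimate dynamic (and friction) parameters from stacked regressor rows.

    Y_stack:   (N*dof, p) regressor matrix evaluated along the trajectory
    tau_stack: (N*dof,)   measured joint torques
    Returns the least-squares parameter estimate theta_hat of shape (p,).
    """
    theta_hat, _, _, _ = np.linalg.lstsq(Y_stack, tau_stack, rcond=None)
    return theta_hat

# Illustration with synthetic data (not a real robot model):
rng = np.random.default_rng(0)
Y = rng.standard_normal((600, 12))                         # pretend regressor for a small arm
theta_true = rng.standard_normal(12)                       # "true" base parameters
tau = Y @ theta_true + 0.01 * rng.standard_normal(600)     # noisy torque measurements
print(np.allclose(identify_parameters(Y, tau), theta_true, atol=0.05))
```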
3D Vision-Based Control on an Industrial Robot
Industrial robots are designed for tasks such as pick and place, welding, and painting. The environment and the working conditions for those tasks are well defined. If the working conditions change, those robots may not be able to work properly. Therefore, external sensors are necessary to enhance the robot’s ability to work in a dynamic environment. A vision sensor is an important sensor that can be used to extend the robot’s capabilities. The image of an object of interest can be extracted from its environment, and information from this image can then be computed to control the robot. Control that uses images as feedback signals is known as vision-based control. Recently, vision-based control has become a major research field in robotics.
Vision-based control can be classified into two main categories. The first approach, feature-based visual control, uses image features of a target object from the image (sensor) space to compute error signals directly. The error signals are then used to compute the required actuation signals for the robot. The control law is also expressed in the image space. Many researchers in this approach use a mapping function (Jacobian) from the image space to the Cartesian space.
The image Jacobian, generally, is a function of the focal length of the lens of the camera, depth, and the image features.
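As an illustration of the feature-based approach, the sketch below uses the commonly cited point-feature interaction matrix written in normalized image coordinates (so the focal length has already been divided out) together with the classical control law v = −λ L⁺(s − s*). The function names, feature values, and depths are invented for this example; a real controller would first convert pixel coordinates to normalized coordinates using the camera’s focal length and principal point, and would estimate the depth Z of each feature.

```python
import numpy as np

def point_interaction_matrix(x: float, y: float, Z: float) -> np.ndarray:
    """Classical 2x6 interaction (image Jacobian) matrix for one point feature,
    expressed in normalized image coordinates (x, y) with estimated depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera velocity command v = -gain * pinv(L) @ (s - s*)."""
    L = np.vstack([point_interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Illustration: four point features slightly off their desired positions.
s = [(0.12, 0.10), (-0.11, 0.10), (-0.10, -0.12), (0.11, -0.09)]
s_star = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
v = ibvs_velocity(s, s_star, depths=[1.0] * 4)   # 6-vector (vx, vy, vz, wx, wy, wz)
print(v)
```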
Overview of the Robotic Future
• The industry faces many of the same challenges that the personal computer business faced 30 years ago. Because of a lack of common standards and platforms, designers usually have to start from scratch when building their machines.
• Another challenge is enabling robots to quickly sense and react to their environments. Recent decreases in the cost of processing power and sensors are allowing researchers to tackle these problems.
• Robot builders can also take advantage of new software tools that make it easier to write programs that work with different kinds of hardware. Networks of wireless robots can tap into the power of desktop PCs to handle tasks such as visual recognition and navigation.
The word ROBOT was popularized in 1921 by Czech playwright Karel Capek, but people have envisioned creating robot-like devices for thousands of years. In Greek and Roman mythology, the gods of metalwork built mechanical servants made from gold. In the first century A.D., the great engineer Heron of Alexandria, who is credited with inventing the first steam engine, designed ingenious automatons.
Over the past century, anthropomorphic machines have become familiar figures in popular culture through books such as Isaac Asimov’s I, Robot, movies such as Star Wars, and television shows such as Star Trek. The popularity of robots in fiction indicates that people are receptive to the idea that these machines will one day walk among us as helpers and even as companions.
Trends in Intelligent Robotic Industry Development
According to the definition of the International Federation of Robotics (IFR), robotics can be classified into two categories: industrial robotics and service robotics. Survey reports from the Japan Robot Association (JARA) reveal that, because industrial robots are used only in precision manufacturing and the industrial robotics market is gradually becoming saturated, there is limited room for future growth in the mid and long term.
In contrast, there is great potential for family/personal service robotics. As the populations of most developed countries are aging and birth rates are low, there is an increasing demand for home care for the elderly, child education and entertainment.
In the beginning, at a time when there was almost no demand for applications of any kind, prototype intelligent service robots were mostly research models developed by academic research institutions. In recent years, Japanese auto makers, especially Honda and Toyota, have expanded the scale of service robotics R&D and endeavored to achieve market objectives for product commercialization; they have also developed and put to use their respective ASIMO and tour-guide robots.
The United States puts more emphasis on AI and control technology R&D. US manufacturers and academic institutions have all been endeavoring to develop AI robots, launching products such as the da Vinci robotic system for surgical operations, tour-guide robots, and the Roomba. The industry trend shows that the US is manufacturing robotic components for diversified uses, and is involved in the establishment of the robotics value chain.
Industrial Robots to Replace Human Workers
According to the Robotic Industries Association, an industrial robot is an automatically controlled, reprogrammable, multipurpose manipulator programmable in three or more axes, which may be either fixed in place or mobile, for use in industrial automation applications. The first industrial robot, the Unimate, was installed by General Motors in 1961. Thus industrial robots have been around for over four decades.
According to the International Federation of Robotics, another professional organization, a service robot is a robot which operates semi- or fully autonomously to perform services useful to the well-being of humans and equipment, excluding manufacturing operations.
Many industrial automation tasks, like assembly, are repetitive, and tasks like painting are dirty. Robots can sometimes perform these tasks easily. Such tasks often do not require intelligence or decision-making skills from human workers. Many of these dumb tasks, like vacuum cleaning or loading packages onto pallets, can be executed perfectly by robots with a precision and reliability that humans may lack.
As our population ages and the number of wage earners becomes a smaller fraction of our population, it is clear that robots have to fill the void in society. Industrial, and to a greater extent, service robots have the potential to fill this void in the coming years.
A second reason for the deployment of industrial robots is the trend toward small product volumes and an increase in product variety. As the volume of products being produced decreases, hard automation becomes a more expensive proposition, and robotics is the only alternative to manual production.
State of the Art in Theory and Practice of Industrial Robots
Today, industrial robots represent a mature technology. They are capable of lifting hundreds of pounds of payload and positioning the weight with accuracy to a fraction of a millimeter. Sophisticated control algorithms are used to perform positioning tasks exceptionally well in structured environments.
FANUC, the leading manufacturer of industrial robots, has an impressive array of industrial robot products ranging from computerized numerical control (CNC) machines with 1 nm Cartesian resolution and 10⁻⁵ degree angular resolution to robots with 450 kg payloads and 0.5 mm repeatability. Some of their robots include such features as collision detection, compliance control, and payload inertia/weight identification. The control software supports networking and continuous coordinated control of two arms. Force feedback is sometimes used for assembly tasks.
The nature of the robotic workcell has changed since the early days of robotics. Instead of having a single robot synchronized with material handling equipment like conveyors, robots now work together in a cooperative fashion, eliminating mechanized transfer devices. Human workers can be seen in closer proximity to robots, and human-robot cooperation is closer to becoming a reality.
However, industrial robots still do not have the sensing, control and decision-making capability that is required to operate in unstructured, 3D environments. Cost-effective, reliable force sensing for assembly still remains a challenge. Finally, we still lack the fundamental theory and algorithms for manipulation in unstructured environments, and industrial robots currently lack dexterity in their end-effectors and hands.
Robots in Lean Manufacturing
To understand the impact of robots on lean manufacturing, we need to gain a good understanding of the term – Lean Manufacturing. Lean manufacturing is a management philosophy focusing on reduction of the seven manufacturing related wastes as defined originally by Toyota. The wastes are:
• Overproduction (production ahead of demand).
• Transportation (moving product that is not actually required to perform the processing).
• Waiting (waiting for the next production step).
• Inventory (all components, work in progress and finished product not being processed).
• Motion (people or equipment moving or walking more than is required to perform the processing).
• Over processing (due to poor tool or product design creating activity).
• Defects (the efforts involved in inspecting for and fixing defects).
There has been a steady increase in the role of industrial robots in manufacturing. With over 15,000 industrial robots sold every year, robots have become a mainstay in the manufacturing industry. Traditionally, robots have not always been viewed as having a role in the implementation of lean strategies. However, due to their flexibility, reliability and repeatability, to name a few advantages, the role of robots is constantly increasing.
Robots have been an off-the-shelf purchase item for the last two decades. The cost of common robot models from major manufacturers has plummeted due to large-volume sales to automotive OEMs and to the downward price pressure of competition.
Creating a Teach Point File
Teach points (TPs) define target positions for the robot. A teach point for the RV-2AJ robot consists of 5 values, namely:
1. Cartesian X position
2. Cartesian Y position
3. Cartesian Z position
4. A – wrist rotation A
5. B – wrist rotation B
Hence the Mitsubishi RV-2AJ robot has only 5 degrees of freedom. In general, to both position (x, y, z) and orient (roll, pitch, yaw) an object in space requires SIX degrees of freedom. Therefore this robot, in common with many other industrial robots, has reduced functionality. In practice this does not seriously limit the range of its applications.
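For illustration only, a teach point can be thought of as a small record holding the five values listed above. The class below is a hypothetical representation written for this article; the actual on-disk layout of a Mitsubishi “.POS” file is defined by the vendor’s software and is not reproduced here, and the numeric values are made up.

```python
from dataclasses import dataclass

@dataclass
class TeachPoint:
    """Illustrative container for an RV-2AJ teach point: Cartesian position
    plus the two wrist angles A and B. (The real .POS file format is defined
    by the Mitsubishi software and is not reproduced here.)"""
    name: str   # conventionally starts with "P", e.g. "P1", "P10", "PSAVE"
    x: float    # Cartesian X position, mm
    y: float    # Cartesian Y position, mm
    z: float    # Cartesian Z position, mm
    a: float    # wrist rotation A, degrees
    b: float    # wrist rotation B, degrees

pick = TeachPoint("P1", x=350.0, y=0.0, z=120.0, a=-90.0, b=0.0)  # hypothetical values
print(pick)
```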
To teach the robot a new teach-point (TP) position:
• Switch the robot into “Teach pendant mode” and drive the arm to desired target position.
• Once the robot is in its target position open “Tools/TP Editor” from the menu on the PC software.
• Press the right mouse button and choose the option “New Teach Point”.
• Right click on the newly created teach-point and select the option “Learn Robot’s Position”.
• The teach-point can be renamed using the option “Edit Teach Point”. The first letter of the name of the new TP position should be a “P”. e.g. “P1”, “P10”, “PSAVE”.
• Right mouse-click the option save. The Teach Point file is saved to the hard drive. A teach point file must have the extension “.POS”.
Moving the Mitsubishi RV-2AJ Robot
To move the robot, two buttons must be pushed: the dead-man’s handle under the teach pendant and the “STEP/MOVE” button. A low beeping sound from the PWM can be heard and the “SVO ON” LED turns green. Stay out of the workspace when the servos are on.
The robot can now be moved in joint space or Cartesian space (XYZ). For Cartesian-space operation, press the XYZ button once. The buttons labeled –X, +X, -Y, +Y, -Z, +Z can now be used to move the robot. The buttons labeled –A, +A rotate the robot’s end effector around the end effector’s Z axis. The buttons labeled –B, +B rotate the end effector around a second end-effector axis.
Pressing the XYZ button at any time displays the position of the end effector. To open and close the hand, keep the dead-man’s handle pressed, release the “STEP/MOVE” button and keep the “HAND” button pressed instead. To open the hand press “+C”; to close it, press “-C”.
Automatic operation is used to run programs that are stored in the robot controller memory. The program will cycle, i.e. run continuously once started.
Turn the key on the Teach Pendant to DISABLE.
Turn the key on the Controller to AUTO.
Press the “Change Display” button until the status shows “P.xxxxxx”.
Press the “Up” or “Down” button to select program.
Switch on the servos by pressing the “SVO ON” button.
Press “START”.
If the “END” button is pressed, the robot will finish the current program cycle and stop.
If the “STOP” button is pressed, the robot stops immediately.
MELFA Industrial Robot Systems
Quality requirements are becoming more exacting every day. As a result, Mitsubishi robots in quality-control applications are often in operation round the clock, seven days a week – yet another demonstration of the quality and reliability of Mitsubishi robots under the most demanding conditions.
Thousands of students and trainees have already learned to appreciate the capabilities of Mitsubishi robots on these systems. Mitsubishi continues to develop and improve its robots to ensure that they keep earning customers’ trust in the future.
The MELFA line includes a broad selection of robot models and versions. This family of products is designed to meet all the needs of most industrial applications, and they also provide the extreme flexibility required for quick reconfiguration of production systems.
Do you need the assembly and product placement capabilities of the RH series of SCARA robots? Or the great versatility of the 5- and 6-DOF robots of the RV series? Whichever product you choose, you will get a system designed from the ground up for continuous operation, which will perform its work reliably 24 hours a day, 7 days a week.
If your application imposes extreme precision, speed and reach requirements, robots from Mitsubishi Electric are the solution to all these problems.
Robot Safety Standard
Because of safety concerns, this is probably the most popular industrial robot standard in the USA. There are also European Union, Japanese and ISO standards. One of the fastest growing markets for industrial robots is that of used robots. Large industrial users are modernizing their fleets of robots and selling their old used ones. An industry of robot re-manufacturers has developed to re-build and re-sell these robots.
A robot component that will see significant changes after this standard becomes effective is the teach pendant. Ordinary teach pendants are required to be equipped with an enabling device. This is usually a spring-loaded switch that must be kept pressed in order to enable any machine motion to take place. Most people call this device a “dead man switch,” because it will deactivate when the operator drops the teach pendant in an emergency. Recent research has revealed that some people in a panic state freeze and hold onto an emergency device instead of releasing it.
The practice of placing the robot controller console anywhere it is convenient on the plant floor will not be acceptable anymore. The location of the operator controls shall be constructed to provide clear visibility of the area where work is performed. The controller and all equipment requiring access during automatic operation shall be located outside the safeguarded space of the robot. This will reduce the likelihood of equipment and machinery being operated when another person is in a hazardous position.
Robot Performance Standard
The US performance standard consists of two volumes: R15.05-1 covers point-to-point and static performance characteristics, while R15.05-2 covers path-related and dynamic performance characteristics.
Thus, if two different robots from two different vendors are being considered for an application, and one has a payload capacity of 45 kg and the other 50 kg, they will be tested under a standard load of 40 kg. The center of gravity of the 40 kg load with its associated support brackets shall have an axial CG offset of 12 cm and a radial CG offset of 6 cm from the mechanical interface coordinate system.
The standard test path is located on the standard test plane and lies along a reference center line. The standard test path segments can only assume three lengths: 200 mm, 500 mm, or 1000 mm. Detailed instructions to determine the position and orientation of the standard test plane and the reference center line are given in the standard.
The performance characteristics used by R15.05-1 are accuracy, repeatability, cycle time, overshoot, settling time, and compliance. This standard allows the vendor to tune operating parameters to optimize the values of desired performance characteristics. To identify the type of characteristic that is being optimized during a particular test, the standard establishes four performance classes. If class II testing is performed, the robot operates under optimum cycle time conditions. If class III testing is performed, the robot operates under optimum repeatability conditions.
ISO Robot Performance Standard
The specified tests are primarily intended to develop and verify individual robot specifications, prototype testing, or acceptance testing. The first version of this standard, ISO 9283:1990, did not specify standard test paths and test loads. The length of the paths and size of the test loads were specified as a percentage of the robot workspace and rated load. Since no two robots have the same workspace and rated load, it was not possible for them to be tested under the same conditions, thus making comparisons very difficult.
The test planes and test paths of this standard are defined with respect to a cube located inside the workspace of the robot. Various diagonal planes of this cube are used to locate the test planes, paths and points. This standard specifies tests for the measurement of fourteen performance characteristics. The most commonly used characteristics are those of accuracy and repeatability.
In the accompanying figure, the robot was commanded to move to the origin of the coordinate frame (marked by the rectangle), but instead attained the positions marked by the triangles. The centroid of these positions, called the barycenter by this standard, is marked by the cross. The cloud of attained positions usually forms an ellipsoid. The lengths and orientations of the principal axes of this ellipsoid provide significant information about the performance of the robot at this position of its workspace. To average the results of this test over a significant portion of the workspace, both the US and ISO standards require that the test be performed at several locations on the test plane and that the data be pooled together.
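The sketch below shows, under the usual interpretation of these tests, how a barycenter, an accuracy figure, and a repeatability figure could be computed from such a cloud of attained positions. The exact formulas and measurement procedure are defined in the standards themselves, so treat this only as an illustration of the idea; the function name and the synthetic data are invented for the example.

```python
import numpy as np

def accuracy_and_repeatability(attained: np.ndarray, commanded: np.ndarray):
    """Illustrative position accuracy/repeatability computation in the spirit of ISO 9283:
    accuracy      = distance from the commanded position to the barycenter of attained positions;
    repeatability = mean distance to the barycenter plus three standard deviations.
    (The standard defines the exact procedure; this only sketches the idea.)"""
    barycenter = attained.mean(axis=0)
    accuracy = np.linalg.norm(barycenter - commanded)
    d = np.linalg.norm(attained - barycenter, axis=1)
    repeatability = d.mean() + 3.0 * d.std(ddof=1)
    return accuracy, repeatability

# Synthetic cloud of 30 attained positions around a commanded point (mm).
rng = np.random.default_rng(1)
commanded = np.array([400.0, 0.0, 300.0])
attained = commanded + np.array([0.05, -0.03, 0.02]) + 0.02 * rng.standard_normal((30, 3))
print(accuracy_and_repeatability(attained, commanded))
```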
Grippers Must Anticipate Variation in Shape, Orientation, and Robot Approach
This article introduces a new approach to material handling, part sorting, and component assembly called “Grasping”, in which a single reconfigurable grasper with embedded intelligence replaces an entire bank of unique, fixed shape grippers and tool changers. To appreciate the motivations that guided the design of Barrett’s grasper, we must explore what is wrong with robotics today, the enormous potential for robotics in the future, and the dead-end legacy of gripper solutions.
For the benefits of a robotic solution to be realized, programmable flexibility is required along the entire length of the robot, from its base all the way to the target work-piece. A robot arm enables programmable flexibility from the base only up to the tool plate, a few centimeters short of the target work-piece. But these last few centimeters of the robot must adapt to the complexities of securing a new object on each robot cycle, capabilities where embedded intelligence and software excel. Like the weakest link in a serial chain, an inflexible gripper limits the productivity of the entire robot work-cell.
Grippers have individually-customized, but fixed jaw shapes. The trial and error customization process is design intensive, generally drives cost and schedule, and is difficult to scope in advance. In general, each anticipated variation in shape, orientation, and robot approach angle requires another custom-but-fixed gripper, a place to store the additional gripper, and a mechanism to exchange grippers. An unanticipated variation or incremental improvement is simply not allowable.
Components of Robot Emotion
The organization and operation of the emotion system is strongly inspired by various theories of emotions in humans. In concert with the robot’s drives, it is designed to be a flexible system that mediates between both environmental and internal stimulation to elicit an adaptive behavioral response that serves either social or self-maintenance functions.
Several theories posit that emotional reactions consist of several distinct but interrelated facets. In addition, several appraisal theories hypothesize that a characteristic appraisal triggers the emotional reaction in a context-sensitive manner. Summarizing these ideas, an “emotional” reaction for robots consists of:
• A precipitating event
• An affective appraisal of that event
• A characteristic expression (face, voice, posture)
• Action tendencies that motivate a behavioral response.
In living systems, it is believed that these individual facets are organized in a highly interdependent fashion. Physiological activity is hypothesized to physically prepare the creature to act in ways motivated by action tendencies. Furthermore, both the physiological activities and the action tendencies are organized around the adaptive implications of the appraisals that elicited the emotions. From a functional perspective, it has been suggested that the individual components of emotive facial expressions are also linked to these emotional facets in a highly systematic fashion.
Expressive Face of Robots
There are several projects that focus on the development of expressive robot faces, ranging in appearance from graphically animated, to resembling a mechanical cartoon, to pursuing a more organic appearance. One example is a face robot that resembles a Japanese woman and incorporates hair, teeth, silicone skin and a large number of control points that map to the facial action units of the human face. Using a camera mounted in the left eyeball, the robot can recognize and produce a predefined set of emotive facial expressions (corresponding to anger, fear, disgust, happiness, sorrow and surprise).
A number of simpler expressive faces have been developed, one of which can adjust its amount of eye-opening and neck posture in response to light intensity. Another is a Lego-based face robot used to explore tactile and affective interactions with people. It is increasingly common to integrate expressive faces with mobile robots that engage people in educational or entertainment settings, such as museum tour-guide robots.
As expressive faces are incorporated into service or entertainment robots, there is a growing interest in understanding how humans react to and interact with them. For instance, researchers have explored techniques for characterizing people’s mental models of robots and how these are influenced by varying the robot’s appearance and dialog to make it appear either more playful and extroverted or more caring and serious.
The Sociable Machine Project
The ability for people to naturally communicate with such machines is important. However, for suitably complex environments and tasks, the ability for people to intuitively teach these robots will also be important. Ideally, the robot could engage in various forms of social learning, so that one could teach the robot just as one would teach another person. Learning by demonstration to acquire physical skills such as pole balancing, learning by imitation to acquire a proto-language, and learning to imitate in order to produce a sequence of gestures have all been explored on physical humanoid robots and physics-based animated humanoids.
Although current work in imitation-based learning with humanoid robots has predominantly focused on articulated motor coordination, social and emotional aspects can play a profound role in building robots that can communicate with and learn from people.
The Sociable Machine Project develops an expressive anthropomorphic robot called Kismet that engages people in natural and expressive face-to-face interaction. The robot is about 1.5 times the size of an adult human head and has a total of 21 degrees of freedom (DoF). Three DoF direct the robot’s gaze, another three control the orientation of its head, and the remaining 15 move its facial features. To visually perceive the person who interacts with it, Kismet is equipped with a total of four color CCD cameras. In addition, Kismet has two small microphones (one mounted on each ear).
Generating Emotive Expression on Robot
The emotion system influences the robot’s facial expression. The human can read the robot’s facial expression to interpret whether the robot is “distressed” or “content” and can adjust his interactions with the robot accordingly. The person accomplishes this by adjusting the type and/or the quality of the stimulus presented to Kismet. These emotive cues are critical for helping the human work with the robot to establish and maintain a suitable interaction in which the robot’s drives are satisfied, where it is sufficiently challenged, yet where it is largely competent in the exchange.
The human observer perceives two broad affective dimensions on the face: arousal and pleasantness. This scheme maps several emotions and their corresponding expressions to these two dimensions. The scheme, however, seems fairly limiting for Kismet. First, it is not clear how all the primary emotions are represented within it.
It also does not account for positively valenced yet reserved expressions such as a coy smile or a sly grin. More importantly, “anger” and “fear” reside in very close proximity to each other despite their very different behavioral correlates. From an evolutionary perspective, the behavioral correlate of anger is to attack, and the behavioral correlate of fear is to escape. These are stereotypical responses derived from cross-species studies – obviously human behavior can vary widely.
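A toy sketch of such a two-dimensional scheme is given below. The coordinates are invented for this example (they are not Kismet’s actual affect space); it simply places a few emotion labels in an arousal/pleasantness plane and picks the nearest label for a given affective state, which also makes the anger/fear ambiguity noted above easy to see.

```python
# Illustrative only: place a few "emotions" in a two-dimensional
# arousal/pleasantness space and pick the nearest label for a given state.
# The coordinates are made up for the example.
import math

AFFECT_SPACE = {
    "anger":     (0.8, -0.7),   # (arousal, pleasantness)
    "fear":      (0.7, -0.8),   # note how close it sits to anger
    "sorrow":    (-0.5, -0.6),
    "happiness": (0.5, 0.7),
    "surprise":  (0.9, 0.1),
    "content":   (-0.3, 0.5),
}

def nearest_expression(arousal: float, pleasantness: float) -> str:
    """Return the label whose coordinates are closest to the given state."""
    return min(AFFECT_SPACE,
               key=lambda k: math.dist(AFFECT_SPACE[k], (arousal, pleasantness)))

print(nearest_expression(0.75, -0.75))  # ambiguous between anger and fear
```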
Differences Human-Robot Interaction & Human-Computer Interaction
Robots are moving out of the research laboratory and into society. They have been used by the military. They were used in search and rescue at the World Trade Center. Robots have been introduced as toys and household tools. Robots are also being considered for use in domains such as elder care. As robots become more a part of our society, the field of Human-Robot Interaction (HRI) becomes increasingly important. To date, most interactions with robots have been by researchers in robotics, in their laboratories.
What is a robot? A web search for a definition of a robot reveals several types: knowledge robots (commonly referred to as “bots”), computer software robots that run continuously and respond automatically to a user’s activity, and industrial robots. A dictionary definition of the noun “robot” is any automated machine programmed to perform specific mechanical functions in the manner of a man. It defines an intelligent robot as a mechanical creature that can function autonomously.
While a computer may be a building block of the robot, the robot differs from a computer in that it can interact in the physical world by moving around and by changing aspects of the physical world.
It follows that human-robot interaction is fundamentally different from typical human-computer interaction (HCI). HRI differs from HCI and Human-Machine Interaction (HMI) because it concerns systems that have complex, dynamic control systems, exhibit autonomy and cognition, and operate in changing, real world environments. In addition, differences occur in the types of interactions; the physical nature of robots; the number of systems a user may interact with simultaneously; the degree of autonomy of the robot; and the environment in which the interactions occur.
Roles of Interaction with Robots
There are three different roles for users interacting with robots: supervisor, operator, and peer. A subsequent paper expands these roles into five distinct interaction categories. The operator role has been subdivided into an operator and a mechanic role. The peer role has been subdivided into a bystander role and a teammate role. Supervisors are responsible for overseeing a number of robots and responding when intervention is needed – either by assigning an operator to diagnose and correct the problem or by assisting the robot directly.
The operator is responsible for working “inside” the robot. This might involve assigning waypoints, tele-operating the robot if needed, or even re-programming on the fly to compensate for an unanticipated situation. The mechanic deals with hardware and sensor problems but must be able to interact with the robot to determine if the adjustments made are sufficient. The teammate role assumes that humans and robots will work together to carry out some tasks, collaborating to adjust to dynamic conditions. The bystander has no formal training with the robots but must co-exist in the same environment with them for a period of time and therefore needs to form some model of the robots’ behavior. Some of these roles can be carried out remotely as well as locally.
In order to evaluate HRI we need to consider the role or roles that individuals will assume when interacting with a robot. For example, our hypothesis is that supervisors need situational awareness of the area and need to monitor both dynamic conditions and task progress.
Evaluations of Human-Robot Interaction
Typical HCI evaluations use efficiency, effectiveness, and user satisfaction as measures when evaluating user interfaces. Effectiveness is a measure of how much of a task a user is able to complete. Efficiency is a measure of the time that it takes a user to complete a task. Satisfaction ratings are used to assess how the user feels about using the interface. These three measures seem appropriate for the evaluation of a number of HRI roles. The roles of supervisor, operator, mechanic and teammate all involve some sort of task and can benefit from using efficiency, effectiveness, and satisfaction as metrics.
Additionally, because robots interact with the physical world and may at times be remote from the user, the user will need some awareness of the robot’s current situation. This involves both an understanding of the external environment as well as the internal status of the robot.
Additionally, some roles such as the teammate assume that the user is performing other tasks as well as interacting with the robot. Workload measures can be used to determine the load that the HRI places on the supervisor or operator of the robot.
The bystander role, however, will not involve performing specific tasks with the robot. Rather we envision the bystander role as an understanding of what the robot can do in order to co-exist in the same environment.
Sociable Humanoid Robots
Sociable humanoid robots pose a dramatic and intriguing shift in the way one thinks about control of autonomous robots. Traditionally, autonomous robots are designed to operate as independently and remotely as possible from humans, often performing tasks in hazardous and hostile environments (such as sweeping minefields, inspecting oil wells, or exploring other planets). Other applications such as delivering hospital meals, mowing lawns, or vacuuming floors bring autonomous robots into environments shared with people, but human-robot interaction in these tasks is still minimal.
However, a new range of application domains (domestic, entertainment, health care, etc) are driving the development of robots that can interact and cooperate with people as a partner, rather than as a tool. In the field of human computer interaction (HCI), research has shown that humans (whether computer experts, lay people, or computer critics) generally treat computers as they might treat other people. From their studies, they argue that a social interface may be a truly universal interface.
Humanoid robots are arguably well suited to this. Because they share a human-like morphology, they can communicate in a manner that supports the natural communication modalities of humans. It is not surprising that studies such as these have strongly influenced work in designing technologies that communicate with and cooperate with people as collaborators.
Human Robot Interfaces Development
Many years of human factors research have shown that the development of effective, efficient, and usable interfaces requires the inclusion of the user’s perspective throughout the entire design and development process. Many times interfaces are developed late in the design and development process with minimal user input. The result tends to be an interface that simply cannot be employed to complete the required tasks, or that the actual users are unwilling to accept. Johnson points out numerous issues with graphical user interfaces, and many of the issues raised also apply to the development of Human-Robotic Interfaces (HRIs).
In the case of Human-Robotic Interfaces (HRIs), the developers should generally have a good understanding of the targeted user group. For example, for rescue robots the users fall into two specific groups: the first group includes the incident commanders and the second group includes the robot control operators. Applications of HRIs for robots that assist the elderly also represent a fairly well-defined user group.
Once one moves to other domains, such as general military personnel, the HRI design must consider varying conditions and user capabilities. In addition to the environmental factors associated with soldiers using robots, the user group will not be as well defined as in the case of rescue robots. On the other hand, it will not be as ill defined as a general consumer group such as mothers with children under the age of five.
Why Is User Interface Design Important?
The design of the human-robot interface can directly affect the operator’s ability and desire to complete a task. It also affects the operator’s ability to understand the current situation, make decisions, and supervise and provide high-level commands to the robotic system. While it is possible to spend a significant amount of time discussing specific interaction techniques, there is also a wealth of human factors research that applies to all HRI designs. Such research is related to human decision-making, situation awareness, vigilance, workload levels, and human error. Each of these areas should be considered when developing a human-robotic interface.
The area of human decision-making appears to be an untapped resource for the field of HRIs. Humans make hundreds, if not thousands, of decisions every day. These decisions are made rapidly in dynamic environments under varying conditions.
Depending upon the human’s current task, such decisions may have dire consequences if made incorrectly; consider, for instance, pilots during take-off, a chemical process operator during a chemical leak, or any individual driving a car down a busy street.
An understanding of the human decision process should be incorporated into the design of human-robotic interfaces in order to support the processes humans actually employ. The field of human decision-making research covers both individuals making decisions and teams of individuals.
Robot Swarmish LEDs
Each robot has a red, green, and blue LED on top that can be programmed to blink in several patterns. This is the primary behavior-level debugging interface on the robots. Patterns that use one LED alone or all the LEDs together seem to work best, as patterns involving multiple lights communicating independent information can be difficult to read quickly.
We have settled on two intensities, bright and dim, and two wave patterns, a square wave and a semi-sinusoidal wave. The two patterns are only distinguishable at lower frequencies, which gives 12 distinct single-LED patterns that can be read by an experienced user. The absolute minimum time to read a square pattern is half the period of the second-lowest frequency, which is 533 milliseconds.
The semi-sinusoidal patterns can be read slightly faster because the user can infer the frequency from the slope. Besides the square and semi-sinusoidal patterns from above, the four all-LED patterns also include two patterns that cycle back and forth through all the lights, either with smooth or sharp transitions.
All these variations produce 108 common patterns, each of which can be read in about 1.2 seconds. In an actual application, similar behaviors and states are grouped into single colors, leaving many patterns unused.
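To make the two wave shapes concrete, the sketch below generates the brightness of a single LED over time for a square or semi-sinusoidal pattern at a given intensity and frequency. The intensity values and function names are illustrative assumptions, not the actual SwarmBot firmware API.

```python
import math

BRIGHT, DIM = 255, 64  # two intensity levels (assumed 8-bit PWM values)

def led_brightness(t, frequency_hz, intensity=BRIGHT, wave="square"):
    """Return LED brightness at time t (seconds) for one blink pattern."""
    phase = (t * frequency_hz) % 1.0
    if wave == "square":
        return intensity if phase < 0.5 else 0          # 50% duty-cycle square wave
    if wave == "semi_sinusoidal":
        # Half-wave rectified sine: brightness ramps up and down, then stays off
        return int(intensity * max(0.0, math.sin(2 * math.pi * phase)))
    raise ValueError("unknown wave type")

# Example: sample a ~0.94 Hz square pattern (period ~1066 ms, so readable in ~533 ms)
samples = [led_brightness(t / 10, 0.94) for t in range(20)]
```

A user reading the pattern only needs to see half a period of the square wave to identify it, which is where the 533-millisecond figure above comes from.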
Robot Swarmish Audio
The LEDs offer detailed information about an individual robot. In contrast, the audio system can give the user an overview of the activities of the entire swarm, and can be monitored while looking elsewhere. This is a somewhat nostalgic approach: many veteran software engineers reminisce fondly about a time when they could debug programs by listening to the internal operations of their computers on nearby radios. Once the user learned what normal execution sounded like, deviations were quickly noticed, focusing attention on the offending part of the program. Our modern interpretation of this resurrected approach to debugging has proven very effective on the swarm.
Each robot has a 1.1-watt audio system that can produce a subset of the General MIDI instruments. This allows the Swarm to play any MIDI file, but we have found that single notes work best for debugging. There are four parameters to vary per note: instrument, pitch, duration, and volume. Care must be taken to blend these selections into a group composition that is intelligible. Good note selections for a correctly operating program produce recognizable chords, tempos, and rhythms. Once a user has become attuned to variations in these elements (especially tempo and rhythm), he or she can spot bugs from across the room in seconds that would otherwise only be apparent after careful analysis of the combined execution traces from all the robots.
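As a sketch of how the four per-note parameters might be tied to behavior states, the mapping below assigns each state an instrument, pitch, duration, and volume and builds the corresponding raw MIDI note-on message. The state names and the hand-off to the robot's audio hardware are hypothetical; only the MIDI note-on byte layout is standard.

```python
STATE_NOTES = {
    # state:      (instrument, pitch, duration_s, volume)
    "exploring":  (0,  60, 0.25, 80),   # piano, middle C, short, medium
    "docking":    (40, 67, 0.50, 100),  # violin, G4, longer, louder
    "charging":   (0,  48, 1.00, 40),   # piano, low C, long, quiet
    "error":      (56, 72, 0.10, 127),  # trumpet, high C, staccato, loud
}

def note_on_bytes(channel, pitch, volume):
    """Raw MIDI note-on message: status byte 0x90 | channel, then pitch and velocity."""
    return bytes([0x90 | (channel & 0x0F), pitch & 0x7F, volume & 0x7F])

def play_state(state, channel=0):
    """Return the MIDI bytes and duration for the note associated with a behavior state."""
    instrument, pitch, duration, volume = STATE_NOTES[state]
    # How the bytes reach the audio hardware depends on the actual firmware.
    return note_on_bytes(channel, pitch, volume), duration
```

Grouping many robots onto consonant pitches and shared tempos is what turns the individual notes into the readable group composition described above.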
PDA Application for DB Human-Sized Robot
In collaboration with the Humanoid Robotics and Computational Neuroscience (HRCN) department at the Advanced Telecommunication Research Institute (ATR), we implemented the PDA-based language-learning game on DB, an anthropomorphic hydraulic robot with 30 DOFs. The robot learns the names of two boxes and the directions of movement (left and right). Once the robot has correctly learned the words, it can push the boxes in a desired direction upon verbal command.
We place a table in front of the robot, with two boxes of different colors on it (e.g., green and pink, to facilitate color tracking). An external stereo vision system tracks the boxes’ positions.
DB is mounted at the pelvis. It is 1.85 meters tall, weighs 80 kg, and is driven by 25 linear hydraulic actuators and 5 rotary hydraulic actuators. Each arm has 7 DOFs. The vision system consists of two cameras fixed on the ceiling and facing the robot. A color-blob tracking system generates blob position information at 60 Hz. In this application, the vision module of the DB robot extracts relevant changes in the boxes’ positions and the direction of displacement.
Similarly to what was done with Robota, speech and vision inputs are associated in an ANN computed on board the PDA. During retrieval, the output neurons activate absolute goal positions and relative sequences of movement, such that the robot can push the requested box in the requested direction. The communication from the PDA to the robot is still handled by an RS232 serial interface in this experiment, because the aim of the experiment was to evaluate the use of the PDA as a remote control. The wireless capabilities of the iPAQ will be used in further experiments.
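To illustrate the kind of association the on-board ANN performs, the sketch below uses a simple Hebbian weight matrix that links a spoken-word encoding to a vision or motion unit, then recalls that unit when the word is heard again. The encodings, dimensions, and indices are illustrative assumptions, not the actual network used on the PDA.

```python
import numpy as np

N_SPEECH, N_VISION = 8, 6              # sizes of the two input encodings (assumed)
W = np.zeros((N_VISION, N_SPEECH))     # association weights

def one_hot(index, size):
    v = np.zeros(size)
    v[index] = 1.0
    return v

def learn(speech_idx, vision_idx, rate=1.0):
    """Hebbian update: strengthen the link between co-occurring speech and vision inputs."""
    global W
    W += rate * np.outer(one_hot(vision_idx, N_VISION), one_hot(speech_idx, N_SPEECH))

def recall(speech_idx):
    """Given a spoken word, return the most strongly associated vision/motor unit."""
    activation = W @ one_hot(speech_idx, N_SPEECH)
    return int(np.argmax(activation))

# Teach: word 0 ("green box") co-occurs with blob 2; word 3 ("left") with motion 5
learn(0, 2)
learn(3, 5)
assert recall(0) == 2 and recall(3) == 5
```

In the actual game, the recalled units would drive the goal positions and movement sequences that let DB push the requested box in the requested direction.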
Human Robot Interface Design for Large Swarm
Human-robot interfaces for interacting with hundreds of autonomous robots must be very different from single-robot interfaces. The central design challenge is developing techniques to maintain, program, and interact with the robots without having to handle them individually. This requires robots that can support hands-free operation, which drives many other aspects of the design.
This article presents our experience with human-robot interfaces used to develop, debug, and evaluate distributed algorithms on the 112-robot iRobot Swarm. These Human-Robot Interaction (HRI) techniques fall into three categories: a physical infrastructure to support hands-free operation, utility software for centralized development and debugging, and carefully designed lights, sounds, and movements that allow the user to interpret the inner workings of groups of robots without having to look away or use special equipment.
The task of interacting with hundreds of autonomous robots presents unique challenges for the user interface designer. Traditional graphical user interfaces, data logs, and even standard power switches fail to provide the user with a practical, efficient interface. The core issue is one of scale: in a system of n robots, any task that has to be done to one robot must also be done to the remaining n-1. Our solution is a swarm that can operate largely without physical interaction, using an infrastructure that allows remote power management and autonomous recharging, software for centralized user input, and techniques for global swarm output.
Hardware for Hands-Free Operation of the Robots
The Swarm infrastructure components provide the physical resources the robots need to keep themselves running. These include chargers, navigational beacons, and a semi-automated test stand. The charging stations are the most important of these components, as they allow the robots to autonomously recharge their batteries.
The long-range navigation beacons are designed to help guide the robots to their chargers from anywhere in their workspace. In practice, we have found that it is easier to provide a multi-hop communications route, and hence a navigational path, to the chargers using the robots’ local communications system. This eliminates the need to set up any additional hardware. The SwarmBot’s bump skirts provide the robust low-level obstacle avoidance needed to allow both the navigation and docking behaviors to run successfully.
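A common way to build such a multi-hop route is a hop-count gradient: robots next to a charger advertise a count of zero, every other robot takes the smallest neighbor count plus one, and each robot drives toward the neighbor with the lowest count. The sketch below shows this idea under those assumptions; the function names are illustrative and not the SwarmBot API.

```python
INFINITY = 255

def update_hop_count(sees_charger, neighbor_hops):
    """Recompute this robot's hop count toward the nearest charger."""
    if sees_charger:
        return 0                              # adjacent to a charger
    if not neighbor_hops:
        return INFINITY                       # no route known yet
    return min(neighbor_hops.values()) + 1

def pick_next_hop(neighbor_hops):
    """Choose the neighbor to drive toward: the one closest to a charger."""
    if not neighbor_hops:
        return None
    return min(neighbor_hops, key=neighbor_hops.get)

# Example: neighbors 7 and 12 advertise hop counts 3 and 1
neighbors = {7: 3, 12: 1}
my_hops = update_hop_count(False, neighbors)  # -> 2
target = pick_next_hop(neighbors)             # -> 12
```

Because the gradient is rebuilt continuously from local messages, robots can find a charger even as the swarm rearranges itself, with the bump skirts handling low-level obstacle avoidance along the way.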
The SwarmBot’s power management circuitry has four modes of operation: On, Standby, Off, and Battery Disconnect. The Standby mode allows the user to power on the robots remotely from a “gateway robot”; once on, a robot can be remotely powered down via the same interface. This ability supports the sporadic nature of software development: the robots remain on during periods of active progress, but can easily be powered down to conserve batteries when a difficult bug slows the pace. This reduces wear on the batteries and allows the user to maintain a particular physical arrangement of robots for testing far longer than if the robots were left on continuously.
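The sketch below captures the four power modes and the remote transitions described above as a small state machine. The command names, message format, and broadcast call are assumptions for illustration; Off and Battery Disconnect are treated as requiring physical access.

```python
from enum import Enum

class PowerMode(Enum):
    ON = 1
    STANDBY = 2
    OFF = 3
    BATTERY_DISCONNECT = 4

def handle_power_command(current_mode, command):
    """Apply a remotely issued power command relayed through the gateway robot."""
    if current_mode == PowerMode.STANDBY and command == "power_on":
        return PowerMode.ON
    if current_mode == PowerMode.ON and command == "power_down":
        return PowerMode.STANDBY
    return current_mode   # Off and Battery Disconnect require handling the robot

# A user at the gateway robot might broadcast {"target": "all", "command": "power_on"}
# and every robot sitting in Standby would wake without being touched individually.
```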
Centralized User Interfaces of Robots
Centralized input allows the swarm to be controlled by a single user, using the gateway robot to provide connectivity between the user’s computer and the Swarm. The VT100 terminal display allows the user to send commands to an individual robot or to the entire Swarm. Simple graphical output is also possible, but limitations of the VT100 display make this interface best for input.
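The essential idea of this centralized input path is that one packet, forwarded by the gateway robot, can be addressed either to a single robot or to the whole Swarm. The packet fields and radio call below are illustrative assumptions, not the actual Swarm protocol.

```python
BROADCAST_ID = 0xFF   # assumed "all robots" address

def make_command(target_id, command, args=()):
    """Build a command packet for one robot or, with BROADCAST_ID, for the entire Swarm."""
    return {"target": target_id, "command": command, "args": list(args)}

def dispatch(packet, radio_send):
    """Hand the packet to the gateway robot, which forwards it onto the swarm radio."""
    radio_send(packet)

# Send one robot to its charger, then stop every robot:
dispatch(make_command(42, "go_charge"), radio_send=print)
dispatch(make_command(BROADCAST_ID, "stop"), radio_send=print)
```

Whether typed at the VT100 terminal or generated by the GUI, user input ultimately reduces to commands of this form.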
Commercial off-the-shelf video game controllers are also excellent hardware input devices for some applications. In particular, the controllers designed for the Sony PlayStation are high quality and simple to interface to the robots. This approach allows one or more users to directly control individual robots and, through group behaviors, the entire Swarm. These controllers are ideal for demonstrations and classroom lessons.
The graphical user interface displays real-time telemetry data, detailed internal state, local neighbor positioning, and global robot positioning. Its design is inspired by the graphical user interfaces (GUI) of real-time strategy video games such as StarCraft and WarCraft. Games like these challenge the user to direct an army of individual units to victory. Although it is common to have over 100 units on each team, elegant user interfaces make it simple for the user to control individual units, groups, or the entire army.
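The sketch below shows the RTS-style interaction pattern this GUI borrows from games like StarCraft: drag-select a group of robots on the map, then issue one command to the whole selection. The Robot class and command names are hypothetical simplifications of whatever the real interface uses.

```python
from dataclasses import dataclass

@dataclass
class Robot:
    robot_id: int
    x: float
    y: float

def select_in_rectangle(robots, x0, y0, x1, y1):
    """Return the robots inside a drag-selected rectangle on the GUI map."""
    xmin, xmax = sorted((x0, x1))
    ymin, ymax = sorted((y0, y1))
    return [r for r in robots if xmin <= r.x <= xmax and ymin <= r.y <= ymax]

def command_group(selection, command, send):
    """Send the same command to every robot in the current selection."""
    for robot in selection:
        send(robot.robot_id, command)

robots = [Robot(1, 0.2, 0.5), Robot(2, 1.4, 0.8), Robot(3, 0.6, 0.3)]
group = select_in_rectangle(robots, 0.0, 0.0, 1.0, 1.0)   # robots 1 and 3
command_group(group, "follow_leader", send=lambda i, c: print(i, c))
```

The same selection mechanism scales from one robot to the entire Swarm, which is exactly what makes the RTS metaphor attractive for groups of more than 100 units.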
PDA Interfaces for Humanoid Robots
End-user communication with a robot is usually provided by PC-based user interfaces using common programming techniques, or by simple button-based remote controls. While those methods are very suitable for highly constrained environments, where reprogramming of the robot need not be continuous, they are undesirable for applications requiring the robot to work with laymen in their daily environment.
With the recent introduction to the market of affordable humanoid and toy robots, children as well as adults have started to spend a significant portion of their leisure time engaging with these creatures. Toy robots have to fulfill a very difficult task: that of entertaining and, in some cases, educating.
Providing robots with capabilities for speech and vision, such that they mimic everyday human communication, is an open research issue. Efficient methods for such processing remain computationally expensive and thus cannot easily be exploited on cost- and size-limited platforms. The rapid development of multimedia applications for Personal Digital Assistants (PDAs) makes these handheld devices an ideal low-cost platform for providing simple speech- and vision-based communication for a robot. PDAs are light and can therefore easily fit on a small robot without adding much to the robot’s total weight. PDAs are also easy to handle: they can be carried in one hand or in a pocket.
There is growing interest in developing PDA applications to remotely control mobile robots. The work presented here follows this trend and investigates the use of PDA interfaces to provide easy means of directing and teaching humanoid robots.