Pervasive robotics will need, in the near future, small, light, and cheap robots that exhibit complex behaviors. These requirements led to the development of the M2-M4 Macaco project, a robotic active-vision head. Macaco is a portable system capable of emulating the head of different creatures both functionally and aesthetically. It integrates mechanisms for autonomous navigation, social interaction, and object analysis. One AI approach is the development of robots whose embodiment and situatedness in the world evoke behaviors that obviate constant human supervision.
Implementation of Pervasive Robotics
Security is one possible operational scenario for this active head. For this class of applications, the Macaco robot was equipped with a behavioral system capable of searching for people or faces and then recognizing them. In addition, human gaze direction might reveal security threats, so a head-gaze detection algorithm was developed. Probable targets of such gazes are other people and, most importantly, explosives and/or guns. Therefore, salient objects in the world are processed for 3D information extraction and texture/color analysis. Work is also underway on object and scene recognition from contextual cues.
EMARO - European Master in Advanced Robotics
The goal of this partnership application is to consolidate the European consortium of the EMARO master course with Asian partners. The duration of the partnership is three years. English is the working language at all partner institutions. The objectives of the EMARO partnership are:
RTA Systems Engineering in Robotics
Many advances in robotics and autonomy depend on increased computational power. Advances in high-performance, low-power onboard computing for space are therefore central to more capable robotics. Current efforts in this direction include exploiting high-performance field-programmable gate arrays (FPGAs) and multi-core processors, and enabling the use of commercial-grade computer components in space through shielding, hardware redundancy, and fault-tolerant software design.
Further pushes in these or other directions to achieve greater in-space computing power are needed. Modular interfaces are needed to enable tool change-out for arms on rovers and for in-space robotic assembly and servicing. Where robots and humans must work in close proximity, the sensing, planning, and autonomous control systems for the robots, and the overall operational procedures for robots and humans, will have to be designed to ensure human safety around the robots. Developing modular robotic interfaces will also allow multiple robots to operate together. These modular interfaces will support structural, mechanical, electrical, data, fluid, pneumatic, and other interactions. Tools and end effectors can also be developed in a modular manner, allowing interchangeability and a reduced logistics footprint.
Modular interfaces will be the building block for modular self-replicating robots and self-assembling robotic systems. Reconfigurable system design offers the ability to reconfigure mechanical, electrical, and computing assets in response to system failures. Reconfigurable computing offers the ability to reconfigure internally in response to chip-level failures caused by the environment (e.g., space radiation), life limitations, or fabrication errors. System verification will be a new challenge for human-rated spacecraft bound for deep space. New V&V approaches and techniques will be required, and in-flight re-verification following a repair may be necessary.
Autonomous Rendezvous and Docking of Robotic Spacecraft
AR&D is a capability requiring many vehicle subsystems to operate in concert. It is important to clarify that AR&D is not a system and cannot be purchased off the shelf. This strategy focuses on development of a certified, standardized capability suite of subsystems enabling AR&D for different mission classes and needs. This suite will be incrementally developed, tested and integrated over a span of several missions. This technology roadmap focuses on four specific subsystems required for any AR&D mission.
1. Relative Navigation Sensors – During the course of RPOD, varying accuracies of bearing, range, and relative attitude are needed for AR&D. Current implementations for optical, laser, and RF systems are mid-TRL (Technology Readiness Level) and require some development and flight experience to gain reliability and operational confidence. Inclusion of the ability for cooperating AR&D pairs to communicate directly can greatly improve the responsiveness and robustness of the system.
2. Robust AR&D GN&C Real-Time Flight Software (FSW) – AR&D GN&C algorithms are maturing; however, implementing these algorithms in FSW is an enormous challenge. A best-practice-based implementation of automated/autonomous GN&C algorithms in real-time FSW operating systems needs to be developed and tested.
3. Docking/Capture – NASA is planning for the imminent construction of a new low-impact docking mechanism, built to an international standard, for human spaceflight missions to the ISS. A smaller common docking system for robotic spacecraft is also needed to enable robotic-spacecraft AR&D within the capture envelopes of these systems. Assembly of the large vehicles and stages used for beyond-LEO exploration missions will require new mechanisms with capture envelopes beyond those of any docking system currently used or in development. Development and testing of autonomous robotic capture of non-cooperative target vehicles, in which the target does not have capture aids such as grapple fixtures or docking mechanisms, is needed to support satellite servicing/rescue.
4. Mission/System Managers – A scalable spacecraft software executive that can be tailored for various mission applications, for the whole vehicle, and for various levels of autonomy and automation is needed to ensure safety and operational confidence in AR&D software execution. Numerous spacecraft software executives have been developed, but the missing piece is an Agency-wide open standard, which would minimize the cost of such architectures, allow them to evolve over time, and help overcome general fears about autonomy/automation.
Robotic Autonomous System
Autonomy, in the context of a system (robotic, spacecraft, or aircraft), is the capability of the system to operate independently of external control. For NASA missions there is a spectrum of autonomy in a system, from basic automation (mechanistic execution of actions or responses to stimuli) through to fully autonomous systems able to act independently in dynamic and uncertain environments. Two application areas of autonomy are:
(i) increased use of autonomy to enable an independently acting system, and
(ii) automation as an augmentation of human operation.
Autonomy’s fundamental benefits are: increased system operational capability, cost savings via increased human labor efficiency and reduced staffing needs, and increased mission assurance or robustness in uncertain environments.
An “autonomous system” is a system that resolves choices on its own. The goals the system is trying to accomplish are provided by another entity; thus, the system is autonomous from the entity on whose behalf the goals are being achieved. The decision-making processes may in fact be simple, but the choices are made locally; otherwise, the selections either have been made already and encoded in some way, or are made externally to the system. Key attributes of such autonomy for a robotic system include the ability for complex decision making, including autonomous mission execution and planning; the ability to self-adapt as the environment in which the system is operating changes; and the ability to understand system state and react accordingly.
Variable (or mixed-initiative) autonomy refers to systems in which a user can specify the degree of autonomous control that the system is allowed to take on, and in which this degree of autonomy can be varied from essentially none to near-complete or complete autonomy. For example, in a human-robot system with mixed initiative, the operator may switch the levels of autonomy onboard the robot. Controlling levels of autonomy is tantamount to controlling bounds on the robot's authority, response, and operational capabilities.
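As an illustrative sketch only (the level names and methods below are invented for this example, not taken from the roadmap), variable autonomy can be modeled as a mode setting that bounds the decisions the robot may take on its own:

```java
public class VariableAutonomyExample {
    // Illustrative autonomy levels, from pure teleoperation to full autonomy
    enum AutonomyLevel { TELEOPERATED, SAFEGUARDED, SUPERVISED, AUTONOMOUS }

    private AutonomyLevel level = AutonomyLevel.TELEOPERATED;

    // The operator varies the degree of autonomy at run time
    public void setLevel(AutonomyLevel requested) {
        this.level = requested;
    }

    // The robot's authority to replan on its own is bounded by the level
    public boolean mayReplanPath() {
        return level == AutonomyLevel.SUPERVISED
            || level == AutonomyLevel.AUTONOMOUS;
    }

    // Even in SAFEGUARDED mode the robot may veto unsafe operator commands
    public boolean mayVetoUnsafeCommand() {
        return level != AutonomyLevel.TELEOPERATED;
    }
}
```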
Robonaut 2 Mission to ISS
During FY11 the Robonaut 2 system will be launched on STS-133 and delivered to the ISS in what will become the Permanent Multipurpose Module (PMM). Robonaut 2 (R2) is the latest in a series of dexterous robots built by NASA as technology demonstrations, now evolving from Earth-based to in-space experiments. The main objectives are to explore dexterous manipulation in zero gravity, test human-robot safety systems, test remote supervision techniques for operation across time delays, and experiment with ISS equipment to begin offloading housekeeping and other chores from the crew. R2 was built in partnership with General Motors, with a shared vision of a capable but safe robot working near people.
R2 features state-of-the-art tactile sensing and perception, as well as depth-map sensors, stereo vision, and force sensing. R2 will initially be deployed on a fixed pedestal with no mobility, but future upgrades are planned to allow it to climb and reposition itself at different worksites. Robonaut 2’s dexterous manipulators are state of the art, with three levels of force sensing for safety, high strength-to-weight ratios, compliant and back-drivable drive trains, soft and smooth coverings, fine force and position control, dual-arm coordination, and kinematic redundancy.
Human interfaces for R2 include direct force interaction, where humans can manually position the limbs, trajectory-design software tools, and script engines. R2 is designed to be directly teleoperated, remotely supervised, or run in an automated manner. The modular design can be upgraded over time to extend Robonaut’s capabilities with new limbs, backpacks, sensors, and software.
The Robotic Refueling Dexterous Demonstration (R2D2) is a multifaceted payload designed around representative tasks required to robotically refuel a spacecraft. Once mounted on the International Space Station, the demonstration will utilize the R2D2 payload complement, the Special Purpose Dexterous Manipulator (SPDM) robotic arms, and four customized, interchangeable tools to simulate the tasks needed to refuel a spacecraft through its standard ground fill-and-drain valve.
Mobility in Space Robotics
The state of the art in robotic space mobility (not including conventional rocket propulsion) includes the Mars Exploration Rovers and the upcoming Mars Science Laboratory and, for human surface mobility, the Apollo Lunar Roving Vehicle used on the final three Apollo missions. Recently, systems have been developed and tested on Earth for mobility on planetary surfaces, including the Space Exploration Vehicle and the ATHLETE wheel-on-leg cargo transporter, both of which feature active suspension. A series of grand challenges has extended the reach of robotic off-road mobility to high speeds and progressively more extreme terrain.
For microgravity mobility, the Manned Maneuvering Unit (MMU), tested in 1984, and, more recently, the SAFER jet pack provide individual astronauts with the ability to move and maneuver in free space or in the neighborhood of a Near-Earth Asteroid. The AERCam system flew on STS-87 in 1997 as the first of future small free-flying inspection satellites. We can expect in the next few decades that robotic vehicles designed for planetary surfaces will approach or even exceed the performance of the best piloted human vehicles on Earth in traversing extreme terrain and reaching sites of interest despite severe terrain challenges.
Human drivers have a remarkable ability to perceive terrain hazards at long range and to pilot surface vehicles along dynamic trajectories that seem nearly optimal. Despite the limitations of human sensing and cognition, it is generally observed that experienced drivers can pilot their vehicles at speeds near the limits set by physical law (e.g. frictional coefficients, tipover and other vehicle-terrain kinematic and dynamic failures). This fact is remarkable given the huge computational throughput requirements needed to quickly assess subtle terrain geometric and non-geometric properties (e.g. visually estimating the properties of soft soil) at long range fast enough to maintain speeds near the vehicle limits. This ability is lacking in today’s best obstacle detection and hazard avoidance systems.
Human-System Interfaces for Space Robotics
The ultimate efficacy of space systems depends greatly upon the interfaces that humans use to operate them. The current state of the art in human system interfaces is summarized below along with some of the advances that are expected in the next 25 years. Human operation of most systems today is accomplished in a simple pattern reminiscent of the classic “Sense – Plan – Act” control paradigm for robotics and remotely operated systems. The human observes the state of the system and its environment, forms a mental plan for its future action, and then commands the robot or machine to execute that plan. Most of the recent work in this field is focused on providing tools to more effectively communicate state to the human and capture commands for the robot, each of which is discussed in more detail below.
Current human-system interfaces typically include software applications that communicate internal system state via abstract gauges and readouts reminiscent of aircraft cockpits or overlays on realistic illustrations of the physical plant and its components. Information from sensors is available in its native form (for instance, a single image from a camera) and aggregated into a navigable model of the environment that may contain data from multiple measurements and sensors. Some interfaces are adapted to immersive displays, mobile devices, or allow multiple distributed operators to monitor the remote system simultaneously.
Future interfaces will communicate state through increased use of immersive displays, creating “Holodeck”-like virtual environments that can be naturally explored by the human operator with “Avatar”-like telepresence. These interfaces will also more fully engage the aural and tactile senses of the human to communicate more information about the state of the robot and its surroundings. As robots grow increasingly autonomous, improved techniques for communicating the “mental state” of robots will be introduced, as well as mechanisms for understanding the dynamic state of reconfigurable robots and complex sensor data from swarms.
Current human-robot interfaces typically allow for two types of commands. The first are simple, brief directives, sometimes sent via specialized control devices such as joysticks, which interrupt
existing commands and immediately affect the state of the robot. A few interfaces allow the issuance of these commands through speech and gestures.
Tele-Robotics and Autonomous Systems Technology Area Breakdown Structure
The Robotics, Tele-Robotics and Autonomous Systems Technology Area Breakdown Structure (TABS) includes sensors and algorithms needed to convert sensor data into representations suitable for decision-making. Traditional spacecraft sensing and perception included position, attitude, and velocity estimation in reference frames centered on solar system bodies, plus sensing of spacecraft internal degrees of freedom, such as scan-platform angles. Current and future development will expand this to include position, attitude, and velocity estimation relative to local terrain, plus rich perception of the characteristics of local terrain, where “terrain” may include the structure of other spacecraft in the vicinity and dynamic events, such as atmospheric phenomena.
Enhanced sensing and perception will broadly impact three areas of capability: autonomous navigation, sampling and manipulation, and interpretation of science data. In autonomous navigation, 3-D perception has already been central to the autonomous navigation of planetary rovers. Current capability focuses on stereoscopic 3-D perception in daylight. Active optical ranging (LIDAR) is commonly used in Earth-based robotic systems and is under development for landing hazard detection in planetary exploration. Progress is needed in increasing the speed, resolution, and field of regard of such sensors, reducing their size, weight, and power, enabling night operation, and hardening them for flight.
Range and imagery data are already in some use for rover and lander position and velocity estimation, though with relatively slow update rates. Real-time, onboard 3-D perception, mapping, and terrain-relative position and velocity estimation capability is also needed for small-body proximity operations, balloons and airships, and micro-inspector spacecraft. For surface navigation, sensing and perception must be extended from 3-D perception to estimating other terrain properties pertinent to trafficability analysis, such as the softness of soil or the depth to the load-bearing surface. Many types of sensors may be relevant to this task, including contact and remote sensors onboard rovers and remote sensors on orbiters.
Sampling generally refers to handling natural materials in scientific exploration; manipulation includes actions needed in sampling and handling man-made objects, including sample containers in scientific exploration and handling a variety of tools and structures during robotic assembly and maintenance. 3-D perception, mapping, and relative motion estimation are also relevant here. Non-geometric terrain property estimation is also relevant to distinguish where and how to sample, as well as where and how to anchor to surfaces in micro-gravity or to steep slopes on large bodies.
Manipulation Technology in Tele-Robotics and Autonomous Systems
Manipulation is defined as making an intentional change in the environment. Positioning sensors, handling objects, digging, assembling, grappling, berthing, deploying, sampling, bending, and even positioning crew on the end of long arms are all considered forms of manipulation. Arms, cables, fingers, scoops, and combinations of multiple limbs are embodiments of manipulators. Here we look ahead to mission requirements and chart the evolution of the capabilities that will be needed for space missions. Manipulation applications for human missions, such as powered exoskeletons or payload-offloading devices that exceed human strength alone, can be found in Technology Area 7.
Sample Handling – The state of the art is found in the MSL arm, Phoenix arm, MER arm, Sojourner arm, and Viking. Future needs include handling segmented samples (cores, rocks) rather than scoops full of soil, loading samples into onboard devices, loading samples into containers, sorting samples, and cutting samples.
Grappling – The state of the art is found in the SRMS, MFD, ETS-VII, SSRMS, Orbital Express, and SPDM. Near-term advances will be seen in the NASA Robonaut 2 mission. Challenges that will need to be overcome include grappling a dead spacecraft, grappling a natural object like an asteroid, grappling in deep space, and assembly of a multi-stack spacecraft.
Eye-Hand Coordination – The state of the art is the placement of MER instruments on rocks, Orbital Express refueling, SPDM ORU handling, and Phoenix digging. Challenges to be overcome include working with natural objects in microgravity (asteroids), operation in poor lighting, calibration methods, and combining vision and touch.
EVA Positioning – The EVA community has come to rely on the use of large robotic foot restraints rather than having crew climb. The state of the art is found in the SRMS and SSRMS. These arms were originally designed for handling inert payloads, and no controls were developed for operation by the crew member riding on the arm. Challenges to be overcome include letting crew position themselves without multiple IV crew helping, safety issues, and operation of these arms far from Earth support.
Digital Inputs/Outputs and Accelerometer in the WPI Robotics Library
Digital Inputs
Digital inputs are generally used for reading switches. The WPILib DigitalInput object is typically used to get the current state of the corresponding hardware line: 0 or 1. More complex digital devices, such as encoders or counters, are handled by the appropriate dedicated classes, and using these other supported device types (encoder, ultrasonic rangefinder, gear-tooth sensor, etc.) doesn’t require a DigitalInput object to be created. Digital input lines are shared from the 14 GPIO lines on each Digital Breakout Board. Creating an instance of a DigitalInput object automatically sets the direction of the line to input.
Digital input lines have pull-up resistors, so an unconnected input will naturally read high. If a switch is connected to a digital input, it should connect the line to ground when closed. The open state of the switch will then read 1 and the closed state will read 0. In Java, digital input values are true and false, so an open switch reads true and a closed switch reads false.
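As a minimal sketch of reading such a switch from Java (the channel number is an arbitrary choice for illustration):

```java
import edu.wpi.first.wpilibj.DigitalInput;

public class SwitchExample {
    // Limit switch wired between GPIO channel 1 and ground
    // (the channel is an illustrative choice)
    private final DigitalInput limitSwitch = new DigitalInput(1);

    public boolean isSwitchClosed() {
        // The line is pulled up, so an open switch reads true;
        // a closed switch ties the line to ground and reads false.
        return !limitSwitch.get();
    }
}
```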
Digital Outputs
Digital outputs are typically used to drive indicators or to interface with other electronics. The digital outputs share the 14 GPIO lines on each Digital Breakout Board. Creating an instance of a DigitalOutput object automatically sets the direction of the GPIO line to output. In C++, digital output values are 0 and 1, representing low (0V) and high (5V) signals respectively. In Java, the digital output values are true (5V) and false (0V).
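A minimal sketch of driving an indicator from Java (the channel number is again an illustrative choice):

```java
import edu.wpi.first.wpilibj.DigitalOutput;

public class IndicatorExample {
    // Indicator LED on GPIO channel 2 (illustrative choice)
    private final DigitalOutput indicator = new DigitalOutput(2);

    public void setIndicator(boolean on) {
        // true drives the line to 5V, false drives it to 0V
        indicator.set(on);
    }
}
```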
Accelerometer
The two-axis accelerometer provided in the kit of parts supplies acceleration data for the X and Y axes relative to the circuit board. In the WPI Robotics Library you treat it as two separate devices, one for the X axis and the other for the Y axis. This provides better performance if your application only needs to use one axis. The accelerometer can also be used as a tilt sensor by measuring the acceleration of gravity.
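As a sketch of the one-object-per-axis pattern used as a tilt sensor (the analog channel numbers are illustrative, and the class name and its getAcceleration() method follow cRIO-era WPILibJ conventions, which may differ in other releases):

```java
import edu.wpi.first.wpilibj.Accelerometer;

public class TiltExample {
    // One Accelerometer object per axis, on separate analog channels
    // (channels 1 and 2 are illustrative choices)
    private final Accelerometer xAxis = new Accelerometer(1);
    private final Accelerometer yAxis = new Accelerometer(2);

    public double tiltAboutYRadians() {
        // At rest the X axis reads the gravity component in g's;
        // clamp to [-1, 1] before asin to guard against sensor noise.
        double gx = Math.max(-1.0, Math.min(1.0, xAxis.getAcceleration()));
        return Math.asin(gx);
    }
}
```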
IRC5 Industrial Robot Controller
Fifth-generation robot controller. Based on more than four decades of robotics experience, the IRC5 sets a new benchmark in the robotics industry. Building on previous achievements in motion control, flexibility, usability, safety, and robustness, it adds new breakthroughs in modularity, user interface, multi-robot control, and PC tool support.
Safety
Operator safety is a central quality of the IRC5, fulfilling all relevant regulations with good measure, as certified by third-party inspections. Electronic position switches add the first touch of a new generation of safety, replacing earlier electro-mechanical solutions and opening up flexible and robust cell interlocking. For even more flexible cell-safety concepts, e.g. involving collaboration between robot and operator, SafeMove offers a host of useful safety functions.
Motion control
Using advanced dynamic modeling, the IRC5 optimizes the performance of the robot for the physically shortest possible cycle time (QuickMove) and precise path accuracy (TrueMove). This predictable, high-performance behavior is delivered automatically, together with a speed-independent path, with no tuning required by the programmer.
Modularity
The IRC5 is available in different variants in order to provide a cost-effective solution for every need. The ability to stack modules on top of each other, place them side by side, or distribute them in the cell is a unique feature, leading to an optimized footprint and cell layout. The panel-mounted version comes without a cabinet, enabling integration into any enclosure for exceptional compactness or for special environmental requirements.
FlexPendant
The FlexPendant is characterized by its clean, color touch-screen-based design and 3D joystick for intuitive interaction. Powerful support for customized applications enables loading of tailor-made applications, e.g. operator screens, thus eliminating the need for a separate operator HMI.
RAPID programming language
RAPID provides the perfect combination of simplicity, flexibility, and power. It is a truly unlimited language with support for well-structured programs, shop-floor language, and advanced features. It also incorporates powerful support for many process applications.
Communication
The IRC5 supports state-of-the-art fieldbuses for I/O and is a well-behaved node in any plant network. Sensor interface functionality, remote disk access, and socket messaging are examples of its many powerful networking features.
The WPI Robotics Library
The National Instruments CompactRIO cRIO-9074 real-time controller (cRIO) is presently the robot controller provided for the FIRST Robotics Competition (FRC). It has around five hundred times more memory than previous FRC controllers. Dedicated FPGA hardware capable of sampling across 16 channels replaces the cumbersome programming techniques necessary with previous controllers.
The WPI Robotics library is designed to:
• Work with the cRIO controller
• Handle low-level interfacing of components
• Allow users of all experience levels access to experience-appropriate features
C++ and Java are the two text-based language choices available for use on the cRIO. These languages were selected because they represent a better level of abstraction for robot programs than previously used languages. The WPI Robotics Library is designed for maximum extensibility and software reuse with these languages.
The library consists of classes that support the sensors, speed controllers, driver station, and other hardware in the kit of parts. In addition, WPILib supports many commonly used sensors that are not in the kit, such as ultrasonic rangefinders. WPILib also has general-purpose features, such as general-purpose counters, to provide support for custom hardware and devices. The FPGA hardware also allows interrupt processing to be dispatched at the task level, instead of as kernel interrupt handlers, reducing many common real-time bug problems.
The WPI Robotics Library does not itself use the C++ exception-handling mechanism, though it is available to teams for their own programs. Uncaught exceptions will unwind the entire call stack and cause the whole robot program to quit; therefore, we caution teams on the use of this feature.
Objects are allocated dynamically to represent each type of sensor. An internal hardware reservation system is used to prevent the same ports from being reused for different purposes. In the C++ version, the library source code will be published on a server for teams to review and comment on. In the Java version, the source code is included with each release. There will be a community repository for teams to develop and share projects in any language, including LabVIEW.
Sensors in the WPI Robotics Library
The WPI Robotics Library supports the sensors that are supplied in the FRC kit of parts, as well as many other commonly used sensors available to FIRST teams through industrial and hobby robotics outlets. The WPILib-supported sensors are listed in the chart below.
Types of supported sensors
On the cRIO, the FPGA implements all high-speed measurements through dedicated hardware, ensuring accurate measurements no matter how many sensors and motors are added to the robot. This is an improvement over previous systems, which required complex real-time software routines. Natively, the library supports the sensor categories shown below.
The WPI Robotics Library has many features that make it easy to use sensors that don’t have prewritten classes. For example, general-purpose counters can measure period and count from any device that generates output pulses. Another example is a generalized interrupt facility to catch high-speed events without polling and potentially missing them.
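As a sketch of the general-purpose counter (the digital channel number is an illustrative assumption, and the method names follow cRIO-era WPILibJ, which may differ in other releases):

```java
import edu.wpi.first.wpilibj.Counter;

public class PulseSensorExample {
    // Count pulses from a custom device on digital channel 3
    // (illustrative choice)
    private final Counter pulseCounter = new Counter(3);

    public PulseSensorExample() {
        pulseCounter.start(); // begin counting (cRIO-era API)
    }

    public double pulseFrequencyHz() {
        // getPeriod() returns the seconds between pulses,
        // measured in the FPGA rather than in software
        return 1.0 / pulseCounter.getPeriod();
    }

    public int pulseCount() {
        return pulseCounter.get();
    }
}
```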
Digital I/O Subsystem
(Figure: the Digital Sidecar and the digital I/O subsystem.) The NI 9401 digital I/O module provided in the kit has 32 GPIO lines. Through circuits on the Digital Breakout Board, these lines map to 10 PWM outputs, 8 relay outputs for driving Spike relays, the signal light output, an I2C port, and 14 bidirectional GPIO lines.
The basic update rate of the PWM lines is a multiple of approximately 5 ms. Jaguar speed controllers update at slightly over 5 ms, Victors at slightly over 10 ms, and servos at slightly over 20 ms.
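A minimal sketch of driving devices on these PWM lines from Java (the channel assignments are illustrative assumptions):

```java
import edu.wpi.first.wpilibj.Jaguar;
import edu.wpi.first.wpilibj.Servo;

public class PwmExample {
    // PWM channels 1 and 2 on the Digital Sidecar (illustrative choices)
    private final Jaguar driveMotor = new Jaguar(1);
    private final Servo cameraTilt = new Servo(2);

    public void run() {
        driveMotor.set(0.5);       // half speed forward; range is -1.0 to 1.0
        cameraTilt.setAngle(45.0); // position the servo at 45 degrees
    }
}
```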
Parameter Identification for Rigid Robot Models
A general overview of parameter identification methods for rigid robots can be found in textbooks. Experimental robot identification techniques estimate the dynamic robot parameters from force/torque and motion data measured during robot motions along optimized trajectories. Mostly, these techniques are based on the fact that the dynamic robot model can be written as a set of equations that is linear in the dynamic parameters. Such a formulation allows the use of linear estimation techniques that find the optimal parameter set in a global sense. However, not all parameters can be identified using these techniques, since some parameters do not affect the dynamic response, or affect it only in linear combinations with other parameters.
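As a sketch in our own notation (not spelled out in the text): with joint positions q, joint torques τ, and the vector θ of dynamic parameters (masses, first moments, inertia elements, friction coefficients), the model is linear in θ, so stacking N samples along the trajectory yields an ordinary least-squares problem:

```latex
\tau = Y(q,\dot q,\ddot q)\,\theta, \qquad
\bar\tau =
\begin{bmatrix}\tau(t_1)\\ \vdots \\ \tau(t_N)\end{bmatrix},\quad
\bar Y =
\begin{bmatrix}Y(t_1)\\ \vdots \\ Y(t_N)\end{bmatrix},\qquad
\hat\theta = \arg\min_{\theta}\,\lVert \bar\tau - \bar Y\theta \rVert^2
           = (\bar Y^{\mathsf T}\bar Y)^{-1}\bar Y^{\mathsf T}\bar\tau .
```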
The null space is defined as the parameter space consisting of parameter combinations that do not affect the dynamic response. Gautier, Khalil, and Mayeda provide sets of rules, based on the topology of the manipulator, to group the dependent inertial parameters and to form a minimal set of parameters that uniquely determines the dynamic response of the robot. In addition, numerical techniques like the QR decomposition or the singular value decomposition can be used to find the minimal or base parameter set.
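In terms of the singular value decomposition, and again in our own sketch notation, the base parameters can be read off from the singular vectors of the stacked regressor, assuming a sufficiently exciting trajectory:

```latex
\bar Y = U\,\Sigma\,V^{\mathsf T},\qquad
\Sigma = \operatorname{diag}(\sigma_1,\dots,\sigma_b,0,\dots,0),\qquad
V = [\,V_1 \;\; V_2\,],\qquad
\bar Y V_2 = 0 \;\Rightarrow\; \bar Y\theta = (\bar Y V_1)\,\theta_B,\quad
\theta_B = V_1^{\mathsf T}\theta .
```

Any parameter change in the span of V_2 (the null space) leaves the response unchanged, while θ_B collects the identifiable base parameters.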
In general, the base parameter set obtained from a linear parameter fit is not guaranteed to be physically meaningful. Waiboer suggests making the identified parameters more physically convincing by choosing the component in the null space such that the estimated parameters match a priori given values in a least-squares sense. This requires an a priori estimate of the parameter values and a sufficiently accurate description of the null space, neither of which is trivial in general. Mata forces a physically feasible solution by adding nonlinear constraints to the optimization problem. However, adding nonlinear constraints to a linear problem gives a nonlinear optimization problem for which it is hard to find the global minimum.
Parameter Identification for Flexible Models Using Additional Sensors
The linear least-squares identification procedure used for rigid robot models assumes that the position signals of all degrees of freedom are known or can be measured. If the positions of all degrees of freedom, together with the corresponding velocities and accelerations, are known, the dynamic model can be written as a set of equations that is linear in the dynamic parameters. Generally, however, only motor position and torque data are available. Measurements of the additional degrees of freedom arising from flexibilities are therefore not readily available, and consequently the linear least-squares technique cannot be used for flexible robot models. Several authors suggest adding sensors to measure the elastic deformations, e.g. link position sensors, acceleration sensors, and/or velocity or torque sensors. First, an overview of identification techniques using these additional sensors will be given.
One such approach presents an identification method for the dynamic parameters of simple mechanical systems with lumped elasticity. The parameters are computed as the weighted least-squares solution of an overdetermined system that is linear in a minimal set of parameters and is obtained by sampling the dynamic model along a trajectory. Two different cases are considered, according to the types of measurements available for identification. In the first case, it is assumed that measurements of the load position and the motor position are available. In the second case, it is assumed that measurements of the load acceleration and the motor position are available.
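In the same sketch notation as above, writing the sampled, overdetermined system as y = Wθ + e with measurement covariance Σ, the weighted least-squares estimate is:

```latex
\hat\theta = \arg\min_{\theta}\,(y - W\theta)^{\mathsf T}\Sigma^{-1}(y - W\theta)
           = \bigl(W^{\mathsf T}\Sigma^{-1}W\bigr)^{-1}W^{\mathsf T}\Sigma^{-1}y .
```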
Instead of reconstructing the load position by integrating the measured acceleration, they suggest differentiating the dynamic equations twice. However, problems arise for non-continuous terms like joint friction. Using a chirp signal as the excitation signal should decrease the influence on the measured data of the dynamic behavior represented by these non-differentiable terms.
Fundamental Laws of Robotics
We cannot write an introduction to robotics without discussing the fundamental laws of robots. The popular Russian-born science fiction writer Isaac Asimov formulated the three fundamental laws for robots. His perspective contrasts with the robot described by Capek: Asimov envisioned a benevolent, good robot that acts like a human being. Asimov visualized the robot as an automated mechanical creature of human appearance having no feelings, whose behavior and actions were dictated by a “brain” programmed by human beings in such a way that certain ethical rules were satisfied.
The term robotics was thereafter introduced by Asimov as the science devoted to the study of robots, based on the three fundamental laws. These were complemented by Asimov’s zeroth law in 1985. Since the establishment of the robot laws, the word robot has attained the alternate meaning of an industrial product designed by engineers or specialized technicians.
1. A robot may not injure humanity or, through inaction, allow humanity to come to harm.
2. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
3. A robot has to obey the orders given it by human beings except where such orders would conflict with the First Law.
4. A robot has to protect its own existence as long as such protection does not conflict with the First or Second Laws. The first laws were complemented by two more laws, aimed at industrial robots, by Stig Moberg of ABB Robotics. The additional laws also consider the robot's motion.
5. A robot has to follow the trajectory specified by its master, as long as it does not conflict with the first three laws.
6. A robot has to follow the velocity and acceleration specified by its master, as long as nothing stands in its way and it does not conflict with the other laws.
The Terms Robotics and Industrial Robots
The distinction lies somewhere in the sophistication of the programmability of the device: a numerically controlled (NC) milling machine is not an industrial robot. If a mechanical device can be programmed to perform a wide variety of applications, it is probably an industrial robot. The essential difference between an NC machine and an industrial robot is the versatility of the robot: it can be provided with tools of different types and has a large workspace compared to the volume of the robot itself. The numerically controlled machine is dedicated to a special task, although in a fairly flexible way, which makes it a system built for a fixed and limited set of tasks.
The learning and control of industrial robots is not a new science, but rather a mixture of "classical fields". From mechanical engineering, the machine is studied in dynamic and static situations. By means of spatial mathematics, its motions can be described. Tools for designing and evaluating algorithms to achieve the desired motion are provided by control theory. Electrical engineering is helpful when designing interfaces and sensors for industrial robots. Last but not least, computer science provides the means for programming the device to perform a desired task.
The term robotics has presently been defined as the science studying "the intelligent connection of perception to action". Industrial robotics is a subject concerning robot design, control and applications in industry, and its products are now reaching the level of a mature technology. The status of robotics technology is reflected by the definition of a robot originating from the Robot Institute of America.
Most organizations now agree more or less with the definition of industrial robots formulated by the International Organization for Standardization, ISO:
• A manipulating industrial robot is an automatically controlled, reprogrammable, multi-purpose manipulative machine with several degrees of freedom, which may be either fixed in place or mobile for use in industrial automation applications.
• A manipulator is a machine whose mechanism usually consists of a series of segments, jointed or sliding relative to one another, for the purpose of grasping and/or moving objects (pieces or tools), usually in several degrees of freedom.
Scenarios of Future Industrial Robots
Long-term visions of the industrial robots of the future have been depicted in a number of scenarios, given in the following as examples:
1. Robot assistants as a versatile tool at the workplace
Scenario: A robot assistant is used as a versatile tool by the worker at a manual workplace. The applications could be manifold: arc welding, machining, woodworking, aircraft assembly etc.
Operation: The compact robot arm is moved manually to the workplace. On a wireless portable interface the worker selects a process (e.g. "welding"). The worker demonstrates the process by guiding the robot along contours or over surfaces while giving additional instructions by voice. Process parameters are set, and the sensor-supported motion produces the machined/welded contours. The worker may override the robot motion as required. Successive tasks can be performed automatically without supervision by the worker.
2. Robot assistants in crafts
Scenario: A robot as a versatile assistant for crafts
Operation: The robot is mobile, is equipped with two arms, and is instructed by gesture, voice and graphics. A craftsman (e.g. a locksmith) has to weld a steel structure (a stairway). The robot fettles the seams automatically with a brush.
3. Robots for empowering humans
Scenario: A robot for human augmentation (force or precision augmentation) in assembly
Operation: In a bus gear-box assembly, the heavy central shaft is grasped by the robot, which balances it softly so the worker can insert it precisely into the housing. The robot learns and optimizes the constrained motion in successive steps "on the job".
4. Multi-robot cooperation
Scenario: Many robots cooperate to execute a manufacturing task within a minimal workcell.
Operation: Robot 1 fetches a panel that has to be mounted simultaneously with the cover, and robot 2 tells robot 1 where to put the panel. Finally, robot 3 fetches an automatic screwdriver to mount the cover and the panel together on the washing-machine framework. If this does not succeed, robot 3 needs help from a worker, who will change, for example, the orientation of the screwdriver by direct interaction with the tool, and the assembly can proceed.
Main Obstacles to the Long-Term Vision of Future Robots
Realizing the described long-term vision requires overcoming the following barriers:
• Man-machine interaction: Today, manufacturing tasks cannot be expressed in intuitive end-user terms as would typically be required for instructions by voice. Multimodal dialogues based on voice, graphics, and texts should be initiated to quickly resolve insufficient or ambiguous information.
• Mechanical limitations: Robot mechanics account for some 80% of the system price. For some parts, particularly gears, there exists a painful dependency on Japanese suppliers. New drive lines should be developed where high density motors and compliant compact gears (e.g. on the basis of mechanical wave generators) with integrated torque and position sensors are used in order to decrease this dependency. Advanced control of sensor based drive systems will make it possible to decrease cost and weight without reducing the robot performance. Furthermore a cooperative space-sharing robot needs harmless motions. This can be achieved by intrinsically safe designs or suitable sensor equipment.
• Sensors: Full 3D recognition is required for work piece and worker localization in less structured environments. Inexpensive sensors do not exist yet but high volume supervision and entertainment applications will make this technology affordable.
• Robot automation life-cycle costs: Productivity gains from robots are probably less pronounced than quality gains, especially for investments in cooperating robots, which in some cases places severe cost limits on such systems if they are to achieve cost-effectiveness.
• Socio-economic factors: A strongly conservative attitude in industry towards advanced mechatronic systems may slow down investments in novel robot systems, especially in areas with little or no automation. The introduction of robotics into industries characterized by low status and bad working conditions can help make them more attractive to young employees.
• Standards. First standards towards cooperative robots and intelligent assist devices (e.g. “smart balancers”) are about to emerge. New standards for robot assistants allowing physical interaction at normal working speeds will be required. Setting new standards needs committed industries to support the high cost and time involved.
Robotics in the Automotive Industries
Currently, robots are used mainly in the automotive industries, including their supply chains, accounting for more than 60% of total robot sales. Typical prime targets for robot automation in car manufacturing are welding, assembly of body, motor and gear-box, and painting and coating. The automotive industries are the key driver of robot applications; in terms of cost, technology and services, the robotics industry is subject to fierce global competition. Robot systems increasingly form the central portion of investments in automotive manufacturing and may reach 60% of the total manufacturing equipment investment by the year 2010 (for car makers and 1st-tier suppliers). In general it is estimated that a robot automation investment in these industries amounts to about 4 times the unit price of a robot.
The degree of automation in the automotive industries is expected to increase in the future as robots push the limits of flexibility regarding faster change-over times between product types (through rapid program generation schemes), capabilities to deal with tolerances (through an extensive use of sensors) and costs (by reducing customized work-cell installations and reusing manufacturing equipment). These challenges lead to the following current RTD trends in robotics:
• Expensive fixing equipment and single-purpose transport are replaced by standard robots, thus offering continuous production flows. Remaining fixtures may be adjusted by the robot itself.
• Cooperative robots in a work-cell coordinate fixing, handling and process tasks so that robots may be adjusted easily to varying work piece geometries, process parameters and task sequences. Short change-over times are achieved by automated program generation which takes into account necessary synchronization, collision avoidance and robot-to-robot calibration.
• Increased use of sensor systems and measuring devices mounted on robots and RFID-tagged parts carrying individual information contributes to better dealing with tolerances in automated processes.
• Human-robot cooperation bridges the gap between fully manual and fully automated task execution. Robots and people will share cognitive, sensing, and physical capabilities.
Robotic Applications in Industries
There are various new fields of application in which robot technology is not widespread today due to its lack of flexibility and the high costs involved when dealing with varying lot sizes and variable product geometries. New robotic applications will soon emerge from new industries and from SMEs, which cannot use today's inflexible robot technology or which still require a lot of manual operations under strenuous, unhealthy and hazardous conditions. Relieving people from bad working conditions (e.g., operating hazardous machines, handling poisonous or heavy material, working in dangerous or unpleasant environments) leads to many new opportunities for applying robotics technology. Examples of bad working conditions can be found in foundries or the metal-working industry.
Besides the need to handle objects at very high temperatures, work under unhealthy conditions takes place in manual fettling operations, which contribute about 40% of the total production cost in a foundry. Manual fettling means strong vibrations, heavy lifts, metal dust and high noise levels, resulting in annual hospitalization costs of more than €150m in Europe. Bad working conditions can also be found in slaughterhouses, fisheries and cold stores, where besides low temperatures the handling of sharp tools makes the work unhealthy and hazardous. New fields of application include:
• Assembly and disassembly (vehicles, airplanes, refrigerators, washing machines, consumer goods). In some cases fully automatic task operation by robots is impossible. Cooperative robots should support the worker in terms of force parallelization, augmentation, or sharing of tasks.
• Aerospace industry presently uses customized NC machines for drilling, machining, assembly, quality testing operations on structural parts. In assembly and quality testing, the automation level is still low due to the variability of configurations and insufficient precision of available robots. Identified requirements for future robots call for higher accuracy, adaptivity towards workpiece tolerances, flexibility to cover different product ranges, and safe cooperation with operators.
• SME manufacturing: Fettling, cutting, deflashing, deburring, drilling, milling, grinding and polishing of products made of metal, glass, ceramics, plastics, rubber and wood.
• Food and consumer good industries: Processing, filling, assembly, handling and packaging of food and consumer goods
• Construction: Drilling, cutting, grinding and welding of large beams and other construction elements for buildings, bridges, ships, trains, power stations, windmills, etc.
Grasper Control Language of the BarrettHand Robot
The BarrettHand has a central supervisory microprocessor inside its compact palm that coordinates four dedicated motion-control microprocessors and controls I/O via an RS232 line. The control electronics are built on a parallel 70-pin backplane bus. Associated with each motion-control microprocessor are the motor commutation electronics, sensor electronics, and motor-power current-amplifier electronics for that finger or the spread action. The supervisory microprocessor directs I/O communication via an industry-standard RS232 serial communications link to the work-cell PC or controller. RS232 offers compatibility with any robot controller while limiting the umbilical cable diameter for all communications and power to only 8 mm. The published grasper control language (GCL) optimizes communication speed, exploiting the difference between bandwidth and time-of-flight latency for the special case of graspers. It is important to recognize that graspers usually remain inactive during most of the work-cell cycle, while the arm is performing its gross motions, and are only active for short bursts at the ends of the arm's trajectories.
While the robotic arm needs high control bandwidth during the entire cycle, the grasper has plenty of time to receive a large amount of setup information as it approaches its target. Then the work-cell controller issues a "trigger" command with precise timing, such as the ASCII character "C" for close, which begins grasp execution within a couple of milliseconds.
The grasper can accept commands from and communicate with any robot work-cell controller, PC, UNIX box, Mac, or even a PalmPilot via standard ASCII RS232-C serial communication, the common denominator of communications protocols. Though robust, RS232 has low bandwidth compared to FireWire or USB, but its simplicity leads to small latencies for short bursts of data. By streamlining the GCL, a time of flight to acknowledge and execute a command (from the work-cell controller to the grasper and back again) on the order of milliseconds has been achieved.
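As a rough illustration of this trigger-style protocol, here is a minimal Python sketch using the pyserial package. The port name, baud rate and the setup command are hypothetical placeholders; only the ASCII "C" (close) trigger comes from the description above, and the real GCL syntax is not reproduced here.

import serial  # pyserial

# Open the RS232 link to the grasper (port and baud rate are assumptions).
with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1.0) as link:
    # While the arm is still in gross motion, stream setup information.
    link.write(b"SPEED 100\r")   # hypothetical setup command, not real GCL
    # Precision-timed trigger: grasp execution begins within milliseconds.
    link.write(b"C\r")
    ack = link.read(64)          # read whatever acknowledgement is returned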
Electronic and Mechanical Optimization of a Programmable Robot
Intelligent, dexterous control is key to the success of any programmable robot, whether an arm, an automatically guided vehicle, or a dexterous hand. While robotic intelligence is generally associated with processor-driven motor control, many biological systems, including human hands, integrate some degree of specialized reflex control independent of explicit motor-control signals from the brain. The BarrettHand likewise combines programmable microprocessor intelligence with reflexive mechanical intelligence for a high degree of practical dexterity in real-world applications.
Based on the strict definition, neither the BarrettHand nor your hand is dexterous; rather, their superior versatility challenges the definition itself. If the BarrettHand followed the strict definition of dexterity, it would require between eight and 16 motors, making it far too complex, bulky, and unreliable for any practical application outside the mathematical analysis of hand dexterity. By exploiting four intelligent joint-coupling mechanisms, the almost-dexterous BarrettHand needs only four servomotors. In some cases reflex control is even better than deliberate control. Two examples based on your own body illustrate this point. Suppose your hand accidentally touches a dangerously hot surface. It retracts instantly, relying on local reflexes to override any ongoing cognitive commands. Your hand would burn if it waited for the sensations of pain to travel from your hand to your brain via relatively slow nerve fibers, and then for your brain, through the same slow nerve fibers, to command your arm, wrist, and finger muscles to retract.
As a second example, try to move the outer joint of your index finger without moving the adjacent joint on the same finger. You cannot move these joints independently because the design of your hand is optimized for grasping: your tendons and muscles are as lightweight and streamlined as possible without forfeiting functionality. The BarrettHand design recognizes that intelligent control of functional dexterity requires the integration of microprocessor and mechanical intelligence.
The Gripper Legacy of Robotics
Nowadays robotic part handling and assembly is done with grippers. If surface conditions allow, electromagnets and vacuum suction can also be used, for example in handling automobile windshields and body panels. As part sizes start to exceed the order of 100 g, a gripper's jaws are custom-shaped to ensure a secure hold. As the durable mainstay of handling and assembly, these tools have changed little since the beginning of robotics three decades ago. Grippers act as simple pincers and have two or three unarticulated fingers, called "jaws". Well-organized catalogs are available from manufacturers that guide the integrator or customer in matching the various gripper components (except, naturally, for the custom jaw shape) to the task and part parameters.
Payload sizes range from grams for tiny pneumatic grippers to 100+ kilograms for massive hydraulic grippers. Typically the power source is hydraulic or pneumatic, with simple on/off valve control switching between the full-open and full-close states. The jaws usually move 1 cm from full-open to full-close. The part of the jaw that contacts the target part is made of removable, easily machined soft steel or aluminum, called a "soft jaw". According to the circumstances at hand, an expert tool designer determines the custom shapes to be machined into the rectangular soft-jaw pieces. Once machined to shape, the soft-jaw sets are attached to their respective gripper bodies and tested. This process can require any number of iterations and adjustments until the system works properly, and tool designers redo the entire process each time a new shape is introduced. As consumers demand more product variety and ever more frequent product introductions, the need for flexible automation has never been greater. However, rather than making grippers more versatile, the robotics industry has over the past few years followed the example of the automatic tool-exchange technique used to exchange CNC-mill cutting tools.
Grasping Robotics: Barrett's Grasper
This article introduces a new approach to material handling, part sorting, and component assembly called "grasping", in which a single reconfigurable grasper with embedded intelligence replaces an entire bank of unique, fixed-shape grippers and tool changers. To appreciate the motivations that guided the design of Barrett's grasper, the enormous potential for robotics in the future, and the dead-end legacy of gripper solutions, we have to explore what is wrong with robotics today.
For the benefits of a robotic solution to be realized, programmable flexibility is needed along the entire length of the robot, from its base all the way to the target work piece. A robot arm provides programmable flexibility from the base only up to the tool plate, a few centimeters short of the work-piece target. But these last few centimeters of a robot have to adapt to the complexities of securing a new object on each robot cycle, capabilities where embedded intelligence and software excel. Like the weakest link in a serial chain, an inflexible gripper limits the productivity of the entire robot work cell.
Grippers have individually customized but fixed jaw shapes. The trial-and-error customization process is design-intensive, generally drives cost and schedule, and is difficult to scope in advance. Generally, each anticipated variation in orientation, shape, and robot approach angle requires another custom-but-fixed gripper, a place to store the additional gripper, and a mechanism to exchange grippers. An incremental improvement or unanticipated variation simply cannot be accommodated.
For tasks requiring a high degree of flexibility, such as handling variably shaped payloads presented in multiple orientations, a grasper is more secure, quicker to install, and more cost-effective than an entire bank of custom-machined grippers with tool changers and storage racks.
Just one or two spare graspers can serve as emergency backups for several work cells, whereas one or two spare grippers are required for each gripper variation, potentially dozens per work cell for uninterrupted operation. And it is catastrophic if both backups fail in a gripper system, since it may be days before replacements can be identified, shipped, custom-shaped from scratch, and physically installed to bring the affected line back into operation. Since graspers are physically similar, they are always available in unlimited quantity, with all customization provided instantly in software.
Self-Reconfiguring Robot Module Algorithms
Once the basic reconfiguration problem is solved, the next step is to investigate the use of reconfiguration in other algorithmic applications. One such class of algorithmic questions deals with resource utilization.
Heterogeneous systems allow specialized modules for communications, mobility, power, computation, or other resources. How these resources should best be distributed for various tasks is an interesting problem. For example, in a manipulation task it may be desirable to move a dedicated power module close to the task through reconfiguration. Another example is sensor deployment. Sensor modules should be carried in the volume of the robot for locomotion, and deployed to the surface for use. A related task would be to store wheel modules in the body of a legged configuration, and to deploy the wheels when wheeled locomotion was possible. The application-level question is how to best use this capability, assuming a solution to the problem of reconfiguration with uniquely identified modules. Specifically, the research issue is to determine a target configuration that optimizes placement of power, sensor, or other specialized modules to best suit the task.
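As a toy illustration of this placement question, a target slot for a specialized module could be chosen to minimize its distance to the task; the Python sketch below is a simplification we introduce for illustration, not an algorithm from the literature.

def place_special_module(candidate_slots, task_pos):
    # candidate_slots: lattice positions available in the target shape;
    # return the slot closest (squared Euclidean distance) to the task.
    dist2 = lambda p: sum((a - b) ** 2 for a, b in zip(p, task_pos))
    return min(candidate_slots, key=dist2)

# e.g. place_special_module([(0, 0, 0), (2, 1, 0), (3, 3, 0)], (3, 2, 0))
# returns (3, 3, 0); reconfiguration then routes the power module there.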
Another application in which SR modules are used involves the problem of constructing rigid structures. Often an SR robot requires structural rigidity, but it is difficult to construct connectors with desirable connection and disconnection properties that can withstand much torque. The power and weight available to a module are both severely limited, so connectors must use small, efficient actuators. The result is that current connectors have serious problems with rigidity; a line of Crystal modules, for example, can deform to a great degree.
Any algorithms we design should be implemented and simulated in software. The challenge for heterogeneous systems is to build simulators to represent the varieties of modules. In hardware, building a heterogeneous system by adding sensors or communication to a homogeneous system is an easy strategy. It would also be interesting to construct modules of different shapes. Demonstrating general reconfiguration in hardware remains a significant goal. Overall, the research goal here is to build a suitable software simulator to test our algorithms, and to perform hardware experiments where possible.
SCARA Robot Modeling and Trajectory Generation
FORWARD AND INVERSE KINEMATICS
For simple robotic structures such as the one used in this Lab, it is possible to find the inverse kinematics model by purely geometrical reasoning; that is what is implemented in the InverseKinematicsUsingGeometry.m function. Use the InverseKinematics.m function so that none of the robot joints leaves the robot workspace during execution of the movement. Hint: use the possibility of choosing which solution (Q1 (low elbow) or Q2 (high elbow)) to assign to the robot's final position.
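For orientation, a geometric solution for a two-link planar arm might look like the following Python sketch (the Lab's own code is in MATLAB; the link lengths and conventions here are assumptions):

import numpy as np

def ik_two_link(x, y, L1, L2, elbow="low"):
    # The law of cosines gives the elbow angle; 'low'/'high' selects
    # between the two solutions (Q1/Q2) mentioned above.
    c2 = (x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2)
    if abs(c2) > 1:
        raise ValueError("target outside the workspace")
    q2 = np.arccos(c2) if elbow == "low" else -np.arccos(c2)
    q1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(q2), L1 + L2 * np.cos(q2))
    return q1, q2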
FORWARD AND INVERSE INSTANTANEOUS KINEMATICS
Knowing the Forward Instantaneous Kinematics Model (FIKM) of the SCARA robot given by the ForwardInstantaneousKinematics.m function, program the Inverse Instantaneous Kinematics Model (IIKM) in InverseInstantaneousKinematics.m. After interfacing your IIKM to the Simulink diagram, simulate a rectilinear displacement of the end-effector at constant speed along its x axis (use the provided interface to observe the result).
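A minimal Python sketch of the IIKM for the same assumed two-link model: joint velocities are obtained from the desired end-effector velocity through the inverse of the Jacobian.

import numpy as np

def jacobian(q1, q2, L1, L2):
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def iikm(q, xdot, L1, L2):
    # qdot = J(q)^-1 @ xdot, valid away from singular configurations.
    return np.linalg.solve(jacobian(q[0], q[1], L1, L2), xdot)

# Rectilinear displacement at constant speed along x: at each sample time,
# qdot = iikm(q, np.array([0.05, 0.0]), L1, L2).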
It is imperative to manage the singular robot configurations correctly in order to prevent any erratic movement of the robot. What are the singular positions of the studied robot? Use this knowledge so that the robot avoids these singular configurations (a sketch follows the hints below). Hints:
• The only program to modify is the one where you defined the robot IIKM,
• For the singularities at the limits of the robot workspace, one can impose software stops on the evolution of the joint angles in order to stop just before the "fully extended arm" or "fully folded arm" configurations.
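A possible software stop, reusing the jacobian() from the sketch above; the determinant test (det J = L1 L2 sin(q2) for the two-link model) and the margin eps are assumptions:

import numpy as np

def safe_iikm(q, xdot, L1, L2, eps=1e-3):
    J = jacobian(q[0], q[1], L1, L2)
    # det J = L1 * L2 * sin(q2) vanishes for the fully extended (q2 = 0)
    # and fully folded (q2 = pi) arm: stop just before these configurations.
    if abs(np.linalg.det(J)) < eps * L1 * L2:
        return np.zeros(2)
    return np.linalg.solve(J, xdot)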
TRAJECTORY GENERATION (WORKING AND JOINT SPACE CONFIGURATION)
Based on the SetPointTrajectory.m function, give the end-effector a circular trajectory (with radius = 2 and center C = (0, 7.5)).
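A minimal Python version of such a set-point generator (the period is an assumption):

import numpy as np

def circle_setpoint(t, radius=2.0, cx=0.0, cy=7.5, period=10.0):
    # End-effector set point at time t along the circle of radius 2
    # centred at C = (0, 7.5).
    a = 2.0 * np.pi * t / period
    return cx + radius * np.cos(a), cy + radius * np.sin(a)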
Based on the Order1Interpolation.m file, write a degree-5 (quintic) interpolation generator for the SCARA robot between two points of the joint space, from qi = [100° 100°] to qf = [6° 60°].
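One standard quintic with zero boundary velocities and accelerations is q(t) = qi + (qf - qi)(10 s^3 - 15 s^4 + 6 s^5) with s = t/T. A Python sketch (the motion time T is an assumption):

import numpy as np

def quintic(qi, qf, t, T):
    s = np.clip(t / T, 0.0, 1.0)
    blend = 10 * s**3 - 15 * s**4 + 6 * s**5  # zero vel./acc. at both ends
    return qi + (qf - qi) * blend

qi = np.radians([100.0, 100.0])
qf = np.radians([6.0, 60.0])
# At each sample time t: q = quintic(qi, qf, t, T=2.0)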
ROBOT ARM CONTROL
This part of the Lab introduces the dynamical model of the SCARA robot in order to address some problems linked to robot arm control. Below is a description of a set of programs that complement those given previously.
ForwardDynamicalModel.m
Defines the acceleration of the robot joints according to the torques applied by its actuators.
SimulinkLabLibrary.mdl
Contains Simulink blocks to be used directly in your Lab.
SimulinkRobotControlWithDyMo.mdl
Simulink model that permits the use of a PID controller to control the robot in the working space.
Expected Contributions: Self-Reconfiguring Robots
Like homogeneous systems, heterogeneous SR systems promise versatility and usefulness superior to fixed-architecture robots through their ability to match structure to task. In addition, heterogeneous systems further this goal with their ability to match capability to task. The original vision of reconfigurable systems was inherently heterogeneous, and during the subsequent fifteen years researchers have accrued much knowledge of homogeneous systems. In this thesis, we propose to widen this understanding into the realm of heterogeneous systems. We plan to address fundamental algorithmic issues and demonstrate solutions in simulation and hardware where possible. The results of this work should shed light on the relative complexity of hardware versus software design in SR systems and lead to an algorithmic basis for heterogeneous self-reconfiguring robots.
We have proposed a framework for categorizing SR modules, and we have chosen a simple theoretical module on which to build reconfiguration algorithms. We will attempt to prove lower bounds for the basic problem and extend the results to systems with greater heterogeneity. There are other algorithmic issues we will address which are enabled by previous reconfiguration solutions, and by our previous work with non-actuated modules, path planning, goal recognition, and distributed locomotion.
Finally, we propose to construct a software simulator with which to demonstrate our algorithms. This simulator should be suitable for further use by other researchers in the area. We also hope to perform hardware experiments where available.
The main expected contribution of the proposal is an algorithmic basis for heterogeneous SR systems. This contribution is supported by the following items:
• Framework for heterogeneous modules
• Reconfiguration in 2D and 3D with Sliding Cube model, with arbitrary size ratios
• Reconfiguration with non-actuated modules
• Complexity analysis for reconfiguration
• Applications involving resource trade-offs and optimization
• Implementation in simulation
• Hardware experimentation
Reconfiguration for Robot Locomotion
Reconfiguration is generally discussed in terms of task-specific shape transformation, but it can also be used for locomotion. We have developed a distributed locomotion algorithm for unit-compressible robots using inchworm-like motion, and implemented this algorithm in hardware on the Crystal system. We also performed extensive experimentation; the algorithm ran for over 75 hours in total at the SIGGRAPH and AAAI conferences. The algorithm and experiments are described in this section.
Inchworm locomotion uses friction with the ground to move a group of unit-compressible modules forward. The algorithm is based on a set of rules that test the module’s relative geometry and generate expansions and contractions as well as messages that modules send to their neighbors. When a module receives a message from a neighbor indicating a change of state, it tests the neighborhood against all the rules, and if any rule applies, executes the commands associated with the rule. The algorithm is designed to mimic inchworm-like locomotion: compressions are created and propagated from the back of the group to the front, producing overall motion.
Each module is characterized by the message types it can send and receive, and by the procedures that are called from the message handlers (including the rules of the algorithm). The "tail" module contracts first, which signals its forward neighbor to contract. Each module expands after contracting, so that the contraction propagates through the robot. When the contraction has reached the front of the group, the group will have moved half a unit forward (in theory; empirical results show nearly optimal distance-per-step for chains of five or more units).
Depending on context, once the leader of the group has contracted and expanded, it can then send a message back to the tail to initiate another step. We implemented this algorithm and performed experiments with various shapes. The experiments successfully demonstrated reliable locomotion in the configurations we tested. See Butler, Fitch and Rus for further discussion. This locomotion gait is significant first in that it exemplifies the style of distributed, scalable algorithms we wish to develop and implement in the proposed work.
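The following Python sketch gives a centralized toy version of this gait; the distributed message passing described above is collapsed into a single loop, and the state names are our own:

def inchworm_step(chain):
    # chain: module states from tail (index 0) to head, each 'expanded'
    # or 'contracted'. Propagate a contraction wave from tail to head,
    # re-expanding each module once its forward neighbor has contracted.
    for i in range(len(chain)):
        chain[i] = "contracted"        # rule: contract on the rearward signal
        if i > 0:
            chain[i - 1] = "expanded"  # rule: re-expand behind the wave
    chain[-1] = "expanded"             # head re-expands; the step is complete
    return chain

# One call moves the group (ideally) half a unit forward; the head can then
# message the tail to initiate the next step.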