ActivMedia PTZ Robotic Cameras

The ActivMedia Robotics Pan-Tilt-Zoom (PTZ) robotic color cameras are fully integrated accessories for ActivMedia mobile robots. The PTZ camera systems include the Sony EVI-D30 or the Canon VC-C4 with custom signal and power cables, mounting hardware, and software for integration on the robot and control through client applications, including ACTS, ARIA, and Saphira.

Power, Signal, and Control Cable Installation for the PTZ Robotic Camera

Power Cable
Depending on the robot model and vintage, one end of the camera's 12VDC power cable plugs into the 12-position latch-lock header on the "legacy" motor-power board, or into the 4-position minifit AUX1 (RADIO) or AUX2 switched-power connector on the Pioneer 2-Plus or Pioneer 3 motor-power board. Feed the other end of the cable through the console near the camera, such as through the console port of the DX or AT. On the Pioneer 3 and 2-Plus robots, power to the camera is switched through the respective AUX1 (RADIO) or AUX2 pushbutton on the robot's User Control side panel. A switch on the camera itself may also control power.

PTZ Robotic Camera Software

Your PTZ Robotic Camera comes with software support for controlling the pan, tilt, and zoom features of the camera through your ActivMedia robot's operating system, such as P2OS and AROS. In addition, we provide C- and C++-language libraries and source code for integrating the PTZ Robotic Camera with your client applications.
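For example, here is a minimal C++ sketch using ARIA's camera classes; the class and method names follow the ARIA distribution as we recall it, and the connection details and chosen angles are illustrative assumptions only:

#include "Aria.h"

int main(int argc, char **argv)
{
  Aria::init();
  ArSimpleConnector connector(&argc, argv);
  ArRobot robot;

  if (!connector.connectRobot(&robot))
  {
    Aria::shutdown();          // could not reach the robot
    return 1;
  }
  robot.runAsync(true);        // start the robot processing cycle

  ArVCC4 camera(&robot);       // Canon VC-C4 driver; ArSonyPTZ for the EVI-D30
  camera.init();
  camera.panTilt(30, -10);     // pan right 30 degrees, tilt down 10 degrees
  camera.zoom(camera.getMaxZoom() / 2);

  ArUtil::sleep(3000);         // give the camera time to move
  Aria::shutdown();
  return 0;
}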

Artificial Intelligence and Pervasive Robotics

Pervasive robotics will need, in the near future, small, light, and cheap robots that exhibit complex behaviors. These requirements led to the development of the M2-M4 Macaco project, a robotic active-vision head. Macaco is a portable system capable of emulating the head of different creatures both functionally and aesthetically. It integrates mechanisms for autonomous navigation, social interaction, and object analysis. One AI approach is the development of robots whose embodiment and situatedness in the world evoke behaviors that obviate constant human supervision.

Implementation of Pervasive Robotics

Security is one possible operational scenario for this active head. For this class of applications, the Macaco robot was equipped with a behavioral system capable of searching for people or faces and then recognizing them. In addition, human gaze direction might reveal security threats, so a head-gaze detection algorithm was developed. Probable targets of such gazes are other people and, most importantly, explosives and/or guns. Therefore, salient objects situated in the world are processed for 3D information extraction and texture/color analysis. Work is also underway on object and scene recognition from contextual cues.

EMARO - European Master in Advanced Robotics

The goal of this partnership application is to consolidate the European Consortium of the EMARO master course with Asian partners. The duration of the partnership is three years. English is the working language at all partner institutions. The objectives of the EMARO partnership are:

RTA Systems Engineering in Robotics

Many advances in robotics and autonomy depend on increased computational power. Advances in high-performance, low-power onboard computers for space are therefore central to more capable robotics. Current efforts in this direction include exploiting high-performance field-programmable gate arrays (FPGAs) and multi-core processors, and enabling the use of commercial-grade computer components in space through shielding, hardware redundancy, and fault-tolerant software design.

Further pushes in these or other directions to achieve greater in-space computing power are needed. Modular interfaces are needed to enable tool change-out for arms on rovers and for in-space robotic assembly and servicing. Where robots and humans must work in close proximity, the sensing, planning, and autonomous control systems for the robots, as well as the overall operational procedures for robots and humans, will have to be designed to ensure human safety around the robots. Developing modular robotic interfaces will also allow multiple robots to operate together. These modular interfaces will support structural, mechanical, electrical, data, fluid, pneumatic, and other interactions. Tools and end effectors can also be developed in a modular manner, allowing interchangeability and a reduced logistics footprint.

Modular interfaces will be the building blocks for modular self-replicating robots and self-assembling robotic systems. Reconfigurable system design offers the ability to reconfigure mechanical, electrical, and computing assets in response to system failures. Reconfigurable computing offers the ability to reconfigure internally in response to chip-level failures caused by the environment (i.e., space radiation), life limitations, or fabrication errors. System verification will be a new challenge for human-rated spacecraft bound for deep space. New V&V approaches and techniques will be required, and in-flight re-verification following a repair may be necessary.

Autonomous Rendezvous and Docking of Robotic Spacecraft

AR&D is a capability requiring many vehicle subsystems to operate in concert. It is important to clarify that AR&D is not a system and cannot be purchased off the shelf. This strategy focuses on development of a certified, standardized capability suite of subsystems enabling AR&D for different mission classes and needs. This suite will be incrementally developed, tested and integrated over a span of several missions. This technology roadmap focuses on four specific subsystems required for any AR&D mission.
1. Relative Navigation Sensors – During the course of RPOD, varying accuracies of bearing, range, and relative attitude are needed for AR&D. Current implementations for optical, laser, and RF systems are mid-TRL (Technology Readiness Level) and require some development and flight experience to gain reliability and operational confidence. Inclusion of the ability for cooperating AR&D pairs to communicate directly can greatly improve the responsiveness and robustness of the system.
2. Robust AR&D GN&C Real-Time Flight Software (FSW) – AR&D GN&C algorithms are maturing; however, implementing these algorithms in real-time FSW is an enormous challenge. A best-practice-based implementation of automated/autonomous GN&C algorithms in real-time FSW operating systems needs to be developed and tested.
3. Docking/Capture – NASA is planning for the imminent construction of a new low-impact docking mechanism built to an international standard for human spaceflight missions to ISS. A smaller common docking system for robotic spacecraft is also needed to enable robotic spacecraft AR&D within the capture envelopes of these systems. Assembly of the large vehicles and stages used for beyond-LEO exploration missions will require new mechanisms with capture envelopes beyond those of any docking system currently used or in development. Development and testing of autonomous robotic capture of non-cooperative target vehicles, in which the target does not have capture aids such as grapple fixtures or docking mechanisms, is needed to support satellite servicing/rescue.
4. Mission/System Managers – A scalable spacecraft software executive that can be tailored for various mission applications, for the whole vehicle, and for various levels of autonomy and automation is needed to ensure safety and operational confidence in AR&D software execution. Numerous spacecraft software executives have been developed; the missing piece is an Agency-wide open standard, which would minimize the cost of such architectures, allow them to evolve over time, and help overcome general fears about autonomy/automation.

Robotic Autonomous Systems

Autonomy, in the context of a system (robotic, spacecraft, or aircraft), is the capability of the system to operate independently of external control. For NASA missions there is a spectrum of autonomy in a system, from basic automation (mechanistic execution of action or response to stimuli) through to fully autonomous systems able to act independently in dynamic and uncertain environments. Two application areas of autonomy are: (i) increased use of autonomy to enable an independently acting system, and (ii) automation as an augmentation of human operation. Autonomy's fundamental benefits are increased system operational capability, cost savings through increased human labor efficiency and reduced staffing needs, and increased mission assurance or robustness in uncertain environments.

An "autonomous system" is a system that resolves choices on its own. The goals the system is trying to accomplish are provided by another entity; thus, the system is autonomous from the entity on whose behalf the goals are being achieved. The decision-making processes may in fact be simple, but the choices are made locally. In automation, by contrast, the selections have been made already and encoded in some way, or are made externally to the system. Key attributes of such autonomy for a robotic system include the ability to make complex decisions, including autonomous mission execution and planning; the ability to self-adapt as the environment in which the system is operating changes; and the ability to understand system state and react accordingly.

Variable (or mixed initiative) autonomy refers to systems in which a user can specify the degree of autonomous control that the system is allowed to take on, and in which this degree of autonomy can be varied from essentially none to near or complete autonomy. For example, in a human-robot system with mixed initiative, the operator may switch levels of autonomy onboard the robot. Controlling levels of autonomy is tantamount to controlling bounds on the robot's authority, response, and operational capabilities.
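As an illustration only (not drawn from any flight system), mixed-initiative control can be sketched in C++ as an operator-set autonomy level that bounds which decisions the robot may take locally:

#include <iostream>

// Hypothetical autonomy levels for a mixed-initiative human-robot system.
enum class AutonomyLevel { Teleoperation, Assisted, Supervised, Full };

// The level bounds the robot's authority: which decisions it may take locally.
bool mayReplanPath(AutonomyLevel level)
{
    return level >= AutonomyLevel::Supervised;
}

bool maySelectNewGoal(AutonomyLevel level)
{
    return level == AutonomyLevel::Full;
}

int main()
{
    AutonomyLevel level = AutonomyLevel::Assisted;  // operator-chosen setting
    std::cout << "replan allowed: " << mayReplanPath(level) << '\n';

    level = AutonomyLevel::Full;                    // operator raises the level at runtime
    std::cout << "goal selection allowed: " << maySelectNewGoal(level) << '\n';
}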

Robonaut 2 Mission to ISS

During FY11 the Robonaut 2 system will be launched on STS-133 and delivered to the ISS in what will become the Permanent Multipurpose Module (PMM). Robonaut 2 (R2) is the latest in a series of dexterous robots built by NASA as technology demonstrations, now evolving from Earth to in-space experiments. The main objectives are to explore dexterous manipulation in zero gravity, test human-robot safety systems, test remote supervision techniques for operation across time delays, and experiment with ISS equipment to begin offloading housekeeping and other chores from the crew. R2 was built in a partnership with General Motors, with a shared vision of a capable but safe robot working near people.

R2 features state-of-the-art tactile sensing and perception, as well as depth-map sensors, stereo vision, and force sensing. R2 will be deployed initially on a fixed pedestal with no mobility, but future upgrades are planned to allow it to climb and reposition itself at different worksites. Robonaut 2's dexterous manipulators are the state of the art, with three levels of force sensing for safety, high strength-to-weight ratios, compliant and back-drivable drive trains, soft and smooth coverings, fine force and position control, dual-arm coordination, and kinematic redundancy.

Human interfaces for the R2 include direct force interaction where humans can manually position the limbs, trajectory design software tools, and script engines. R2 is designed to be directly tele-operated, remotely supervised, or run in an automated manner. The modular design can be upgraded over time to extend the Robonaut capabilities with new limbs, backpacks, sensors and software.

The Robotic Refueling Dexterous Demonstration (R2D2) is a multifaceted payload designed for representative tasks required to robotically refuel a spacecraft. Once mounted to the International Space Station, the demonstration will utilize the R2D2 payload complement, the Special Purpose Dexterous Manipulator (SPDM) robotic arms, and four customized, interchangeable tools to simulate the tasks needed to refuel a spacecraft using its standard ground fill-and-drain valve.

Mobility in Space Robotics

The state of the art in robotic space mobility (excluding conventional rocket propulsion) includes the Mars Exploration Rovers and the upcoming Mars Science Laboratory, and, for human surface mobility, the Apollo lunar roving vehicle used on the final three Apollo missions. Recently, systems have been developed and tested on Earth for mobility on planetary surfaces, including the Space Exploration Vehicle and the ATHLETE wheel-on-leg cargo transporter. Both feature active suspension. A series of grand challenges has extended the reach of robotic off-road mobility to high speeds and progressively more extreme terrain.

For microgravity mobility, the Manned Maneuvering Unit (MMU), tested in 1984, and, more recently, the SAFER jet pack provide individual astronauts with the ability to move and maneuver in free space, or in the neighborhood of a Near-Earth Asteroid. The AERCam system flew on STS-87 in 1997 as a first step toward small free-flying inspection satellites. We can expect in the next few decades that robotic vehicles designed for planetary surfaces will approach or even exceed the performance of the best piloted human vehicles on Earth in traversing extreme terrain and reaching sites of interest despite severe terrain challenges.

Human drivers have a remarkable ability to perceive terrain hazards at long range and to pilot surface vehicles along dynamic trajectories that seem nearly optimal. Despite the limitations of human sensing and cognition, it is generally observed that experienced drivers can pilot their vehicles at speeds near the limits set by physical law (e.g. frictional coefficients, tipover and other vehicle-terrain kinematic and dynamic failures). This fact is remarkable given the huge computational throughput requirements needed to quickly assess subtle terrain geometric and non-geometric properties (e.g. visually estimating the properties of soft soil) at long range fast enough to maintain speeds near the vehicle limits. This ability is lacking in today’s best obstacle detection and hazard avoidance systems.

Human-System Interfaces for Space Robotics

The ultimate efficacy of space systems depends greatly upon the interfaces that humans use to operate them. The current state of the art in human system interfaces is summarized below along with some of the advances that are expected in the next 25 years. Human operation of most systems today is accomplished in a simple pattern reminiscent of the classic “Sense – Plan – Act” control paradigm for robotics and remotely operated systems. The human observes the state of the system and its environment, forms a mental plan for its future action, and then commands the robot or machine to execute that plan. Most of the recent work in this field is focused on providing tools to more effectively communicate state to the human and capture commands for the robot, each of which is discussed in more detail below.
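Schematically (our sketch, not from the source roadmap), this pattern is a loop in which the human or software observes, plans, and commands:

#include <chrono>
#include <thread>

// Placeholder types standing in for real telemetry and command channels.
struct WorldState {};
struct Plan {};

WorldState sense() { return WorldState{}; }      // observe the system and environment
Plan plan(const WorldState &) { return Plan{}; } // form a plan for future action
void act(const Plan &) {}                        // command the machine to execute

int main()
{
    // The classic Sense - Plan - Act cycle described above.
    for (;;)
    {
        WorldState s = sense();
        Plan p = plan(s);
        act(p);
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}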

Current human-system interfaces typically include software applications that communicate internal system state via abstract gauges and readouts reminiscent of aircraft cockpits, or via overlays on realistic illustrations of the physical plant and its components. Information from sensors is available in its native form (for instance, a single image from a camera) and aggregated into a navigable model of the environment that may contain data from multiple measurements and sensors. Some interfaces are adapted to immersive displays or mobile devices, or allow multiple distributed operators to monitor the remote system simultaneously.

Future interfaces will communicate state through increased use of immersive displays, creating "Holodeck"-like virtual environments that can be naturally explored by the human operator with "Avatar"-like telepresence. These interfaces will also more fully engage the aural and tactile senses of the human to communicate more information about the state of the robot and its surroundings. As robots grow increasingly autonomous, improved techniques for communicating the "mental state" of robots will be introduced, as well as mechanisms for understanding the dynamic state of reconfigurable robots and complex sensor data from swarms.

Current human-robot interfaces typically allow for two types of commands. The first are simple, brief directives, sometimes sent via specialized control devices such as joysticks, which interrupt
existing commands and immediately affect the state of the robot. A few interfaces allow the issuance of these commands through speech and gestures.

Tele-Robotics and Autonomous Systems Technology Area Breakdown Structure

This area of the Robotics, Tele-Robotics and Autonomous Systems Technology Area Breakdown Structure (TABS) includes sensors and algorithms needed to convert sensor data into representations suitable for decision-making. Traditional spacecraft sensing and perception included position, attitude, and velocity estimation in reference frames centered on solar system bodies, plus sensing of spacecraft internal degrees of freedom, such as scan-platform angles. Current and future development will expand this to include position, attitude, and velocity estimation relative to local terrain, plus rich perception of characteristics of local terrain, where "terrain" may include the structure of other spacecraft in the vicinity and dynamic events, such as atmospheric phenomena.

Enhanced sensing and perception will broadly impact three areas of capability: autonomous navigation, sampling and manipulation, and interpretation of science data. In autonomous navigation, 3-D perception has already been central to autonomous navigation of planetary rovers. Current capability focuses on stereoscopic 3-D perception in daylight. Active optical ranging (LIDAR) is commonly used in Earth-based robotic systems and is under development for landing hazard detection in planetary exploration. Progress is needed in increasing the speed, resolution, and field of regard of such sensors; reducing their size, weight, and power; enabling night operation; and hardening them for flight.

Range and imagery data are already in some use for rover and lander position and velocity estimation, though with relatively slow update rates. Real-time, onboard 3-D perception, mapping, and terrain-relative position and velocity estimation capability is also needed for small-body proximity operations, balloons and airships, and micro-inspector spacecraft. For surface navigation, sensing and perception must be extended from 3-D perception to estimating other terrain properties pertinent to trafficability analysis, such as softness of soil or depth to the load-bearing surface. Many types of sensors may be relevant to this task, including contact and remote sensors onboard rovers and remote sensors on orbiters.

Sampling generally refers to handling natural materials in scientific exploration; manipulation includes actions needed in sampling and in handling man-made objects, including sample containers in scientific exploration and a variety of tools and structures during robotic assembly and maintenance. 3-D perception, mapping, and relative motion estimation are also relevant here. Non-geometric terrain property estimation is also relevant, both to distinguish where and how to sample and to determine where and how to anchor to surfaces in micro-gravity or to steep slopes on large bodies.

Manipulation Technology in Tele-Robotics and Autonomous Systems

Manipulation is defined as making an intentional change in the environment. Positioning sensors, handling objects, digging, assembling, grappling, berthing, deploying, sampling, bending, and even positioning the crew on the end of long arms are tasks considered to be forms of manipulation. Arms, cables, fingers, scoops, and combinations of multiple limbs are embodiments of manipulators. Here we look ahead to mission requirements and chart the evolution of the capabilities that will be needed for space missions. Manipulation applications for human missions can be found in Technology Area 7, such as powered exoskeletons or payload-offloading devices that exceed human strength alone.

Sample Handling - The state of the art is found in the MSL arm, Phoenix arm, MER arm, Sojourner arm, and Viking. Future needs include handling segmented samples (cores, rocks) rather than scoopfuls of soil, loading samples into onboard devices, loading samples into containers, sorting samples, and cutting samples.

Grappling - The state of the art is found in the SRMS, MFD, ETS-VII, SSRMS, Orbital Express, and SPDM. Near-term advances will be seen in the NASA Robonaut 2 mission. Challenges that will need to be overcome include grappling a dead spacecraft, grappling a natural object such as an asteroid, grappling in deep space, and assembly of a multi-stack spacecraft.

Eye-Hand Coordination - The state of the art is placement of MER instruments on rocks, Orbital Express refueling, SPDM ORU handling, and Phoenix digging. Challenges to be overcome include working with natural objects in microgravity (asteroids), operation in poor lighting, calibration methods, and the combination of vision and touch.

EVA Positioning - The EVA community has come to rely on large robotic foot restraints rather than having crew climb. The state of the art is found in the SRMS and SSRMS. These arms were originally designed for handling inert payloads, and no controls were developed for operation by the crew member on the arm. Challenges to be overcome include letting crew position themselves without multiple IV crew helping, safety issues, and operation of these arms far from Earth support.

Digital Inputs/Outputs and Accelerometer in the WPI Robotics Library

Digital inputs
Digital inputs are generally used for reading switches. The WPILib DigitalInput object is typically used to get the current state of the corresponding hardware line: 0 or 1. More complex digital inputs, such as encoders or counters, are handled by the appropriate classes. Using these other supported device types (encoder, ultrasonic rangefinder, gear tooth sensor, etc.) doesn't require a DigitalInput object to be created. The digital input lines are shared from the 14 GPIO lines on each Digital Breakout Board. Creating an instance of a DigitalInput object automatically sets the direction of the line to input.

The digital input lines have pull-up resistors, so an unconnected input will naturally be high. If a switch is connected to a digital input, it should connect to ground when closed. The switch's open state will then read 1 and its closed state 0. In Java, digital input values are true and false, so an open switch reads true and a closed switch reads false.
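A minimal C++ sketch, assuming the cRIO-era WPILib API (the channel number and class usage are illustrative):

#include "WPILib.h"

class SwitchDemo : public SimpleRobot
{
    DigitalInput limitSwitch;          // switch wired between DIO 1 and ground

public:
    SwitchDemo() : limitSwitch(1) {}   // constructing the object sets the line to input

    void OperatorControl()
    {
        while (IsOperatorControl())
        {
            // Pull-up resistor: an open switch reads 1, a closed (grounded) switch reads 0.
            if (limitSwitch.Get() == 0)
            {
                // The switch is pressed; react here.
            }
            Wait(0.02);
        }
    }
};

START_ROBOT_CLASS(SwitchDemo);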

Digital Outputs
Typically digital outputs are used to drive indicators or to interface with other electronics. The digital outputs share the 14 GPIO lines on each Digital Breakout Board. Creating an instance of a DigitalOutput object automatically sets the direction of the GPIO line to output. In C++, digital output values are 0 and 1, representing low (0V) and high (5V) signals. In Java, the digital output values are true (5V) and false (0V).
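A matching output sketch, again assuming the cRIO-era WPILib C++ API:

#include "WPILib.h"

class IndicatorDemo : public SimpleRobot
{
    DigitalOutput indicator;          // LED or logic line on DIO 2 (example wiring)

public:
    IndicatorDemo() : indicator(2) {} // constructing the object sets the line to output

    void OperatorControl()
    {
        while (IsOperatorControl())
        {
            indicator.Set(1);         // drive high (5V)
            Wait(0.5);
            indicator.Set(0);         // drive low (0V)
            Wait(0.5);
        }
    }
};

START_ROBOT_CLASS(IndicatorDemo);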

Accelerometer
The accelerometer provided in the kit of parts is a two-axis device. It can provide acceleration data for the X and Y axes relative to the circuit board. In the WPI Robotics Library you treat it as two separate devices, one for the X axis and the other for the Y axis. This gives better performance if your application only needs one axis. The accelerometer can also be used as a tilt sensor by measuring the acceleration of gravity.
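A hedged sketch of per-axis use, assuming the kit accelerometer is wired to two analog channels and the cRIO-era Accelerometer class (the sensitivity value is an example from a typical datasheet):

#include "WPILib.h"
#include <cmath>
#include <cstdio>

class TiltDemo : public SimpleRobot
{
    Accelerometer accelX;   // X axis on analog channel 1 (example wiring)
    Accelerometer accelY;   // Y axis on analog channel 2

public:
    TiltDemo() : accelX(1), accelY(2)
    {
        accelX.SetSensitivity(0.3);   // volts per g, from the sensor datasheet
        accelY.SetSensitivity(0.3);
    }

    void OperatorControl()
    {
        while (IsOperatorControl())
        {
            double gX = accelX.GetAcceleration();   // acceleration in g
            double gY = accelY.GetAcceleration();
            // Used as a tilt sensor: at rest, asin of the clamped reading
            // approximates the tilt angle about each axis.
            double clamped = gX > 1.0 ? 1.0 : (gX < -1.0 ? -1.0 : gX);
            printf("gX=%f gY=%f tiltX=%f rad\n", gX, gY, std::asin(clamped));
            Wait(0.05);
        }
    }
};

START_ROBOT_CLASS(TiltDemo);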

IRC5 Industrial Robot Controller

Fifth-generation robot controller. Based on more than four decades of robotics experience, the IRC5 sets a new benchmark in the robotics industry. Carrying forward previous achievements in motion control, flexibility, usability, safety, and robustness, it adds new breakthroughs in modularity, user interface, multi-robot control, and PC tool support.

Safety
Operator safety is the IRC5's central quality, fulfilling all relevant regulations with good margin, as certified by third-party inspections. Electronic position switches offer a first taste of a new generation of safety functions, replacing earlier electro-mechanical solutions and opening up flexible and robust cell interlocking. For even more flexible cell-safety concepts, e.g. involving collaboration between robot and operator, SafeMove offers a host of useful safety functions.

Motion control
Using advanced dynamic modeling, the IRC5 optimizes the performance of the robot for the physically shortest possible cycle time (QuickMove) and precise path accuracy (TrueMove). This predictable, high-performance behavior is delivered automatically, together with a speed-independent path, with no tuning required by the programmer.

Modularity
The IRC5 is available in different variants to provide a cost-effective solution for every need. The ability to stack modules on top of each other, place them side by side, or distribute them in the cell is a unique feature that optimizes footprint and cell layout. The panel-mounted version comes without a cabinet, enabling integration into any enclosure for exceptional compactness or for special environmental requirements.

FlexPendant
The FlexPendant is characterized by its clean, color touch-screen design and 3D joystick for intuitive interaction. Powerful support for customized applications enables loading of tailor-made applications, e.g. operator screens, thus eliminating the need for a separate operator HMI.

RAPID programming language
RAPID provides the perfect combination of simplicity, flexibility, and power. It is a truly unlimited language with support for well-structured programs, shop-floor language, and advanced features. It also incorporates powerful support for many process applications.

Communication
The IRC5 supports state-of-the-art fieldbuses for I/O and is a well-behaved node in any plant network. Sensor interface functionality, remote disk access, and socket messaging are examples of its many powerful networking features.

The WPI Robotics Library

The National Instruments cRIO-9074 real-time controller (cRIO) is presently the robot controller provided for the FIRST Robotics Competition (FRC). It has roughly five hundred times more memory than previous FRC controllers. Dedicated FPGA hardware capable of sampling across 16 channels replaces the cumbersome programming techniques required with previous controllers.
The WPI Robotics library is designed to:
• Work with the cRIO controller
• Handle low level interfacing of components
• Allow users of all experience levels access to appropriate features

C++ and Java are the two text-based language choices available for use on the cRIO. These languages were selected because they represent a better level of abstraction for robot programs than previously used languages. The WPI Robotics Library is designed for maximum extensibility and software reuse with these languages.

The library consists of classes that support the sensors, speed controllers, driver station, and other hardware in the kit of parts. In addition, WPILib supports many commonly used sensors that are not in the kit, such as ultrasonic rangefinders. WPILib also has general-purpose features, such as general-purpose counters, to support custom hardware and devices. The FPGA hardware also allows interrupt processing to be dispatched at the task level, instead of as kernel interrupt handlers, reducing many common real-time bugs.

The WPI Robotics Library does not explicitly use the C++ exception handling mechanism, though it is available to teams for their own programs. Uncaught exceptions will unwind the entire call stack and cause the whole robot program to quit; therefore, we caution teams on the use of this feature.

Objects are allocated dynamically to represent each type of sensor. An internal reservation system for hardware prevents reuse of the same ports for different purposes. In the C++ version, the source code for the library is published on a server for teams to review and comment on. In the Java version, the source code is included with each release. There will be a community repository for teams to develop and share projects in any language, including LabVIEW.

Sensors in the WPI Robotics Library

The WPI Robotics Library supports the sensors supplied in the FRC kit of parts, as well as many other commonly used sensors available to FIRST teams through industrial and hobby robotics outlets. The supported sensors are listed in the chart below.

Types of supported sensors
On the cRIO, the FPGA implements all high-speed measurements through dedicated hardware, ensuring accurate measurements no matter how many sensors and motors are added to the robot. This is an improvement over previous systems, which required complex real-time software routines. The library natively supports the categories of sensors shown below.

The WPI Robotics Library has many features that make it easy to use sensors that don't have prewritten classes. For example, general-purpose counters can measure period and count from any device generating output pulses. Another example is a generalized interrupt facility to catch high-speed events without polling and potentially missing them.
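For instance, here is a sketch using the cRIO-era Counter class to measure pulse count and period (the channel number is an example):

#include "WPILib.h"
#include <cstdio>

class PulseMeter : public SimpleRobot
{
    Counter pulseCounter;              // counts rising edges on DIO 3 (example)

public:
    PulseMeter() : pulseCounter(3)
    {
        pulseCounter.Start();          // begin counting in the FPGA
    }

    void OperatorControl()
    {
        while (IsOperatorControl())
        {
            int count = pulseCounter.Get();            // pulses seen so far
            double period = pulseCounter.GetPeriod();  // seconds between recent pulses
            printf("count=%d period=%f s\n", count, period);
            Wait(0.1);
        }
    }
};

START_ROBOT_CLASS(PulseMeter);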
Digital I/O Subsystem
The Digital Sidecar (digital breakout board) implements the digital I/O subsystem. The NI 9401 digital I/O module provided in the kit has 32 GPIO lines. Through circuits on the Digital Sidecar, these lines map into 10 PWM outputs, 8 relay outputs for driving Spike relays, a signal light output, an I2C port, and 14 bidirectional GPIO lines.
The basic update rate of the PWM lines is a multiple of approximately 5 ms: Jaguar speed controllers update at slightly over 5 ms, Victors at slightly over 10 ms, and servos at slightly over 20 ms.

Parameter Identification for Rigid Robot Models

A general overview of parameter identification methods for rigid robots can be found in textbooks. Experimental robot identification techniques estimate dynamic robot parameters from force/torque and motion data measured during robot motions along optimized trajectories. Mostly, these techniques are based on the fact that the dynamic robot model can be written as a set of equations that is linear in the dynamic parameters. Such a formulation allows the use of linear estimation techniques that find the optimal parameter set in a global sense. However, not all parameters can be identified using these techniques, since some parameters do not affect the dynamic response, or affect it only in linear combinations with other parameters.
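In the usual notation (our summary of this standard formulation, not a quotation from the text), the model that is linear in the dynamic parameters and its least-squares estimate read

\[
\tau \;=\; Y(q,\dot q,\ddot q)\,\theta,
\qquad
\hat\theta \;=\; \arg\min_{\theta}\,\lVert \tau - Y\theta\rVert^{2} \;=\; \left(Y^{\top}Y\right)^{-1} Y^{\top}\tau,
\]

where \(\tau\) stacks the measured joint forces/torques along the trajectory, \(Y\) is the regression matrix built from the measured positions, velocities, and accelerations, and \(\theta\) is the vector of unknown dynamic parameters. The rank deficiency of \(Y\) is exactly what the base-parameter discussion below addresses.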

The null space is defined as the parameter space consisting of parameter combinations that do not affect the dynamic response. Gautier, Khalil, and Mayeda provide rules based on the topology of the manipulator to group the dependent inertia parameters and to form a minimal set of parameters that uniquely determines the dynamic response of the robot. In addition, numerical techniques such as the QR decomposition or the Singular Value Decomposition can be used to find the minimal or base parameter set.

However, the base parameter set obtained from a linear parameter fit is not guaranteed to be a physically meaningful solution. Waiboer suggests making the identified parameters more physically plausible by choosing the null-space contribution such that the estimated parameters match a priori given values in a least-squares sense. This requires an a priori estimate of the parameter values and a sufficiently accurate description of the null space, neither of which is trivial in general. Mata forces a physically feasible solution by adding nonlinear constraints to the optimization problem. However, adding nonlinear constraints to a linear problem yields a nonlinear optimization problem for which it is hard to find the global minimum.

Parameter Identification for Flexible Models Using Additional Sensors

The linear least-squares identification procedure used for rigid robot models assumes that the position signals of all degrees of freedom are known or can be measured. If the positions of all degrees of freedom, including the corresponding velocities and accelerations, are known, the dynamic model can be written as a set of equations that is linear in the dynamic parameters. Generally, only motor position and torque data are available. Therefore, measurements of the additional degrees of freedom arising from flexibilities are not readily available, and consequently the linear least-squares technique cannot be used for flexible robot models. Several authors suggest adding sensors to measure the elastic deformations, e.g. link position, acceleration, velocity, or torque sensors. First, an overview of identification techniques using these additional sensors is given.

One approach presents an identification method for the dynamic parameters of simple mechanical systems with lumped elasticity. The parameters are calculated as the weighted least-squares solution of an overdetermined system that is linear in a minimal set of parameters, obtained by sampling the dynamic model along a trajectory. Two cases are considered, according to the types of measurements available for identification. In the first case, it is assumed that measurements of the load and motor positions are available. In the second case, it is assumed that measurements of the load acceleration and the motor position are available.

Instead of reconstructing the load position by integrating the measured acceleration, they suggest differentiating the dynamic equations twice. However, problems arise for non-continuous terms such as joint friction. Using a chirp signal as the excitation signal should decrease the influence on the measured data of the dynamic behavior represented by these non-differentiable terms.

Fundamental Laws of Robotics

We cannot write an introduction to robotics without discussing the fundamental laws of robots. The popular Russian-born American science fiction writer Isaac Asimov formulated the three fundamental laws for robots. In contrast to the robots described by Capek, Asimov's robot is a benevolent, good robot that serves human beings. Asimov visualized the robot as an automated mechanical creature of human appearance having no feelings. Its behavior and acts were dictated by a "brain" programmed by human beings, in such a way that certain ethical rules were satisfied.

The term robotics was thereafter introduced by Asimov as the science devoted to the study of robots, based on the three fundamental laws. These were complemented by a zeroth law in 1985. Since the establishment of the robot laws, the word robot has attained an alternate meaning: an industrial product designed by engineers or specialized technicians.
1. A robot may not injure humanity or, through inaction, allow humanity to come to harm.
2. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
3. A robot has to obey the orders given to it by human beings except where such orders would conflict with the First Law.
4. A robot has to protect its own existence as long as such protection does not conflict with the First or Second Laws. These laws were later complemented by two more laws aimed at industrial robots, formulated by Stig Moberg of ABB Robotics. The additional laws also consider robot motion.
5. A robot has to follow the trajectory specified by its master, as long as it does not conflict with the first three laws.
6. A robot has to follow the velocity and acceleration specified by its master, as long as nothing stands in its way and it does not conflict with the other laws.

The Terms Robotics and Industrial Robots

The distinction lies somewhere in the sophistication of the programmability of the device: a numerically controlled (NC) milling machine is not an industrial robot. If a mechanical device can be programmed to perform a wide variety of applications, it is probably an industrial robot. The essential difference between an NC machine and an industrial robot is the versatility of the robot: it can be provided with tools of different types and has a large workspace compared to the volume of the robot itself. The numerically controlled machine is dedicated to a special task, although in a fairly flexible way, which gives a system with fixed and limited capabilities.

The study and control of industrial robots is not a new science, but rather a mixture of classical fields. From mechanical engineering, the machine is studied in dynamic and static situations. Spatial motions can be described by means of mathematics. Control theory provides tools for designing and evaluating algorithms to achieve the desired motion. Electrical engineering is helpful when designing interfaces and sensors for industrial robots. Last but not least, computer science provides the means for programming the device to perform a desired task.

The term robotics has been defined as the science studying "the intelligent connection of perception to action". Industrial robotics is a subject concerning robot design, control, and applications in industry, and its products are now reaching the level of a mature technology. The status of robotics technology is reflected in the definition of a robot originating from the Robot Institute of America.

Most organizations now agree more or less with the definition of industrial robots formulated by the International Organization for Standardization (ISO):
• A manipulating industrial robot is an automatically controlled, reprogrammable, multi-purpose, manipulative machine with several degrees of freedom, which may be either fixed in place or mobile for use in industrial automation applications.
• A manipulator is a machine whose mechanism usually consists of a series of segments, jointed or sliding relative to one another, for the purpose of grasping and/or moving objects (pieces or tools), usually in several degrees of freedom.

Scenarios of Future Industrial Robots

Long-term visions of industrial robots of the future have been depicted in five scenarios, given in the following as examples:
1. Robot assistants as a versatile tool at the workplace
Scenario: A robot assistant is used as a versatile tool by the worker at a manual workplace. The applications could be manifold: arc welding, machining, woodworking, aircraft assembly etc.
Operation: The compact robot arm is towed manually to the workplace. On a wireless portable interface the worker selects a process (e.g. "welding"). The worker indicates the process by guiding the robot along contours or over surfaces while giving additional instructions by voice. Process parameters are set, and the sensor-supported motion results in the machined/welded contours. The worker may override the robot motion as required. Subsequent tasks can be performed automatically without supervision by the worker.

2. Robot assistants in crafts
Scenario: A robot as a versatile assistant in crafts.
Operation: The robot is mobile, is equipped with two arms, and is instructed by gesture, voice, and graphics. A craftsman (e.g. a locksmith) has to weld a steel structure (a stairway). The robot fettles the seams automatically with a brush.

3. Robots for empowering humans
Scenario: A robot for human augmentation (force or precision augmentation) in assembly.
Operation: In bus gearbox assembly, the heavy central shaft is grasped by the robot, which balances it softly so the worker can insert it precisely into the housing. The robot learns and optimizes the constrained motion in successive steps "on the job".

4. Multi-robot cooperation
Scenario: Many robots cooperate to execute a manufacturing task within a minimal workcell.
Operation: Robot 1 fetches a panel that has to be mounted simultaneously with the cover, and robot 2 tells robot 1 where to put the panel. Finally, robot 3 fetches an automatic screwdriver to mount the cover and the panel together on the washing-machine framework. If this does not succeed, robot 3 requests help from a worker, who changes, for example, the orientation of the screwdriver by direct interaction with the tool, and the assembly can proceed.

Main Obstacles to Progress in the Long-Term Vision of Future Robots

Realizing the described long-term vision is subject to overcoming the following barriers:
Man-machine interaction: Today, manufacturing tasks cannot be expressed in intuitive end-user terms as would typically be required for instruction by voice. Multimodal dialogues based on voice, graphics, and text should be initiated to quickly resolve insufficient or ambiguous information.
Mechanical limitations: Robot mechanics account for some 80% of the system price. For some parts, particularly gears, there exists a painful dependency on Japanese suppliers. New drive lines should be developed in which high-density motors and compliant compact gears (e.g. based on mechanical wave generators) with integrated torque and position sensors are used in order to decrease this dependency. Advanced control of sensor-based drive systems will make it possible to decrease cost and weight without reducing robot performance. Furthermore, a cooperative space-sharing robot needs harmless motions. This can be achieved by intrinsically safe designs or suitable sensor equipment.
Sensors: Full 3D recognition is required for workpiece and worker localization in less structured environments. Inexpensive sensors do not exist yet, but high-volume surveillance and entertainment applications will make this technology affordable.
Robot automation life-cycle costs: The productivity gains of robots are probably less pronounced than the quality gains, especially for investments in cooperating robots, which in some cases imposes severe cost limits on such systems if they are to achieve cost-effectiveness.
Socio-economic factors: The complexity of advanced mechatronic systems, together with a strongly conservative attitude in industry, may slow down investments in novel robot systems, especially in areas with little or no automation. The introduction of robotics into industries characterized by low status and bad working conditions can help make them more attractive to young employees.
Standards: First standards for cooperative robots and intelligent assist devices (e.g. "smart balancers") are about to emerge. New standards for robot assistants allowing physical interaction at normal working speeds will be required. Setting new standards needs committed industries to support the high cost and time involved.

Robotics in the Automotive Industry

Currently, robots are used mainly in the automotive industries, including their supply chains, which account for more than 60% of total robot sales. Typical prime targets for robot automation in car manufacturing are welding; assembly of body, motor, and gearbox; and painting and coating. The automotive industries are the key driver of robotics applications in terms of cost, technology, and services, and the robotics industry is subject to fierce global competition. Robot systems increasingly form a central portion of investments in automotive manufacturing and may reach 60% of the total manufacturing equipment investment in the year 2010 (for car makers and first-tier suppliers). In general, it is estimated that the investment cost of a robot automation installation in these industries amounts to about four times the unit price of a robot.

The degree of automation in the automotive industries is expected to increase in the future, as robots push the limits of flexibility regarding faster change-over times between different product types (through rapid program generation schemes), capabilities to deal with tolerances (through extensive use of sensors), and costs (by reducing customized work-cell installations and reusing manufacturing equipment). These challenges lead to the following current RTD trends in robotics:
• Expensive fixture equipment and single-purpose transports are replaced by standard robots, thus offering continuous production flows. Remaining fixtures may be adjusted by the robot itself.
• Cooperative robots in a work-cell coordinate fixing, handling and process tasks so that robots may be adjusted easily to varying work piece geometries, process parameters and task sequences. Short change-over times are achieved by automated program generation which takes into account necessary synchronization, collision avoidance and robot-to-robot calibration.
• Increased use of sensor systems and measuring devices mounted on robots, together with RFID-tagged parts carrying individual information, contributes to better handling of tolerances in automated processes.
• Human-robot cooperation bridges the gap between fully manual and fully automated task execution. Robots and people will share cognitive, sensing, and physical capabilities.

Robotic Applications in Industry

There are various new fields of application in which robot technology is not widespread today, due to its lack of flexibility and the high costs involved when dealing with varying lot sizes and variable product geometries. New robotic applications will soon emerge from new industries and from SMEs, which cannot use today's inflexible robot technology or which still require a lot of manual operations under strenuous, unhealthy, and hazardous conditions. Relieving people from bad working conditions (e.g., operating hazardous machines, handling poisonous or heavy material, working in dangerous or unpleasant environments) leads to many new opportunities for applying robotics technology. Examples of bad working conditions can be found in foundries and the metal-working industry.

Besides the need to handle objects at very high temperatures, work under unhealthy conditions takes place in manual fettling operations, which contribute about 40% of the total production cost in a foundry. Manual fettling means strong vibrations, heavy lifts, metal dust, and high noise levels, resulting in annual hospitalization costs of more than €150m in Europe. Bad working conditions can also be found in slaughterhouses, fisheries, and cold stores, where, besides low temperatures, the handling of sharp tools makes the work unhealthy and hazardous.
Assembly and disassembly (vehicles, airplanes, refrigerators, washing machines, consumer goods): In some cases fully automatic task operation by robots is impossible. Cooperative robots should support the worker in terms of force parallelization, augmentation, or sharing of tasks.
Aerospace industry: The aerospace industry presently uses customized NC machines for drilling, machining, assembly, and quality-testing operations on structural parts. In assembly and quality testing, the automation level is still low due to the variability of configurations and the insufficient precision of available robots. Identified requirements for future robots call for higher accuracy, adaptivity to workpiece tolerances, flexibility to cover different product ranges, and safe cooperation with operators.
SME manufacturing: Fettling, cutting, deflashing, deburring, drilling, milling, grinding and polishing of products made of metal, glass, ceramics, plastics, rubber and wood.
Food and consumer goods industries: Processing, filling, assembly, handling, and packaging of food and consumer goods.
Construction: Drilling, cutting, grinding, and welding of large beams and other construction elements for buildings, bridges, ships, trains, power stations, windmills, etc.

Grasper Control Language of the BarrettHand

The BarrettHand contains, inside its compact palm, a central supervisory microprocessor that coordinates four dedicated motion-control microprocessors and controls I/O via the RS232 line. The control electronics are built on a parallel 70-pin backplane bus. Associated with each motion-control microprocessor are the motor-commutation electronics, sensor electronics, and motor-power current-amplifier electronics for that finger or the spread action. The supervisory microprocessor directs I/O communication via a high-speed, industry-standard RS232 serial link to the work-cell PC or controller. RS232 offers compatibility with any robot controller while limiting the umbilical cable diameter for all communications and power to only 8mm. The published grasper control language (GCL) optimizes communication speed, exploiting the difference between bandwidth and time-of-flight latency for the special case of graspers. It is important to recognize that graspers usually remain inactive during most of the work-cell cycle, while the arm is performing its gross motions, and are only active for short bursts at the ends of the arm's trajectories.

While the robotic arm needs high control bandwidth during the entire cycle, the grasper has plenty of time to receive a large amount of setup information as it approaches its target. Then, the work-cell controller issues a "trigger" command with precise timing, such as the ASCII character "C" for close, which begins grasp execution within a couple of milliseconds.

The grasper can accept commands from, and communicate with, any robot work-cell controller, PC, UNIX box, Mac, or even a PalmPilot via standard ASCII RS232-C serial communication, the common denominator of communications protocols. Though robust, RS232 has low bandwidth compared to FireWire or USB, but its simplicity leads to small latencies for short bursts of data. By streamlining the GCL, the time of flight to acknowledge and execute a command (from the work-cell controller to the grasper and back again) has been reduced to the order of milliseconds.
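A hedged sketch of this setup-then-trigger pattern on a POSIX host follows; the "C" (close) trigger comes from the text, while the device path, baud rate, and setup string are assumptions (consult the GCL manual for the actual command vocabulary):

#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstring>

// Sends ASCII GCL-style commands over an RS232 serial port.
int main()
{
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
    if (fd < 0) return 1;

    termios tio;
    tcgetattr(fd, &tio);
    cfsetispeed(&tio, B9600);          // conservative example baud rate
    cfsetospeed(&tio, B9600);
    tio.c_cflag |= (CLOCAL | CREAD);
    tcsetattr(fd, TCSANOW, &tio);

    // Setup phase: the grasper idles while the arm moves, so there is ample
    // time to stream configuration parameters before the trigger.
    const char *setup = "123FSET MS 50\r";   // hypothetical per-finger setup command
    write(fd, setup, strlen(setup));

    // Trigger phase: one short command starts the grasp within milliseconds.
    const char *trigger = "C\r";
    write(fd, trigger, strlen(trigger));

    close(fd);
    return 0;
}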

Electronic and Mechanical Optimization of a Programmable Robot

Intelligent, dexterous control is key to the success of any programmable robot, whether it is an arm, an automatically guided vehicle, or a dexterous hand. While robotic intelligence is generally associated with processor-driven motor control, many biological systems, including human hands, integrate some degree of specialized reflex control independent of explicit motor-control signals from the brain. In fact, the BarrettHand combines programmable microprocessor intelligence and reflexive mechanical intelligence for a high degree of practical dexterity in real-world applications.

By the strict definition, neither the BarrettHand nor your hand is dexterous; rather, their superior versatility challenges the definition itself. If the BarrettHand followed the strict definition of dexterity, it would require between eight and 16 motors, making it far too complex, bulky, and unreliable for any practical application outside the mathematical analysis of hand dexterity. But by exploiting four intelligent joint-coupling mechanisms, the almost-dexterous BarrettHand needs only four servomotors. In some cases reflex control is even better than deliberate control. Two examples based on your own body illustrate this point. Suppose your hand accidentally touches a dangerously hot surface. It starts retracting instantly, relying on local reflex to override any ongoing cognitive commands. Your hand might burn if it waited for the sensations of pain to travel from your hand to your brain via relatively slow nerve fibers, and then for your brain, through the same slow nerve fibers, to command your arm, wrist, and finger muscles to retract.

As the second example, try to move the outer joint of your index finger without moving the adjacent joint on the same finger. You cannot move these joints independently, because the design of your hand is optimized for grasping: your tendons and muscles are as lightweight and streamlined as possible without forfeiting functionality. The BarrettHand design recognizes that intelligent control of functional dexterity requires the integration of microprocessor and mechanical intelligence.

The Gripper Legacy in Robotics

Nowadays robotic part handling and assembly are done with grippers. If surface conditions allow, electromagnets and vacuum suction can also be used, for example in handling automobile windshields and body panels. As part sizes start to exceed the order of 100 g, a gripper's jaws are custom shaped to ensure a secure hold. As the durable mainstay of handling and assembly, these tools have changed little since the beginning of robotics three decades ago. Grippers act as simple pincers, with two or three unarticulated fingers called "jaws". Well-organized catalogs are available from manufacturers that guide the integrator or customer in matching the various gripper components (except, naturally, for the custom jaw shape) to the task and part parameters.

Payload sizes range from grams for tiny pneumatic grippers to 100+ kilograms for massive hydraulic grippers. Typically the power source is hydraulic or pneumatic, with simple on/off valve control switching between full-open and full-close states. The jaws usually move 1 cm from full-open to full-close. The jaw part that contacts the target part is made of removable and machinably soft steel or aluminum, called a "soft jaw". According to the circumstances at hand, an expert tool designer determines the custom shapes to be machined into the rectangular soft-jaw pieces. Once machined to shape, the soft-jaw sets are attached to their respective gripper bodies and tested. This process can take any number of iterations and adjustments until the system works properly. Tool designers redo the entire process each time a new shape is introduced. As consumers demand more variety in product choices and ever more frequent product introductions, the need for flexible automation has never been greater. However, rather than making grippers more versatile, the robotics industry over the past few years has followed the example of the automatic tool-exchange technique used to swap CNC-mill cutting tools.

Grasping with Barrett's Grasper

This article introduces a new approach to material handling, part sorting, and component assembly called "grasping", in which a single reconfigurable grasper with embedded intelligence replaces an entire bank of unique, fixed-shape grippers and tool changers. To appreciate the motivations that guided the design of Barrett's grasper, the enormous potential for robotics in the future, and the dead-end legacy of gripper solutions, we have to explore what is wrong with robotics today.

For the benefits of a robotic solution to be realized, programmable flexibility is needed along the entire length of the robot, from its base all the way to the target workpiece. A robot arm enables programmable flexibility from the base only up to the tool plate, a few centimeters short of the workpiece target. But these last few centimeters of a robot have to adapt to the complexities of securing a new object on each robot cycle, capabilities where embedded intelligence and software excel. Like the weakest link in a serial chain, an inflexible gripper limits the productivity of the entire robot work cell.

Grippers have individually customized but fixed jaw shapes. The trial-and-error customization process is design intensive, generally drives cost and schedule, and is difficult to scope in advance. Generally, each anticipated variation in orientation, shape, and robot approach angle requires another custom-but-fixed gripper, a place to store the additional gripper, and a mechanism to exchange grippers. An incremental improvement or unanticipated variation simply cannot be accommodated.

For tasks requiring a high degree of flexibility, such as handling variably shaped payloads presented in multiple orientations, a grasper is more secure, quicker to install, and more cost-effective than an entire bank of custom-machined grippers with tool changers and storage racks.
Just one or two spare graspers can serve as emergency backups for several work cells, whereas one or two spare grippers are required for each gripper variation, potentially dozens per work cell for uninterrupted operation. And it is catastrophic if both backups of a gripper fail in a gripper system, since it may be days before replacements can be identified, shipped, custom shaped from scratch, and physically installed to bring the affected line back into operation. Since graspers are physically identical, they are always available in unlimited quantity, with all customization provided instantly in software.

Algorithms for Self-Reconfiguring Robot Modules

Once the basic reconfiguration problem is solved, the next step is to investigate the use of reconfiguration in other algorithmic applications. One such class of algorithmic questions deals with resource utilization.

Heterogeneous systems allow specialized modules for communications, mobility, power, computation, or other resources. How these resources should best be distributed for various tasks is an interesting problem. For example, in a manipulation task it may be desirable to move a dedicated power module close to the task through reconfiguration. Another example is sensor deployment: sensor modules should be carried inside the volume of the robot during locomotion, and deployed to the surface for use. A related task would be to store wheel modules in the body of a legged configuration, and to deploy the wheels when wheeled locomotion is possible. The application-level question is how to best use this capability, assuming a solution to the problem of reconfiguration with uniquely identified modules. Specifically, the research issue is to determine a target configuration that optimizes placement of power, sensor, or other specialized modules to best suit the task.
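As a toy illustration of this placement question, the sketch below greedily assigns specialized modules to lattice cells near a task site so as to minimize total Manhattan distance. The lattice, module types, and cost model are all invented here for illustration; this is not any published reconfiguration planner.

    from itertools import product

    def manhattan(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))

    def place_specialized(robot_cells, specialized, task_site):
        """Greedily map each specialized module to a free robot cell,
        preferring cells close to the task site (toy cost model)."""
        free = sorted(robot_cells, key=lambda c: manhattan(c, task_site))
        placement = {}
        for module in specialized:           # e.g. "power", "camera"
            placement[module] = free.pop(0)  # closest remaining cell
        return placement

    # A 3x3 block of modules with a task site to the east.
    cells = list(product(range(3), range(3)))
    print(place_specialized(cells, ["power", "camera"], task_site=(10, 1)))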

Another application in which SR modules are used involves the problem of constructing rigid structures. A SR robot often requires structural rigidity, but it is difficult to construct connectors that have desirable connection and disconnection properties and can also withstand much torque. Power and weight available to a module are both severely limited, so connectors must use small, efficient actuators. The result is that current connectors have serious problems with rigidity. A line of Crystal modules, for example, can deform to a great degree.

Any algorithms we design should be implemented and simulated in software. The challenge for heterogeneous systems is to build simulators that represent the varieties of modules. In hardware, building a heterogeneous system by adding sensors or communication to a homogeneous system is a straightforward strategy. It would also be interesting to construct modules of different shapes. Demonstrating general reconfiguration in hardware remains a significant goal. Overall, the research goal here is to build a suitable software simulator to test our algorithms, and to perform hardware experiments where possible.

SCARA Robot Modeling and Trajectory Generation

FORWARD AND INVERSE KINEMATICS
For simple robotic structures such as the one used in this Lab, it is possible to find the inverse kinematics model by purely geometrical reasoning. That is what is implemented in the InverseKinematicsUsingGeometry.m function. Use the InverseKinematics.m function so that none of the robot joints leaves the robot workspace during the movement execution. Hint: use the possibility to choose which solution (Q1 (low elbow) or Q2 (high elbow)) to assign to the robot final position.
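The Lab's MATLAB functions are not reproduced here; as an illustration only, the following Python sketch shows the standard geometric inverse kinematics of a 2-DOF planar (SCARA-like) arm, returning the low-elbow and high-elbow solutions. The link lengths l1 and l2 are placeholders.

    from math import acos, atan2, cos, sin

    def inverse_kinematics(x, y, l1=1.0, l2=1.0):
        """Geometric IK of a 2-link planar arm.
        Returns the two solutions (low elbow: q2 > 0, high elbow: q2 < 0)."""
        c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
        if abs(c2) > 1.0:
            raise ValueError("target outside the workspace")
        def solution(q2):
            q1 = atan2(y, x) - atan2(l2 * sin(q2), l1 + l2 * cos(q2))
            return (q1, q2)
        q2 = acos(c2)            # elbow angle in [0, pi]
        return solution(q2), solution(-q2)

    Q1, Q2 = inverse_kinematics(1.2, 0.8)
    print(Q1, Q2)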
FORWARD AND INVERSE INSTANTANEOUS KINEMATICS
Knowing the Forward Instantaneous Kinematics Model (FIKM) of the SCARA robot given by the ForwardInstantaneousKinematics.m function, program the Inverse Instantaneous Kinematics Model (IIKM) in InverseInstantaneousKinematics.m. After interfacing your IIKM with the Simulink diagram, simulate a rectilinear displacement of the end-effector at constant speed along its x axis (use the provided interface to observe the result).
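For a 2-DOF planar arm the IIKM simply inverts the 2x2 Jacobian: q̇ = J(q)⁻¹ ẋ. The sketch below (Python rather than the Lab's MATLAB, with placeholder link lengths) illustrates the computation.

    from math import cos, sin

    def jacobian(q1, q2, l1=1.0, l2=1.0):
        """Jacobian of the planar 2-link arm: maps joint rates to (xdot, ydot)."""
        s1, c1 = sin(q1), cos(q1)
        s12, c12 = sin(q1 + q2), cos(q1 + q2)
        return [[-l1 * s1 - l2 * s12, -l2 * s12],
                [ l1 * c1 + l2 * c12,  l2 * c12]]

    def iikm(q1, q2, xdot, ydot, l1=1.0, l2=1.0):
        """Inverse instantaneous kinematics: joint velocities for a desired
        end-effector velocity, via explicit 2x2 matrix inversion."""
        (a, b), (c, d) = jacobian(q1, q2, l1, l2)
        det = a * d - b * c              # det(J) = l1*l2*sin(q2)
        qd1 = ( d * xdot - b * ydot) / det
        qd2 = (-c * xdot + a * ydot) / det
        return qd1, qd2

    # Constant-speed motion along x at the configuration q = (0.5, 0.8) rad.
    print(iikm(0.5, 0.8, xdot=0.1, ydot=0.0))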
It is imperative to manage the singular robot configurations correctly in order to prevent any erratic movement of the robot. What are the singular positions of the studied robot? Use this knowledge so that the robot avoids these singular configurations (a guard is sketched after the hints below). Hints:
• The only program to modify is the one where you defined the robot IIKM,
• For the singularities at the robot workspace limits, one can impose software stops on the evolution of the robot angles in order to stop just before the “completely extended arm” or “completely folded arm” configuration.
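For this arm the Jacobian determinant is l1·l2·sin(q2), so the singularities occur at q2 = 0 (arm completely extended) and q2 = ±180° (completely folded). A minimal guard, sketched below under the same placeholder assumptions as above, clamps q2 away from those values before the Jacobian is inverted; the margins are arbitrary choices.

    from math import pi

    Q2_MIN = 0.05          # rad: software stop just before the extended arm
    Q2_MAX = pi - 0.05     # rad: software stop just before the folded arm

    def clamp_q2(q2):
        """Keep the elbow angle away from the singular values 0 and pi."""
        sign = 1.0 if q2 >= 0.0 else -1.0
        return sign * min(max(abs(q2), Q2_MIN), Q2_MAX)

    print(clamp_q2(0.001), clamp_q2(3.14))   # both pushed off the singularity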

TRAJECTORY GENERATION (WORK SPACE AND JOINT SPACE)
Based on the SetPointTrajectory.m function, give the end-effector a circular trajectory (with radius r = 2 and center C = (0, 7.5)).
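A circular set-point is just a parametric sweep of an angle; a minimal Python illustration follows (the Lab's own SetPointTrajectory.m is not reproduced here, and the period is an arbitrary choice).

    from math import cos, sin, tau

    def circular_setpoint(t, period=10.0, r=2.0, cx=0.0, cy=7.5):
        """Position set-point on a circle of radius r centered at (cx, cy),
        completing one revolution every `period` seconds."""
        a = tau * t / period
        return cx + r * cos(a), cy + r * sin(a)

    print(circular_setpoint(2.5))   # a quarter of the way around the circle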
Based on the Order1Interpolation.m file, write a degree-5 (quintic) interpolation generator for the SCARA robot between two points of the joint space, qi = [100° 100°] and qf = [6° 60°].
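The classical quintic time-scaling s(τ) = 10τ³ − 15τ⁴ + 6τ⁵ gives zero velocity and acceleration at both ends; applied componentwise it interpolates between the two joint configurations. A Python sketch (again standing in for the MATLAB file):

    def quintic(qi, qf, t, T):
        """Joint-space quintic interpolation from qi to qf over duration T.
        s(tau) = 10 tau^3 - 15 tau^4 + 6 tau^5 has zero velocity and
        acceleration at tau = 0 and tau = 1."""
        tau = min(max(t / T, 0.0), 1.0)
        s = tau**3 * (10.0 - 15.0 * tau + 6.0 * tau * tau)
        return [a + (b - a) * s for a, b in zip(qi, qf)]

    qi, qf = [100.0, 100.0], [6.0, 60.0]     # degrees
    for t in (0.0, 1.0, 2.0):                # duration T = 2 s
        print(quintic(qi, qf, t, T=2.0))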
ROBOT ARM CONTROL
This part of the Lab introduces the dynamical model of the SCARA robot in order to address some problems linked to robot arm control. Below is the description of a set of programs that complement those given previously.
ForwardDynamicalModel.m
Defines the acceleration of the robot joints according to the torques applied by its actuators (a forward-dynamics sketch follows these file descriptions).

SimulinkLabLibrary.mdl
Contains Simulink blocks to be used directly in your Lab.

SimulinkRobotControlWithDyMo.mdl
Simulink model that uses a PID controller to control the robot in the work space.
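For reference, the forward dynamics of a 2-link arm solves q̈ = M(q)⁻¹(τ − C(q, q̇)q̇); since the SCARA's first two joints rotate in a horizontal plane, the gravity term vanishes. The Python sketch below uses invented placeholder parameters, not the Lab's actual model.

    from math import cos, sin

    # Placeholder dynamic parameters (not the Lab's real values).
    M1, M2 = 1.0, 0.8               # link masses (kg)
    L1, LC1, LC2 = 0.5, 0.25, 0.2   # link 1 length, centers of mass (m)
    I1, I2 = 0.02, 0.01             # link inertias about their COMs (kg m^2)

    def forward_dynamics(q, qd, torque):
        """Joint accelerations of a horizontal 2-link arm from applied torques."""
        _, q2 = q
        c2, s2 = cos(q2), sin(q2)
        # Mass matrix M(q)
        m11 = M1*LC1**2 + I1 + M2*(L1**2 + LC2**2 + 2*L1*LC2*c2) + I2
        m12 = M2*(LC2**2 + L1*LC2*c2) + I2
        m22 = M2*LC2**2 + I2
        # Coriolis/centrifugal terms C(q, qd) qd
        h = -M2 * L1 * LC2 * s2
        cor1 = h * qd[1] * (2*qd[0] + qd[1])
        cor2 = -h * qd[0]**2
        # Solve the 2x2 system M qdd = torque - C qd
        b1, b2 = torque[0] - cor1, torque[1] - cor2
        det = m11*m22 - m12*m12
        return ((m22*b1 - m12*b2) / det, (m11*b2 - m12*b1) / det)

    print(forward_dynamics(q=(0.3, 0.6), qd=(0.0, 0.0), torque=(1.0, 0.2)))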

Expected Contributions: Self-Reconfiguring Robots

Like homogeneous systems, heterogeneous SR systems promise versatility and usefulness superior to fixed-architecture robots through their ability to match structure to task. In addition, heterogeneous systems further this goal with their ability to match capability to task. The original vision of reconfigurable systems was inherently heterogeneous, and during the subsequent fifteen years researchers have accrued much knowledge of homogeneous systems. In this thesis, we propose to widen this understanding into the realm of heterogeneous systems. We plan to address fundamental algorithmic issues and demonstrate solutions in simulation, and in hardware where possible. The results of this work should shed light on the relative complexity of hardware versus software design in SR systems and lead to an algorithmic basis for heterogeneous self-reconfiguring robots.

We have proposed a framework for categorizing SR modules, and we have chosen a simple theoretical module on which to build reconfiguration algorithms. We will attempt to prove lower bounds for the basic problem and extend the results to systems with greater heterogeneity. We will also address other algorithmic issues enabled by previous reconfiguration solutions and by our previous work with non-actuated modules, path planning, goal recognition, and distributed locomotion.

Finally, we propose to construct a software simulator with which to demonstrate our algorithms. This simulator should be suitable for further use by other researchers in the area. We also hope to perform hardware experiments where possible.

The main expected contribution of the proposal is an algorithmic basis for heterogeneous SR systems. This contribution is supported by the following items:
• Framework for heterogeneous modules
• Reconfiguration in 2D and 3D with the Sliding Cube model, with arbitrary size ratios
• Reconfiguration with non-actuated modules
• Complexity analysis for reconfiguration
• Applications involving resource trade-offs and optimization
• Implementation in simulation
• Hardware experimentation

Reconfiguration for Robot Locomotion

Reconfiguration is generally discussed in terms of task-specific shape transformation, but it can also be used for locomotion. We have developed a distributed locomotion algorithm for unit-compressible robots using inchworm-like motion, and implemented this algorithm in hardware on the Crystal system. We also performed extensive experimentation; the algorithm ran for over 75 hours in total at the SIGGRAPH and AAAI conferences. The algorithm and experiments are described in this section.

Inchworm locomotion uses friction with the ground to move a group of unit-compressible modules forward. The algorithm is based on a set of rules that test the module’s relative geometry and generate expansions and contractions as well as messages that modules send to their neighbors. When a module receives a message from a neighbor indicating a change of state, it tests the neighborhood against all the rules, and if any rule applies, executes the commands associated with the rule. The algorithm is designed to mimic inchworm-like locomotion: compressions are created and propagated from the back of the group to the front, producing overall motion.

Each module's behavior is defined by the message types it can send and receive, and by the procedures that are called from the message handlers (including the rules of the algorithm). The “tail” module contracts first, which signals its forward neighbor to contract. Each module expands after contracting, so that the contraction propagates through the robot. When the contraction has reached the front of the group, the group will have moved half a unit forward (in theory; empirical results show nearly optimal distance-per-step for chains of five or more units).
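The toy simulation below illustrates the net effect of one such wave for a straight chain: a module's rear edge is pulled forward when it contracts, and its front edge is pushed forward when it re-expands, so the whole chain advances by the compression amount, half a unit for 2:1 compression. This is an illustration of the principle only, not the Crystal's actual distributed rule set, which runs as message handlers on each module.

    def inchworm_step(n=5, expanded=2.0, contracted=1.0):
        """Net effect of one contraction wave on a chain of n unit-compressible
        modules. boundaries[i] is the rear edge of module i; boundaries[n] is
        the front of the head. Contraction pulls a module's rear edge forward,
        expansion pushes its front edge forward; friction anchors the rest."""
        delta = expanded - contracted
        boundaries = [i * expanded for i in range(n + 1)]
        boundaries[0] += delta              # tail module contracts first
        for i in range(1, n):               # wave travels toward the head:
            boundaries[i] += delta          # module i contracts, i-1 expands
        boundaries[n] += delta              # head module re-expands to finish
        return boundaries

    print(inchworm_step())
    # Every edge has advanced by (expanded - contracted): half a unit per step.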

Depending on context, once the leader of the group has contracted and expanded, it can send a message back to the tail to initiate another step. We implemented this algorithm and performed experiments with various shapes; the experiments successfully demonstrated reliable locomotion in the configurations we tested. See Butler, Fitch and Rus for further discussion. This locomotion gait is significant first in that it exemplifies the style of distributed, scalable algorithms we wish to develop and implement in the proposed work.

Applying 3D-MBP to Self-Repair of SR Robot

The problem of self-repair is to restore functionality to a SR robot, without outside intervention, in response to module failure. Our strategy is to detect the failure, eject the failed module from the system, and replace it with a spare built into the robot’s structure. Path planning in this problem is based on finding rectilinear paths that minimize the number of turns. Our 3D-MBP algorithm addresses this issue, and in this section we describe how 3D-MBP is applied to the self-repair application. This technique is also a type of reconfiguration algorithm for systems with limited heterogeneity.

The Crystal’s motion planning based on virtual module relocation reduces to finding a rectilinear path through the robot structure. Each segment of the path can be executed in constant time, assuming no failed modules, so an efficient motion plan requires a rectilinear path with a minimum number of bends. Replacing a failed module (filling a “hole” in the structure) can be solved using virtual module relocation. To eject a failed module, this planning technique cannot be used directly, since here a particular module must actually be pushed (or pulled) to a position on the surface of the robot. However, the pushing gaits that move the failed module also exhibit the property that turns are more expensive than straight-line motion. Finding a minimum-bend path is therefore useful in both steps.

An MBP problem is constructed by modeling the source and destination points in module coordinates, and the holes and concavities as obstacles. This motion planning technique leads to a 2D self-repair solution, and easily extends to 3D given an efficient shortest-path algorithm. A 3D rectilinear path can be decomposed into a sequence of 2D turns (not all of which are in the same plane). Therefore, given a 3D rectilinear path, a motion plan can be constructed by iterating the appropriate module gait over each path segment. Note that pushing gaits require a minimum amount of supporting structure, but we can build this into the path-planning problem by growing the obstacles (holes in the structure and boundaries) by the required amount. This ensures that any path returned by the algorithm is feasible.
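Minimum-bend rectilinear paths can be found with a breadth-first search over (cell, heading) states in which straight moves cost 0 bends and turns cost 1 (a 0-1 BFS). The 2D grid, obstacle encoding, and function names below are illustrative stand-ins, not the thesis's 3D-MBP algorithm itself.

    from collections import deque

    def min_bend_path(grid, start, goal):
        """0-1 BFS over (cell, heading) states: straight steps cost 0 bends,
        turns cost 1. grid[r][c] == 1 marks an obstacle (hole or boundary)."""
        rows, cols = len(grid), len(grid[0])
        dirs = [(0, 1), (1, 0), (0, -1), (-1, 0)]
        bends = {(start, d): 0 for d in range(4)}   # initial heading is free
        dq = deque(bends)
        while dq:
            state = dq.popleft()
            (r, c), d = state
            if (r, c) == goal:
                return bends[state]
            for nd in range(4):
                nr, nc = r + dirs[nd][0], c + dirs[nd][1]
                if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr][nc]:
                    continue
                cost = bends[state] + (0 if nd == d else 1)
                nxt = ((nr, nc), nd)
                if cost < bends.get(nxt, float("inf")):
                    bends[nxt] = cost
                    if nd == d:
                        dq.appendleft(nxt)   # straight move: front of deque
                    else:
                        dq.append(nxt)       # turn: back of deque
        return None

    grid = [[0, 0, 0, 0],
            [0, 1, 1, 0],
            [0, 0, 0, 0]]
    print(min_bend_path(grid, start=(0, 0), goal=(2, 3)))   # -> 1 bend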

Functions and Simulink Files of the SCARA Robot

The functions (m-files .m or Simulink files .mdl) are described below:
Main.m
Opens a user interface to simulate the robotic system with or without using Simulink blocks. A graphical representation of the 2-DOF SCARA robot, as well as the possibility to animate it, is accessible via this interface. The interface also captures mouse events: in the setting of your Lab, the mouse allows you to give a position set-point (x, y) to be reached by the end-effector.
ForwardKinematics.m
Defines the position (xT, yT) of the SCARA end-effector according to its joint coordinates.
InverseKinematics.m
Defines the joint coordinates of the SCARA according to the position of its end-effector. The function output is [Q1, Q2, err], with Q1 the solution when θ2 > 0 (low elbow) and Q2 the second solution. To satisfy some constraints linked to the robot workspace, one imposes that:
• The position of the end-effector belongs to the work space
• The values of θ1 and θ2 remain within the robot joint domain (i.e. 0° ≤ θ1 ≤ 180° and −180° < θ2 < 180°).
Thus the InverseKinematics.m function gives err(1) = 0 if the first solution Q1 is feasible and err(1) = 1 if it is not; in the same way, err(2) indicates the feasibility of the second solution.
InverseKinematicsUsingGeometry.m
Defines, using simple geometrical construction, the joint coordinates of the SCARA according to the position of its end-effector.
SetPointTrajectory.m
Gives the set-point to be followed by the robot end-effector in the work space domain.
GUI_Management.m
Manages all events (mouse clicks, button presses, etc.) of the GUI (Graphical User Interface) window.
SetDispaly.m
Updates the graphical representation of the SCARA robot as well as the display of the joint information and the position of the end-effector.
SimulinkRobotControlWithoutDyMo.mdl
Simulink model that, when interfaced with the programs described above, permits controlling the movement of the SCARA, for example to follow a trajectory.

Reconfiguration Planning for Self-Reconfiguring Robotic Systems

The task of transforming a modular system from one configuration into another is called the reconfiguration planning problem. Solving this problem is fundamental to any SR system. In some approaches explicit start and goal configurations are given, while in others the goal shape is defined by desired properties. Centralized algorithms require global system knowledge and compute reconfiguration plans directly, whereas decentralized algorithms compute solutions in a distributed fashion without the use of a central controller.

Reconfiguration algorithms can be designed for classes of modules, or for specific robots. Often a centralized solution is more obvious and is developed first, followed by a distributed version, although not always. Not all decentralized algorithms are guaranteed to converge to a solution, or to be correct for arbitrary goal shapes.

Reconfiguration of CEBOT was planned by a central control cell known as a master. Master cells were later intended to be dynamically chosen, blurring the distinction between centralization and decentralization. Later CEBOT control is hierarchical (behavior-based). A common technique used in reconfiguration algorithms for lattice-based systems is to build a graph representation of the robot configuration, and then to use standard graph techniques such as search to compute motion plans. Planning for the Molecule robot developed by the Dartmouth group is one example. Another example from the Dartmouth group is planning for unit-compressible systems such as the Crystal. This planner, named MeltGrow, uses the concept of a metamodule, where a group of modules is treated as a single unit with additional motion capabilities. The Crystal robot implements convex transitions using metamodules called Grains. Graph-based algorithms are also used by the MTRAN planner to compute individual module trajectories.
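As a toy illustration of this graph-based approach (not any of the planners named above), the sketch below represents a 2D lattice configuration as a set of cells and breadth-first-searches the configuration graph, where each edge slides one module to an adjacent free cell while keeping the robot connected. Real systems need richer motion primitives such as convex transitions, and plain BFS is only practical for tiny examples.

    from collections import deque

    def connected(cells):
        """True if the set of lattice cells is 4-connected."""
        start = next(iter(cells))
        seen, stack = {start}, [start]
        while stack:
            x, y = stack.pop()
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                n = (x + dx, y + dy)
                if n in cells and n not in seen:
                    seen.add(n)
                    stack.append(n)
        return len(seen) == len(cells)

    def neighbors(config):
        """Successors: slide one module to an adjacent free cell while the
        configuration stays connected (a much-simplified motion model)."""
        out = []
        for cell in config:
            rest = config - {cell}
            if not connected(rest):
                continue                 # removing this module splits the robot
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                dest = (cell[0] + dx, cell[1] + dy)
                if dest not in config and connected(rest | {dest}):
                    out.append(frozenset(rest | {dest}))
        return out

    def reconfigure(start, goal):
        """Breadth-first search over the configuration graph; returns the
        number of module moves from start to goal."""
        start, goal = frozenset(start), frozenset(goal)
        parent, frontier = {start: None}, deque([start])
        while frontier:
            c = frontier.popleft()
            if c == goal:
                steps = 0
                while parent[c] is not None:
                    c, steps = parent[c], steps + 1
                return steps
            for n in neighbors(c):
                if n not in parent:
                    parent[n] = c
                    frontier.append(n)
        return None

    # Flip an L-tromino into its mirrored L: two sliding moves.
    print(reconfigure({(0, 0), (0, 1), (1, 0)}, {(0, 1), (1, 0), (1, 1)}))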

Centralized planners can also store pre-computed data structures such as gait-control tables. Once a gait is selected by the central controller, it is executed by local controllers on the individual modules. This type of algorithm is used by Polypod. The division between central and local controllers is also used by RMMS and I-Cubes.

Chain-based Robots

In chain-based systems, modules aggregate as connected 1D strings of units. This class of robots easily implements rolling or undulating motions, as in snake robots or legged robots. However, control is much more difficult for chain-based systems than for lattice-based systems because of the continuous nature of the actuation: modules can move to any arbitrary position, as opposed to a fixed number of neighbor positions in a lattice. Our previous work has not considered chain-based systems, but it is important to understand their characteristics in the interest of developing more generalized algorithms.

Polypod was the first prominent chain-based system, proposed by Yim in 1993. Polypod is made up of Segments, which are actuated 2-DOF 10-bar linkages, and Nodes, which are rigid cubes housing batteries. Multiple gaits for locomotion, including rolling, legged, and even Moonwalk gaits, were demonstrated with Polypod. Polybot succeeds Polypod, sharing the same bipartite structure. Segments in Polybot abandon the 10-bar linkage in favor of a 1-DOF rotational actuator. The latest generation of Polybot prototypes has on-board processing and CANbus (controller area network) hardware for communication. A system that uses a similar actuation design is CONRO (CONfigurable RObot). The CONRO module has two rotational degrees of freedom, one for pitch and one for yaw, and was designed with particular size and weight considerations. Considerable attention has been paid to the connection mechanism, which is a peg-in-hole connector with an SMA (shape-memory alloy) latching mechanism that can be disconnected from either face. Computation is on-board each module, so unlike Polypod, CONRO has only one module type. Power can be provided externally or, on later prototypes, via batteries. Examples of manually configured shapes are the snake and the hexapod; the current CONRO system is designed for self-reconfiguration.

The DRAGON is a robot snake with torsion-free (constant-velocity) joints. A sophisticated connector has been developed for the DRAGON, designed for strength and tolerance for docking.

Lattice-based Robot Hardware

In lattice-based systems, modules are constrained to occupy positions in a virtual grid, or lattice. One of the simplest module shapes in a 2D lattice-based system is a square, but more complex polygons such as a hexagon (and a rhombic dodecahedron in 3D) have also been proposed. Because of the discrete, regular nature of their structure, developing algorithms for lattice-based systems is often easier than for other systems. The grid constraint makes implementing certain rolling motions, such as the tank tread, more challenging, since module attachment and detachment are required. We would like to develop algorithms implementable by most, or all, lattice-based systems, so a complete review of their properties is essential.

One of the first lattice-based SR robots planned and constructed in hardware is the Fracta robot. The 2D Fractum modules link to each other using electromagnets. Communication is achieved through infrared devices embedded in the sides of the units, allowing one fractum to communicate with its neighbors. Computation is also onboard: each fractum contains an 8-bit microprocessor. Power, however, is provided either through tethers or from electrical contacts with the base plane.

This system was designed for self-assembly, and can form simple symmetric shapes such as a triangle, as well as arbitrary shapes. Other lattice-based robots include a smaller 2D system, and a 3D system. Another early SR robot is the Metamorphic Robot. The basic unit of this robot is a six-bar linkage forming a hexagon. The kinematics of this shape was investigated when the design was proposed, and hardware prototypes were constructed later. A unique characteristic of this system is that it can directly implement a convex transition; a given module can move around its neighbor with no supporting structure.

The hexagon deforms and translates in a flowing motion. A square shape with this same property was also proposed. This motion primitive is important since it is required by many general reconfiguration algorithms, but many systems can only implement it using a group of basic units working together.

Pioneering Research in Hardware Design

Building reconfiguring robots in hardware involves designing and constructing the basic modular units that combine to form the robot itself. Such modules differ from the wheels, arms, and grippers of fixed-architecture robots in that they are functional only as a group, as opposed to individually. Because we are interested in developing general algorithms for classes of robots instead of particular systems, familiarity with the entire spectrum of existing systems is valuable. Current systems can be divided into classes based on a number of module properties. Systems composed of a single module type are known as homogeneous systems, and those with multiple module types are called heterogeneous systems.

Modules can connect to each other in various ways, and the combination of connection and actuation mechanisms determines the possible positions a module can occupy in physical space relative to neighbor modules. This gives rise to the major division within SR systems: lattice-based versus chain-based systems. In lattice-based systems, modules move among discrete positions, as if embedded in a lattice. Chain-based systems, however, attach together using hinge-like joints and permit snake-type configurations that connect to form shapes such as legged walkers and tank treads. Another class of modular systems cannot self-reconfigure, but can reconfigure with outside intervention; this class is called manually reconfiguring systems.

Cell Structured Robot (CEBOT) was the first proposed self-reconfiguring robot, conceived as an implementation of a Dynamically Reconfigurable Robotic System (DRRS). The DRRS definition parallels our current conception of self-reconfiguring robots: the system is made up of robotic modules (cells) which can attach to and detach from each other autonomously to optimize their structure for a given task. The idea is directly inspired by biological concepts, and this is reflected in the chosen terminology. It is interesting that this proposed SR robot is heterogeneous: cells have a specialized mechanical function and fall into one of three “levels”. Level one cells are joints (bending, rotation, sliding) or mobile cells (wheels or legs). Linkage cells belong to Level two, and Level three contains end-effectors such as special tools. Communication and computation are assumed for all cells.

CEBOT is the physical instantiation of DRRS. Various versions range from reconfigurable modules to “Mark-V,” which more closely resembles a mobile robot team.

Heterogeneous Self-Reconfiguring Robotics

Self-reconfiguring (SR) robots are robots which can change shape to match the task at hand. These robots comprise many discrete modules, often all identical, with simple functionality such as connecting to neighbors, limited actuation, computation, communication and power. Orchestrating the behavior of the individual modules allows the robot to approximate, and reconfigure between, arbitrary shapes.

This shape-changing ability allows SR robots to respond to unpredictable environments better than fixed-architecture robots. Common examples of reconfigurability in action include transforming between snake shapes for moving through holes and legged locomotion for traversing rough terrain, and using reconfiguration itself for locomotion. SR robots also show promise for a high degree of fault tolerance, since modules are generally interchangeable: if one module fails, the robot can self-repair by replacing the broken unit with a spare stored in its structure. When all modules are the same, the system is known as homogeneous.

This design uniformity promotes fault tolerance and versatility. However, a homogeneous system has limitations: all resources that may be required must be built into the basic module. We would like to relax the assumption that all modules are identical and investigate heterogeneous systems, where several classes of modules work together in a single robot. Heterogeneous systems can retain the advantages of their homogeneous counterparts while offering increased capabilities. The benefit would be a robot that can match not only structure to task by changing physical configuration, but also capability to task by using specialized components.

Consider a future application in which exploration tasks are carried out by SR robots. When necessary, the robot reconfigures into a legged walker to move across rough terrain or rubble, or transforms into a snake shape for moving through small holes. The robot can take advantage of smooth terrain as well by deploying a special module type containing wheels for fast, efficient locomotion. A variety of sensors are onboard, contained as modules within the structure of the robot.

Main Research Questions for Heterogeneous Self-Reconfiguring Robots

Designing heterogeneous SR systems involves significant challenges. A fundamental issue is the degree to which modules differ from each other. There are many possible dimensions of heterogeneity, such as size and shape differences, various sensor payloads, or different actuation capabilities.

These differences all impact the main algorithmic problem, which is how to reconfigure when all units are not identical. Present reconfiguration algorithms are based on homogeneity. Heterogeneous reconfiguration planning is similar to the Warehouse problem, which is PSPACE-hard in the general case. Beyond the reconfiguration planning problem itself, many other challenges remain in developing applications that capitalize on module specialization. In response to these challenges, we would like to develop an algorithmic basis for heterogeneous self-reconfiguring robots, and to develop software simulations that demonstrate our solutions, along with hardware experiments where possible. Below are the four main research questions:
1. Framework for heterogeneity. There are many possible differences between SR modules. In order to reason about heterogeneous systems, a categorization scheme is required that models the various dimensions of heterogeneity. The benefit of such a framework is that algorithms can be developed for classes of systems instead of specific robots. We will identify some primary axes of heterogeneity and build this framework.
2. Reconfiguration algorithms. Reconfiguration planning is the main algorithmic problem in SR systems. Because homogeneous reconfiguration algorithms are insensitive to module differences, we need to develop a new class of reconfiguration algorithms that are distributed, scalable, and take into account different types of resources, or tradeoffs between resources.
3. Lower bounds for reconfiguration. We propose to determine lower bounds for the complexity of reconfiguration problems under various assumptions about heterogeneity as developed in our framework defined above.
4. Applications. While the solution to the heterogeneous reconfiguration problem is significant from a theoretical perspective, we are also interested in developing example applications in simulation and in the physical world.

Virtual Instructor Intervention in LEGO Robotics

A virtual instructor (VI) is an autonomous entity whose main objective is to deliver personalized instruction to human beings and to improve human learning performance by applying empirically researched instructional (i.e., pedagogical, andragogical) techniques based on how humans learn and behave. A VI may be embodied (e.g., graphical, holographic, robotic), non-embodied (i.e., without an identified form), verbal, non-verbal, ubiquitous, and accessible from mixed reality environments, and may serve as a continuous assistant that continually improves human learning performance across cultural and socio-economic lines. In short, a virtual instructor provides a personalized human learning experience by applying empirically evaluated and tested instructional techniques.

For robotics instruction, a virtual instructor is distributed in a mixed reality environment (e.g., virtual or augmented reality) with capabilities to provide personalized instruction on conceptual and psychomotor tasks. For learning psychomotor tasks, the learner interfaces with the virtual instructor while wearing an augmented reality head-mounted display (HMD).

The HMD is equipped with sensors, including but not limited to a camera, which is used to recognize and select parts. Depending on the step along a task, this feature helps the learner identify which part to grab for completing, for example, a robot assembly task. For learning conceptual tasks, the learner uses the traditional keyboard and mouse as well as a wearable headset (speaker and microphone) to interface with a virtual reality environment in order to learn basic concepts from three-dimensional simulations of robotic equipment, torque, assembly processes, etc.

In both types of mixed reality environments, the learner communicates with the virtual instructor using voice commands and receives both computer-synthesized voice and graphic instruction. The VI instructs by following a workflow guiding the learner through procedural steps towards completing a task. Throughout the entire human-VI interaction, the VI subsystem continuously updates the learner profile and measures learning performance.

Robot Applications in the Nuclear Industry

Robots, whether teleoperated or under autonomous or supervisory control, have been used in a variety of maintenance and repair applications. The following subsections describe many of these systems, focusing primarily on applications for which working robot prototypes have been developed.

Teleoperators have been used in the maintenance role in the nuclear industry for more than four decades. Several features of maintenance make it a good application for teleoperators in this arena.

First is the low frequency of the operation, which calls for a general-purpose system capable of doing an array of maintenance tasks.
Second, maintenance and repair in the nuclear industry require high levels of dexterity.
Third, the complexity of these tasks may be unpredictable because of the uncertain impact of a failure. For these reasons, the choice for this role is often between a human and a teleoperator. When the environment is hazardous, a teleoperator is generally the best selection. If humans in protective clothing can perform the same job, the benefits of having teleoperators continuously at the work site need to be weighed against the cost of suiting up and transporting humans to and from the work site. While humans are likely to be able to complete tasks more quickly than teleoperators, using teleoperators can: (1) shorten mean time to repair by reducing the response time to failures, (2) reduce health risks, (3) improve safety, and (4) improve availability by allowing maintenance to take place during operations, instead of halting operations.

Maintenance is an important topic in nuclear industry robotics: the proceedings of the 1995 American Nuclear Society topical meeting on robotics and remote handling included 124 papers, nearly a quarter of which were devoted to some aspect of maintenance. The 1997 meeting included 150 papers, of which more than 40% dealt with some aspect of maintenance. Furthermore, if one considers environmental recovery operations as a form of maintenance, then a much larger proportion of papers at both meetings were maintenance-related.

Implementing Robotic Exercises for Students

Solving robotic exercises is a difficult task for students because the modeling activity involved requires them to comprehend programming and robotic design concepts as well as basic engineering skills. To help students train themselves, we propose a mixed-reality-based instructional system that addresses these learning challenges and teaches them a general problem-solving method. This article presents the benefits of using such a system in a learning process, compared with standard classroom teaching.

With the inception of LEGO Mindstorms, robotics is being embraced by children, adults, and educators. Educators are infusing LEGO Mindstorms into various curriculums, such as computer science, information systems, engineering, robotics, and psychology. The LEGO environment provides students with the opportunity to test the results of abstract design concepts through concrete, hands-on robotic manipulation. In this LEGO learning environment, students often discover they need to acquire new skill sets, and the cycle of revising their knowledge base before they can achieve a new function becomes apparent.

Understanding and implementing robotic exercises are difficult tasks for students. The LEGO Mindstorms environment is an excellent vehicle in which students can train themselves. This self-training approach is a “constructive method”: it enables students to become conscious of the underlying mechanics and programming constructs required to successfully produce a seamless execution. Utilizing this learning environment to teach robotics has forced us to define and name robotic, mechanical, and programming concepts in domains which are generally not directly taught in the classroom.

The focus of the next LEGO Robotics article is to discuss the myriad benefits that students can draw on when immersed in a virtual instructor learning environment. The subsequent section will show why robotics is a difficult domain, and the following section will present the virtual simulation and instruction portion of the course.

LEGO Mindstorms NXT Interface

In 2007, LEGO introduced a new kit, the Mindstorms NXT, which consists of 510 pieces, thereby reducing the number of parts by 208. The LEGO Mindstorms NXT kit uses a graphical programming interface to teach RC concepts. It is powered by a 32-bit ARM processor; the NXT brick also has a co-processor, an 8-bit AVR, and includes Bluetooth communication. It provides 4 input ports that support both analog and digital interfaces and 3 output ports that drive motors with precise encoders.

It also has a programmable LCD display and a loudspeaker that supports up to 16 kHz. All of these features are included in a toy that is supposed to be programmed by a kid! Students from biomedical, aerospace, mechanical, and chemical engineering are not necessarily coding experts, but they are specialists in their field and have a good grasp of the algorithms and designs that can solve a particular problem. Traditional programming techniques posed a large learning curve for students using RC hardware. Graphical programming alleviates these problems and provides a natural learning curve by abstracting away unnecessary implementation details. As shown in Fig. 10, the LEGO interface is easy and intuitive for students to learn: icons with specific functions are listed on the right-hand side, and the open space on the page is where the student can drag and drop icons to formulate a program.
Thus, graphical programming languages help kids build parallel, embedded programs that they can use to program the robots, without having to worry about hardware interfaces and optimization issues. In addition, kids can use the same brick and reprogram the hardware to perform a different kind of action.

Graphical programming languages provide the user with a true drag-and-drop interface that reduces the learning curve drastically. Consider a program containing three icons that instructs the robot to turn continuously; the last icon, with the two arrows in a circle, denotes a loop. Hence, students working with high-level icons are able to instruct the robot to do complex tasks with minimal commands or sequencing of icons.
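For comparison, a textual equivalent of that three-icon “turn forever” program might look like the Python sketch below. The Motor class and port names here are invented stand-ins to illustrate the idea, not the NXT's actual API.

    import time

    class Motor:
        """Hypothetical stand-in for an NXT motor driver."""
        def __init__(self, port):
            self.port = port
        def run(self, power):
            print(f"motor {self.port}: power {power:+d}%")

    left, right = Motor("B"), Motor("C")

    while True:              # the loop icon: repeat forever
        left.run(+50)        # icon 1: left wheel forward
        right.run(-50)       # icon 2: right wheel backward -> robot turns
        time.sleep(0.1)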

Benefits of a Virtual Instructor

Many publications have demonstrated that computer-assisted learning and virtual instruction make it easier for students to grasp complex concepts and work at their own pace, as well as benefit from the numerous other advantages these technologies offer. In this section we highlight additional benefits when a virtual instructor is coupled with the LEGO Mindstorms NXT software interface.

A methodology for solving robotic problems
Most students have no general method for solving robotic problems. Many textbooks lack a systematic approach to breaking down a robotic problem and finding a solution: they usually stress mechanical or engineering concepts, while other textbooks teach programming. The NXT software combines the graphical interface with a systematic approach to solving problems.

First it defines the challenge and demonstrates a successful solution in the challenge brief. A building and programming guide gives detailed step-by-step instructions to the student, formalizing an intuitive approach. However, it does not explain what is on the screen, nor does it give any description of what to do. The building component fails to explain why certain pieces were selected and not others, nor does it give a rationale for its assembly solution.

When this graphical interface is utilized in conjunction with a virtual instructor (VI), however, these issues are eliminated. The VI provides additional instruction to the student and guides them through the building and programming modules. In addition, the VI provides helpful hints to the individual student during the learning process, provides personalized instruction, and keeps track of where the student is having difficulty.

Immediate Detection and Correction of Errors
The ability for students to understand when they are making mistakes and to immediately receive information for correcting them is far more beneficial than trying to correct the mistakes later. The current NXT graphical interface cannot detect the procedural tasks students perform or determine whether the procedural steps are correct.

Robotics: A Difficult Domain

Introducing robotics to students for the first time is extremely challenging. This initial stage exposes students to basic engineering concepts, mechanical designs, and introductory programming skills. Because students are pliable at this initial stage, they need to be immersed in a learning environment that addresses all of these skills. The Virtual Instructor Learning Environment is one paradigm that has proven to be beneficial to students who are learning robotics for the first time.

There are three distinct skill sets students need to acquire in order to successfully manipulate the robot: robotic design concepts and construction, basic engineering skills, and programming. Successful robotic construction implies that the student is able not only to recognize a LEGO piece but to know its functionality as well. Determining which pieces are best assembled together, and designing accordingly, is challenging. In the Spring and Fall of 2006, students were asked to familiarize themselves with the LEGO Mindstorms pieces. The Mindstorms robot kit #9790 consisted of 718 pieces.

Learning the mechanics of each piece and becoming familiar with 718 pieces is challenging for most students. Most students easily recognized the wheels, but had trouble differentiating between the bushings, connectors, bricks, and beams.

Once the students were able to recognize the parts, the next challenge was getting them to understand the functionality of each piece and to decide which parts should be assembled together. Students were given 5 robotic construction tasks. The most challenging for them were the light sensor (Task #5) and constructing the first motorized vehicle. The light sensor has to be mounted correctly on the vehicle in order to work successfully, and many of the students' vehicles were poorly designed, so adding the light sensor was challenging. As for Task #1, students had trouble understanding the correlation between the wheel, axle, gear, and motor; placing these parts together and seeing how they function together to create movement was most challenging for all the students. The results clearly reveal that Task #4 gave the students the most difficulty in programming as well as constructing.