Human-Robot Interaction

Robots are machines endowed with information processing, sensing, and motor abilities. Information processing notably takes the form of perception, reasoning, planning, and learning, in addition to feedback signal processing and control in robotic systems. The coordinated exercise of these abilities enables robotic systems to achieve adaptive, goal-oriented behaviors. Communication technologies enable robots to access networks of software agents hosted by computer systems and other robotic platforms. New generations of robots are becoming increasingly proficient at coordinating their behaviors and pursuing shared goals within heterogeneous teams that include other robots, humans, and software systems. During the last decades of the twentieth century, robots were mostly confined to industrial environments, where rigid protocols severely limited human-robot interaction (HRI). Robots are now entering a much wider range of environments, from the extreme scenarios of deep-sea exploration, space missions, and rescue operations to the more conventional human habitats of workshops, homes, offices, hospitals, museums, and schools. In particular, research in a special area of service robotics called personal robotics is expected to enable richer and more flexible forms of HRI in the near future, bringing robots closer to humans in a variety of healthcare, training, education, and entertainment contexts.

Robot ethics is a branch of applied ethics which endeavors to isolate and analyze the ethical issues arising in connection with present and prospective uses of robots. The questions below vividly illustrate the range of issues falling within the purview of robot ethics.

• Who is responsible for damages caused by personal and service robots?
• Are there ethical constraints on the design of control hierarchies for mixed human-robot cooperative teams?
• Is the right to privacy threatened by personal robots accessing the internet?
• Are human linguistic abilities and culture impoverished by extensive interactions with robots that are linguistically less proficient than human beings?

Developing a Robot Using the BasicX Microcontroller

Controllers and processors continually improve at an exponential rate. Instructors of courses in microcontrollers and microprocessors need constant exposure to the latest technologies available to ensure that their students remain on the cutting edge of our technological capability. This article focuses on how the BasicX (BX-24p) microcontroller may be used to control non-holonomic autonomous mobile robots.

This includes interfacing common robotic components such as motors, sensors, and end effectors. The BX-24p is an industrial-grade microcontroller that is very robust and powerful. It is also compatible with the BASIC Stamp microcontroller carrier boards, the BASIC Stamp’s components, and other components commonly found in technology and engineering classrooms. Many helpful BasicX control functions that were used in the development of two mobile robots are discussed and explained.

One small-scale system that is often used as an introduction to microprocessors and microcontrollers is the BASIC Stamp II (BS2) microcontroller by Parallax. The BS2 is mounted on an educational carrier board called the Board of Education (Boe) and a robotic chassis, giving the whole system the nickname “Boe-Bot.” Parallax also makes a variety of sensors and add-on components, which in most cases have all the necessary circuitry prepackaged with the sensor. These low-priced devices allow students the opportunity to develop robotic systems using microprocessors in an environment that is safe, easy to interface, cost-effective, and enjoyable.

The language of the BX-24p is called BasicX and is almost 100% compatible with QBasic and Visual Basic. It also shares the programming structure of other popular languages such as C++, C, Java, and Perl. This allows students to learn these languages and techniques and to apply them in the classroom without having standard industrial equipment. Once students move beyond higher education, they will have knowledge of commonly used languages and techniques and not just theory. All of these capabilities make the BX-24p a good choice for controlling systems that need high-powered performance at low cost.

BX-24p Robotic Motor Controller

Many of the difficulties in controlling mobile robots come down to finding a motion controller that is small enough to be carried on the mobile robot, powerful enough to control the device, and simple enough to be programmed with ease. Here, the BX-24p comes to the rescue.

Robotic motion control can be realized using a Motor Mind C carrier board (MMC_BS2) designed for use with the BS2; since the BX-24p is backward compatible with the BS2, integrating one with the other is straightforward. The carrier board's features include a socket for a microcontroller chip (BX-24p), solder points for I/O wires, and a socket for a motor controller. Typically, the motor controller used with the MMC_BS2 is the Motor Mind C (MMC), which is capable of driving two 12V DC motors. The 12V DC power supply and DC motors are wired directly to the MMC_BS2. Almost all user control is done through programming in the BasicX language, which greatly reduces other kinds of integration work. The BasicX program can drive each motor at a different speed in either direction, allowing any combination of forward, reverse, and turning motions at any desired speed to move a robotic vehicle.

To drive a motor using the BX-24p and the MMC, one must understand how the MMC motor controller works. The controller uses a hexadecimal number to set the speed of a motor. For instance, “0000” means no motion, “03FF” stands for 100% forward, and “FC01” stands for 100% reverse. This means that any percentage of full speed in either direction can be achieved by taking that percentage of the maximum hexadecimal value. For instance, to find 28% of full speed, first take 28% * 1023 (decimal 1023 = hexadecimal 03FF). Next, take that product (286) and convert it back to hexadecimal (011E). 011E therefore represents 28% of full speed. The speed setting for the controller is placed in a queue in the program and sent from the BX-24p to the MMC as a command packet via serial communication.
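As a rough illustration (not the vendor's own code), the Python sketch below reproduces the percentage-to-hexadecimal conversion just described; treating reverse speeds as the two's complement of the forward magnitude is our assumption, chosen so that 100% reverse yields FC01 as stated above.

```python
def mmc_speed_command(percent: float, forward: bool = True) -> str:
    """Convert a percentage of full speed into the MMC's hexadecimal speed value.

    Full scale is 0x03FF (decimal 1023). Reverse speeds are assumed to be the
    two's complement of the forward magnitude, so 100% reverse becomes FC01.
    """
    full_scale = 0x03FF
    magnitude = round(percent / 100 * full_scale)
    value = magnitude if forward else (-magnitude) & 0xFFFF
    return f"{value:04X}"

print(mmc_speed_command(28))           # 011E -> 28% forward
print(mmc_speed_command(100))          # 03FF -> 100% forward
print(mmc_speed_command(100, False))   # FC01 -> 100% reverse
```

In the real system this value would then be placed in the command packet that the BX-24p sends to the MMC over the serial link.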

Robot Tactile Sensing

The number of people involved in research and development of tactile and haptic sensing, and the number of reported works, have increased particularly in the last couple of years, but the use of tactile sensors is still extremely low and fails to show momentum. Why? We think that, basically, there is no real market-oriented driving force boosting the tactile sensing domain: industrial automation aims at efficiency at low cost. This generally means the use of well-established, reliable, and as-simple-as-possible technologies.

Robots with tactile sensing are not at that stage, and some applications that could profit from them are instead implemented by forcing a structured environment and using simpler sensing devices such as proximity sensors; other domains, such as medicine (particularly surgery) and service robotics, have not been able to play that driving role until now. We must add two other considerations: tactile, and particularly haptic, sensing is quite demanding not only in terms of hardware but also of software.

The extraction of information from tactile sensors may require the implementation of complicated algorithms; the hardware and software available, even at an experimental level, are still not adequate for some already defined needs. The vision of the future of tactile sensing is optimistic, but only moderately so. Assuming that industry will not change its production style very much in the near future, we think it will be up to scientists and engineers to go on developing new sensors suitable for other domains of application.

We believe and expect that technology will be able to overcome some of the current limitations of tactile sensing, such as taxel dimension (resolution) and arrangement (array-organized sensors suffer from crosstalk, i.e., several taxels can be excited by a very localized force), and the integration of all the components required to output a tactile sensation (sensors, conditioning circuits, processing units, etc.). Nanosciences and nanotechnologies will probably provide answers to these problems, but no one can assure that the solutions will have a major impact on tactile sensor usage.

Approach to Robot Programming

The industrial robot programming task is basically done in two ways:
1. The programmer in charge of the task can use some modeling technique but, instead of thinking only about the problem, must also think about the robot that will run the program and about its programming language. Both the language and the robot limit the specification of the problem; moreover, it is not possible to reuse the same program on a different robot.

2. The programmer uses a graphical development environment, where it is possible to test the program before using it on the robot. It is also possible to develop programs for different robots, but a library is needed for each robot. Even with these facilities, these tools do not solve the problem of programming the robot to interact with its environment.

The programming languages of industrial robots did not evolve in the same manner as computer languages. These environments and languages have some drawbacks:
• The typical languages are low-level, imperative, or structured. They are closer to the robot specification than to the problem, which hinders the problem-modeling task and all the other good practices that correct software engineering requires.
• Each industrial robot has its own programming language, which makes it difficult, or even impossible, to reuse the source code.

This is the problem to be solved. An adequate modeling technique will be used to decompose the problem into simpler problems that can be easily programmed; formal models are employed to describe the data structures and operations necessary to solve the sub-problems. To describe the overall problem and the sub-problems formally, a truly high-level language, close to the specification rather than to the robot, should be used. We then advocate the use of an easy-to-use compiler front-end, such as Grafcet, that can interpret the specification language and generate an intermediate description of the specified program. An intermediate representation is used because the front-end must focus on the specification of the problem, not on the robot.

Modeling Techniques of Robotic Programming

The use of an adequate modeling technique facilitates the development of the programming system, enabling the system developers and the system clients to express their ideas and communicate in an agreed way. The advantage of modeling is the creation of models of the system and its behavior that can be viewed at different abstraction levels before implementing it. It is therefore very important to model the system.

If programs are created directly, thinking about the problem and the machine at the same time, they will be difficult to read, to write, and consequently to maintain. Some techniques were therefore created, such as structured analysis, the first modeling technique, in which the problem is decomposed based on the data and the operations, which are modeled separately. The Entity Relationship Diagram (ERD) is used to model the data, and the Data Flow Diagram (DFD) is used to model the operations. More recently, other modeling techniques have appeared, such as the Unified Modeling Language (UML), used to model object-oriented systems. UML uses the Class Diagram, which shows the classes and their relationships in a logical view (like the ERD in structured analysis); the State Transition Diagram, which shows the events that cause transitions from one state to another, with the resulting actions (like the DFD in structured analysis); and the Use-Case Diagram, which shows the system's use cases and the actors that interact with them.

Robot programming has its own modeling techniques. One modeling technique used in the development of mobile robots, the Subsumption Architecture, was used to model a manufacturing cell composed of two robots and some other components. The Subsumption Architecture was the first behavior-based modeling technique and, even though it was created for developing mobile robots, it can be used, as a high-level abstraction, to model industrial applications. The analyst should choose the most adequate technique for the problem at hand among the various modeling techniques in order to obtain the benefits of software engineering.

Declarative Languages for Robotic Programming

As said before, it is important to use a modeling technique, but it is also important to have a language that allows the programmer to express exactly what he intends to do.

Such a language should be simple and as close to the specification of the problem as possible. For a language to be close to the specification, it must also have high-level constructs that allow the definition of structured and complex abstract data types and of mathematical operators over them.

Basically, there are two kinds of programming languages:
1. Imperative languages: the underlying principle, or operational semantics, is very similar to the processor's execution cycle, so it is necessary to understand its architecture; the available statements are also similar to machine instructions. The programmer must also know how to manipulate memory elements to store the necessary data;

2. Declarative languages: instead of following the execution principles of the processor, these languages have as their background a mathematical theory that supports data representation and operations over that data. Explicit memory manipulation is not necessary; the programmer simply manipulates data at a high level, without needing to know where the data is stored.

Declarative languages are higher level than imperative languages and closer to the problem specification. One example that shows the difference between these two kinds of languages is file manipulation, which can be done in imperative languages (such as C) and in declarative languages (such as SQL).

Declarative languages are classified as functional or relational (logic) according to their style. The first group is supported by the principle that a program is just a function mapping the input data to the output results, while the second family relies on the idea that a program is a set of assertions defining the relations that hold in some world. Typical declarative languages are ML, Lisp, and Haskell (functional paradigm), and Prolog (logic paradigm).
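To make the contrast concrete, the short sketch below (our illustration, not taken from the text) computes the same result twice: imperatively, with an explicit Python loop, and declaratively, with an SQL query issued through Python's standard sqlite3 module.

```python
import sqlite3

records = [("weld", 12.5), ("paint", 3.0), ("weld", 7.25)]

# Imperative style: we spell out *how* to walk the data and accumulate the result.
total = 0.0
for task, duration in records:
    if task == "weld":
        total += duration

# Declarative style: we state *what* we want; the SQL engine decides how to compute it.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE jobs (task TEXT, duration REAL)")
db.executemany("INSERT INTO jobs VALUES (?, ?)", records)
(total_sql,) = db.execute("SELECT SUM(duration) FROM jobs WHERE task = 'weld'").fetchone()

assert total == total_sql == 19.75
```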

Offline Programming to Solve Industrial Robot Limitations

The development of industrial robotic systems is still a costly, difficult, and time-consuming operation; one simple approach to overcoming some of its limitations is off-line programming environments. These environments are based on graphical simulation platforms, in which the programming and execution processes are shown using models of the real objects. Consequently, the robot programmer has to learn only the simulation language and not any of the robot programming languages. Other benefits of off-line programming environments include libraries of pre-defined high-level commands for certain types of applications, such as welding or painting, and the possibility of assessing the kinematic feasibility of a move, thus enabling the user to plan collision-free paths. The simulation may also be used to determine the cycle time for a sequence of movements.

These environments generally provide a set of primitives commonly used by various robot vendors and produce a sequence of robot manipulator language primitives, such as "move" or "open gripper", that are then downloaded to the respective robot controllers. However, current state-of-the-art off-line systems suffer from two main drawbacks. First, they do not address the issue of sensor-guided robot actions. Second, they are limited to simulating robot motion, providing no advanced reasoning functionality and no flexibility in the tasks.

We propose an integrated, formal, and high-level approach to industrial robot programming that would solve the above problems. To use this approach, the following components are necessary (some may already exist, others need to be developed):

• A truly high-level and declarative language;
• An easy-to-use front-end;
• An intermediate representation;
• An automatic generator of robot code generators.

We present the importance of a modeling technique and discuss the four components of this approach. The conclusions and future work appear at the end.
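As a loose illustration of why an intermediate representation decouples the specification from the robot, the sketch below (our own; both vendor syntaxes are entirely fictitious) describes a program as robot-independent primitives and lets two different back-ends generate controller-specific code from the same description.

```python
from typing import List, Tuple

IR = List[Tuple[str, tuple]]   # robot-independent primitives, e.g. ("move", (x, y, z))

program: IR = [
    ("move", (250.0, 0.0, 100.0)),
    ("gripper", ("close",)),
    ("move", (250.0, 300.0, 100.0)),
    ("gripper", ("open",)),
]

def generate_vendor_a(ir: IR) -> str:
    """Back-end for a fictitious vendor-A controller language."""
    out = []
    for op, args in ir:
        if op == "move":
            out.append("MOVL P({:.1f},{:.1f},{:.1f})".format(*args))
        elif op == "gripper":
            out.append("DOUT OT#1 {}".format("ON" if args[0] == "close" else "OFF"))
    return "\n".join(out)

def generate_vendor_b(ir: IR) -> str:
    """Back-end for a fictitious vendor-B controller language."""
    out = []
    for op, args in ir:
        if op == "move":
            out.append("MoveL Offs(p0,{:.1f},{:.1f},{:.1f});".format(*args))
        elif op == "gripper":
            out.append("SetDO gripper,{};".format(1 if args[0] == "close" else 0))
    return "\n".join(out)

print(generate_vendor_a(program))
print(generate_vendor_b(program))
```

The front-end (for instance one based on Grafcet) would be responsible only for producing the intermediate list; each code generator is then written once per robot and reused for every program.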

Design of the Industrial Robot

Current work focuses on a wheeled industrial vehicle called the Industrial Robot. Four driving modules are present, each equipped with two wheels. These modules align themselves with the direction of movement.

This vehicle was designed in order to meet the following central requirements:
• The ability to transport materials with a maximum mass of m = 500 [kg] (large transmissions, automotive engines, etc.);
• A minimal vehicle velocity of v = 1 [m/s];
• A minimal vehicle acceleration of a = 0.5 [m/s²];
• Overall vehicle dimensions of 1200 [mm] x 800 [mm] (matching a Euro pallet);
• Excellent maneuverability and high dynamics;
• Autonomous driving;
• Additional functions (e.g. scalability).
The robot is able to drive in any arbitrary direction and can perform slipless turning and rotation around its central point. These kinds of movements are essential when driving within limited spaces. The solution makes it possible to avoid curved turns, and it may also positively influence total battery usage since, for example, the heading direction can be changed in place.
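A quick back-of-the-envelope check (our own, ignoring rolling resistance, drivetrain losses, and the vehicle's own mass) shows what those requirements imply for the drive system:

```python
m = 500.0   # maximum payload mass [kg]
a = 0.5     # required acceleration [m/s^2]
v = 1.0     # required velocity [m/s]

force = m * a            # traction force needed for the payload alone: 250 N
power = force * v        # mechanical power at full speed while accelerating: 250 W
per_module = power / 4   # shared across the four driving modules: ~62.5 W each

print(force, power, per_module)
```

The real sizing must of course also account for the vehicle mass, friction, and gear and motor efficiencies.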

The vehicle consists of four identical components called driving modules. Each of these subsystems contains two MCD EPOS electric motor units; one of them is programmable (P) and acts as the master, while the second is not programmable and acts as the slave (S). Each module has two wheels and two motors and is thus similar to a differential drive. Thanks to a bearing in its upper part, the lower part of the module is able to rotate while the top of the module is mounted to the platform. A slip ring is used to interconnect the in-module electronics with the rest of the vehicle in order to provide unlimited rotation.

Besides the motors and motor controllers, the module also contains an electromagnetic brake and an encoder. Both of them operate about the vertical axis: the former is used to block the rotation, while the latter measures the angle of rotation. This angle also represents the direction of movement of the vehicle.
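Because each module is kinematically a differential drive, its motion follows from the two wheel speeds. The sketch below (ours; the wheel radius and wheel separation are assumed values, not taken from the design) shows the mapping:

```python
r = 0.10   # wheel radius [m] (assumed)
d = 0.30   # distance between the module's two wheels [m] (assumed)

def module_motion(omega_left: float, omega_right: float):
    """Map the two wheel speeds [rad/s] to the module's forward speed [m/s]
    and its rotation rate about the vertical axis [rad/s]."""
    v_left, v_right = omega_left * r, omega_right * r
    forward = (v_left + v_right) / 2.0
    yaw_rate = (v_right - v_left) / d
    return forward, yaw_rate

print(module_motion(10.0, 10.0))   # equal speeds: rolls straight, (1.0, 0.0)
print(module_motion(-5.0, 5.0))    # opposite speeds: turns in place, (0.0, ~3.33)
```

Driving the two wheels at opposite speeds is what lets the module, and hence the vehicle, change heading in place.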

Computer Integration to Develop Robots

Computer simulation in general offers tools that make research into distant, dangerous, or very expensive phenomena easily possible. Computer simulation is a method for forecasting the consecutive states of complex systems based on their models. Simulation can also serve to foresee the behavior of objects that do not yet exist, such as the prototype of the Production Robot. In recent years, the significance of computer simulation has grown considerably thanks to accessible high-end PC equipment with large memory, very fast processors, and high-resolution graphics cards.

Integration of CAD/CAM and CAE tools with simulation software is becoming more and more common and significant. The use of virtual reality and animation helps to present the outcomes of such simulations. The simulation should be able to expose all possible errors already at the virtual stage of the designed system. The possibility of testing a designed system through simulation is commonly exploited when designing control systems.

The advantage of synthesizing a dynamic model of the system is the possibility of verifying compliance with the requirements and of verifying the design before actually building the first physical prototypes. In the case of any errors at the design stage, the modification needs to be applied only to the model, which is considerably cheaper and quicker than applying changes to a product that has already been built.

Special attention during the modeling process should be paid to the Matlab/Simulink environment. A huge advantage of this software is the possibility of designing complex objects of mixed natures, such as modeling the mechanical, electrical, and control systems in one common environment. From a correctly prepared model, the environment allows C code to be generated. This article describes in detail the process of modeling dynamics in the SimMechanics environment using a current research project as an example. The research will focus on the synthesis of an inverse dynamics model for the robot and, finally, on designing an appropriate controller for the real object.

Matlab-Aided Modeling and Simulation for Robotic Development

In the example, SimMechanics and Matlab/Simulink were applied in order to design the dynamic models and perform simulations of robots. The graphical interface allows the synthesis and analysis of the object at the functional-block level. An advantage of such a solution is that designers are no longer obliged to write the equations of motion and complicated control code by hand. Simulink is a universal Matlab toolbox for simulating and modeling both discontinuous and continuous dynamic models. Simulink is a tool that allows subsystems of different physical natures to be integrated. This is realized through the connection between mechanical parts, actuators (e.g., electric motors and drives), sensors, and the control system.

It is a big advantage that the designer can integrate electronics, mechanics, and software into a whole complex system in a single environment. Simulations help to fit all system elements together in one product in an optimal manner, with constructional assumptions taken into account. Taking the Production Robot as an example, the simulations help with choosing proper parameters for the propulsion system, the gears, and later the control system.

If the simulation process shows that the robot does not fulfill the assumptions related to its dynamic properties, corrections must be applied to the mechanical design. As a result, the model of the improved design should be simulated again. The dynamic model of the mechanical parts was prepared using the SimMechanics library. SimMechanics is a toolbox available in the Matlab environment. It contains libraries with blocks designed for modeling complex mechanical systems with any number of rigid bodies and with couplings representing the degrees of freedom (translational and rotational).

The Matlab environment provides full integration between Simulink and SimMechanics. It allows blocks such as ‘actuators’ and ‘sensors’ to be used in the modeled systems. SimMechanics can represent mechanical systems hierarchically in subsystems, similarly to Simulink. The models can contain kinematic constraints, can be acted upon by forces or torques, the Newton equations can be integrated, and combinations of movement can be measured.

Building the Robotic Model and Controller within the Matlab Environment

The dynamic controller and the dynamic model can be built within the Matlab environment. The model is built in SimMechanics while the controller is built in Simulink. SimMechanics provides a library from which different bodies, constraints, joints, drivers, force elements, actuators, and sensors can be selected and used to synthesize models of multi-body systems. The ground block defines the ground and the world coordinates. The coordinates of the center of gravity (COG) can be provided in each body block. In this way, bodies can be placed within the 3-D space. Bodies are connected to each other by means of joints, force elements, or constraints/drivers. It is possible to obtain different properties, such as positions and speeds, and to plot the results with respect to time, or to actuate bodies with, for example, torques. Joints can be actuated and their positions can be obtained. The model is described by a right-handed orthogonal set of axes in which the coordinates and angles are defined. It is essential to note that blocks in SimMechanics do not model mathematical functions directly; instead, they have a specified physical meaning.

The SimMechanics toolbox can be used both to calculate the forces required to realize a specified movement and to calculate the movement that results from applied forces. The type of analysis must be chosen to select what is calculated. SimMechanics provides four modes for analyzing mechanical systems:
• Forward dynamics calculates the motion of the mechanism resulting from the applied torques or forces and constraints;
• Inverse dynamics finds the torques or forces necessary to produce a specified motion for open-loop systems;
• Kinematics does the same for closed-loop systems, including the extra internal invisible constraints arising from those structures;
• Trimming searches for equilibrium or steady states of a system's motion with the Simulink trim command. It is used mostly to find a starting point for linearization analysis.
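The difference between the first two modes can be illustrated with a toy single-joint model (our Python sketch, not SimMechanics code; the inertia and friction values are assumed):

```python
J = 0.02    # joint inertia [kg*m^2] (assumed)
b = 0.05    # viscous friction [N*m*s/rad] (assumed)
dt = 0.001  # integration step [s]

def forward_dynamics(tau: float, steps: int = 1000, theta: float = 0.0, omega: float = 0.0):
    """Forward dynamics: given a torque, integrate the motion it produces."""
    for _ in range(steps):
        alpha = (tau - b * omega) / J
        omega += alpha * dt
        theta += omega * dt
    return theta, omega

def inverse_dynamics(alpha: float, omega: float) -> float:
    """Inverse dynamics: given a desired motion, return the torque required."""
    return J * alpha + b * omega

print(forward_dynamics(tau=0.1))               # motion after 1 s under a 0.1 N*m torque
print(inverse_dynamics(alpha=2.0, omega=1.0))  # 0.02*2 + 0.05*1 = 0.09 N*m
```

SimMechanics performs the same kind of computation automatically for full multi-body models built from its blocks.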

Using e-Learning to Deliver Robotics Course Material

Industrial Robotics is a core module taught to final-year students in the Mechatronics Engineering course. This core module contains many abstract principles and concepts, which are needed to solve problems such as programming an industrial robot. To ensure that students are able to grasp the concepts and apply the principles to solve authentic programming problems, students should be allowed to observe different types of robots in action. However, the cost of acquiring all the necessary robots is high, and it is thus neither practical nor possible to provide students with all the robots needed for effective learning.

Lecturers must therefore stick to traditional methods of drawing simple diagrams on the whiteboard and using PowerPoint presentations with static graphics to illustrate the concepts and principles, and to explain the various programming methods. Where robots were available, students were given opportunities to engage in activities that required them to apply these principles, concepts, and programming methods.

Lecturers found it a great challenge to prepare students for the real work situation with the limited number and types of robots available as teaching aids. In addition, preparing students effectively for both the theory and the practical work was an issue.

When e-learning gained more widespread adoption in the organization, the decision was made to deliver the theory component of this module through e-learning, as it was felt that the capabilities of technology could be the solution to the problem of the lack of robots as teaching aids. Dynamic simulations and visuals made possible by technology could be used to help students relate abstract content, such as the programming of robots, to real experiences. Animated visuals could provide explicit demonstrations of dynamic processes and help to reduce the cognitive load imposed on learners by maintaining visual coherence. Simulations, defined as a presentation or model of an event, an object, or some phenomenon, could provide students with realistic models of robots, as well as representations of related real-world phenomena, helping them to better visualize abstract concepts, thereby fostering conceptual learning. Through such instructional simulation, students could gain a better understanding of the real robotic system, the problem-solving process, or the real work application.

Computer-Aided Modeling of Mobile Wheeled Robots

It is necessary to generate the appropriate inverse kinematics or dynamic model to describe robot motion. The dynamic model makes it possible to consider such properties as mass, friction forces, mass moment of inertia, torque, centrifugal force, etc. Such models are built in order to better understand the operation and structure of the future mechatronic product. Model elaboration becomes even more important if highly complex systems are to be developed. The preparation and application of the model allows mistakes and imperfections in the description (model) of the real system to be detected.

Their modification at the virtual stage is less expensive and simpler in comparison with the cost of improving already existing solutions, i.e., physical prototypes. Generally, several models are developed for one system in order to describe its properties in a variety of ways. It is necessary to comply with certain principles when elaborating the models. The designer of the model should adapt the level of detail of the model to the needs of the application; both oversimplification and excessive detail may be wrong. The kind of model chosen (linear, non-linear, kinematic, or dynamic) decisively influences both the design process and the final result. The same object intended for different applications will require different models (there is no universal model). Modeling and simulation have become an indispensable part of the design of mechatronic systems. These systems are mainly built up from other systems and subsystems of different natures.

A mechatronic product may be composed of mechanical, hydraulic, electric, and pneumatic elements, and some elements also require appropriate control systems. The elaboration of exact models requires modeling all these elements, regardless of their nature. The Matlab/Simulink package is one of the few tools that allow interdisciplinary systems to be simulated and modeled in one common environment. Practically, this package makes simulation possible for any object, provided its model exists in the form of a system of differential equations or differential-algebraic equations. This environment also makes it possible to elaborate the model in the form of a block diagram, provided that such blocks are available in the program library. If a block is not available, it is nevertheless possible to design it.

Series Elastic Actuators for Legged Robots

Actuators with muscle-like properties could allow legged robots to achieve performance approaching that of their biological counterparts. Some of the beneficial properties of muscle include its low impedance, low friction, high force fidelity, and good bandwidth. Series Elastic Actuators share these beneficial properties with muscle and are well suited for legged robots. These high-quality, force-controllable actuators allow the control system to exploit the natural dynamics of the robot, to distribute forces among the legs, and to provide an active suspension that is robust to rough terrain.

Most airplanes are designed with wings so that they glide stably, requiring only a simple power source and simple control to fly; early locomotives used flyball governors, a mechanical feedback device, to help maintain constant speed; satellites and rifle bullets spin to stabilize their trajectories. These machines were designed so that their natural dynamics allow minimal control effort. Animals have evolved similar mechanisms that exploit natural dynamics; birds, for example, have wings that allow them to glide stably.

Natural dynamics can be exploited in the control of bipedal walking robots: the swing leg can swing freely once started; a kneecap can be used to prevent the leg from inverting; and a compliant ankle can be used to naturally transfer the center of pressure along the foot and help in toe-off. Each of these mechanisms makes the control easier to achieve and results in motion that looks natural and smooth.

To exploit passive dynamics in a robot, the actuators must present extremely low impedance and friction to the system. With traditional actuation systems such as hydraulics and highly geared motors, the output impedance and inertia are high. In contrast, Series Elastic Actuators show extremely low impedance and low friction and thus may be used in robots that exploit their natural dynamics.
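The key idea is that the deliberately introduced spring both decouples the load from the gear train's reflected inertia and acts as the force sensor. A minimal sketch of the resulting force loop (ours, with assumed stiffness and gain values) is shown below:

```python
k_spring = 5000.0   # stiffness of the series elastic element [N/m] (assumed)
kp = 0.0001         # proportional gain on the force error [m/N] (assumed)

def sea_force_step(f_desired: float, x_motor: float, x_load: float):
    """One control step: sense the force from the spring deflection, nudge the motor."""
    f_measured = k_spring * (x_motor - x_load)   # the spring is the force sensor
    x_motor += kp * (f_desired - f_measured)     # move the motor side to correct the force
    return x_motor, f_measured

x_motor, x_load = 0.0, 0.0
for _ in range(200):                             # load held still; output force converges
    x_motor, f = sea_force_step(50.0, x_motor, x_load)
print(round(f, 1))                               # ~50.0 N
```

Because the motor only has to position one end of a relatively soft spring, the load sees the spring's low stiffness rather than the gear train's high reflected inertia, which is what gives the actuator its low output impedance.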

Robotic Actuator Impedance

In traditional manufacturing operations, robots perform repetitious and tedious tasks with great speed and precision. In this setting, where the environment is controlled and the tasks are repetitious, position-controlled robots that trace predefined joint trajectories are optimal. However, in unstructured environments where little is known of the surroundings, force-controlled robots that can comply with the environment are desirable. This is the case for legged robots walking over rough terrain, robotic arms interacting with people, wearable haptic interfaces, performance-enhancing exoskeletons, and other robotic applications.
An ideal force-controllable actuator would be a perfect force source, outputting exactly the commanded force independent of load movement. In the real world, all force-controllable actuators have limitations that result in deviations from a perfect force source. These limitations include impedance, bandwidth, and stiction. An actuator's impedance is the additional force created at the output by load motion. Impedance is a function of the frequency of the load motion, typically increasing with that frequency. A system that is easily backdrivable is considered to have low impedance. Stiction describes the phenomenon of stick-slip, or sticky friction, which is present in most devices where mechanical components are in sliding contact. Stiction must be overcome by a breakaway force, which limits the smallest force the actuator can output. The bandwidth of an actuator is the frequency up to which forces can be accurately commanded. Bandwidth is affected by saturation of the power elements, control system gain, and mechanical stiffness, among other things. In a perfect force source, impedance is zero, stiction is zero, and bandwidth is infinite. Muscle has extremely low impedance and stiction and moderate bandwidth, and is currently the best-known actuation technology approaching a perfect force source.
Present-day actuator technologies have characteristics that severely limit their use in force-controlled applications. A geared electric motor has high reflected inertia and a lot of stiction, and is difficult to backdrive. Hydraulic systems have high seal friction and are often impossible to backdrive.

The Global Robotics Industry

In the early 1960s the United States was virtually without competition in robot research and production, leading Japan, the Soviet Union, and Europe by several years. One of the first industrial robots, the Unimate, was manufactured in the United States in 1961 by Unimation, based on a patent filed in 1954. The Unimate, also called a programmable transfer machine, was designed for material handling. It used hydraulic actuators and was programmed in joint coordinates during teaching by a human operator. The angles of the various joints were stored and played back in operation mode. An all-electric, six-axis articulated arm designed for tracking arbitrary paths in three-dimensional space later increased the applicability of robots to more sophisticated applications such as welding and assembly.

Unimation acquired and further developed the Stanford Arm with support from General Motors, and later commercialized it as the Programmable Universal Machine for Assembly (PUMA) model. The Japanese robot industry was jump-started in 1967 when the Tokyo Machinery Trading Company began importing the Versatran robot from AMF Corporation. Kawasaki Heavy Industries entered a technology license agreement with Unimation in 1968 and began to produce robots in Japan in 1969.

During the robot boom, which automated manufacturing on a large scale during the 1980s, the Japanese industrial robot industry grew at a faster pace than anyone had estimated. From 1978 to 1990, JIRA (the Japanese Industrial Robot Association) repeatedly corrected its forecasts upward by 80% and more. Japan used a broader definition of industrial manipulator than Europe and the USA. The International Federation of Robotics estimates that the worldwide stock of operational industrial robots had reached almost one million in 2007.

JIRA attributes this success to three characteristics of industrial robots:
• Industrial robots are programmable automation devices.
• Industrial robots exceed the physical and mechanical abilities of humans.
• Industrial robots perform with high fidelity and accuracy.

Microcontroller Components for Robot Construction

The method of designing and constructing the robotic arm is based on the electronic circuit diagram and on the operational characteristics and features of the microcontroller and the stepper motors.

Circuit Diagram
The components of the electronic circuit are the MCU, the 74LS373 latch, the 2732 EPROM, the Intel 8255 PIO, resistors, diodes, capacitors, inductors, op-amps, and transistors. These components work together to achieve the set target of controlling the anthropomorphic-like arrangement of the stepper motors. The microcontroller is the processing device that coordinates the activities of all the components for proper functioning.

Power Supply
This is used to power the whole system, i.e., the control unit, the magnetic sensing unit, and the stepper motors. The transformer is a 220/12V step-down transformer. We used a bridge rectifier to convert the 12V alternating current to direct current.

The unregulated output from the filtering circuit is fed into LM7805 and LM7812 voltage regulators. These two are chosen for the design because the LM7805 has an output of +5V, which is required to power the control unit and the magnetic coil, while the LM7812 has an output of +12V, which is required to power the stepper motors.

MCU 8051
This is the processor. It coordinates the operation of the robotic arm by collecting information from the latch, the EPROM, and the PIO, and by interpreting and then executing the instructions. It is the heart of the whole system.

LATCH 74LS373
This is a D-type transparent latch. It is an 8-bit register that has 3-state bus-driving outputs, full parallel access for loading, and buffer control inputs. It is transparent because when the EN (enable) input is high, the output looks exactly like the D input.

8255 PIO
This is a programmable input/output device. It interfaces the 8051, the 74LS373 latch, and the 2732 EPROM to external devices such as the stepper motors, thereby allowing communication.

EPROM 2732
We use this external EPROM specifically because it makes the controller cheaper, allows for longer programs, and its contents can be changed (by reprogramming) and are retained after power-off.

Robotic Arm Based on the 8051 Microcontroller

The robotic arm has become popular in the robotics world. The essential part of the robotic arm is a programmable microcontroller-based brick capable of driving three stepper motors arranged to form an anthropomorphic structure. The first design was for experimental use on a human-size industrial robot arm called the PUMA 560, used to explore issues in versatile object handling and compliance control in grasping actions. We describe the method of interfacing the robotic arm's stepper motors with the programmed 8051-based microcontroller that is used to control the robot's operations. We have employed assembly language for programming the microcontroller. A sample robot that can grab and release small objects was built to demonstrate this method.

Looking at the history of robot development, a special kind of human-size industrial robotic arm called the Programmable Universal Machine for Assembly (PUMA) came into existence. This robot type is often termed anthropomorphic because of the similarities between its structure and the human arm; the individual joints are named after their counterparts in the human arm. It is worth noting that in our work the hand is magnetic and not a generalized manipulator. Manipulation is the function of the arm in the proper sense of the word. The function of the arm is to orient and position the hand and to act as a mechanical connection and a power and sensing transmission link between the hand and the main body. The full functional meaning of the arm rests in the hand. This work provides the important elements required to build a simple robotic arm of very high quality.

As stated earlier, the design makes use of the 8051-based microcontroller. The 8051's instruction set is optimized for the one-bit operations that are often desired in real-world, real-time applications. The primary objective is to interface the robotic arm, which comprises three stepper motors, with the Intel 8051-based microcontroller. The system has a larger memory to store many programs and provides more interfaces to the outside world.
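For orientation, the sketch below shows the kind of coil-sequencing logic involved, written in Python rather than 8051 assembly; write_port() is a hypothetical stand-in for the latch/PIO output write performed on the real hardware, and a full-step, one-coil-at-a-time sequence is assumed.

```python
import time

FULL_STEP_SEQUENCE = [0b1000, 0b0100, 0b0010, 0b0001]   # one coil energized at a time

def write_port(value: int) -> None:
    """Hypothetical port write; on the real hardware this pattern reaches the motor coils."""
    print(f"port <- {value:04b}")

def rotate(steps: int, delay_s: float = 0.01) -> None:
    """Clock out the coil pattern; a negative step count reverses the sequence."""
    direction = 1 if steps >= 0 else -1
    for i in range(abs(steps)):
        write_port(FULL_STEP_SEQUENCE[(direction * i) % 4])
        time.sleep(delay_s)

rotate(8)    # two full electrical cycles forward
rotate(-4)   # one cycle in reverse
```

On the actual arm, the same kind of pattern is produced by the 8051 program and latched out through the 8255 PIO to the driver circuitry of each of the three stepper motors.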

Mechanical Robotic Arm Structure

The word ‘robotics’, meaning the study of robots, was coined by Isaac Asimov. Robotics involves elements of both mechanical and electrical engineering, as well as control theory, computing, and artificial intelligence. According to the Robot Institute of America, a robot is “a reprogrammable, multifunctional manipulator designed to move parts, materials, tools, or specialized devices through variable programmed motions for the performance of a variety of tasks”.

The fact that the robot can be reprogrammed is important; it is a defining characteristic of robots. In order to perform any useful task, the robot must interface with its environment, which may comprise feeding devices, other robots, and, most importantly, people.

Mechanical Structure of the Arm
The construction of the arm makes use of three stepper motors and gears, since the structure is three-dimensional. There is a stepper motor at the base, which allows circular movement of the whole structure; another at the shoulder, which allows downward and upward movement of the arm; while the last stepper motor, at the wrist, allows the picking up of objects by the magnetic hand.

A microcontroller is an entire computer manufactured on a single chip. Microcontrollers are generally dedicated devices embedded within an application, e.g., as engine controllers in automobiles and as exposure and focus controllers in cameras. To serve these applications, they have a high concentration of on-chip facilities such as parallel input/output ports, serial ports, counters, timers, interrupt control, analog-to-digital converters, random access memory, read-only memory, etc.

With their on-chip I/O, memory, and peripherals, microcontrollers are powerful digital processors; the degree of control and programmability they provide significantly enhances the effectiveness of the application.

The applications of embedded control also distinguish the microcontroller from its relative, the general-purpose microprocessor. Embedded systems often require multitasking capabilities and real-time operation.

Existing Industrial Robots on the Market

Gantry/Cartesian robots are made up of three prismatic joints. They have three degrees of freedom, and the axes coincide with Cartesian coordinates. They provide a structure that can be useful in heavy-loading situations. They are commonly used for pick-and-place operations, machine loading, part insertion, and part stacking. A gantry is simple to visualize and program and can manipulate large loads over larger working areas, but gantries occupy a lot of space and have a low ratio of operating workspace to overall robot size. The axes are vulnerable to dust and dirt if not protected.

A cylindrical robot works with a cylindrical coordinate system. It has the ability to reach any point within a specific height and radius. It is usually made up of two prismatic joints and one revolute joint. The revolute joint provides up to 360 degrees of rotation, and the prismatic joints establish the height and reach of the robot. They are commonly used for pick-and-place operations, machine loading from pallets, and part insertion. Although cylindrical robots are simple to visualize and program and can manipulate large loads over larger working areas, they are restricted to areas close to their vertical base and have difficulty going around obstacles.

A spherical robot works with a polar coordinate system. Like the cylindrical robot, the spherical robot is capable of 360-degree rotation. In addition, the spherical robot has an extra revolute joint that gives it the ability to access a spherical volume.

A SCARA (Selective Compliance Articulated/Assembly Robot Arm) is a special combination of an articulated robot and a parallel robot. It comprises two parallel rotary joints, which enable access over up to 360 degrees but limit vertical movement. SCARAs are often used for high-speed part insertion, drilling, and welding operations, and occasionally for light-to-medium-load pick-and-place operations.

History of Industrial Robotics

Visions and inventions of robots can be traced back to ancient Greece. In about 322 BC the philosopher Aristotle wrote: “If every tool, when ordered, or even of its own accord, could do the work that befits it, then there would be no need either of apprentices for the master workers or of slaves for the lords.” Aristotle seems to hint at the comfort such tools could provide to humans. In 1495 Leonardo da Vinci designed a mechanical device that resembled an armored knight, whose internal mechanisms were designed to move the device as if controlled by a real person hidden inside the structure.

The term “robot” was introduced centuries later by the Czech writer Karel Capek in his play Rossum’s Universal Robots (RUR), which premiered in Prague in 1921. Robot derives from the Czech “robota”, meaning forced labor, and “robotnik”, a slave or servant. In RUR the robots rebel against their human creators and eventually kill them, assuming control of the world. Capek seemed surprised by the enormous interest in his robots. Another influential piece of art, Fritz Lang’s seminal movie ‘Metropolis’, was released in 1926. Maria, the female robot in the film, was the first robot to appear on screen.

Isaac Asimov, the ingenious fiction author, is generally credited with popularizing the term ‘robotics’. He used it in 1941 to describe the study of robots and predicted the rise of a powerful robot industry. In 1955, Hartenberg and Denavit applied homogeneous transformations to modeling the kinematics of robotic manipulators. The advent of automated flexible manufacturing systems (FMS) in the 1960s established robotics as a scientific discipline. The primary objectives of FMS are reduced labor costs, a high product mix, and factory utilization near capacity. A typical FMS combines industrial robots, an automated warehouse, automated material handling, and complex software systems for simultaneously modeling, operating, and monitoring the plant.

Commercial Functions of an Industrial Robot

Robots are used heavily in the manufacturing industry. Robot applications in industry can generally be grouped into three categories: material handling, processing, and assembly/inspection. The robot is installed to improve a specific task or process. A robot can take a menial task done by an operator and maximize productivity by working in minimal time with maximum results. A robot replacing a human in a particular operation can increase productivity by improving quality and eliminating small menial tasks. One example where a robot is more effective than a human is inserting small components onto a circuit board. A human can insert small components into a circuit board, but it proves to be a difficult task because of the size of the human's hands, the limits of the human eye, and general human error. A robot can do the same task much faster and with greater accuracy.

The task thus becomes simpler and more efficient because it is automated. The initial cost of a robot can be high, but the ROI is higher than most companies expect. The hourly cost of a human operator has increased over the last few years, while the implementation and support costs of a robot grow to be significantly less than the cost of a human.

Manufacturers take many things into consideration when making the choice between robotics and manual operation. The challenges posed by robotics include setup time, the manufacturing-to-space ratio, the quality of the work completed, inventory, flexibility, distance, and uptime. These factors directly determine the ROI for the plant. Among the advantages a robot brings to the manufacturing process is the ability to make quick changeovers in design by merely changing the robot's programming. Industry is leaning towards lower-volume specialty robots and products, and this flexibility allows them to excel. The original way of designing a robot involved conceptual design, CAD software, and detailed design drawings.

Emerging Technologies in Industrial Robotics

George Devol acquired the first patent for an industrial robot in 1965. He developed the idea of having a mechanical system that was pre-programmed to handle objects. Although the idea was fairly complex in the beginning, it gave industry a good start on the industrial robot era. Since then there have been many patents exploring the world of robotics. One in particular was a design building on the initial patent: Canadian Patent #2026008 claimed an industrial robot built with revolute joints for more flexibility. This contributed to the classification of articulated robots.

Additional patents have been issued that also aided in the development of a more flexible robot. Patent #CA 1307306 claimed a moving mechanism for a robot which provided the industrial robot with additional degrees of freedom. In the case of Wolverine Tubes, space constraints and the operator's view require the robot to clear the extrusion area during each cycle. A moving mechanism enables the robot to clear the area without affecting its abilities.

A patent filed more recently adds more flexibility to a robot by running the cables inside the robot arms. Canadian patent #2196517 by Nachi Robotic Systems Inc. established a hollow tube for the robot arms that freed the robot from external cables. The contributions that this patent could make to our design of the automated mandrel lubricator are essential. A robot manufactured with a hollow arm would add to the ease of lubrication transfer. By installing the lubrication tubes inside the hollow arm, not only will the tubes be better protected, but the robot will also be ensured a clear area to move within. One challenge that comes with industrial robots is training the robot for a specific application.

Robots Interaction with and Acceptance by Humans

Interaction
Interaction between human and robot will be easier if the robot is humanoid. The more humanoid the robot, the easier it will be for a human to intuitively understand its limitations and capabilities, to plan its actions, and to communicate directions clearly. Ideally, the interaction should be so natural that even a child could easily make use of the robot's assistance.

A human-level intelligent robot needs a large number of interactions to gain experience in interacting with humans. If the robot has a humanoid form, then it will be both natural and easy for humans to interact with it in a human-like way. It has actually been observed that, with just a very few human-like cues from a humanoid robot, people fall naturally into the pattern of interacting with it as if it were a human. Thus, we can get a large, dynamic source of interaction examples for the robot to participate in. These examples can be used with various external and internal evaluation functions to provide experiences for learning in the robot.

Acceptance
One of the most delicate and important factors to take into consideration for the success of service robots relates to the psychological aspects and to the implementation of techniques for human-robot interaction in ‘unstructured’ and ‘unprotected’ environments such as a house.

Humans have a tendency to develop affinities based on resemblance. We can relate better to a chimpanzee than to a snake. Similarly, we find it easier to interact with a humanoid than with a large insect-like robot.

One should mention that, beyond the anthropomorphization of robots, some studies and theories, such as the theory of Social Responses to Communication Technologies, indicate that on a more fundamental level people's interactions with computers are identical to those between human beings. The recent field of interactive robotics, which includes service robotics and personal robotics, will play an important role in developing appropriate means of robot-human interaction.

Fostering Techniques for Cognitive and Motor Development

The degree to which humans control robot behaviors tends to polarize toward the extremes. At one end, the human is in charge of everything: controlling the robot like a marionette, or feeding its brain with everything one assumes the robot should know.

At the other extreme, the robot is merely provided with a set of learning algorithms and left alone in the world to learn by exploration, to build maps, and to make sense of the world by itself. This is a very challenging job, and in many respects we throw the robot to the lions, i.e., the dangers of the real, unpredictable world.

The middle way is to have the continuous, active involvement of a human during the development of the set of capabilities the robot needs in the world. In the animal world, fostering is considered an important component in ensuring the survival of the species. It has been observed that the more ”advanced” a species is, the longer the period of immaturity of its offspring, in other words the longer the parents need to foster their children. This is the period when the young develop the skills that will make them successful in life.

Humans learn by themselves or from others. In the initial learning phase they may learn movements without any control or information from others, while later they may learn under total guidance and control. When the learner does not know what controls to give to the muscles, he can learn by exploration. Children learn their first movements this way; for instance, while learning eye-hand coordination they flail their hands randomly and record the perceptions they get for the applied controls, associating perceptions with actions.

Robots could learn in the same ways that humans do. Robots have learned sensorimotor control by exploration, following the circular reaction mechanism. Exploration is one way to generate examples of associations between perceptions and actions. Guidance can be provided by analogic teaching, which is particularly useful when the precise coordinates to which the robot should go are not known exactly, but the operator can see where he or she wants the robot to move.
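A toy version of such exploration (our sketch; the sensor model is a stand-in for the robot's real perception) makes the circular-reaction idea concrete: random commands are issued, the resulting perceptions are recorded, and the recorded pairs are later reused as a crude inverse model.

```python
import random

random.seed(0)

def perceive(command: float) -> float:
    """Stand-in for the robot's sensors: an unknown mapping from command to effect."""
    return 2.0 * command + 0.5

# Exploration ("motor babbling"): random commands paired with the perceptions they cause.
memory = []
for _ in range(200):
    command = random.uniform(-1.0, 1.0)
    memory.append((perceive(command), command))

def act_to_reach(target_perception: float) -> float:
    """Crude inverse model: reuse the command whose recorded effect was closest to the goal."""
    _, command = min(memory, key=lambda pair: abs(pair[0] - target_perception))
    return command

print(act_to_reach(1.5))   # close to 0.5, since perceive(0.5) = 1.5
```

Analogic teaching would replace the random commands with commands supplied by the operator, while the association mechanism stays the same.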

Robots as Artifacts of Humans and Their Environments

Humans absorb information from the surrounding environment and act upon the environment, transforming it for their benefit. They have introduced robots as intermediaries for some tasks. Robots are defined by their relationship with humans and the environment. Robots receive orders and report on their interaction with the environment, from which they extract information and upon which they act. Robots can be seen as artifacts that:
• Replace humans in some of their roles in this interaction
• Extend human capability to interact with the environment

The robot's shape is chosen to fit the environment and the task to be performed. There are situations in which a non-human size and/or shape is not only desirable but in fact necessary. For instance, a worm-like shape is more appropriate than a human shape for small robots that would burrow to penetrate the ice on Europa. This is an extension of a capability, since humans might never have performed this task directly. On the other hand, where humans have already been performing the tasks, the choice of the robot form is more subtle. One can design tailored solutions that are more efficient than humans by specifically defining the roles in which the humans are to be replaced. For instance, industrial robots on fabrication lines are a more efficient solution than humanoid robots for handling machine-customized tools. It should be noted that those robots function in fully artificial, structured environments, doing mainly repetitive tasks.

When the tasks and roles previously performed by humans are very broad, the environments in which they operate are human-oriented, and interaction with humans is a primary factor, anthropomorphic designs may offer some advantages.

The main goals of this article are:
1. To argue for the need for humanoid robots.
2. To introduce the concepts and bring justification for developmental robotics and robot fostering.
3. To provide an example on fostering humanoid robots to learn motor skills by imitation.
The remainder of this article focuses on robot fostering techniques.

Robot Learning by Imitation

Humans prefer to demonstrate movements rather than describe them linguistically. By demonstrating, they offer a visual model which can be used for learning by imitation. Thus, from the perspective of learning motor skills, humanoid robots have an unmatched advantage over other robots: they have a body shape that allows them to imitate humans.

The most straightforward way to force a robot to imitate human movements is to take complete control over its actions, moving it by tele-manipulation. For instance, NASA JSC (Johnson Space Center) has a full-immersion tele-presence testbed, which allows operators to be virtually immersed in the environment where a two-arm dexterous anthropomorphic robot operates. The operator's headset lets the human see through the eyes of the robot, i.e. the cameras mounted on the robot's head, and special gloves allow the operator to move the robot's arms while also receiving force feedback.

By extension, it is possible to force whole-body imitation if the body is covered with appropriately placed sensors. Capturing and imitating elements of human movement is of great interest not only to robotics engineers but also to makers of computer-assisted movies and games. For such users, Sarcos has developed the SenSuit, which enables real-time tele-operator control of computer-generated and robotic figures.
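As a rough illustration of sensor-driven whole-body imitation, the sketch below copies joint angles read from a motion-capture suit onto matching robot joints. It is a hypothetical mapping, not the actual SenSuit interface; read_suit_joint_angles and set_robot_joint_angle stand in for whatever hardware API is available.

# Hypothetical sketch: forcing body imitation by copying suit joint angles
# onto the corresponding robot joints once per control cycle.
JOINT_MAP = {                      # suit sensor name -> robot joint name (illustrative)
    "right_shoulder": "r_shoulder_pitch",
    "right_elbow": "r_elbow_flex",
}

def imitation_step(read_suit_joint_angles, set_robot_joint_angle):
    angles = read_suit_joint_angles()          # e.g. {"right_shoulder": 0.4, ...} in radians
    for suit_joint, robot_joint in JOINT_MAP.items():
        if suit_joint in angles:
            set_robot_joint_angle(robot_joint, angles[suit_joint])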

There are early references to the use of imitation for anthropomorphic or humanoid robots, but the topic received little attention from researchers, partly because the field of humanoids was largely nonexistent outside Japan. The only notable exception was the COG project, which in its early phase focused more on ideas related to the subsumption architecture and behavioral robotics; this changed a few years later toward an emphasis on interaction with human users. Learning by watching was a precursor of learning by imitation; its focus was on learning tasks rather than on how to move. More recently, imitation learning has received much greater attention, the role of learning by imitation for humanoid robots being well argued in the work of Schaal and Vijayakumar, and also Mataric.

The CoSARC Language for Robot Controller Architectures

The CoSARC language is devoted to the design and implementation of robot controller architectures. It draws from existing software component technologies such as CCM or Fractal and from Architecture Description Languages such as Meta-H or ArchJava. It proposes a set of constructs for describing an architecture in terms of a composition of cooperating software components. A software component is a reusable entity subject to "late composition": the assembly of components is not defined at 'component development time' but at 'architecture description time'.

The main features of components in the CoSARC language are ports, internal properties, connections and interfaces. A component encapsulates internal properties such as operations and data that define the component's implementation. A component's port is a point of connection with other components. A port is typed by an interface, which is a contract containing the declaration of a set of services. If a port is required, the component uses one or more services declared in the interface typing the port. If a port is provided, the component offers the services declared in the interface typing the port. All required ports must always be connected, whereas this is not necessary for provided ones. A component's internal properties implement the services and service calls declared in the interfaces typing its ports. Connections are explicit architecture description entities used to connect ports; a connection links a required port with a provided one. When a connection is established, interface compatibility is checked to ensure the consistency of the port connection.
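The notions of interface, required and provided ports, and checked connections can be illustrated with a small Python sketch. This is not CoSARC syntax; the class names (Interface, Port, Connection) and the example services are assumptions made for illustration only.

class Interface:
    """An interface is a contract: the set of service names a port exposes or requires."""
    def __init__(self, services):
        self.services = set(services)

class Port:
    def __init__(self, interface, provided):
        self.interface = interface
        self.provided = provided      # True = provided port, False = required port
        self.peer = None

class Connection:
    """Connects a required port to a provided port, checking interface compatibility."""
    def __init__(self, required, provided):
        assert not required.provided and provided.provided, "must link a required port to a provided one"
        # Compatibility check: the provider must offer every service the requirer needs.
        assert required.interface.services <= provided.interface.services, "incompatible interfaces"
        required.peer, provided.peer = provided, required

# Late composition: components (and their ports) are defined first,
# and only assembled at 'architecture description time'.
motion_services = Interface({"set_velocity", "stop"})
command_out = Port(motion_services, provided=True)     # e.g. a Command component's provided port
supervisor_in = Port(motion_services, provided=False)  # e.g. a supervisor's required port
Connection(supervisor_in, command_out)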

The component composition mechanism supports the "late composition" paradigm: the first step when using a component-based language is to separate the definition of components from the description of the software architecture. There are four types of components in the CoSARC language: Control Components, Representation Components, Connectors and Configurations. Each of them deals with a specific concern of controller architecture design and implementation.

Robot Controller with Object Petri Net

A control component describes a part of the control activities of a robot controller. It can represent various entities of the controller, since the controller is decomposed into a set of interconnected entities: for instance Command (an entity that executes a control law or a sequencing control), Perception (an entity in charge of sensor signal analysis, estimation, etc.), Event Generator (an entity that monitors event occurrences), Mode Supervisor (an entity that pilots the use of a physical resource in a given mode, such as autonomous, teleoperation or cooperation), Mission Manager (an entity that manages the execution of a given mission), and so on. A control component manages and incorporates a set of representation components, which define the knowledge it uses to determine the contextual state and to make its decisions.

Control components are active entities. They can have one or more activities, and they can send messages to other control components. The internal properties of a control component are operations, attributes and an asynchronous behavior. Representation components are incorporated as attributes and as formal parameters of its operations. Each control component operation represents a context change during its execution. The asynchronous behavior of the control component is described by an OPN (Object Petri Net) that models its control logic. Tokens inside the OPN refer to the representation components used by the control component; through the OPN, concerns such as parallelism, synchronization and concurrent access to attributes are managed. The control component's operations are executed when OPN transitions fire. This OPN-based behavior describes the exchanges (message reception and emission) performed by the control component, as well as the way it synchronizes its internal activities according to these messages. The OPN thus captures the reaction of the control component to the evolution of its context: received messages, occurring events, and so on. OPNs were chosen for both modeling and implementation purposes. The use of Petri nets with objects is justified by the need for a formalism that precisely describes synchronization, concurrent data access and parallelism within control components, as well as the interactions between them.
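A much reduced Python sketch of the idea that component operations execute when transitions fire is given below; the token objects stand for representation components, and the place names and Transition class are illustrative assumptions rather than the CoSARC implementation.

class Transition:
    """A transition consumes tokens from its input places, calls an operation
    on the token objects (the representation components), and produces tokens
    in its output places."""
    def __init__(self, inputs, outputs, operation):
        self.inputs, self.outputs, self.operation = inputs, outputs, operation

    def enabled(self, marking):
        return all(marking[p] for p in self.inputs)

    def fire(self, marking):
        tokens = [marking[p].pop() for p in self.inputs]   # consume one token per input place
        result = self.operation(*tokens)                   # the component operation runs on firing
        for p in self.outputs:
            marking[p].append(result)

# Minimal use: a 'waiting' place holds an order token; firing the transition
# moves it to 'executing' after running the associated operation on it.
marking = {"waiting": [{"order": "move_to_dock"}], "executing": []}
start = Transition(["waiting"], ["executing"], operation=lambda order: order)
if start.enabled(marking):
    start.fire(marking)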

Component-based Software Architecture of Robot Controller

The CoSARC (Component-based Software Architecture of Robot Controller) methodology provides a generic view of robot control architecture design by means of an architecture pattern. The proposed pattern is adaptable to a large set of hybrid architectures. It provides developers with a conceptual framework useful for controller analysis. The analysis phase is an important stage because it allows all the entities involved in the actions and reactions of the controller, and the interactions between them, to be outlined. The analysis follows the concepts and organization described in the pattern, and it takes into account a description of the robot controller that depends on the robot's physical portion, making the analysis more intuitive.

The central abstraction in the architecture is the Resource. A resource is the part of the robot's intelligence responsible for the control of a given set of independently controllable physical elements.

A resource corresponds to a sub-architecture decomposed into a set of hierarchically organized interacting entities (a structural sketch follows the list):
• A set of commands; a command is in charge of the periodic generation of command data sent to the actuators.
• A set of perceptions; a perception is responsible for the periodic transformation of sensor data into more abstract data.
• A set of event generators; an event generator ensures the detection of predefined events and their notification to higher-level entities.
• A set of actions; an action represents an atomic activity that the resource can carry out and is in charge of reconfigurations and commutations of commands.
• A set of modes; each mode describes a behavior of the resource and defines the set of orders the resource is able to perform in that mode.
• A resource supervisor, the entity in charge of the mode commutation strategy, which depends on the current execution context, the context being defined by the state of the corresponding operative portion, the state of the environment, and the orders to be performed.
The robot control architecture contains a set of such resources.
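To summarize the decomposition, here is a minimal structural sketch in Python; the Resource dataclass and its field names mirror the list above but are otherwise assumptions, not the CoSARC notation.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class Resource:
    """A resource groups the entities controlling one set of physical elements."""
    commands: List[Callable] = field(default_factory=list)          # periodic command generation for actuators
    perceptions: List[Callable] = field(default_factory=list)       # periodic abstraction of sensor data
    event_generators: List[Callable] = field(default_factory=list)  # detect events and notify higher levels
    actions: Dict[str, Callable] = field(default_factory=dict)      # atomic reconfigurations/commutations
    modes: Dict[str, List[str]] = field(default_factory=dict)       # mode -> orders accepted in that mode
    supervisor: Optional[Callable] = None                           # chooses mode commutations from context

# The control architecture is then a set of such resources, for example:
arm = Resource(modes={"autonomous": ["pick", "place"], "teleoperation": ["follow_master"]})
controller = {"arm": arm}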