The problem of self-repair is to restore functionality to an SR robot, without outside intervention, in response to module failure. Our strategy is to detect the failure, eject the failed module from the system, and replace it with a spare built into the robot’s structure. Path planning in this problem is based on finding rectilinear paths that minimize the number of turns. Our 3D-MBP algorithm addresses this issue, and in this section we describe how 3D-MBP is applied to the self-repair application. This technique is also a type of reconfiguration algorithm for systems with limited heterogeneity.
The Crystal’s motion planning based on virtual module relocation reduces to finding a rectilinear path through the robot structure. Each segment of the path can be executed in constant time, assuming no failed modules, so an efficient motion plan requires a rectilinear path with a minimum number of bends. Replacing a failed module (filling a “hole” in the structure) can be solved using virtual module relocation. Ejecting a failed module cannot use this planning technique directly, since here a particular module must actually be pushed (or pulled) to a position on the surface of the robot. However, the pushing gaits that move the failed module also exhibit the property that turns are more expensive than straight-line motion. Finding a minimum-bend path is therefore useful in both steps.
An MBP problem is constructed by modeling the source and destination points in module coordinates, with holes and concavities treated as obstacles. This motion planning technique leads to a 2D self-repair solution, and extends easily to 3D given an efficient shortest-path algorithm. A 3D rectilinear path can be decomposed into a sequence of 2D turns (not all of which are in the same plane). Therefore, given a 3D rectilinear path, a motion plan can be constructed by iterating the appropriate module gait over each path segment. Note that pushing gaits require a minimum amount of supporting structure, but we can build this into the path planning problem by growing the obstacles (holes in the structure and boundaries) by the required amount. This ensures that any path returned by the algorithm is feasible.
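As a concrete illustration of the objective, the sketch below (a generic minimum-bend search, not the 3D-MBP algorithm itself) runs a 0-1 breadth-first search over (cell, heading) states on a 3D grid: moving straight costs nothing, turning costs one bend, and the `free` set is assumed to already exclude holes, boundaries, and the grown obstacle margin described above.

```python
# Minimal sketch: minimum-bend rectilinear path on a 3D grid (0-1 BFS).
# `free` is the set of traversable cells after growing the obstacles.
from collections import deque

DIRS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def min_bends(free, start, goal):
    """Return the minimum number of bends on a rectilinear start-goal path."""
    dq = deque((start, d, 0) for d in DIRS)   # the initial heading is free
    settled = {}
    while dq:
        cell, heading, bends = dq.popleft()
        if settled.get((cell, heading), float("inf")) <= bends:
            continue
        settled[(cell, heading)] = bends
        if cell == goal:
            return bends
        for d in DIRS:
            nxt = (cell[0] + d[0], cell[1] + d[1], cell[2] + d[2])
            if nxt in free:
                if d == heading:
                    dq.appendleft((nxt, d, bends))     # straight: cost 0
                else:
                    dq.append((nxt, d, bends + 1))     # turn: cost 1
    return None                                        # goal unreachable
```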
Functions of the SCARA Robot MATLAB and Simulink Files
The functions (m-files, .m, or Simulink files, .mdl) are described below:
Main.m
Opens a user interface to simulate the robotic system with or without Simulink blocks. A graphical representation of the 2-DOF SCARA robot, along with the ability to animate it, is accessible via this interface. The interface also captures mouse events; in the setting of your lab, the mouse allows you to give a position set-point (x, y) to be reached by the end-effector.
ForwardKinematics.m
Computes the position (xT, yT) of the SCARA end-effector from its joint coordinates.
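For illustration, a Python rendering of this computation for a planar 2-DOF SCARA is sketched below; the link lengths L1 and L2 are placeholder values assumed here, not taken from the actual m-file.

```python
# Sketch of the forward kinematics of a planar 2-DOF SCARA arm.
from math import cos, sin

def forward_kinematics(th1, th2, L1=0.5, L2=0.4):   # L1, L2 assumed, in meters
    """End-effector position (xT, yT) from joint angles th1, th2 (radians)."""
    xT = L1 * cos(th1) + L2 * cos(th1 + th2)
    yT = L1 * sin(th1) + L2 * sin(th1 + th2)
    return xT, yT
```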
InverseKinematics.m
Computes the joint coordinates of the SCARA from the position of its end-effector. The function outputs [Q1, Q2, err], with Q1 the solution when θ2 > 0 (low elbow) and Q2 the second solution. To satisfy constraints linked to the robot workspace, one imposes that:
• The position of the end-effector belongs to the workspace
• The values of θ1 and θ2 vary within the robot joint domain (i.e. 0° ≤ θ1 ≤ 180° and −180° < θ2 < 180°).
Thus the InverseKinematics.m function returns err(1) = 0 if the first solution Q1 is feasible and err(1) = 1 otherwise; in the same way, err(2) indicates the feasibility of the second solution.
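The two-solution structure and the err flags described above can be rendered in Python roughly as follows; this is a sketch assuming standard two-link geometry and placeholder link lengths, and the actual InverseKinematics.m may use different conventions.

```python
# Sketch of two-solution inverse kinematics with feasibility flags.
from math import atan2, degrees, sqrt

def inverse_kinematics(x, y, L1=0.5, L2=0.4):        # L1, L2 assumed
    """Return (Q1, Q2, err): Q1 is the low-elbow solution (theta2 > 0);
    err[i] = 0 if solution i is feasible, 1 otherwise."""
    err = [1, 1]
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if abs(c2) > 1:                                  # outside the workspace
        return None, None, err
    s2 = sqrt(1 - c2 * c2)
    sols = []
    for i, sign in enumerate((+1, -1)):              # theta2 > 0 branch first
        th2 = atan2(sign * s2, c2)
        th1 = atan2(y, x) - atan2(sign * L2 * s2, L1 + L2 * c2)
        # joint limits from the text: 0 <= th1 <= 180 and -180 < th2 < 180 deg
        if 0 <= degrees(th1) <= 180 and -180 < degrees(th2) < 180:
            err[i] = 0
        sols.append((th1, th2))
    return sols[0], sols[1], err
```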
InverseKinematicsUsingGeometry.m
Computes, using a simple geometrical construction, the joint coordinates of the SCARA from the position of its end-effector.
SetPointTrajectory.m
Gives the set-point trajectory for the robot end-effector to follow within the workspace.
GUI_Management.m
Manages all events (mouse clicks, button presses, etc.) of the GUI (Graphical User Interface) window.
SetDispaly.m
Updates the graphical representation of the SCARA robot as well as the display of the joint information and the position of the end-effector.
SimulinkRobotControlWithoutDyMo.mdl
Simulink model that, interfaced with the programs described above, controls the movement of the SCARA so that it follows, for example, a trajectory.
Reconfiguration Planning of Self-Reconfiguring Robotic Systems
The task of transforming a modular system from one configuration into another is called the Reconfiguration Planning problem. Solving this problem is fundamental to any SR system. In some approaches explicit start and goal configurations are given, and in others the goal shape is defined by desired properties. Centralized algorithms require global system knowledge and compute reconfiguration plans directly, whereas decentralized algorithms compute solutions in a distributed fashion without the use of a central controller.
Reconfiguration algorithms can be designed for classes of modules, or for specific robots. Often a centralized solution is more obvious and is developed first, followed by a distributed version, although not always. Not all decentralized algorithms are guaranteed to converge to a solution, or are correct for arbitrary goal shapes.
Reconfiguration of CEBOT was planned by a central control cell known as a master. Master cells were later intended to be dynamically chosen, blurring the distinction between centralization and decentralization. Later CEBOT control is hierarchical (behavior-based). A common technique used in reconfiguration algorithms for lattice-based systems is to build a graph representation of the robot configuration and then to use standard graph techniques, such as search, to compute motion plans. Planning for the Molecule robot developed by the Dartmouth group is one example. Another example from the Dartmouth group is planning for unit-compressible systems such as the Crystal. This planner, named MeltGrow, uses the concept of a metamodule, where a group of modules is treated as a single unit with additional motion capabilities. The Crystal robot implements convex transitions using metamodules called Grains. Graph-based algorithms are also used by the MTRAN planner to compute individual module trajectories.
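As a generic illustration of this graph technique (not MeltGrow or the MTRAN planner specifically), a lattice configuration can be represented as a set of occupied cells with edges between adjacent modules, over which standard breadth-first search yields module-to-module paths:

```python
# Sketch: a 2D lattice configuration as a graph, searched with BFS.
from collections import deque

def neighbors(cell):
    x, y = cell
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def bfs_path(config, start, goal):
    """Shortest path through occupied cells; `config` is a set of cells."""
    parent = {start: None}
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # walk back to the start
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        for n in neighbors(cell):
            if n in config and n not in parent:
                parent[n] = cell
                frontier.append(n)
    return None                              # configuration is disconnected
```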
Centralized planners can also store pre-computed data structures such as gait-control tables. Once a gait is selected by the central controller, it is executed by local controllers on the individual modules. This type of algorithm is used by Polypod. The division between central and local controllers is also used by RMMS and I-Cubes.
Chain-based Robots
In chain-based systems, modules aggregate as connected 1D strings of units. This class of robots easily implements rolling or undulating motions, as in snake robots or legged robots. However, control is much more difficult for chain-based systems than for lattice-based systems because of the continuous nature of the actuation: modules can move to any arbitrary position, as opposed to a fixed number of neighbor positions in a lattice. Our previous work has not considered chain-based systems, but it is important to understand their characteristics in the interest of developing more generalized algorithms.
Polypod was the first prominent chain-based system, proposed by Yim in 1993. Polypod is made up of Segments, which are actuated 2-DOF 10-bar linkages, and Nodes, which are rigid cubes housing batteries. Multiple gaits for locomotion, including rolling, legged, and even Moonwalk gaits, were demonstrated with Polypod. Polybot succeeds Polypod, sharing the same bipartite structure. Segments in Polybot abandon the 10-bar linkage in favor of a 1-DOF rotational actuator. The latest generation of Polybot prototypes has on-board processing and CANbus (controller area network) hardware for communication. A system that uses a similar actuation design is CONRO (CONfigurable RObot). The CONRO module has two rotational degrees of freedom, one for pitch and one for yaw, and was designed with particular size and weight considerations. Considerable attention has been paid to the connection mechanism, a peg-in-hole connector with an SMA (shape-memory alloy) latching mechanism that can be disconnected from either face. Computation is on-board each module, so unlike Polypod, CONRO has only one module type. Power can be provided externally or, on later prototypes, via batteries. Examples of manually configured shapes are the snake and the hexapod; the current CONRO system is designed for self-reconfiguration.
The DRAGON is a robot snake with torsion-free (constant-velocity) joints. A sophisticated connector has been developed for the DRAGON, designed for strength and tolerance for docking.
Lattice-based Robot Hardware
In lattice-based systems, modules are constrained to occupy positions in a virtual grid, or lattice. One of the simplest module shapes in a 2D lattice-based system is a square, but more complex polygons such as a hexagon (and a rhombic dodecahedron in 3D) have also been proposed. Because of the discrete, regular nature of their structure, developing algorithms for lattice-based systems is often easier than for other systems. The grid constraint makes implementing certain rolling motions, such as the tank tread, more challenging, since module attachment and detachment is required. We would like to develop algorithms implementable by most, or all, lattice-based systems, so a complete review of their properties is essential.
One of the first lattice-based SR robots planned and constructed in hardware is the Fracta robot. The 2D Fractum modules link to each other using electromagnets. Communication is achieved through infrared devices embedded in the sides of the units, and allows one fractum to communicate with its neighbors. Computation is also onboard; each fractum contains an 8-bit microprocessor. Power, however, is provided either through tethers or from electrical contacts with the base plane.
This system was designed for self-assembly, and can form simple symmetric shapes such as a triangle, as well as arbitrary shapes. Other lattice-based robots include a smaller 2D system, and a 3D system. Another early SR robot is the Metamorphic Robot. The basic unit of this robot is a six-bar linkage forming a hexagon. The kinematics of this shape was investigated when the design was proposed, and hardware prototypes were constructed later. A unique characteristic of this system is that it can directly implement a convex transition; a given module can move around its neighbor with no supporting structure.
The hexagon deforms and translates in a flowing motion. A square shape with this same property was also proposed. This motion primitive is important since it is required by many general reconfiguration algorithms, but many systems can only implement it using a group of basic units working together.
Pioneering Hardware Design Research
Building self-reconfiguring robots in hardware involves designing and constructing the basic modular units that combine to form the robot itself. Such modules differ from the wheels, arms, and grippers of fixed-architecture robots in that they are functional only as a group, as opposed to individually. Because we are interested in developing general algorithms for classes of robots instead of particular systems, familiarity with the entire spectrum of existing systems is valuable. Current systems can be divided into classes based on a number of module properties. Systems composed of a single module type are known as homogeneous systems, and those with multiple module types are called heterogeneous systems.
Modules can connect to each other in various ways, and the combination of connection and actuation mechanisms determines the possible positions a module can occupy in physical space relative to neighbor modules. This gives rise to the major division within SR systems: lattice-based versus chain-based systems. In lattice-based systems, modules move among discrete positions, as if embedded in a lattice. Chain-based systems, however, attach together using hinge-like joints and permit snake-type configurations that connect to form shapes such as legged walkers and tank treads. Another class of modular systems cannot self-reconfigure, but can reconfigure with outside intervention. This class is called manually reconfiguring systems.
The Cell Structured Robot (CEBOT) was the first proposed self-reconfiguring robot, an implementation of a Dynamically Reconfigurable Robotic System (DRRS). The DRRS definition parallels our current conception of self-reconfiguring robots: the system is made up of robotic modules (cells) which can attach to and detach from each other autonomously to optimize their structure for a given task. The idea is directly inspired by biological concepts, and this is reflected in the chosen terminology. It is interesting that this proposed SR robot is heterogeneous: cells have a specialized mechanical function and fall into one of three “levels”. Level one cells are joints (bending, rotation, sliding) or mobile cells (wheels or legs). Linkage cells are part of Level two, and Level three contains end-effectors such as special tools. Communication and computation are assumed for all cells.
CEBOT is the physical instantiation of DRRS. Various versions range from reconfigurable modules to “Mark-V,” which more closely resembles a mobile robot team.
Heterogeneous Self-Reconfiguring Robotics
Self-reconfiguring (SR) robots are robots which can change shape to match the task at hand. These robots comprise many discrete modules, often all identical, with simple functionality such as connecting to neighbors, limited actuation, computation, communication and power. Orchestrating the behavior of the individual modules allows the robot to approximate, and reconfigure between, arbitrary shapes.
This shape-changing ability allows SR robots to respond to unpredictable environments better than fixed-architecture robots. Common examples of reconfigurability in action include transforming between snake shapes for moving through holes and legged locomotion for traversing rough terrain, and using reconfiguration itself for locomotion. SR robots also show promise for a high degree of fault tolerance, since modules are generally interchangeable. If one module fails, the robot can self-repair by replacing the broken unit with a spare stored in its structure. When all modules are the same, the system is known as homogeneous.
This uniform design promotes fault tolerance and versatility. However, a homogeneous system has limitations: all resources that may be required must be built into the basic module. We would like to relax the assumption that all modules are identical and investigate heterogeneous systems, where several classes of modules work together in a single robot. Heterogeneous systems can retain the advantages of their homogeneous counterparts while offering increased capabilities. The benefit would be a robot that can match not only structure to task by changing physical configuration, but also capability to task by using specialized components.
Consider a future application in which exploration tasks are carried out by SR robots. When necessary, the robot reconfigures into a legged walker to move across rough terrain or rubble, or transforms into a snake shape for moving through small holes. The robot can take advantage of smooth terrain as well by deploying a special module type containing wheels for fast, efficient locomotion. A variety of sensors are onboard, contained as modules within the structure of the robot.
Main Research Questions for Heterogeneous Self-Reconfiguring Robots
Designing heterogeneous SR systems involves significant challenges. A fundamental issue is the degree to which modules differ from each other. There are many possible dimensions of heterogeneity, such as size and shape differences, various sensor payloads, or different actuation capabilities.
These differences all impact the main algorithmic problem, which is how to reconfigure when all units are not identical. Present reconfiguration algorithms are based on homogeneity. Heterogeneous reconfiguration planning is similar to the Warehouse problem, which is PSPACE-hard in the general case. Beyond the reconfiguration planning problem itself, many other challenges remain in developing applications that capitalize on module specialization. In response to these challenges, we would like to develop an algorithmic basis for heterogeneous self-reconfiguring robots, and to develop software simulations that demonstrate our solutions, along with hardware experiments where possible. Below are the four main research questions:
1. Framework for heterogeneity. There are many possible differences between SR modules. In order to reason about heterogeneous systems, a categorization scheme is required that models the various dimensions of heterogeneity. The benefit of such a framework is that algorithms can be developed for classes of systems instead of specific robots. We will identify some primary axes of heterogeneity and build this framework.
2. Reconfiguration algorithms. Reconfiguration planning is the main algorithmic problem in SR systems. Because homogeneous reconfiguration algorithms are insensitive to module differences, we need to develop a new class of reconfiguration algorithms that are distributed, scalable, and take into account different types of resources, or tradeoffs between resources.
3. Lower bounds for reconfiguration. We propose to determine lower bounds for the complexity of reconfiguration problems under various assumptions about heterogeneity as developed in our framework defined above.
4. Applications. While the solution to the heterogeneous reconfiguration problem is significant from a theoretical perspective, we are also interested in developing example applications in simulation and in the physical world.
Virtual Instructor Intervention in LEGO Robotics
A virtual instructor (VI) is an autonomous entity whose main objective is to deliver personalized instruction to human beings and to improve human learning performance by applying knowledge of empirically researched instructional (i.e., pedagogical, andragogical) techniques based on how humans learn and behave. A VI may be embodied (e.g., graphical, holographic, robotic), non-embodied (i.e., without an identified form), verbal, non-verbal, ubiquitous, and accessible from mixed reality environments, and may serve as a continuous assistant to continually improve human learning performance across cultural and socio-economic lines. A virtual instructor provides a personalized human learning experience by applying empirically evaluated and tested instructional techniques for improving human learning.
For robotics instruction, a virtual instructor is distributed in a mixed reality environment (e.g., virtual reality or augmented reality) with capabilities to provide personalized instruction on conceptual and psychomotor tasks. For learning psychomotor tasks, the learner interfaces with the virtual instructor while wearing an augmented-reality head-mounted display (HMD).
The HMD is equipped with sensors including, but not limited to, a camera, which is used to recognize and select parts. Depending on the step along a task, this feature helps the learner identify which part to grab to complete, for example, a robot assembly task. For learning conceptual tasks, the learner uses the traditional keyboard and mouse as well as a wearable headset (speaker and microphone) to interface with a virtual reality environment, in order to learn basic concepts from three-dimensional simulations of robotic equipment, torque, assembly processes, etc.
In both types of mixed reality environments, the learner communicates with the virtual instructor using voice commands and receives both computer-synthesized voice and graphic instruction. The VI instructs by following a workflow that guides the learner through procedural steps toward completing a task. Throughout the entire human-VI interaction, the VI subsystem continuously updates the learner profile and measures learning performance.
Robot Applications in the Nuclear Industry
Robots, whether teleoperated or under autonomous or supervisory control, have been used in a variety of maintenance and repair applications. The following subsections describe many of these systems, focusing primarily on applications for which working robot prototypes have been developed.
Teleoperators have been used in the maintenance role in the nuclear industry for more than four decades. Several features of maintenance make it a good application for teleoperators in this arena.
First is the low frequency of the operation, which calls for a general-purpose system capable of doing an array of maintenance tasks.
Second, maintenance and repair in the nuclear industry require high levels of dexterity.
Third, the complexity of these tasks may be unpredictable because of the uncertain impact of a failure. For these reasons, the choice for this role is often between a human and a teleoperator. When the environment is hazardous, a teleoperator is generally the best selection. If humans in protective clothing can perform the same job, the benefits of having teleoperators continuously at the work site need to be weighed against the cost of suiting up and transporting humans to and from the work site. While humans are likely to be able to complete tasks more quickly than teleoperators, using teleoperators can: (1) shorten mean time to repair by reducing the response time to failures, (2) reduce health risks, (3) improve safety, and (4) improve availability by allowing maintenance to take place during operations, instead of halting operations.
Maintenance is an important topic in nuclear industry robotics: the proceedings of the 1995 American Nuclear Society topical meeting on robotics and remote handling included 124 papers, nearly a quarter of which were devoted to some aspect of maintenance. The 1997 meeting included 150 papers, of which more than 40% dealt with some aspect of maintenance. Furthermore, if one considers environmental recovery operations as a form of maintenance, then a much larger proportion of papers at both meetings were maintenance-related.
Implementing Robotic Exercises for Students
Solving robotic exercises is a difficult task for students because the modeling activity involved requires them to comprehend programming and robotic design concepts as well as basic engineering skills. To help students train themselves, we propose a mixed-reality-based instructional system that addresses these learning challenges and teaches them a general problem-solving method. This article presents the benefits of using such a system in the learning process, compared with standard classroom teaching.
With the inception of LEGO Mindstorms, robotics is being embraced by children, adults, and educators. Educators are infusing LEGO Mindstorms into various curricula, such as computer science, information systems, engineering, robotics, and psychology. The LEGO environment provides students with the opportunity to test the results of abstract design concepts through concrete, hands-on robotic manipulation. In this LEGO learning environment, students often discover they need to acquire new skill sets, and the cycle of revising their knowledge base before they can achieve a new function becomes apparent.
Understanding and implementing robotic exercises are difficult tasks for students. The LEGO Mindstorms environment is an excellent vehicle in which students can train themselves. This self-training approach is a “constructive method”: it enables students to become conscious of the underlying mechanics and programming constructs required to produce a seamless execution. Utilizing this learning environment to teach robotics has forced us to define and name robotic, mechanical, and programming concepts in domains that are generally not directly taught in the classroom.
The focus of the next LEGO Robotics article is to discuss the myriad benefits that students can draw on when immersed in a virtual instructor learning environment. The subsequent section will show why robotics is a difficult domain. The next section will present the virtual simulation and instruction portion of the course.
LEGO Mindstorms NXT Interface
In 2007, LEGO introduced a new kit, the Mindstorms NXT, which consists of 510 pieces, reducing the number of parts by 208. The LEGO Mindstorms NXT kit uses a graphical programming interface to teach RC concepts. It is powered by a 32-bit ARM processor. The LEGO Mindstorms NXT brick also has a co-processor, an 8-bit AVR processor, and includes Bluetooth communication. It includes 4 input ports that support both analog and digital interfaces and 3 output ports that are used to drive motors with precise encoders.
It also has an LCD display that can be programmed and a loudspeaker that supports up to 16 kHz. All of these features are included in a toy that is meant to be programmed by a kid! Students from biomedical, aerospace, mechanical, and chemical engineering are not necessarily coding experts, but they are specialists in their fields with a good grasp of the algorithms and designs that can solve a particular problem. Traditional programming techniques posed a steep learning curve for students using RC hardware. Graphical programming alleviates these problems and provides a natural learning curve by abstracting away the unnecessary implementation details. As shown in Fig. 10 below, the LEGO interface is easy and intuitive for students to learn. On the right-hand side, icons with specific functions are listed. The open space on the page is where the student can drag and drop the icons to formulate a program.
Thus, graphical programming languages help kids build parallel, embedded programs that they can use to program the robots without having to worry about hardware interfaces and optimization issues. In addition, the kids can use the same brick and reprogram the hardware to perform a different kind of action.
Graphical programming languages provide the user with a true drag-and-drop interface that reduces the learning curve drastically. For example, a program containing three icons can instruct the robot to turn continuously; the last icon, with the two arrows in a circle, denotes a loop. Hence, students working with high-level icons are able to instruct the robot to do complex tasks with minimal commands or sequencing of the icons, as in the sketch below.
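A text-based rendering of such a three-icon turn-forever program might look like the following; the Motor class and port names are hypothetical stand-ins for illustration, not the real NXT-G blocks or firmware API.

```python
# Sketch of the three-icon program: two motor icons inside a forever loop.
import time

class Motor:                         # hypothetical stand-in for a motor block
    def __init__(self, port):
        self.port = port
    def run(self, power):
        print(f"motor {self.port}: power {power}")

left, right = Motor("B"), Motor("C")
while True:                          # the loop icon: repeat forever
    left.run(75)                     # wheels driven in opposite directions,
    right.run(-75)                   # so the robot turns continuously in place
    time.sleep(0.5)
```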
Benefits of a Virtual Instructor
Many publications have demonstrated that computer-assisted learning and virtual instruction make it easier for students to grasp complex concepts and work at their own pace, as well as benefit from the numerous other advantages these technologies offer. In this section we highlight the additional benefits when a virtual instructor is coupled with the LEGO Mindstorms NXT software interface.
A methodology for solving robotic problems
Most students have no general method for solving robotic problems. Many textbooks lack a systematic approach to breaking down a robotic problem and finding a solution: some stress mechanical or engineering concepts, while others teach programming. The NXT software combines the graphical interface with a systematic approach to solving problems.
First, it defines the challenge and demonstrates a successful solution in the challenge brief. A building and programming guide gives detailed step-by-step instructions to the student, formalizing an intuitive approach. However, it does not explain what is on the screen, nor does it give any description of what to do. The building component fails to explain why certain pieces were selected and not others, nor does it give a rationale for its assembly solution.
However, when this graphical interface is utilized in conjunction with a virtual instructor (VI) these minor issues are eliminated. The VI provides additional instruction to the student as well as guides them through the building and programming modules. In addition, the VI provides helpful hints to the individual student during the learning process, provides personalized instruction, and keeps track of where the student is having difficulty.
Immediate Detection and Correction of Errors
Enabling students to recognize mistakes as they make them, and to receive information immediately so they can correct those mistakes on the spot, is far more beneficial than trying to correct the mistakes later. The current NXT graphical interface cannot detect the procedural tasks students perform or determine whether the procedural steps are correct.
Robotics: A Difficult Domain
Introducing robotics to students for the first time is extremely challenging. This initial stage exposes students to basic engineering concepts, mechanical designs, and introductory programming skills. Because students are pliable at this initial stage, they need to be immersed in a learning environment that addresses all of these skills. The Virtual Instructor Learning Environment is one paradigm that has proven to be beneficial to students who are learning robotics for the first time.
There are three distinct skill sets students need to acquire in order to successfully manipulate the robot. They are: robotic design concepts and construction, basic engineering skills, and programming. Successful robotic construction implies that the student is able to not only recognize the LEGO piece but know its functionality as well. Determining which pieces are best assembled together and designing it as such is challenging. In the Spring and Fall of 2006, students were asked to familiarize themselves with the LEGO Mindstorms pieces. This Mindstorms robot kit #9790 consisted of 718 pieces.
Learning the mechanics of each piece and becoming familiar with 718 pieces is challenging for most students. Most students easily recognized the wheels, but had trouble differentiating between the bushings, connectors, bricks, and beams.
Once the students were able to recognize the parts, the next challenge was getting them to understand the functionality of each piece and decide which parts should be assembled together. Students were given 5 robotic construction tasks. The most challenging for them were the light sensor (Task #5) and constructing the first motorized vehicle. The light sensor has to be mounted correctly on the vehicle in order to work successfully, and many of the students' vehicles were poorly designed, so adding the light sensor was challenging. As for Task #1, students had trouble understanding the correlation between the wheel, axle, gear, and motor. Placing these parts together and seeing how they function together to create movement was most challenging for all the students. The results clearly reveal that Task #4 gave the students the most difficulty in programming as well as construction.
Structures: Bricks, Plates, and Beams of the LEGO Robot
Bricks, plates, and beams are not as glamorous as the RCX brick, motors, and sensors. But they are the fundamental components used to build the frame that supports the RCX, counteracts the motors' forces, and holds the sensors precisely in place. Master LEGO building fundamentals first, and your team will have success. Ignore them, and you will spend more time repairing your robot than you did building it.
Bricks
This is a LEGO building brick. Little has changed since its introduction in 1949. According to LEGO, they have produced 320 billion bricks since that time; that is approximately 52 bricks per person living today. LEGO bricks are made out of ABS plastic and are injection molded to very exacting tolerances (0.002 mm). The top of the brick is covered with cylindrical plastic bumps called studs. The bottom of the brick has cylindrical holes or tubes. When you snap two bricks together, the tubes deform slightly around the studs, locking the two firmly together.
Dimensions
The common practice is to refer to bricks using their dimensions: width, length, and height (though height is often left off when referring to standard-sized bricks). When doing this, the width and length dimensions are given in studs. The piece below is a 2 x 4 brick.
LEGO bricks are based on the metric system. The 2 x 4 brick above is 16 mm wide, 32 mm long, and 9.6 mm high (ignoring the studs on top). That works out to 1 stud = 8 mm. It also means that bricks are 1.2 studs high. This asymmetry can lead to design and building difficulties, as will be discussed later.
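The unit arithmetic above is easy to check; the snippet below simply restates it (1 stud = 8 mm horizontally, a brick 9.6 mm tall):

```python
# Quick check of the stud/millimeter arithmetic above.
STUD_MM = 8.0                       # 1 stud = 8 mm horizontally

def studs(mm):
    """Convert a metric length to stud units."""
    return mm / STUD_MM

print(studs(16), studs(32))         # 2 x 4 brick footprint -> 2.0 4.0
print(studs(9.6))                   # brick height -> 1.2 (the asymmetry noted)
```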
Plates
Plates are essentially short bricks. They are 1/3 the height of standard bricks: 3.2 mm, or 0.4 studs. Plates use the same naming convention as bricks. Some plates have through holes aligned with the backside tubes. These are referred to as Technic plates or, less obscurely, plates with holes. The holes accept axles and connector pins and make the Technic plates much more useful.
LEGO Robotics for Students
The students will use the Internet to conduct four web quests to introduce them to the world of robotics and robotic construction. The first web quest focuses on what a robot is and how robots are used in the world around them. The second web quest introduces students to the world of LEGO Robotics by having them explore creations on LEGO Mindstorms.com. The third web quest leads the students through step-by-step building instructions for their first robot. And in the fourth web quest, students follow step-by-step instructions on how to program their robot's motor to move and stop on time.
Why Robotics?
The LEGO Mindstorms Robotics system gives students the opportunity to have an open-ended, hands-on experience with technology in a way that is accessible, grounded in real-world problem solving, and fun. Most students are familiar with playing with LEGO bricks, and this series of introductory LEGO Robotics lessons, which combines LEGO bricks with the Robolab programming software, helps build student confidence with technology by letting them use old, familiar materials in new and exciting ways. These lessons demystify the main ideas of robotics by engaging students in experiences with the basic concepts behind it, while reinforcing mathematics, science, technology, and literacy standards.
Students will be challenged to:
1. Explore what a robot is
2. Explore LEGO Robots made by Mindstorms enthusiasts
3. Construct and program a 1 Motor Car Bot
4. Measure and graph 1 Motor Car Bot's speed
5. Work in cooperative groups
6. Keep an ongoing reflective record of work accomplished
Students are asked to observe technology around them, in their homes, in the art world, and in industry, and to bring their observations into their hands-on investigations. Students will use the understandings gained through their observations and investigations to develop, design, and test their own unique robotics project. Through these activities, which challenge students to develop an original solution for each problem presented, they will develop the "out of the box" thinking that is vital for scientific innovation.
The Photophobic Robot and Subsumption Programming
The robot will be more interesting if it has some higher purpose. To read the photocells we will use the Stamp II instruction rctime. The darker it is, the longer it takes to read the cell. Because reading a single photocell can take a long time, we will break the readings up into different states; the math that determines whether the left or the right photocell sees brighter light also takes time. Once we have made a decision, we want to turn in the direction of the darker reading.
When we turn, we will set a duration for the turn as well as a direction. Without a set duration, our robot could easily jerk back and forth, which doesn't look very decisive! This will be a more complex state machine than our previous ones because it has five states instead of two. Here is the list of actions that need to be done:
State 0
Read light level on left photocell
Set state = 1
State 1
Read light level on right photocell
Modify this value by 1.5 because this cell reads a little lower than the other one
Set state = 2
State 2
Add margin to left reading
If left is brighter than the right
Set lDir = turn right (tr)
Set lDur = 30
Set lstate = 4 (next state is decrement)
Else
Set lstate = 3
State 3
Add margin to right reading
If right is brighter than the left
Set lDir = turn left (tl)
Set lDur = 30
Set lstate = 4
Else
Set lstate = 0 (do nothing, both sides are the same)
State 4
Decrement lDur, lDur = lDur – 1
If lDur = 0
Set lstate = 0 (start over from beginning)
Else
Stay in state 4 for another decrement; we are still turning
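As a minimal sketch, here is that five-state machine written out in C++ (the robot itself would run PBASIC on the Stamp II; readLeftCell and readRightCell are stubs standing in for the rctime reads, and the margin value is illustrative):
// Minimal C++ sketch of the five-state light FSM. readLeftCell() and
// readRightCell() are stubs standing in for the Stamp II rctime reads
// (bigger value = darker cell); MARGIN is an illustrative dead band.
const int TR = 1, TL = 2, STRAIGHT = 0;   // turn directions for lDir
const int MARGIN = 4;

int readLeftCell()  { return 100; }       // stub for rctime on the left cell
int readRightCell() { return 100; }       // stub for rctime on the right cell

int lstate = 0;                           // current state of this behavior
int left = 0, right = 0;                  // photocell readings
int lDir = STRAIGHT;                      // direction chosen by the behavior
int lDur = 0;                             // passes left to keep turning

void light() {                            // one pass; called from the main loop
    switch (lstate) {
    case 0:                               // State 0: read the left cell
        left = readLeftCell();
        lstate = 1;
        break;
    case 1:                               // State 1: read right cell, scale by 1.5
        right = (readRightCell() * 3) / 2;
        lstate = 2;
        break;
    case 2:                               // State 2: left brighter (smaller reading)?
        if (left + MARGIN < right) { lDir = TR; lDur = 30; lstate = 4; }
        else                       { lstate = 3; }
        break;
    case 3:                               // State 3: right brighter?
        if (right + MARGIN < left) { lDir = TL; lDur = 30; lstate = 4; }
        else                       { lstate = 0; }   // both sides the same
        break;
    case 4:                               // State 4: decrement the duration
        if (--lDur == 0) lstate = 0;      // start over from the beginning
        break;
    }
}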
Making the BoE-Bot Wander
All we need wander to do is choose a direction randomly and a duration to go in that direction. This module will be as easy to design as the act module, and it will demonstrate how easily we can add new behaviors and how these behaviors can remember what they are doing. The first step in designing our form of finite state machine is to write down all of the steps to be taken and all of the transition functions that need to be tested. After we have our list, the state machine can be drawn.
Here is our action list for wander:
State 0
Choose a direction to go with the random function (wDir)
Go to State 1
State 1
Choose duration to go with the random function (wDur)
Add some minimum time to the duration so it’s not too short
Go to state 2
State 2
Decrement wDur
If wDur = 0 then go to state 0
Some arrows do not have transition labels attached to them. When a state machine automatically transitions from one state to another and does not need to make a decision, a transition function is not needed.
The wander module's purpose is to randomly choose a direction for our robot to go and a random time duration for getting there. The random() instruction plays an important role in the wander logic. Random uses a word variable type and needs a "seed" to set up its return value; we will just use the last return value it gave us as the seed for the next one. We will also use a mask to restrict the numbers to a certain range: a small range for the direction changes, a much larger one for the duration. Another interesting feature is the lookup instruction. This instruction uses a word-sized variable; because we break the drive variable into byte0 and byte1 variables, we can get both the left and right motor values from a single lookup! You'll see how this is done in the following code, and you'll see how it makes it very simple to change what the robot will do.
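The following C++ sketch mimics that logic under stated assumptions: the linear congruential step stands in for the Stamp II random instruction, and the packed drive words are illustrative pulse values, with the left motor in byte0 and the right motor in byte1 so one table lookup yields both:
// C++ sketch of the wander FSM. nextRandom() mimics the seed chaining of
// the Stamp II random instruction; driveTable packs the left motor value
// in byte0 and the right motor value in byte1. Pulse values are illustrative.
#include <cstdint>

uint16_t seed = 1;

uint16_t nextRandom() {                  // linear congruential stand-in
    seed = seed * 25173u + 13849u;       // last result reseeds the next call
    return seed;
}

const uint8_t FW = 200, BK = 100;        // illustrative servo pulse values
const uint16_t driveTable[4] = {
    uint16_t(FW | (FW << 8)),            // forward
    uint16_t(BK | (BK << 8)),            // backward
    uint16_t(BK | (FW << 8)),            // turn left
    uint16_t(FW | (BK << 8)),            // turn right
};

uint16_t drive = 0;                      // packed drive word set by wander
uint8_t leftPulse = 150, rightPulse = 150;   // byte0 and byte1 of drive

int wstate = 0;
int wDir = 0;                            // index into driveTable
int wDur = 0;                            // passes remaining for this leg

void wander() {                          // one pass; called from the main loop
    switch (wstate) {
    case 0:                              // small mask: only four directions
        wDir = nextRandom() & 0x03;
        wstate = 1;
        break;
    case 1:                              // larger mask, plus a minimum time
        wDur = (nextRandom() & 0x3F) + 20;
        wstate = 2;
        break;
    case 2:                              // one lookup sets the whole drive word
        drive = driveTable[wDir];
        leftPulse  = drive & 0xFF;           // byte0: left motor
        rightPulse = (drive >> 8) & 0xFF;    // byte1: right motor
        if (--wDur == 0) wstate = 0;
        break;
    }
}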
Making the BoE-Bot Move, and the Finite State Machine
A robot that doesn’t do anything isn’t very interesting. Now that you have modified your servos, attached them to Stamp II I/O ports and have checked their operation, we will make our BoE-Bot wander randomly around its environment. We need to have two modules to implement this simple functionality: One that determines a direction and duration to go in that direction and another to send commands to our wheels.
We will call these modules wander and act respectively. Act simply operates the motors. It isn’t really a behavior, it’s an output a d will be labeled on our diagram as motors. Wander is a behavior so it will appear on our subsumption diagrams as a behavior.
The State Machine and Stamp II Code for Act
We know that to get our servos to turn the wheels we need to output a pulse approximately 1 ms to 2 ms wide, with 1.5 ms being the stop position. We also know that this pulse needs to be repeated every 20 ms to 30 ms for our servos to keep moving. This suggests two operations that we will need to program.
We have two servos, so we could say that we have three operations: output the left servo pulse, output the right servo pulse, and wait. To keep this module simple we will limit it to two states, one to output the pulses and one to delay before the next pulse time. Since the pulse must repeat every 20 ms to 30 ms, and we don't want to spend all of our time in the act module waiting for that time to elapse, we will be exiting and entering this block of code several times before the full set of state transitions occurs. Each instruction takes about 250 us to execute on the Stamp II, which tells us roughly how long each module we call will take to run: count up the instructions and multiply by 250 us. For now we'll guess that it will take 5 passes through state 2 for 20 ms to elapse.
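Here is a minimal C++ sketch of that two-state act module (pulseOut and the pin numbers are hypothetical stand-ins for the Stamp II pulsout instruction and its I/O ports; 5 passes per frame is the guess made above):
// C++ sketch of the two-state act module. pulseOut() is a stub standing
// in for the Stamp II pulsout instruction; the pin numbers and the
// 5-passes-per-frame figure are the guesses discussed above.
void pulseOut(int pin, int widthUs) { (void)pin; (void)widthUs; }

const int LEFT_PIN = 12, RIGHT_PIN = 13;   // illustrative I/O ports
const int PASSES_PER_FRAME = 5;            // guess: 5 passes is about 20 ms

int astate = 1;
int passes = 0;
int leftWidthUs = 1500, rightWidthUs = 1500;   // 1.5 ms = stopped

void act() {                               // one pass; called from the main loop
    switch (astate) {
    case 1:                                // state 1: pulse both servos
        pulseOut(LEFT_PIN, leftWidthUs);
        pulseOut(RIGHT_PIN, rightWidthUs);
        passes = 0;
        astate = 2;
        break;
    case 2:                                // state 2: wait out the 20 ms frame
        if (++passes >= PASSES_PER_FRAME) astate = 1;
        break;
    }
}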
Subsumption Architecture and Robotic Behavior
To subsume a task is to supersede it with a higher-priority task. When we talk about subsumption in robotic programming, we are describing the process by which one behavior subsumes, or overrides, another based on an explicit priority that we have defined. The Brooksian ideal of subsumption architecture does not require that any behavior know anything about any other behavior. A robot programmed in this manner responds reflexively to its environment using simple behaviors.
What this also means is that new behaviors can be added to a robot without changing any other behavior. This makes enhancing a robot's programming very simple. Suppose we start with only the wander behavior, which chooses a new direction and duration each time the previous one completes. But we don't want to get stuck against a wall or table leg, so we add a behavior that watches a bumper and, if we run into something, makes our robot back up and turn away. We want the bumper behavior to have priority over the wander behavior, so it will subsume that behavior and take over control of the wheel motors to get away from the wall. Since the bumper behavior has a higher priority than the wander behavior, we can be assured that our little BoE-Bot can get away from obstacles that block its path.
Further, if wander had set a very long duration for the last direction it wanted to go, then when bumper was done, the wander behavior would take up control again, continuing on its merry way as if nothing had happened. Because behaviors simply react to the robot's environment and each behavior is independent, we can build up a set of behaviors whose interactions with each other are not programmed, and we will begin to see unforeseen combinations of behaviors. This is called emergent behavior, because we didn't plan it; it arises from the robot's interaction with its environment. When emergent behavior is allowed to occur, our robots appear to take on a life of their own and stop acting as if they just do the same thing over and over again.
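As a sketch of the arbitration idea, assuming just the two behaviors discussed here (the claim flags and the packed drive value are illustrative):
// C++ sketch of subsumption arbitration: every behavior posts a claim,
// and the highest-priority active claim drives the motors. bumper never
// inspects wander; priority order alone decides.
struct Claim {
    bool active;   // does this behavior want the motors right now?
    int  drive;    // packed motor command if it does
};

Claim wanderClaim = {false, 0};
Claim bumperClaim = {false, 0};

int arbitrate() {                       // called once per main-loop pass
    if (bumperClaim.active) return bumperClaim.drive;   // highest priority
    if (wanderClaim.active) return wanderClaim.drive;   // default behavior
    return 0;                                           // no claims: stop
}
When bumper finishes backing away it simply drops its claim, and wander's still-unexpired direction and duration drive the motors again, as if nothing had happened.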
The Parallax Board of Education Robot
The Parallax Board of Education robot (BoE-Bot) is a simple robot made from a Parallax Board of Education (BoE) mounted on an aluminum frame to which two Futaba hobby servos are attached. These servos have been modified to provide continuous rotation and have several speeds at which they will turn. The BoE-Bot carries a small solderless prototyping board on which several experiments can be done, letting us realize the fascinating potential the BoE-Bot has to demonstrate robotic behaviors.
Linear Execution vs. Concurrent Execution and the Finite State Machine
In a program, each statement is executed in sequential order; the next statement cannot start until the last one is done. When there is only one thread of logic running, this means the program proceeds in a linear fashion, from start to finish. Concurrent execution is when more than one thread of logic runs at what appears to be the same time. "How do I do that?" you may ask. On the Parallax Basic Stamp processor you can do this by using independent sections of code, called modules or, in our case, subroutines, that will be called many times before their function is completed.
In order to do that, you will need to keep track of where you are in your subroutine so that you know where to start up again when you return. One very good way to do this is called the Finite State Machine, or FSM for short. Because there are only a few different actions you want your subroutine to execute, and it does actually complete its job at some point, it is a finite list. There are several forms of FSM; the type we will use is a hybrid designed specifically for robotic programming.
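As a minimal sketch of the idea, here is the kind of cooperative main loop this leads to (light, wander, and act are the behavior modules developed in the sections above; the empty stubs stand in for one pass of each FSM):
// C++ sketch of the cooperative main loop behind this style of
// concurrency: each pass calls every module once, and because each
// module is an FSM that remembers its state, all of the behaviors
// appear to run at the same time.
void light()  {}   // stub: one pass of the light-seeking FSM
void wander() {}   // stub: one pass of the wander FSM
void act()    {}   // stub: one pass of the motor output FSM

int main() {
    for (;;) {     // loop forever, as the Stamp II main program would
        light();
        wander();
        act();
    }
}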
Soda machine Robot with Finite State Machine
This behavior FSM (Finite State Machine) is a type of state machine that returns no outputs; it merely changes state based on input and the current state. Each activity that the FSM engages in is a state, unique in operation and distinct from all other states by definition. If this is difficult to understand, let's use a sort of real-world application to explain FSMs: the soda machine. In order to keep our soda machine simple, we will create a machine with these abilities:
- takes only quarters
- needs two quarters to get a soda
- will not give you your money back
- does not give change, nor return money that isn’t a quarter
- has Coke, Barq’s Root Beer and Fanta Orange soda
- has an infinite amount of soda and never runs out
As you can see, we have eliminated all of the error conditions and exceptions that a normal soda machine could see in order to simplify this explanation; it's artificial for a reason, since we're not designing soda machines. A label is the result of the transition function and defines the condition required for that change of state. An unlabeled line is a transition that always occurs as soon as the function of that state is completed.
Lines that loop back on a state show iteration: the FSM remains in that state, doing something until a terminal condition is reached, at which time a defined, labeled transition occurs. Here we see that our soda machine FSM will remain in state 0 until two quarters have been given, at which point it will transition to state 1. There it will watch the buttons until a selection is made. When a selection is made, the FSM will transition to state 2, 3, or 4, depending on the selection. From these terminal states, the FSM will immediately transition back to state 0 after completing. This is the general process of definition and representation for the FSMs that we will be using to define our BoE-Bot behaviors.
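A minimal C++ sketch of this soda machine FSM (event names are illustrative; note that the machine only changes state and produces no outputs):
// C++ sketch of the soda machine FSM. step() merely changes state based
// on the input event and the current state. Event names are illustrative.
enum Event { QUARTER, COKE_BTN, ROOTBEER_BTN, FANTA_BTN };

int state = 0;        // state 0: waiting for two quarters
int quarters = 0;

void step(Event e) {
    switch (state) {
    case 0:                                   // loop here until two quarters
        if (e == QUARTER && ++quarters == 2) { quarters = 0; state = 1; }
        break;
    case 1:                                   // loop here until a button press
        if (e == COKE_BTN)     state = 2;     // dispense Coke
        if (e == ROOTBEER_BTN) state = 3;     // dispense Barq's Root Beer
        if (e == FANTA_BTN)    state = 4;     // dispense Fanta Orange
        break;
    case 2: case 3: case 4:                   // terminal states: dispense,
        state = 0;                            // then return to state 0
        break;
    }
}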
The Problem of Modeling a Robotic Arm
There are five identifiable stages in the computer solution of a problem in computational science and engineering. This article uses the modeling of a robotic arm to present those stages. The case study considers the kinematics of a robotic arm, and we use the opportunity to discuss several ideas about programming in the C++ language, including how variables, data types, and assignments are expressed in C++. Each stage is presented in a separate section of this lesson.
The robotic arm consists of two limb sections, the arm and the forearm, connected at a joint, the elbow. The arm is attached at its other end to the body, and the forearm could be connected to a hand. The limbs are connected at their endpoints by joints, the shoulder and the elbow, which are allowed to rotate in a planar motion.
The problem is to determine the X-Y position of the robotic wrist joint given the angle the arm makes with the body, the angle the forearm makes with the arm, and the lengths of the limb segments.
The X-Y position of the shoulder joint is fixed. The top endpoint of the arm link is attached at the shoulder joint. The arm can rotate around the shoulder joint but must always remain attached to the shoulder. The forearm link is connected to the arm link, and the hand link would be connected to the forearm link.
We represent the movement of the robotic arm by the coordinates of the joints in the X-Y plane. The motion of the arm is determined by the angle, alpha, that the arm makes with the body, by the angle, beta, that the forearm makes with the arm, and by the lengths of the links. The lengths of the segments are shown in the link diagram.
The angle alpha is measured in degrees counterclockwise from the positive Y axis. The angle beta is measured in degrees clockwise from the arm to the forearm. When the arm is raised to the horizontal position, the angle alpha is 90 degrees and the angle beta is 180 degrees.
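A short C++ sketch of this computation, assuming the shoulder sits at the origin, that a direction at angle a counterclockwise from the positive Y axis is (-sin a, cos a), and that the forearm direction is the arm direction rotated clockwise by (180 - beta) degrees; the link lengths L1 and L2 are placeholders for the values in the link diagram:
// C++ sketch of the planar forward kinematics under the assumptions
// stated above. L1 and L2 are placeholder link lengths.
#include <cmath>
#include <cstdio>

const double PI = 3.14159265358979323846;
const double L1 = 10.0, L2 = 8.0;        // placeholder link lengths

int main() {
    double alphaDeg = 90.0;              // arm raised to the horizontal
    double betaDeg  = 180.0;             // forearm straight along the arm

    double alpha = alphaDeg * PI / 180.0;
    // Forearm angle from +Y: alpha rotated clockwise by (180 - beta).
    double phi = (alphaDeg + betaDeg - 180.0) * PI / 180.0;

    // A direction at angle a counterclockwise from +Y is (-sin a, cos a).
    double elbowX = -L1 * std::sin(alpha);
    double elbowY =  L1 * std::cos(alpha);
    double wristX = elbowX - L2 * std::sin(phi);
    double wristY = elbowY + L2 * std::cos(phi);

    std::printf("elbow (%.3f, %.3f)  wrist (%.3f, %.3f)\n",
                elbowX, elbowY, wristX, wristY);
    return 0;
}
With these example angles the arm is horizontal and the forearm extends straight along it, so the wrist lands a full L1 + L2 from the shoulder along the horizontal.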
Elbow and Wrist of a Robot Project
In this article we will practice adding new features to a C++ program so it can solve a more complex problem. Consider the robot arm. With the information in the link diagram, plus the values of the angles of the robot arm joints, it is possible to determine the X-Y positions of the robot arm joints.
1. Determine the equations for locating the elbow and wrist joints.
2. Copy the file rbtjnts.cpp into the file robotEW.cpp and save it in your directory cs2503Programs.
3. Change directory into your cs2503Programs directory.
4. Modify the robotEW.cpp program as follows:
• The program shall ask the user for the current angles of the robotic arm's shoulder and elbow joints. The angles shall be input in degrees, as real numbers.
• The program shall use the input angles and the values shown in the link diagram to compute the positions of the robotic arm's elbow and wrist joints.
• The program shall print out the X and Y coordinates of the robotic arm's elbow and wrist joints.
5. Modify the program robotEW.cpp so that it prints out to the screen the input it received. Make sure that when it prints, it identifies the values it is printing.
6. Modify the program so that it reads from a file named robotEWin.txt. Further modify your program so that all printing is done to a file named robotEWresults.txt (see the sketch following this list).
7. Once you have saved the file robotEW.cpp and it works correctly, hand in the files robotEW.cpp, robotEWin.txt, and robotEWresults.txt.
8. Copy the files robotEWin.txt, and robotEWresults.txt into your public html/cs2503Reports directory.
9. Change directory to your public html/cs2503Reports directory.
10. Write a file robotArmEqs.html that describes the equations required for computing the elbow and wrist joints. Provide a description of each parameter, and identify which are known and which are determined.
11. Create the report file robotEWreport.html which explains the project and includes relative links to robotArmEqs.html, robotEWin.txt, and robotEWresults.txt.
12. Modify your cs2503.html file to place a link to the report file.
13. Hand in the report file robotEWreport.html and your modified cs2503.html.
14. Clean up your home directory and any of your public areas to make sure that no files containing your program or portion thereof are accessible by any user.
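As a sketch of steps 5 and 6 only (the link lengths are placeholders, and the kinematics follows the conventions assumed in the previous section's sketch):
// C++ sketch of the file handling in steps 5 and 6: read the two joint
// angles from robotEWin.txt, echo the labeled input, and write all
// results to robotEWresults.txt. Link lengths are placeholders.
#include <cmath>
#include <fstream>
#include <iostream>

int main() {
    std::ifstream in("robotEWin.txt");
    std::ofstream out("robotEWresults.txt");
    double alphaDeg = 0.0, betaDeg = 0.0;
    if (!(in >> alphaDeg >> betaDeg)) {
        std::cerr << "could not read two angles from robotEWin.txt\n";
        return 1;
    }
    // Step 5: identify each value as it is printed.
    out << "shoulder angle alpha (deg): " << alphaDeg << "\n";
    out << "elbow angle beta (deg): " << betaDeg << "\n";

    const double PI = 3.14159265358979323846;
    const double L1 = 10.0, L2 = 8.0;    // placeholder link lengths
    double a = alphaDeg * PI / 180.0;
    double p = (alphaDeg + betaDeg - 180.0) * PI / 180.0;
    double ex = -L1 * std::sin(a), ey = L1 * std::cos(a);
    double wx = ex - L2 * std::sin(p), wy = ey + L2 * std::cos(p);

    out << "elbow (x, y): (" << ex << ", " << ey << ")\n";
    out << "wrist (x, y): (" << wx << ", " << wy << ")\n";
    return 0;
}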
Robotic Myths, Ethics, and Scientific Method
Current developments in robotic technologies raise great expectations about the extension of human capabilities and the improvement of many aspects of human life, including freedom from repetitive jobs and fatigue, more precise and less invasive surgery, effective assistance for elderly and disabled people, and new forms of support in education and entertainment. These expectations fall squarely in the wake of the classical view of technological progress put forward by Bacon and Descartes. That view rehearses, in terms more acceptable to modern sensibilities, the Promethean promise of extending the powers, and compensating for the deficiencies, of the human biological endowment by means of man-made tools and devices.
Robotics adds a very distinctive flavor to these classical myths and modern-age expectations of technology. Robots are very special kinds of machines: the coordinated unfolding of their sensing, motor, and information-processing abilities enables them to achieve goal-oriented and adaptive behaviors. Such behaviors were, by and large, an exclusive prerogative of biological systems until the rise of robotics. For this reason, the human enterprise of robotics is more closely related than previous technological undertakings to mythical tales concerning the origin of animate beings from inanimate matter. In those tales the first human beings were assembled by divine entities from fire, clay, air, and other material constituents; now human beings are able to assemble, from inanimate matter, entities manifesting adaptive and intelligent action. This manifestation of human ingenuity through robotic systems and technologies is, at the same time, a manifestation of human hubris, insofar as robotics allows human beings to usurp and arrogate to themselves a divine prerogative. Thus robotics adds a new dimension to the Promethean association of sin and theft with technological progress, insofar as it enables one to alter the “natural order” in distinctive ways and to “deflect” natural processes towards the achievement of human goals.
Next Generation Robotic Simulation Enables Virtual Commissioning
One very important component of next-generation DM applications is advanced robotic simulation tools that go far beyond earlier off-line programming applications. These simulation tools enable production engineers to virtually simulate, validate, and commission the entire robotic workcell environment. Today a robotic workcell environment can be very complex, with complex tooling and fixtures, and can include multiple robots doing multiple tasks, clamp automation and PLCs, a variety of sensors, conveyor automation, and even vision technology.
Manufacturers are pushing the use of robotics well beyond the single-task workcell to multi-robot gardens with the capability to build complex assemblies requiring high levels of collaboration and synchronization. This necessitates the use of advanced robotic simulation technology in order for production engineers to meet the high level of complexity associated with these assembly and build requirements.
The automotive industry, as well as other discrete industries such as defense, heavy equipment, and aerospace, has identified the capability to virtually commission production systems and assembly lines as one of the first and most immediate benefits derived from DM technology.
Robotic simulation is a technology that has become an essential tool, steadily evolving over time, for the automotive industry in the application of virtual commissioning. It directly addresses the industry's need for faster production ramp-up and time to market, more vehicle models, commonality of production processes across global operations, and modular plant designs, all validated within the virtual environment.
Simulation plays a key role in the field of robotics because it permits experimentation that would otherwise be time-consuming and/or expensive. Simulation allows engineers to try ideas and construct production scenarios in a dynamic synthetic environment while collecting virtual response data to determine the physical responses of the control system. Simulation also makes possible the evolution of robotic control systems, which depends on random permutations of the control system over many generations.
Digital Manufacturing Siemens PLM Tecnomatix Robotics
Manufacturing operations are driven by a set of business imperatives that is forcing companies to implement processes, technologies, and practices that will enable them not only to compete and profit but, in some cases, to survive. Companies must be able to innovate, in terms of having a product that will succeed in the marketplace. They must also have manufacturing operations that are flexible, agile, and able to launch a quality product on time and cost-effectively, hitting the market window with the right product at the right time. Manufacturers such as tier-one suppliers and automotive OEMs are looking for ways to move through their product launch cycles at much faster rates, and to deliver new models to a market that expects innovative products in a timely manner, typically at reduced cost.
One of the emerging and evolving technologies that will enable companies to achieve the business imperative of timely and cost-effective product launches is Digital Manufacturing. In the virtual 3D world created by the PLM technologies of solid-model digital mockup, product authoring, and manufacturing process simulation, the final link to the actual production environment is the connection to machine control systems. It is one thing to simulate the robotic workcell, machine tool, conveyor line, clamping fixture, PLC, motors, drives, and pneumatic and hydraulic systems, but quite another to generate accurate information capable of driving the control systems for all of this production equipment. This merging of Digital Manufacturing with automation is exactly where tools such as Siemens PLM Software's Tecnomatix robotics simulation are taking manufacturing in this final link to the factory floor.
Siemens PLM Software's Tecnomatix robotic simulation solution represents a technology that will be essential to manufacturers across all discrete industries where agile and flexible production systems are mandatory. It addresses the need for virtual commissioning, one of the primary benefits and value propositions that the automotive industry, as well as other discrete industries, has recognized as indispensable to accelerating product launches.
Autonomy and Responsibility Issues for Learning Robots
It was shown that the environments in which robots are supposed to act become more dynamic as one moves progressively from industrial robotics towards the current frontiers of service and field robotics. It turns out that robot designers are not always in a position to identify, describe in sufficient detail, and implement control policies adequate to achieve reactive and sufficiently flexible robotic behaviors in environmental conditions that differ so greatly from standard industrial settings.
This epistemic limitation provides a strong motivation for endowing service robots in general, and personal robots in particular, with the capability of learning from their experience, insofar as learning is a powerful source of adaptation in dynamic environments. Thus, instead of furnishing robots with detailed information about regularities in their operating environments, robot designers endow robots with computational rules enabling them to discover those regularities.
A computational agent that learns from its experience can be viewed as an algorithm that looks for regularities in a representative (input) dataset and subsequently uses those regularities to improve its performance at some task. Learning of this kind does not take place in a vacuum: any attempt to identify regularities possibly present in a dataset must rely on some pre-existing “structure” on the part of the computational agent. Such structure may involve built-in preferences, or “biases”, concerning the class of functions from which the target regularity must be selected.
Learning agents usually rely on additional a priori expectations about the unknown target regularity in order to narrow down their search space. A straightforward example of a background conjectural assumption that learning agents use to downsize search spaces is expressed, in procedural form, by the rule of choosing the “simpler” hypotheses compatible with the observed data. Various a priori assumptions about the regularities to be discovered in the environment therefore play a crucial role in machine learning strategies.
Robot Soldiers Task Requirements and Ethics
Many military robotic systems have been deployed in Afghanistan and Iraq. These robots include the remote-controlled PackBot system, which enables one to detect and detonate IEDs (improvised explosive devices), and the Talon SWORDS, another remote-controlled robot deployed in the second Iraq war, which can be equipped with machine guns and grenade launchers.
Some researchers have suggested that intelligent robots which are autonomous in their firing decisions (that is, robots that are not controlled by external agents as far as firing decisions are concerned) will eventually behave “more ethically” than human soldiers on the battlefield.
The truth-conditions of this allegedly apodictic conclusion are rather obscure, pending an answer to the following questions:
1. What is a robot soldier? Does a remote-controlled robotic system, such as the Talon SWORDS, qualify as a robot soldier?
2. What does it mean for a robot soldier to do the job better than a human soldier? Which behavioral features can sensibly be used as a basis to compare and rank the performances of robot soldiers and human soldiers?
In connection with question 1, one should note that a Talon SWORDS robot is a remote-controlled system, and therefore all firing decisions are taken by its human controller. If one requires that soldiers be capable of taking autonomous firing decisions, then no remote-controlled robot qualifies as a robot soldier.
In connection with question 2, it is taken for granted here that a “good” soldier, whatever it is, must behave on the battlefield in accordance with international humanitarian law, including internationally recognized treaties such as the Geneva and Hague Conventions, in addition to the “code of ethics”, if any, and the rules of engagement adopted by its own army. This broad requirement suggests that ethical reflection is needed to understand what it takes to be a good robot soldier, and which behavioral tests must be passed to qualify as one.
Robot Ethics and Models of Robot Environment Interactions
A robotic system's observable behavior results from the coordinated working of its bodily parts and their interactions with the environment. The contribution of the environment in shaping robotic behaviors can hardly be overestimated. The trajectory of a personal robot negotiating a living-room floor can be significantly affected by changes in frictional coefficients alone, such as those introduced by a Persian carpet or a glass of water spilled on the floor; likewise, the twisted paths traced on a beach by an insect-like robot may owe their complexity more to the beach surface than to the robot's control system.
Robotic engineers are well aware of the practical and theoretical difficulties surrounding the epistemic problem of identifying environmental disturbing factors that jeopardize the normal working of robotic systems. A heuristic strategy often applied to address this epistemic problem is to make the concrete environments in which robotic systems will be immersed as similar as possible to ideal situations in which suitable behavioral regularities are believed to hold. This heuristic strategy relies on the fact that any consistent (non-contradictory) set of statements T admits a model, in the usual sense of the word 'model' adopted in mathematical logic.
More specifically, if T is a consistent set of sentences, then there is an interpretation of T, relative to some domain of objects and relations, which makes all the sentences of T true. The heuristic strategy can be rationally reconstructed as a process involving two main steps:
1. Idealized domains of objects and relations are introduced, such that the desired regularities concerning robotic behaviors are true when interpreted in those domains;
2. The concrete environments in which robots will be immersed are modified in order to make them as similar as possible to the idealized domains introduced in the previous step.
This strategy is applied extensively in industrial automation in order to enforce ideal-world conditions that exclude “disturbing factors” while preserving task-compliance conditions. Since human workers are a major source of dynamic changes and disturbing factors impinging on industrial robot behaviors, a robot “segregation” policy is generally pursued to achieve quasi-static and more easily predictable robot environments: factory workers and robots are often confined to different workspaces, and their mutual interactions are severely limited or excluded altogether.