The concept of autonomous ground robots brought to the military effort a decade of experience working on planetary rovers for NASA. Much of DARPA's previous work had involved large vehicles, some the size of tanks, and had successfully addressed technical specifications and requirements to the point where they could be spun off to the development community.
The planetary rover technology was promising and NASA was pursuing it as far as it could, but DARPA was the place to develop the smaller vehicles for military applications. The agency spent considerable time formulating a new program that would ultimately be called TMR (Tactical Mobile Robotics). TMR's primary goals were a robot small enough to fit into a backpack, mobile in urban terrain, and able to travel 100 meters per operator intervention. In other words, it was to be an order of magnitude smaller and an order of magnitude smarter. The program delivered well on mobility and got things moving in the right direction on autonomy.
The military was interested in small robots for air, ground, and underwater use, in portable formats that could fit into areas a human could not, areas that would be left unguarded by the enemy in an urban environment. The program's operational objective was to revolutionize urban warfare by penetrating denied areas with robots.
There are six primary imperatives crucial to making a tactical robot functional:
1. Response to lost communication.
2. Tumble recovery, the robot must be invertible.
3. Anti-handling, the robot must have a way of keeping the enemy from picking it up while not endangering innocent civilians, especially children.
4. Self-location, the robot should fuse GPS, odometry, and visual inputs to determine its location.
5. Complex obstacle negotiation.
6. Self-cleaning, the robot must have the ability to clear off dust, mud, etc.
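The self-location imperative above hinges on fusing several position sources. As a rough illustration of the idea (not any fielded TMR algorithm), the sketch below blends a drifting odometry estimate with a GPS fix using a fixed complementary-filter weight; the weight and the 2D state are invented for the example.

```python
# Hedged sketch of the self-location imperative: blend odometry with GPS
# using a fixed complementary-filter weight. Both the weight and the 2D
# state are illustrative assumptions, not part of any fielded system.

def fuse_position(odom_xy, gps_xy, gps_weight=0.2):
    """Blend an odometry dead-reckoning estimate with a GPS fix.

    odom_xy, gps_xy: (x, y) positions in metres.
    gps_weight: trust placed in GPS (0 = odometry only, 1 = GPS only).
    """
    ox, oy = odom_xy
    gx, gy = gps_xy
    return ((1 - gps_weight) * ox + gps_weight * gx,
            (1 - gps_weight) * oy + gps_weight * gy)

# Odometry has drifted 1 m east of the GPS fix; the fused estimate is
# pulled 20% of the way back toward the GPS position.
print(fuse_position((101.0, 50.0), (100.0, 50.0)))  # ~ (100.8, 50.0)
```

A real system would also use the visual inputs mentioned above and weight each source by its estimated uncertainty (as a Kalman filter does), rather than using a fixed blend.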
DARPA Killer Robots
While Asimo’s ability to walk up and down stairs was a breakthrough in robot mobility, DARPA has been pushing the envelope to develop robots that are able to autonomously and safely navigate a battlefield in combat.
Robots have been part of DARPA culture almost from day one. The agency worked to improve UAVs (Unmanned Aerial Vehicles), then known as remotely piloted vehicles, used for low-altitude tactical reconnaissance over enemy territory.
DARPA work in AI (Artificial Intelligence) technologies in the 1970s spurred an interest in robotics applications, especially in academia, but the policy at DARPA itself at the time was not to emphasize robotics. Even so, the agency continued to support robotics research at Stanford and MIT through its IPTO (Information Processing Techniques Office).
DARPA took a more direct role in the early 1980s as it began work on a family of autonomous ground, air, and sea vehicles known as the Killer Robots. Although it never achieved program status, Killer Robots laid the groundwork for DARPA's future work, as did the Strategic Computing program, a major program created in the same time frame to fund DARPA work in the then rapidly evolving field of computers.
The Strategic Computing Initiative of the 1980s produced its share of disruptive technology, developing reduced-instruction-set processors, RAID disks, robotics, specialized graphics engines, and AI tools that are now mainstream. DARPA's investments in these technologies in their early, formative years have paid rich dividends.
Various DARPA efforts (including Killer Robots) became part of the new TTO (Tactical Technology Office) Smart Weapons Program (SWP) to develop a modular family of autonomous smart weapons. Advances in computer technology and software, such as automatic target recognition, were believed to have finally made a true autonomous loitering weapon possible. The size, power demands, weight, and capability of computers had perhaps been the greatest limitation on making such weapons a reality.
The robot soldier is coming
The Pentagon predicts that robots will be a major fighting force in the American military in less than a decade, hunting and killing enemies in war. Robots are a crucial part of the Army's effort to rebuild itself as a 21st-century fighting force, and a $127 billion project called Future Combat Systems is the biggest military contract in American history.
Robot soldiers will increasingly think, see, and react like humans. At the start, they will be remote-controlled, looking and acting like lethal toy trucks. They may take many shapes as the technology develops, and as their intelligence grows, so will their autonomy.
The robot soldier has been planned for 30 years, and the engineers involved in the project say it may take at least 30 more to realize in full. The military will have to answer tough questions if it intends to trust robots with the responsibility of distinguishing friend from foe.
Robots in combat, as envisioned by their builders, may look and move like hummingbirds, tractors or tanks, cockroaches or crickets, even like humans. With the development of nanotechnology, they may become swarms of smart dust. Military robots are intended to gather intelligence, haul munitions, search buildings, or blow them up.
Several hundred robots are scouring caves in Afghanistan, digging up roadside bombs in Iraq, and serving as armed sentries at weapons depots. An armed version of the bomb-disposal robot is in Baghdad, capable of firing 1,000 rounds a minute. Though controlled by a soldier with a laptop, it will be the first thinking machine of its kind to take up a front-line infantry position. Within a decade, a third of the military's ground vehicles and deep-strike aircraft may be robotic; the United States Congress has mandated as much, and many billions of dollars will be spent on military robots.
Developing Military Robotic Vehicles
Autonomous machines became possible for the military in the mid-1980s, when computer processors grew steadily faster. The development of improved sensor technology in the 1990s allowed machines to pick up more information about their environment. Autonomous systems can now keep track of their whereabouts using global-positioning satellite links, and talk to commanders and comrades through wireless links that shut off automatically if the signal is in danger of being intercepted.
The first unmanned military vehicles, built in the early 1980s by the US Defense Department, were huge vans the size of UPS delivery trucks, filled with hundreds of pounds of clunky computers that could barely navigate at 5 miles an hour over relatively flat terrain. By comparison, the Stryker can navigate desert and forest environments, or drive on roads at top speeds of 60 miles an hour.
Now that the basic functioning is in place, the developers are trying to make the machines smarter. They have tested a four-wheeled robot called MDARS (short for Mobile Detection Assessment and Response System), a robotic watchdog that patrols the Westminster lab's snow-covered back yard looking for intruders. It drives several feet, eyes a parking sign, and halts, apparently puzzled, until a human attendant reprograms it to move on.
Compared with a human, MDARS is not really that smart; developing a robot is like raising a child. Even the Stryker's most rudimentary movement requires complex calculations that must be taught to its brain, using hundreds of thousands of mathematical algorithms and lines of programming code. When it hits a fork in the road, it selects the gravel route instead of the dirt road. When it finds itself trapped in a cul-de-sac, it backs up to re-evaluate alternative paths. In the future, the Stryker will learn more tactical behaviors mimicking a human's, like running and hiding behind hills or trees in the presence of enemies.
Types of Robot based on the Function
Just as there are multiple toolboxes, there are multiple robotic formats suited to different application requirements:
SCARA robots are best used in dispensing, pick-and-place, and gang-picking applications for assembly and packaging where loads are moderate and high-precision accuracy is not the top priority – for instance, placing covers or buttons in the right location during cell phone assembly. SCARA robots are appropriate for plane-to-plane moves and have a small footprint, making them an ideal choice for manufacturers with space constraints.
Delta robots also are useful in pick-and-place applications for assembly and packaging when the load is light – typically less than one kilogram – like candy or lids for jars, and they are capable of operating at very high speeds. Delta robots are ideal for plane-to-plane moves. However, they are only able to move up and down relatively short distances in the Z axis – typically less than 100 millimeters.
Articulated arm robots are ideal for applications with a large work envelope and heavier payloads. In addition to plane-to-plane moves, they are also well suited to painting or welding applications where movement over and under objects is necessary.
Cartesian robots are frequently used in everything from life sciences applications to cartoning, dispensing, palletizing, and large assembly projects. A Cartesian robot is a good choice for any system that has clearly defined x, y, and z axes.
Technology and Market Trends Affecting Robots
For years, robots have represented the cutting edge in manufacturing, and until recently they were viable only for manufacturers with complex needs and big budgets. But today, robotic solutions are becoming more versatile and easier to attain. They are a practical solution for a broadening range of manufacturing applications – from vision-directed, high-speed pick-and-place packaging to high-precision automotive assembly and semiconductor handling.
Technology and market trends are helping bring robots to the forefront of machine design, including:
• Flexibility demands – constantly shifting consumer preferences put increased pressure on manufacturers to offer a greater variety of product styles, shapes and sizes. A programmable robot that can perform different tasks with quick changeover helps end users produce this variety for less money and using less space.
• Worker safety imperative – manufacturers around the world are increasingly focused on corporate responsibility, including making sure manufacturing operations protect the company's most valuable asset – its workers.
• Declining hardware costs – over the past 10 years, more robot sourcing options have entered a competitive market, helping lower the cost of the hardware used for robots.
• Quality improvements – the increase in robot sourcing options also has helped drive improvements in the quality of the hardware and controls available.
History of the Robot
Robotics is based on two enabling technologies: telemanipulators and the numerical control of machines.
Telemanipulators are remotely controlled machines that usually consist of an arm and a gripper. The movements of the arm and gripper follow the instructions a human gives through a control device. The first telemanipulators were used to handle radioactive material.
Numerical control allows machines to be controlled very precisely relative to a given coordinate system. It was first used in 1952 at MIT and led to the first programming language for machines, called APT (Automatically Programmed Tools).
The combination of these two techniques led to the first programmable telemanipulator. The first industrial robot using these principles was installed in 1961. These are the robots familiar from industrial facilities such as car construction plants.
The development of mobile robots was driven by the desire to automate transportation in production processes and to build autonomous transport systems. The former led to the driverless transport systems used on factory floors in the late seventies to move objects between different points in the production process.
Humanoid robots have been developed since 1975, when Wabot-I was presented in Japan. The current Wabot-III already has some minor cognitive capabilities. Another humanoid robot is Cog, developed at the MIT AI Lab since 1994. Honda's humanoid robot became well known to the public when it was presented in 1999.
Plane to Plane Homography of the Robot
The robot exists in a planar environment. This is an approximation, as the robot has a finite height above the table on which it moves. Provided the distance of the camera from the robot is sufficiently large compared with this height, the error from this approximation is acceptable.
The camera views the scene from an arbitrary position. The frame grabbed from it is a second 2D environment. To infer the robot's position from the frame buffer, it is necessary to know the transformation between the two planes. This transformation can be decomposed into three matrix operations on the homogeneous coordinates of the robot's position.
Homogeneous coordinates are a method of representing points in n-space by (n+1)-dimensional vectors with arbitrary scale. They have two inherent advantages in this application:
1. To return from the homogeneous coordinate to the n-space point, the first n elements of the vector are divided by the (n+1)th. This allows certain non-linear transformations, such as projective ones, to be represented by a matrix multiplication.
2. An addition or subtraction operation in n-space can also be condensed into a single matrix multiplication.
The first matrix multiplication is the rigid-body transformation from world-centred coordinates on the table to camera-centred coordinates. It takes the 3-element homogeneous coordinate representing the 2D point on the ground plane to a 4-element coordinate reflecting its 3D position in camera-centred coordinates.
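The de-homogenisation step and the matrix form of the transformation can be made concrete with a small sketch. The helper and the example matrix below are illustrative assumptions, not the calibrated camera transform discussed in this section:

```python
# A small illustration of the homogeneous-coordinate machinery: a 2D
# table point is written as a 3-vector, multiplied by a 3x3
# plane-to-plane homography H, and de-homogenised by dividing through
# by the last element. The matrix below is an arbitrary example, not a
# calibrated camera model.

def apply_homography(H, point):
    """Map a 2D point through a 3x3 homography given as nested lists."""
    x, y = point
    hom = [x, y, 1.0]                       # lift to homogeneous coords
    u, v, w = (sum(H[r][c] * hom[c] for c in range(3)) for r in range(3))
    return (u / w, v / w)                   # divide by the (n+1)th element

# A pure translation by (2, 3): because the bottom row is [0, 0, 1],
# the projective division leaves the result unchanged, showing how
# addition in n-space becomes a single matrix multiplication.
T = [[1, 0, 2],
     [0, 1, 3],
     [0, 0, 1]]
print(apply_homography(T, (1.0, 1.0)))  # (3.0, 4.0)
```

A full pipeline would chain this with the rigid-body and projection matrices described in the text; the translation-only matrix here just shows the mechanics of lifting, multiplying, and dividing through.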
The Motion of Mobile Robot in Unknown Environment
A robotic vehicle is an intelligent mobile machine capable of autonomous operation in structured and unstructured environments; it must be capable of sensing (perceiving its environment), thinking (planning and reasoning), and acting (moving and manipulating). Recent developments in autonomy requirements, intelligent components, multi-robot systems, and massively parallel computers have made the IAS (Intelligent Autonomous System) widely used, notably in planetary exploration, the mining industry, and on highways. But current mobile robots do relatively little that is recognizable as intelligent thinking, because:
• Perception does not meet the necessary standards.
• Much of the intelligence is tied up in task-specific behavior and has more to do with particular devices and missions than with mobile robots in general.
• Much of the challenge of mobile robots requires intelligence at a subconscious level.
The motion of mobile robots in an unknown environment containing stationary unknown obstacles requires algorithms that solve the path and motion planning problem of these robots so that collisions are avoided. In order to execute the desired motion, the mobile robot must navigate intelligently and avoid obstacles so that the target is reached. The problem becomes more difficult when the parameters describing the model and/or the workspace of the robot are not exactly known.
Navigation, the Key Issue in Autonomous Robots
Navigation is the ability to move about while being self-sufficient. An IAS (Intelligent Autonomous System) must be able to avoid obstacles and make its way toward its target. In fact, recognition, learning, decision making, and action constitute the principal problems of navigation. One specific characteristic of mobile robots is the complexity of their environment. Therefore, one of the critical problems for mobile robots is path planning, which remains an open problem studied extensively. Accordingly, navigation is a key issue in the design of an autonomous robot, and navigation planning is one of its most vital aspects.
Several models of the environment have been applied when the principle of navigation is used to do path planning. For example, a grid model has been adopted by many researchers: the robot's environment is divided into many squares, each marked according to whether or not it contains an object.
Moreover, the most important task of a mobile robot's navigation system is to move the robot to a named place in a known, unknown, or partially known environment. In most practical situations, the robot cannot take the most direct path from the start to the goal point.
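The grid model described above lends itself to a simple worked example. The following sketch (with an invented occupancy map) uses breadth-first search to find a shortest collision-free path on a 4-connected grid; real navigation planners add costs, uncertainty, and replanning on top of this idea.

```python
# A toy version of the grid model: the environment is a grid of cells
# marked free (0) or occupied (1), and breadth-first search finds a
# shortest collision-free path from start to goal. The map below is an
# invented example, not taken from any real robot.
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid, or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}       # also serves as the visited set
    while queue:
        cell = queue.popleft()
        if cell == goal:            # reconstruct path by backtracking
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                     # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],   # a wall forces a detour around the right side
        [0, 0, 0]]
print(bfs_path(grid, (0, 0), (2, 0)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

This illustrates the point made above: the direct path straight down is blocked, so the planner takes the longer collision-free route.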
Intelligent Autonomous Robot in Industrial Fields
The theory and practice of IAS are currently among the most intensively studied and promising areas in computer science and engineering, and they will certainly play a primary role in the future. These theories and applications provide a common thread linking all fields in which intelligent control plays a dominant role. Cognition, perception, action, and learning are essential components of such systems, and their use is extending toward challenging applications (service robots, micro robots, bio robots, guard robots, warehousing robots). Many traditional working machines already used in agriculture, construction, or mining are going through changes to become remotely operated or even autonomous. Autonomous driving under certain conditions is thus a realistic target in the near future.
Industrial robots used for the manipulation of goods typically consist of one or two arms and a controller. The term controller is used in at least two different ways; in this context we mean the computer system used to control the robot, often called a robot workstation controller. The controller may be programmed to operate the robot in a number of ways, thus distinguishing it from hard automation.
Most often, industrial robots are stationary, and work is transported to them by conveyors or robot carts, often called autonomous guided vehicles (AGVs). AGVs are increasingly used in industry for material transport. Most frequently, these vehicles use a sensor to follow a wire embedded in the factory floor.
Autonomy Requirements of Autonomous Robot
Several autonomy requirements must be satisfied for the robot to perform its tasks well. These are some of the requirements:
Thermal
To carry out tasks in various environments, as in space applications, the thermal design must be taken into account, especially where the temperature can vary significantly. Temperature-sensitive electronic equipment on board must be placed in thermally insulated compartments held near ambient temperature.
Energy
An IAV can operate autonomously for a specified period; for underwater and space applications, energy is one very limited resource. An IAS therefore usually carries a rechargeable energy system, with appropriately sized batteries on board.
Communication Management
The components on board the vehicle and on the surface station must be interconnected by a two-way communication link. In both underwater and space applications, a data management system is usually necessary to transfer data from the IAS to terrestrial storage and processing stations over this two-way link. Indeed, the data management system must be split between the vehicle and the surface station. The vehicle must therefore be autonomous and intelligent enough to perform and achieve its tasks. Due to limited resources and weight constraints, the major data processing and storage capacities must be on the surface station. Although individual vehicles may have wildly different external appearances, mechanisms of locomotion, and missions or goals, many of the underlying computational issues involved are related to sensing and sensor modeling, spatial data representation, and reasoning.
Materials to Build Modular Robots
Modular robot hardware such as PolyBot, CONRO, M-TRAN, the Molecule, the Crystal, the I-Cube, and Molecubes generally has nodes greater than 10 cm in their smallest dimension that cost more than $50 per node to produce. This is because such systems are typically made from off-the-shelf, macro-scale electronic and mechanical components, and a large number of standardized components is required to produce a functional system.
To make modular robots a useful raw material for building products, the per-node cost must be substantially reduced. Over the past several years, modular robot design has moved toward systems with few or no moving parts in the nodes. Some of these systems use an external fluid bath and external agitation to provide the force and energy to make and break connections, controlling node-to-node adhesion to steer the structure toward the desired result. Kirby described 24 mm diameter cylindrical nodes capable of translating in a plane by rotating around one another, driven by a radially positioned array of electromagnets.
Another strategy is employed by the Miche self-disassembling modular robot, which starts with all nodes connected and then releases magnetic latches to disconnect the nodes that are not part of the target structure. These systems have lower per-node cost and are more amenable to microfabrication than the previous generation of designs.
Scorpion Robot Vision
Robot vision is one of Scorpion Vision Software's focus areas. Scorpion gives a robot the ability to pick products with high precision in 2D or 3D. Flexible automation means robots, automation, and vision working together; this reduces cost and increases the flexibility to produce several product variants on one production line at the same time, 24 hours a day, profitably. The vision system's ability to locate and identify objects is a critical element of such systems.
Scorpion Vision Software has been used in robot vision and inspection systems for many years. Scorpion has a complete toolbox of robust and reliable 2D and 3D image processing tools for robot vision, gauging, and assembly verification. Its high-accuracy, sub-pixel object location with 3DMaMa and PolygonMatch technology makes it a perfect companion to world-class imaging components.
Scorpion Vision Software is flexible and interfaces easily to standard robots. With Scorpion Vision Software it is easy to implement reliable communication with robots from any vendor; it has been used with robots from ABB, Motoman, Kuka, Fanuc, Kawasaki, Sony, and Rexroth Bosch over serial and TCP/IP ports. Typical specifications are 2D and 3D robot vision and robot guidance, where parts receive 100% inspection with the aim of zero defects.
Behavior Language of TDL
A robot control architecture can be developed as three interacting layers. The behavior layer interacts with the physical world. The planning layer defines how to achieve goals. The executive layer connects these two: it issues to the behavior layer the commands that result from the plans, and passes sensory data from the behavior layer up to the planning layer so that planning can react to the real world. The executive layer is thus responsible for expanding abstract goals into low-level commands, executing them, and handling exceptions.
The main motivation behind developing the Task Description Language (TDL) is that using conventional programming languages for such task-level control functions results in highly non-linear code that is difficult to understand, debug, and maintain. TDL extends C++ with syntactic support for task-level control, and a compiler is available to translate TDL code into C++ code that uses the Task Control Management (TCM) libraries.
The basic data type of TDL is the task tree. The leaves of a task tree are generally commands that perform some physical action in the world. Other node types are goals (representing higher-level tasks), monitors, and exceptions. An action associated with such a node can perform computations and change the structure of the task tree. The nodes of a task tree can be executed sequentially or in parallel, and it is also possible to expand a sub-tree but wait for some synchronization constraints to hold before beginning to execute it.
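As an illustration, a task tree with goal nodes and command leaves might look like the following plain-Python sketch. This is not TDL or TCM code; all names are invented, and parallel execution and monitors are omitted.

```python
# Illustrative task-tree sketch in plain Python, NOT real TDL/TCM
# code; class and node names are invented for this example.

class TaskNode:
    def __init__(self, name, children=None, command=None):
        self.name = name
        self.children = children or []   # goal node: sub-tasks
        self.command = command           # leaf node: physical command

    def execute(self, log):
        if self.command is not None:     # leaf: issue the command
            log.append(self.command)
        else:                            # goal: expand sequentially
            for child in self.children:
                child.execute(log)
        return log

# Goal "deliver" expands into two sequential command leaves.
tree = TaskNode("deliver", children=[
    TaskNode("navigate", command="drive-to-target"),
    TaskNode("grasp", command="close-gripper"),
])
print(tree.execute([]))   # ['drive-to-target', 'close-gripper']
```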
Robotic Perception (Localization)
A robot receives raw data from its sensors and has to map those measurements into an internal representation; this process is called robotic perception. It is difficult in general because sensors are noisy and the environment is partially observable, unpredictable, and often dynamic.
A good representation should meet three criteria: it should
• Contain enough information for the robot to make the right decisions.
• Be structured in a way that it can be updated efficiently.
• Be natural, meaning that internal variables correspond to natural state variables in the physical world.
Filtering and updating the belief state are not covered here, as they were covered in earlier presentations; relevant topics include Kalman filters and dynamic Bayes nets.
A very generic perception task is localization: the problem of determining where things are. Localization is one of the most pervasive perception problems in robotics. Knowing the location of the objects in the environment that the robot has to deal with is the basis for any successful interaction with the physical world. There are three increasingly difficult flavors of localization problems:
• Tracking – the initial state of the object to be localized is known, so you can simply track it.
• Global localization – the initial location of the object is unknown, so you first have to find it.
• Kidnapping – the object is moved to an unknown location during operation without notice; this is the most difficult task.
Robot Behavior Language of Charon
Charon is a language for the modular specification of interacting hybrid systems and can also be used for defining robot control strategies. The building blocks of the system are agents and modes.
An agent can communicate with its environment via shared variables and also via communication channels. The language supports composition of agents for concurrency, hiding of variables for information encapsulation, and instantiation of agents for reuse. Complex agents can therefore be built from other agents to define hierarchical architectures.
Each atomic agent has a mode, which represents a flow of control. Modes can contain sub-modes and transitions between them, so it is possible to connect modes to one another with well-defined entry and exit points. Two special points exist: a default entry point supports history retention, since default entry transitions are allowed to restore the local state from the most recent exit, and a default exit point can be used for group transitions that apply to all sub-modes, supporting exceptions.
Transitions can be labeled with guarded actions to allow discrete updates. In a discrete round only one atomic agent is executed, and its execution continues as long as there are enabled transitions. Since a mode can contain sub-modes, group transitions are examined only when no transitions are enabled in the sub-modes.
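The priority of sub-mode transitions over group transitions can be sketched as follows. This is a hypothetical Python rendering, not Charon syntax, and all mode names and guards are invented.

```python
# Hypothetical sketch of Charon-style guarded transitions: sub-mode
# transitions are examined first, and a group ("default exit")
# transition fires only when no sub-mode transition is enabled.
# Not Charon syntax; all names are invented.

def step(mode, sub_transitions, group_transitions, state):
    for guard, target in sub_transitions.get(mode, []):
        if guard(state):          # enabled sub-mode transition wins
            return target
    for guard, target in group_transitions:
        if guard(state):          # group transition as a fallback
            return target
    return mode                   # no enabled transition: stay put

subs = {"search": [(lambda s: s["found"], "approach")]}
groups = [(lambda s: s["battery"] < 10, "recharge")]

# Target not found, battery low: only the group transition is enabled.
print(step("search", subs, groups, {"found": False, "battery": 5}))
# recharge
```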
Robot Programming Systems Review
A review of robot programming systems was conducted in 1983 by Tomás Lozano-Pérez. At that time robots were common only in industrial environments, the range of programming methods was very limited, and the review accordingly examined only industrial robot programming systems. A new review is necessary to determine what has been achieved in the intervening time, and what the next steps should be to provide convenient control for the general population as robots become ubiquitous in our lives.
Lozano-Pérez divided programming systems into three categories: guiding systems, robot-level programming systems, and task-level programming systems. In guiding systems the robot was manually moved to each desired position and the joint positions were recorded. In robot-level systems a programming language was provided with the robot. Finally, task-level systems specified the goals to be achieved (for example, the positions of the objects).
By contrast, the present review divides the field of robot programming into automatic programming, manual programming, and software architectures. The first two distinguish programming according to the actual method used, which is the crucial distinction for users and programmers. In automatic programming systems the user or programmer has little or no direct control over the robot code; these include learning systems, programming by demonstration, and instructive systems.
Multi-Object Localization – Mapping
So far the discussion has covered only the localization of a single object, but often one seeks to localize many objects. The classical example of this problem is robotic mapping.
The localization algorithms above assumed that the robot knew the map of the environment a priori. But what if it does not? Then it has to generate such a map itself. Humans have long since proven their mapping skills with maps of the whole planet; here is a short introduction to how robots can do the same.
This problem is often referred to as simultaneous localization and mapping (SLAM). The robot does not merely construct a map; it must do so without knowing where it is. A further problem is that the robot may not know in advance how large the map is going to be.
The most widely used method for the SLAM problem is the extended Kalman filter (EKF). It is usually combined with landmark sensing models and requires that the landmarks be distinguishable. Think of a map with several distinguishable landmarks of unknown location. The robot starts to move and discovers more and more landmarks, and the uncertainty about the location of the landmarks, and about its own position, increases with time. When the robot re-sights a landmark it discovered earlier, the uncertainty of its position and of all landmarks decreases.
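The rise and fall of uncertainty can be seen in a toy one-dimensional EKF-SLAM sketch: one robot coordinate and one landmark. All noise values are made-up illustrative numbers, and this is a minimal demonstration of the effect, not a full SLAM implementation.

```python
import numpy as np

# Toy 1-D EKF-SLAM sketch of the effect described above: the joint
# state holds the robot pose and one landmark; motion inflates the
# pose uncertainty, and re-sighting the landmark shrinks it again.
# All noise values are made-up illustrative numbers.

x = np.array([0.0, 5.0])        # state: [robot pose, landmark position]
P = np.diag([0.01, 1.0])        # joint covariance
Q, R = 0.1, 0.05                # motion / measurement noise variances

def move(x, P, u):
    x = x + np.array([u, 0.0])  # only the robot moves
    P = P + np.diag([Q, 0.0])   # pose uncertainty grows with each step
    return x, P

def observe(x, P, z):
    H = np.array([[-1.0, 1.0]])          # measurement: z = landmark - robot
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T / S                      # Kalman gain
    x = x + (K * (z - (x[1] - x[0]))).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

for _ in range(5):
    x, P = move(x, P, 1.0)
grown = P[0, 0]                          # pose variance after moving blind
x, P = observe(x, P, x[1] - x[0])        # re-sight the landmark
print(grown > P[0, 0])                   # True: the fix reduces uncertainty
```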
CORBA in Robot Control Application
CORBA is an acronym for the Common Object Request Broker Architecture. It is an open architecture, specified by the Object Management Group. The OMG exists to provide vendor independent software standards for distributed systems.
To design a CORBA object to operate in the robot control application, it was first established what data needed to be passed from one process to another; this would be done before designing any object, whether in C++, Java or CORBA. A CORBA object, however, contains no data. It may have data types providing application-specific data structures, but apart from this it consists only of methods. The methods required for our application were decided to be the following:
Frame – a function taking no variables but returning a large array containing the last bitmap image grabbed by the server.
Calibrate – this function sends image coordinates corresponding to the last frame sent over by Frame, allowing remote calibration.
Newsnake – instantiates a new B-Spline snake interpolating four or more image coordinates specified as an array in the parameters of the function.
Pos – returns the current snake coordinates.
Map – returns the topology of the environment the server is controlling. This allows low-bandwidth operation by eliminating the need to use the heavy weight frame function.
Send – finally, a function which takes image coordinates (or map coordinates if in low-bandwidth mode) and asks the server to navigate the robot to the world coordinates to which they correspond.
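The method list above might be sketched as a plain Python abstract class for illustration. This is not CORBA IDL, and the parameter and return conventions are assumptions based purely on the textual descriptions.

```python
from abc import ABC, abstractmethod

# The method list above, sketched as a plain Python abstract class
# for illustration (not CORBA IDL; parameter and return types are
# assumptions based on the textual descriptions).

class RobotControlServer(ABC):
    @abstractmethod
    def frame(self): ...             # last bitmap image grabbed by the server

    @abstractmethod
    def calibrate(self, image_coords): ...  # coords in the last frame

    @abstractmethod
    def newsnake(self, coords): ...  # four or more points for a B-spline snake

    @abstractmethod
    def pos(self): ...               # current snake coordinates

    @abstractmethod
    def map(self): ...               # environment topology (low-bandwidth mode)

    @abstractmethod
    def send(self, coords): ...      # navigate to the corresponding world coords
```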
Robot Behavior Language of Signal
SIGNAL is a language designed for safe real-time system programming. It is based on semantics defined by mathematical modeling of multiple-clocked flows of data and events. Relations can be defined on such data and event signals to describe arbitrary dynamical systems and then constraints may be used to develop real time applications. There are operators to relate the clocks and values of the signals. SIGNAL can be also described as synchronous data-flow language.
Although SIGNAL is not designed specifically for robotics, the characteristics of robot programming and SIGNAL's aforementioned functionality make it possible to use it for active vision-based robotic systems. Since vision data have a synchronous and continuous nature, they can be captured in signals, and the control functions between the sensory data and control outputs can then be defined. The SIGNAL-GTi extension of SIGNAL can be used for task sequencing at the discrete level.
SIGNAL-GTi enables the definition of time intervals related to the signals and also provides methods to specify hierarchical preemptive tasks. Combining the data flow and multitasking paradigms results in having the advantages of both the automata and concurrent robot programming. Using these advantages a hierarchy of parallel automata can be designed. The planning level of robot control does not have a counterpart in SIGNAL but the task level can be used for this purpose.
Behavior Language for Robot Programming
Brooks is one of the first advocates of reactive, behavior-based methods for robot programming. His subsumption architecture is based on different layers, each of which works concurrently and asynchronously to achieve an individual goal. In the earlier designs the behaviors were represented by augmented finite state machines (AFSMs), and the Behavior Language still includes AFSMs as its low-level building blocks. The Behavior Language has a Lisp-like syntax, and a compiler is available that can even target programmable-array logic circuits.
An AFSM encapsulates a behavioral transformation function; the input to the function can be suppressed, or the output inhibited, by other components of the system. It is also possible to reset an AFSM to its initial state. Each layer in the subsumption architecture has a specific goal. The higher layers can use the output of the lower layers and also affect their input and output to achieve their own goals, which are generally more abstract than those of the lower layers. It has been argued that this kind of hierarchical interaction between layers prevents designing the higher levels independently.
When an AFSM is started, it waits for a specific triggering event and then its body is executed. Such events can depend on time, on a predicate about the state of the system, on a message deposited in a specific internal register, or on other components being enabled or disabled. In the body it is possible to perform primitive actions or to post messages in order to interact with other AFSMs.
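The suppress/inhibit mechanism can be sketched as follows. This is an illustrative stand-in, not Brooks's actual Behavior Language; the class and behavior names are invented.

```python
# Minimal AFSM-flavoured sketch (illustrative only, not Brooks's
# actual Behavior Language): a behavior maps input to output; a
# higher layer may suppress its input or inhibit its output.

class AFSM:
    def __init__(self, transform):
        self.transform = transform
        self.suppressed_input = None   # replacement input from a higher layer
        self.inhibited = False         # output blocked by a higher layer

    def step(self, sensor_value):
        inp = (self.suppressed_input
               if self.suppressed_input is not None else sensor_value)
        out = self.transform(inp)
        return None if self.inhibited else out

wander = AFSM(lambda s: "forward" if s == "clear" else "turn")
print(wander.step("clear"))           # forward
wander.suppressed_input = "blocked"   # e.g. an avoid layer takes over
print(wander.step("clear"))           # turn
```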
Programming Language of Colbert
Colbert is called a sequencer language by its developers. It is part of the Saphira architecture, an integrated sensing and control system for robotic applications. Complex operations such as visual tracking of humans, coordination of motor controls, and planning are integrated in the architecture using the concepts of coordination of behavior, coherence of modeling, and communication with other agents. The motion control layer of Saphira consists of a fuzzy controller, and Colbert is used for the middle, executive level between motion control and planning.
Colbert programs are activities whose semantics are based on finite state automata (FSAs) and which are written in a subset of ANSI C. The behavior of the robot is controlled by activities such as:
• Sequencing the basic actions that the robot will perform.
• Monitoring the execution of basic actions and other activities.
• Executing activity subroutines.
• Checking and setting the values of internal variables.
Robot control in Colbert means defining such activities as activity schemas, each of which corresponds to a finite state automaton. The activity executive interprets the statements in an activity schema according to the associated FSA. The statements of a schema do not correspond directly to the states of the FSA; for instance, conditional and looping statements will probably be represented as a set of nodes.
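As a rough illustration of how a looping statement could be flattened into FSA nodes, consider the hypothetical sketch below (plain Python, not Colbert's C subset; the node names and interpreter are invented).

```python
# Hypothetical illustration (plain Python, not Colbert's ANSI C
# subset): a looping statement flattened into FSA nodes, with a
# guard node and a body node joined by a back edge.

def run_fsa(states, start, ctx, limit=100):
    state, trace = start, []
    for _ in range(limit):         # safety bound on interpretation
        if state == "done":
            break
        trace.append(state)
        state = states[state](ctx) # follow the enabled transition
    return trace

def body(ctx):
    ctx["n"] += 1                  # the loop body's side effect
    return "test"                  # back edge to the guard node

states = {
    "test": lambda c: "body" if c["n"] < 3 else "done",  # loop guard
    "body": body,
}
print(run_fsa(states, "test", {"n": 0}))
# ['test', 'body', 'test', 'body', 'test', 'body', 'test']
```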
Monte Carlo Localization (MCL)
MCL is essentially a particle filter algorithm. Its requirements are a map of the environment showing the regions the robot can move in, an appropriate motion model, and a sensor model; here it is assumed that the robot uses range scanners for localization. First, the theory. The algorithm takes the given map and creates a population of N samples from a given probability distribution (if the robot has some prior knowledge of where it is, for example that it is in a particular room, the algorithm can take this into account and the particles appear only in that room). Localization starts at time t = 0, and the following update cycle is then repeated for each time step:
• Each sample is propagated forward by sampling the next state value Xt+1 given the current value Xt for the sample, using the given transition model.
• Each sample is weighted by the likelihood it assigns to the new evidence, P(et+1 | Xt+1).
• The population is re-sampled to generate a new population of N samples. Each new sample is selected from the current population; the probability that a particular sample is selected is proportional to its weight.
As the robot gathers more knowledge about the environment by analyzing the range scanner data, it re-samples the population and the particles concentrate at one or more points; eventually all particles fall into a single cluster.
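The three steps above can be sketched on a one-dimensional corridor. This is a toy model: the motion model, sensor model, noise values, and wall position are all illustrative assumptions, not part of the MCL description itself.

```python
import random

# Sketch of the MCL update cycle above on a 1-D corridor (a toy
# model: the motion model, sensor model, noise values and wall
# position are all illustrative assumptions).

def mcl_step(particles, control, measurement, sense, motion_noise=0.1):
    # 1. propagate each sample through the (noisy) motion model
    moved = [p + control + random.gauss(0, motion_noise) for p in particles]
    # 2. weight each sample by the likelihood of the new evidence
    weights = [1.0 / (1e-6 + abs(sense(p) - measurement)) for p in moved]
    # 3. resample N new particles in proportion to the weights
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
wall = 10.0                        # the range scanner sees a wall at x = 10
sense = lambda x: wall - x         # ideal range reading from pose x
particles = [random.uniform(0, 10) for _ in range(500)]
for _ in range(10):                # stationary robot at x = 4
    particles = mcl_step(particles, 0.0, sense(4.0), sense)
estimate = sum(particles) / len(particles)
print(round(estimate, 1))          # the particle cluster forms near x = 4
```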
Mobile Robot Based on SLAM
A key component of a mobile robot system is the ability to localize itself accurately and, simultaneously, to build a map of the environment. Most existing algorithms are based on laser range finders, sonar sensors, or artificial landmarks. Described here is a vision-based mobile robot localization and mapping algorithm that uses scale-invariant image features as natural landmarks in unmodified environments. The invariance of these features to image translation, scaling, and rotation makes them suitable landmarks for mobile robot localization and map building.
Mobile robot localization and mapping, the process of simultaneously tracking the position of a mobile robot relative to its environment and building a map of that environment, has been a central research topic in mobile robotics. Accurate localization is a prerequisite for building a good map, and having an accurate map is essential for good localization. Simultaneous localization and map building (SLAM) is therefore a critical underlying factor for successful mobile robot navigation in a large environment, irrespective of the high-level goals or applications.
To achieve SLAM, several sensor modalities are available, such as sonar, laser range finders, and vision. Sonar is fast and cheap but usually very crude, whereas a laser scanning system is active and accurate but slow. Vision systems are passive and offer high resolution.
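The predict/correct structure underlying SLAM-style estimation can be illustrated with a deliberately simplified sketch. The following assumes a robot moving on a line with a single landmark at a known position; the function names and noise values are hypothetical, and a real vision-based SLAM system would estimate the landmark positions jointly with the robot pose.

```python
# Minimal 1-D localization sketch (illustrative only): the robot predicts
# its pose from noisy odometry, then corrects it with a range measurement
# to a landmark at a known position. All numbers are hypothetical.

def predict(x, var, u, motion_noise):
    """Motion update: shift the belief by odometry u and inflate variance."""
    return x + u, var + motion_noise

def correct(x, var, z, landmark, meas_noise):
    """Measurement update: fuse a range reading z to a known landmark."""
    innovation = z - (landmark - x)        # measured minus expected range
    H = -1.0                               # d(range)/d(x)
    S = H * var * H + meas_noise           # innovation variance
    K = var * H / S                        # Kalman gain
    x = x + K * innovation
    var = (1.0 - K * H) * var
    return x, var

# One cycle: move 1 m, then observe a landmark known to sit at 10 m.
x, var = predict(0.0, 0.5, 1.0, 0.5)       # belief: x = 1.0, var = 1.0
x, var = correct(x, var, 8.8, 10.0, 1.0)   # the reading implies x = 1.2
```

The correction pulls the odometry estimate partway toward the measurement, weighted by the relative uncertainties, and shrinks the variance; this is the same fusion pattern that full SLAM applies jointly over pose and map.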
Cell Decomposition for Free Space Mapping
One approach toward planning a path in configuration space is to simply map the free space onto a discrete raster, which reduces path planning within each cell to a trivial problem (move on a straight line), since each cell is completely traversable. By this means we end up with a discrete graph-search problem, which is relatively easy to solve. The simplest cell decomposition maps the free space onto a regularly spaced grid, but more complex methods are possible.
Cell decomposition has the advantage of easy implementation, but it also suffers from some deficiencies:
• The number of grid cells increases exponentially with the dimension of the configuration space.
• Mixed cells (cells which contain both free and occupied space) are problematic. If included, they might lead to solutions which are in reality blocked; if omitted, some valid solutions would be missed.
The latter problem can be addressed by further subdividing the affected cells, or by an exact cell decomposition that allows irregularly shaped cells. Those new cells would need an ‘easy’ way to compute a traversal through them, which leads to complex geometric problems.
Another problem is the fact that a path through such a cell grid can contain arbitrarily sharp corners, which would be impossible for real-life robots to perform. Such a path can also lead very close to an obstacle, rendering even the slightest motion error fatal.
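The graph-search problem that a grid decomposition produces can be sketched with a breadth-first search over a 4-connected occupancy grid. The map below is a hypothetical example; a real planner would also post-process the path to smooth the sharp corners discussed above.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest cell path on a 4-connected grid (0 = free, 1 = occupied),
    or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                      # reconstruct by backtracking
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = (r, c)
                frontier.append((nr, nc))
    return None

# Hypothetical 3x3 map: the middle row is blocked except on the right.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = bfs_path(grid, (0, 0), (2, 0))
```

Because every cell is fully traversable, the planner only decides which cells to cross; the geometry inside each cell stays trivial, exactly as the decomposition intends.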
Robot Functionality at Home or Office
The idea of a mobile robot providing assistance either in the home, the office, or in more hostile environments (e.g. bomb disposal or nuclear reactors) has existed for many years, and such systems are available today. Unfortunately, they are typically expensive and by no means as ubiquitous as 1950s and 60s science fiction would have had us believe.
The major limitation to including robots in homes and offices is the infrastructure changes they require. Computer vision, however, means that robots can be monitored from just a few inexpensive cameras, and the recent availability of wireless networking solutions has drastically reduced the infrastructure they demand.
The final piece of the package is how humans are to interact with such a system. People may wish to interact with their robot “face to face”, via a home or office workstation, or even using their mobile telephone on the train. Operations may then be invoked on the robot from anywhere in the world, and it is this functionality that increases the possible applications by orders of magnitude.
Consider, for example, a household robot. Imagine that you have a dog at home. Many people’s pets are able to alert them if there is an intruder in the home, but how many can do it whilst you are at work? And how many can pass a message to the kids that you’re going to be late home?
Types of Robots Based on Their Work
Now we will see what robots are used for nowadays.
Hardworking Robots
Traditionally, robots have been used to replace human workers in areas of difficult labor that are structured enough for automation, like assembly-line work in the automobile industry or harvesting machines in the agricultural sector. Some existing examples apart from assembly robots are:
• Melon harvester robot.
• Ore transport robot for mines.
• A robot that removes paint from larger ships.
• A robot that generates high precision sewer maps.
If employed in a suitable environment, robots can work faster, cheaper, and more precisely than humans.
Transporter
Although most autonomous transport robots still need modifications to their environment in order to find their way, they are already widely in use. Building a robot that can navigate using natural landmarks is probably no longer science fiction. Examples of currently available transporters are:
• Container transporters used to load and unload cargo ships.
• Medication and food transport systems in hospitals.
• Autonomous helicopters, to deliver goods to remote areas.
Insensible Steel Giants
As robots can easily be shielded against hazardous environments and are, to an extent, replaceable, they are used in dangerous, toxic, or nuclear environments. Some places where robots have helped clean up a mess:
• In Chernobyl robots have helped to clean up nuclear waste.
• Robots have entered dangerous areas in the remains of the WTC.
• Robots are used to clear ammunition and mines all around the world.
Model-based Rescue Robot Control
Rescue robot competitions have been established since 2000 to foster robot autonomy in a completely unknown and unstructured environment, and to promote the use of robots in high-risk areas, helping human rescue teams in the aftermath of disastrous events. In rescue competitions the task is not to prevent calamitous events but to support operators in rescuing people where human access is limited or, most probably, forbidden. These tasks are crucial when the environment cannot be accessed by rescue operators, and the aid of robots endowed with good perceptual abilities can help to save human lives.
Autonomous robots have to accomplish such tasks in complete autonomy: producing and exploring a map of the environment, recognizing victims via different perceptual skills, and correctly labeling the map with each victim’s position and, possibly, status and condition.
The DORO cognitive architecture is purposely designed for the autonomous search and exploration tasks required in rescue competitions; the focus here is on how it exploits the ECLiPSe framework to implement its model-based executive controller.
A model-based monitoring system enhances the system’s safety, pro-activity, and flexibility. In this approach, the monitoring system is endowed with a declarative representation of the temporal and causal properties of the controlled processes. Given this explicit model, executive control is provided by a reactive planning engine which harmonizes the mission goals, the reactive activity of the functional modules, and the operator’s interventions. The execution state of the robot can be compared continuously with a declarative model of the system behavior: the executive controller can track relevant parallel activities, integrating them into a global view, and detect time-constraint violations and subtle resource conflicts. The planning system compensates for these misalignments and failures by generating recovery sequences on the fly. These features are designed and implemented by deploying a high-level agent programming paradigm.
Model-based Monitoring with the ECLiPSe Framework
A model-based monitoring system enhances both the operator’s situation awareness and the system’s safety. Given a declarative representation of the system’s temporal and causal properties, flexible executive control is provided by a reactive planning engine which harmonizes the operator’s activity (tasks, commands, etc.) with the mission goals and the reactive activity of the functional modules. Since the execution state of the robot is compared continuously with a declarative model of the system, all the main parallel activities are integrated into a global view.
To obtain these features, the system deploys high-level agent programming in TCGolog (Temporal Concurrent Golog), which provides both a declarative language to represent the system properties and the planning engine to generate the control sequences.
The Situation Calculus (SC) is a sorted first-order language representing dynamic domains by means of situations (i.e., sequences of actions), actions, and fluents. The Temporal Concurrent Situation Calculus (TCSC) extends the SC with concurrent actions and time. Concurrent durative processes can be represented by fluent properties started and ended by durationless actions. For instance, the process going(p1,p2) is started by the action startGo(p1,t) and ended by endGo(p2,t’).
The main DORO processes and states are represented explicitly by a declarative dynamic temporal model specified in the Temporal Concurrent Situation Calculus. This model represents the cause-effect relations and temporal constraints among the activities: the system is modeled as a set of components whose state changes over time. Each component is a concurrent thread, its history over time described as a sequence of states and activities. For instance, in the rescue domain some components are slam, navigation, pan-tilt, visualPerception, etc., each associated with a set of processes; e.g., navigation can run nav_Wand (wandering the arena), nav_GoTo (navigating to a given position), etc.
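The durative-process idea above can be sketched outside the logic framework: an instantaneous "start" action opens a process fluent, an instantaneous "end" action closes it, and the process holds over the interval in between. The startGo/endGo names follow the example in the text; the timeline machinery itself is a hypothetical illustration, not TCGolog.

```python
def holds(process, t, timeline):
    """True if a durative process is active at time t, given a list of
    (action, timestamp) events where instantaneous start/end actions
    delimit the process interval (e.g. startGo ... endGo for "go")."""
    tag = process[0].upper() + process[1:]
    active = False
    for action, stamp in sorted(timeline, key=lambda e: e[1]):
        if stamp > t:
            break                      # later events cannot affect time t
        if action == "start" + tag:
            active = True
        elif action == "end" + tag:
            active = False
    return active

# going(p1,p2) started at t=2.0 by startGo and ended at t=5.0 by endGo
timeline = [("startGo", 2.0), ("endGo", 5.0)]
```

Queried at t = 3.0 the process holds; at t = 1.0 or t = 6.0 it does not, matching the interval semantics of fluents started and ended by durationless actions.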
Control Architecture of the DORO Robot
Several activities need to be controlled and coordinated during a mission. An execution model is thus a formal framework allowing a consistent description of the correct timing of any behavior the system has to perform to successfully conclude a mission. However, as the domain is uncertain, the result of any action can be unexpected, and the resources and time needed cannot be rigidly scheduled. It is necessary to account for flexible behavior, which means managing dynamic changes in time and resource allocation.
A model-based executive control system integrates and supervises both the activities of the robot modules and the operator’s interventions. Following this approach, the main robot and operator processes, such as laser scanning, mapping, navigation, etc., are represented explicitly in a declarative temporal context. A reactive planner can monitor the system status and generate control on the fly, continuously performing sense-plan-act cycles. In each cycle the reactive planner generates the robot activities, managing failures. The short-range planning activity can balance reactivity and goal-oriented behavior: short-term goals and internal/external events can be combined while the reactive planner tries to solve conflicts.
The physical layer is composed of all the robot devices, e.g. sonars, motors, and the payload. The DORO payload consists of two stereo cameras, two microphones, one laser telemeter, and the pan-tilt unit. The robot’s embedded components are accessible through the vendor-provided ActivMedia Robotics Interface for Applications (ARIA) libraries, while the payload software is custom.
The functional layer collects all of the robot’s basic capabilities. Each functional module conducts actions to accomplish basic tasks, e.g. acquiring images from the arena, collecting sensor measurements, and similar activities. In particular, the DORO system is endowed with the following modules: acquisition, navigation, laser, joypad, and PTU. The navigation module controls the movements of the robot.
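The sense-plan-act cycle described above can be reduced to a skeleton: each iteration reads the current state, lets a reactive planner pick the next activity, and dispatches it, falling back to a recovery action on failure. The planner rules and activity names below are hypothetical placeholders, not DORO’s actual modules.

```python
def reactive_planner(state):
    """Stub planner: map the observed state to the next activity."""
    if state.get("failure"):
        return "recover"          # failures preempt the mission goal
    if not state.get("map_complete"):
        return "explore"          # goal-oriented default behavior
    return "idle"

def control_cycle(sense, act, steps):
    """Run `steps` sense-plan-act iterations, returning the activity log."""
    log = []
    for _ in range(steps):
        state = sense()                      # sense: observe module status
        activity = reactive_planner(state)   # plan: choose next activity
        act(activity)                        # act: dispatch to functional layer
        log.append(activity)
    return log

# Three hypothetical cycles: exploring, then a failure, then map complete.
states = iter([{"map_complete": False}, {"failure": True},
               {"map_complete": True}])
log = control_cycle(lambda: next(states), lambda a: None, 3)
```

The point of the skeleton is the balance the text describes: the same loop serves goal-oriented behavior (explore) and reactivity (recover) without a fixed schedule.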
Remote Object Control Interface (ROCI)
As actuators, sensors, microprocessors, and wireless networks become cheaper and more ubiquitous, it has become increasingly attractive to consider employing teams of small robots to tackle various manipulation and sensing tasks. In order to exploit the full capabilities of these teams, we need to develop effective methods and models for programming distributed ensembles of actuators and sensors.
Applications for distributed dynamic robotic teams require a different programming model than the one employed for most traditional robotic applications. In the traditional model, the programmer is faced with the task of developing software for a single processor interacting with a prescribed set of actuators and sensors. He or she can typically assume that the configuration of the target system is completely specified before the first line of code is written. When developing code for dynamic multi-robot teams, we must account for the fact that the type and number of robots available at runtime cannot be predicted. We expect to operate in an environment where robots will be removed and added continuously and unpredictably, and where the robots will have heterogeneous capabilities; for instance, some may be equipped with camera systems, others with range sensors or specialized actuators; some agents may be stationary while others may offer specialized computational resources. This implies that the program must be able to automatically identify and marshal all of the resources required to carry out the specified task.
The Remote Objects Control Interface (ROCI) is an object-oriented, self-describing, strongly typed programming framework that allows the development of robust applications for dynamic multi-robot teams. The building blocks of ROCI applications are self-contained, reusable modules. Fundamentally, a module encapsulates a process which acts on data available on the module’s inputs and presents its results as outputs. Complex tasks can be built by connecting the inputs and outputs of specific modules.
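The module-composition idea can be sketched as follows: each module encapsulates a process, and complex tasks are built by wiring module outputs to module inputs. This is an illustration of the concept only, not the actual ROCI API; the camera/detector pipeline and class names are hypothetical.

```python
class Module:
    """A self-contained processing block with wirable inputs/outputs."""

    def __init__(self, name, process):
        self.name = name
        self.process = process    # function from input value to output value
        self.downstream = []      # modules consuming this module's output

    def connect(self, other):
        """Wire this module's output to another module's input."""
        self.downstream.append(other)

    def push(self, value):
        """Run the encapsulated process and forward the result."""
        result = self.process(value)
        for module in self.downstream:
            module.push(result)
        return result

# Hypothetical pipeline: a camera module feeding a detector module.
detected = []
camera = Module("camera", lambda frame: {"pixels": frame})
detector = Module("detector",
                  lambda image: detected.append(len(image["pixels"])))
camera.connect(detector)
camera.push([0, 1, 2])            # a hypothetical 3-pixel "frame"
```

Because modules only touch their declared inputs and outputs, a pipeline like this can in principle be rewired at runtime as robots join or leave the team, which is the property the text emphasizes.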
The Integrated Telerobotic Surgery System
The total system behaves as the human surgeon would if there were no encumbering performance delay. Because the simulator through which the surgeon operates runs in real time, the surgeon sees the reaction to inceptor movements much more quickly than would be the case if he or she were required to wait while the signals made a complete round trip over the long-haul network.
The signal path from the surgeon’s inceptor movement proceeds simultaneously to the simulator and to the intelligent controller, which commands the robot movement. The simulator is like any other in that it calculates all of the system dynamics in real time, and from these computations come changes to the system states, which alter the visual scene observed by the physician. The visual scene is generated by high-speed computer graphics engines not unlike those employed by modern flight simulators. However, a unique aspect of the proposed embodiment is that the graphics image is updated periodically by the video image transmitted over the long-haul network. This approach ensures that the visual scenes at the simulator and at the patient are never allowed to deviate perceptibly. The update is generated by a complex scheme of image decoding, image format transformation, and texture extraction.
The intelligent controller performs the dual role of optimizing robot performance and preventing inadvertent incisions. The research will investigate two general approaches to its design: one will use optimal control theory, and the other will utilize a hybrid of soft computing techniques such as neural networks, fuzzy control, and genetic algorithms. Both techniques have been used successfully to control autonomous aircraft.
The simulator also calculates the appropriate inceptor forces. The drive-signal math model for the haptic stimuli will be essentially the same as that in the actual robot, although it will rely on a sophisticated organ dynamics model to compute the appropriate forces of the organ interacting with the robot’s end effectors.
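The delay-masking idea above can be caricatured in a few lines: the surgeon’s side integrates commands into a local prediction that is displayed immediately, while delayed measurements derived from the remote video periodically pull the local estimate back toward the true state. The scalar dynamics and blend gain are hypothetical; the real system updates a full graphics scene, not a number.

```python
def local_display(commands, delayed_truth, blend=0.5):
    """Integrate commands locally for immediate display; blend in
    delayed remote updates (tick -> measured position) when they arrive."""
    estimate = 0.0
    history = []
    for t, u in enumerate(commands):
        estimate += u                       # immediate local prediction
        if t in delayed_truth:              # delayed video-derived correction
            estimate += blend * (delayed_truth[t] - estimate)
        history.append(estimate)
    return history

# Commands move the tool by +1 each tick; at tick 3 a delayed measurement
# reports the true position as 2.5 (the remote tool lagged slightly).
trace = local_display([1, 1, 1, 1], {3: 2.5})
```

The display never waits for the round trip, yet the periodic corrections keep the local and remote scenes from drifting apart perceptibly, which is the property the proposed embodiment depends on.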
The Intelligent Controller for Surgical Robotics
The intelligent controller is located on the robot/patient side of the communication link and performs two critical roles. In the ultimate system for use on actual patients, the intelligent controller will be necessary to provide both an improved level of efficiency and an added measure of safety in the presence of time delay. Both the safety role and the efficiency-enhancement role require intelligent behavior.
The requirement for an added element of safety in the presence of time delay is quite obvious. Even when the surgeon and the various other components of the system are performing perfectly, the time delay prevents 100% certainty as to where various tissues will be in relation to the surgical instruments at any given instant in the future. Because the intelligent controller will be proximate to the robot, it will interact with the robot without significant delay, and thereby has the potential to control all robot movement instantaneously. As a last line of defense against the possibility of accidental collision between a surgical instrument and the patient’s vital organs, the intelligent controller will ultimately play a critical role.
The requirement for improving the level of efficiency over what it would be otherwise in the presence of time delay is also clear. Finishing surgery in a timely manner and preventing unnecessary frustration for the surgeon are always important aims. While it may be true that the delays associated with telerobotic surgery will never allow it to be quite as efficient as proximate robotic surgery, the aim at least must be to make it ultimately as efficient as possible.
We will attempt to apply a variety of advanced approaches to machine intelligence in designing effective intelligent-controller prototypes, although in the course of the research some basic aspects need not be particularly complex. A fairly effective controller could be based on nothing more than a three-dimensional geometric model of the surgical field combined with a production-rule system.
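The “geometric model plus production rules” idea can be illustrated with a toy rule: if the instrument tip comes within a safety margin of a protected region of the 3-D model, the controller commands a halt. The geometry, names, and thresholds below are hypothetical.

```python
import math

SAFETY_MARGIN_MM = 5.0   # hypothetical clearance threshold

def safety_rule(tip, protected_center, protected_radius):
    """IF the tip is within the margin of a protected sphere
    THEN command a halt, ELSE allow motion to proceed."""
    # clearance between the tip and the surface of the protected region
    clearance = math.dist(tip, protected_center) - protected_radius
    if clearance < SAFETY_MARGIN_MM:
        return "halt"
    return "proceed"

# Tip 12 mm from the center of an 8 mm protected sphere: 4 mm clearance,
# below the 5 mm margin, so the rule fires.
action = safety_rule((0.0, 0.0, 12.0), (0.0, 0.0, 0.0), 8.0)
```

Because the rule runs on the patient side with no link delay, it can veto a commanded motion instantly, which is exactly the last-line-of-defense role described above.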
Organ Dynamics Modeling
Surgery simulation involves biological tissues; therefore it is essential to determine the deformation of the tissue while it is in contact with a surgical instrument. This is important for conveying a realistic picture of the tissue to the surgeon in real time. The problem is how to determine the interaction between the tissue and the tool. This requires a rigorous model of the material properties, an accurate simulation method that reflects the actual behavior of the tissue, and fast simulation results to enable real-time interactive simulation.
The organ dynamics model must be accurate, responsive, and timely. Realistic visual deformation and haptic output of the simulator are essential for the success of the telerobotic system. The surgeon must see, and it is desirable that he or she feel, the organs presented by the simulator as if they were the actual organs of the patient. As an aside, it should be noted that the daVinci robot does not provide significant haptic feel of tissue. However, this project will add that capability, since haptic feedback has been demonstrated to mitigate some of the effects of the delay by virtue of the fact that the proprioceptive system responds faster than vision. The simulator’s output depends on the organ dynamics models.
The first requirement of the organ dynamic model of the biological system is accuracy: the force and deformation feedback of the simulated organ on which the surgeon is operating must be as close as possible to that of the actual patient organ. Secondly, the computation required to determine the simulated forces and deformations must be performed in a time interval that appears to the surgeon to be real time. The organ dynamic model must also be capable of being quickly updated using the decompressed video feedback, providing corrections without creating haptic or visual discontinuities that could disturb the surgeon.
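To make the real-time constraint concrete, here is a minimal sketch of one way an organ dynamic model could be advanced: a single mass-spring-damper surface node integrated with semi-implicit Euler. The function name, constants and integration scheme are illustrative assumptions, not a description of any actual surgical simulator; a real model would update an entire mesh within roughly a millisecond to sustain haptic rates.

```python
# Minimal sketch: explicit-integration mass-spring node update for a
# simulated organ surface. All names and constants are illustrative
# assumptions, not taken from an actual surgical simulator.

def step_node(pos, vel, rest_pos, k=40.0, c=2.0, mass=0.01, dt=0.001):
    """Advance one surface node by one time step (semi-implicit Euler).

    pos, vel, rest_pos are (x, y, z) tuples; k is stiffness, c damping.
    The returned force is what a haptic controller would render
    to the surgeon's hand for this node.
    """
    # Restoring spring force toward the rest shape, plus viscous damping.
    force = tuple(-k * (p - r) - c * v for p, r, v in zip(pos, rest_pos, vel))
    # Update velocity first, then position (semi-implicit Euler).
    new_vel = tuple(v + (f / mass) * dt for v, f in zip(vel, force))
    new_pos = tuple(p + nv * dt for p, nv in zip(pos, new_vel))
    return new_pos, new_vel, force
```

A node displaced from its rest position is pulled back toward it each step; corrections from the decompressed video could be applied by gradually blending `rest_pos` toward the observed surface, rather than jumping it, to avoid the haptic discontinuities mentioned above.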
Performing Robotic Surgery
Robotics is one of the key technologies with a strong potential to change how we live in the 21st century. We have already seen robots exploring the depths of the ocean and the surfaces of distant planets, and streamlining and speeding up assembly lines in the manufacturing industry. Robotic lawn mowers, vacuum cleaners and even pets have found their way into our houses. Among the medical applications of robotics, minimally invasive surgery was the first to demonstrate real benefits and advantages of introducing robotic devices into the operating room over conventional surgical methods. These machines have been used to position an endoscope, perform gallbladder surgery and correct gastroesophageal reflux and heartburn.
After its inception in the late 1980s, the utilization of laparoscopic cholecystectomy grew rapidly. However, minimally invasive surgery (MIS) for other operations has not experienced the same pattern of growth. The reason is that laparoscopic procedures are in general hard to learn, perform and master. This is a consequence of the fact that the camera platform is unstable, the instruments have a restricted number of degrees of freedom and the imagery presented to the surgeon does not offer sufficient depth information. The solution seems to be at hand with the significant growth of robotic surgery, in which the surgeon operates through a robot. This robot is a telemanipulator under the control of the surgeon. The robotic system provides a stable video platform, added dexterity and in some cases a stereoscopic view of the surgical field.
Surgical robots use technology that allows the human surgeon to get closer to the surgical site than human vision alone will allow, and to work at a smaller scale than conventional surgery permits. The robotic surgery system consists of two primary components:
• A surgical arm unit (robotic)
• A viewing and control console (operating station)
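The link between the two components can be sketched in a few lines: the console maps the surgeon's hand motion to the arm unit, typically scaling it down and filtering tremor. This is a simplified illustration under stated assumptions; the scale factor, the exponential-smoothing filter and all names are hypothetical, not the daVinci's actual control law.

```python
# Sketch of a master-to-slave mapping in a telemanipulator console:
# the surgeon's hand increments are low-pass filtered (tremor
# suppression) and scaled down (fine-scale work). Illustrative only.

def make_master_to_slave(scale=0.2, alpha=0.3):
    """Return a function mapping master hand increments to slave
    instrument increments.

    scale < 1 lets the surgeon work at a finer scale than the hand
    moves; alpha is an exponential-smoothing coefficient (0 < alpha <= 1).
    """
    state = {"smoothed": (0.0, 0.0, 0.0)}

    def to_slave(d_master):
        s = state["smoothed"]
        # Exponential smoothing: blend the new increment with history.
        s = tuple(alpha * d + (1 - alpha) * p for d, p in zip(d_master, s))
        state["smoothed"] = s
        # Motion scaling: a large hand motion becomes a small tool motion.
        return tuple(scale * v for v in s)

    return to_slave
```

With `scale=0.2`, a 10 mm hand motion becomes roughly a 2 mm instrument motion once the filter settles, which is one reason the robot permits work at a smaller scale than conventional surgery.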
Navigation Framework Implementation
The Morphin algorithm provides traversability data to the navigation system through the callback interface described earlier. Three classes register to receive this data:
• Goodness Callback: this class updates a local traversability map used as a local cost function.
• D* Goodness Callback: this class transfers the traversability analysis to the map used by the D* global cost function.
• Goodness Streamer Callback: a debugging class used to broadcast the data to an off-board user interface.
The global and local cost functions are used by an implementation of the action selector interface, which searches for the best arc to traverse.
This framework addresses many of the problems associated with prior algorithm implementations. The model used to describe the motion of the robot is centralized in this framework, so all cost evaluations are consistent. Furthermore, the scaling of data in the cost functions allows arbitrary cost functions to be used while still producing reasonable summations.
Each component, e.g. the D* cost function, the action selector, the Morphin traversability analyzer, and the locomotion model, is independent of the others. Changing the types of motion the robot performs therefore does not require changes to any module other than the action selector. For instance, to have a robot move along straight line segments connected by point turns would require only the implementation of a new action selector. Similarly, by replacing the Morphin traversability analyzer with a port of the GESTALT terrain analyzer, the two traversability analyzers could be compared directly, without the need to change any other software.
The fundamental nature of the algorithm can also be changed easily. For instance, by removing the D* global cost function and replacing the action selector with a new module, a new navigation system that resembles the Rover Bug algorithm could be implemented quickly.
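The registration pattern described above can be sketched in a few classes. This is a hypothetical reconstruction based only on the text: the class and method names are assumptions, not the actual Morphin or CLARAty API.

```python
# Hypothetical sketch of the callback interface: the traversability
# analyzer pushes per-cell "goodness" values to every registered
# consumer. Names are assumptions based on the description above.

class GoodnessCallback:
    """Base interface; descendants override on_goodness()."""
    def on_goodness(self, cell, goodness):
        raise NotImplementedError

class LocalMapCallback(GoodnessCallback):
    """Updates a local traversability map used as a local cost function."""
    def __init__(self):
        self.local_map = {}
    def on_goodness(self, cell, goodness):
        self.local_map[cell] = goodness

class TraversabilityAnalyzer:
    """Publisher side: holds the registered callbacks and fans data out."""
    def __init__(self):
        self._callbacks = []
    def register(self, cb):
        self._callbacks.append(cb)
    def publish(self, cell, goodness):
        for cb in self._callbacks:  # every registered consumer sees the data
            cb.on_goodness(cell, goodness)
```

A D*-map callback or a debugging streamer would be further descendants of `GoodnessCallback`; the analyzer itself never needs to know which consumers exist, which is what lets analyzers be swapped without touching the rest of the system.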
Robotic Action Selection and Locomotion
The action selector class is where the specifics of a navigation algorithm are implemented. Basically, the role of the action selector is to determine the appropriate next action for the robot to perform, given its current state and information from the global and local cost functions.
Because action selection is decoupled from traversability analysis, it is straightforward to modify the set of trajectories over which the navigation algorithm searches. The generic action selector interface provides functions to get and set the current waypoint, and a function that returns the next action the robot should take.
To enable generic algorithm implementations, action selectors are provided with a model of the locomotion capabilities of the rover. At minimum, the model provides kinematic properties, such as the number of wheels and the wheelbase of the robot. Action selectors may use this information to generically integrate terrain costs over the expected path of the robot.
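Integrating terrain cost over candidate paths can be sketched as follows. This is a minimal illustration of arc evaluation under stated assumptions: the function names, the fixed arc length, and the way local and global costs are combined are all hypothetical, not the framework's actual interface.

```python
# Illustrative sketch: an action selector scoring candidate arcs by
# summing a local terrain cost along each arc and adding a global
# (D*-style) cost-to-goal at the arc endpoint. Names are assumptions.

import math

def best_arc(curvatures, local_cost, global_cost, arc_len=2.0, steps=10):
    """Return the curvature whose arc has the lowest combined cost.

    local_cost((x, y))  -> cost of traversing terrain near that point
    global_cost((x, y)) -> estimated cost-to-goal from that point
    """
    def arc_point(k, s):
        # Point reached after arc length s along constant curvature k,
        # starting at the origin heading along +x.
        if abs(k) < 1e-9:
            return (s, 0.0)
        return (math.sin(k * s) / k, (1 - math.cos(k * s)) / k)

    def cost(k):
        pts = [arc_point(k, arc_len * i / steps) for i in range(1, steps + 1)]
        return sum(local_cost(p) for p in pts) + global_cost(pts[-1])

    return min(curvatures, key=cost)
```

Because only the kinematics appear in `arc_point`, switching to straight segments with point turns would mean replacing this selector, exactly as the framework description claims, while the cost functions stay untouched.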
The locomotor classes provide an abstract interface to the underlying robotic locomotion mechanism. The locomotion framework is structured as a software double bridge, allowing independent specialization of the kinematic configuration and the control interface. The wheel locomotor class provides the interface used by other components to maneuver the vehicle. The wheel locomotor interface defines functions that descendants must provide to encapsulate the specifics of the protocol used to command the locomotion mechanism.
The wheel locomotor model describes the robot kinematics. This structure allows for maximal code reuse, which is particularly important in a research environment where changes to the robot may occur incrementally, e.g. the control system may be redeveloped while the mechanical robot remains the same, or vice versa.
Locomotor interface classes provide an abstraction over the different control interfaces potentially available on a robot, e.g. bank control, high level arc control, and independent motor control. Drive commands are used as a high level interface to the locomotors.
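The double-bridge idea above can be sketched as two independently specializable halves: a kinematic model and a command interface. All class names and the wire-command format here are hypothetical, invented only to illustrate the structure.

```python
# Sketch of the "double bridge": the kinematic model and the command
# interface vary independently, so a new robot reuses whichever half
# still applies. All names are illustrative assumptions.

class WheelLocomotorModel:
    """Kinematic description: wheel count, wheelbase, etc."""
    def __init__(self, num_wheels, wheelbase_m):
        self.num_wheels = num_wheels
        self.wheelbase_m = wheelbase_m

class WheelLocomotor:
    """Abstract command interface; descendants encapsulate the protocol
    used to talk to a particular mechanism."""
    def __init__(self, model):
        self.model = model  # the kinematic half of the bridge
    def drive_arc(self, speed, curvature):
        raise NotImplementedError

class SerialProtocolLocomotor(WheelLocomotor):
    """Hypothetical concrete locomotor that formats a wire command."""
    def drive_arc(self, speed, curvature):
        # A real descendant would send this over its hardware link.
        return f"ARC v={speed:.2f} k={curvature:.2f}"
```

Swapping the control electronics means writing a new `WheelLocomotor` descendant while keeping the same `WheelLocomotorModel`, and rebuilding the chassis means the reverse, which is the code-reuse property the text describes.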
Robot Programming Languages Meld and LDP
Modular robot programming can be substantially more challenging than normal robot programming due to:
• Scale/number of modules.
• Concurrency and asynchronicity, both in physical interactions and potentially at the software level.
• The local scope of information naturally available at each module.
Recent declarative approaches such as P2 and Saphira have shown promise in other domains that share some of these characteristics. Inspired by those results, two modular-robot-specific declarative programming languages, Meld and LDP, have been developed. Both languages provide the illusion of executing a single program across an ensemble, while the runtime system of each language automatically distributes the computation and applies a variety of optimizations to reduce the net computational and messaging overhead.
A motion-based shape planning algorithm was previously described that exhibits constant planning complexity per module and requires information linear in the complexity of the target shape. This approach was subsequently generalized to extend its functioning to other locally metamorphic systems. The latter, generalized algorithm operates on sub-ensembles to accomplish both shape control and resource allocation while maintaining global connectivity of the ensemble.
Meld has proven more effective at many of the global coordination aspects of this algorithm, at efficiently tracking persistent operating conditions, and at coping with nonlinear topologies. LDP has proven more effective at local coordination, sophisticated temporal conditions, detecting local topological configurations, and, more generally, at expressing variable-length conditional expressions.
Navigation Architecture to Evaluate Terrain and Locomotion
The role of a navigation algorithm is to generate paths through terrain while achieving specific aims. The navigator uses sensor data to evaluate terrain, and locomotion components to execute robot trajectories and motor commands.
The navigation architecture is divided into modules roughly along sense-think-act lines: traversability analysis components, cost functions and action selection. In the traversability analysis components, sensor data is converted into a model of the world. Cost functions transform these models into a form that can be used for planning. Action selectors use this planning space to determine how the robot should move. Once a course of action is determined, the resulting trajectory is passed to the locomotion system for execution. The navigator provides the basic interface between decision layer processes and the navigation and locomotion systems. Each component provides a standardized set of interface functions that can be overridden in descendant classes to provide specific behavior.
Navigation aims are expressed as waypoints. Waypoints provide a simple interface that returns whether or not a state is in a set of desired states. The basic implementation is a two dimensional goal location specified with some error tolerance. Descendant waypoint classes may provide more complicated goal conditions, such as a goal line to cross, or achieving a position with a desired orientation.
The navigator tracks the progress of the robot as it moves towards a waypoint. The decision layer may queue a list of waypoints for the robot to pass through via this interface. The navigator also provides an interface through which callbacks can be registered. The callbacks can be used to monitor the progress of the navigator or, in a more complex way, to trigger non-navigation tasks to execute (e.g. opportunistic science).
Traversability analysis involves the conversion of sensor data into a model of the world. This may be as simple as a binary occupancy grid or as complicated as a statistical evaluation of the terrain.
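The waypoint interface described above lends itself to a short sketch: a base class answering "is this state in the goal set?", and a descendant adding a richer goal condition. Class names and tolerance values are assumptions, not the framework's actual definitions.

```python
# Minimal sketch of a waypoint hierarchy: the base class is a 2-D goal
# with an error tolerance; a descendant also demands a heading.
# Names and default tolerances are illustrative assumptions.

import math

class Waypoint:
    """Two-dimensional goal location with an error tolerance."""
    def __init__(self, x, y, tol=0.5):
        self.x, self.y, self.tol = x, y, tol
    def reached(self, state):
        # state is (x, y): in the goal set if within tol of the goal.
        return math.hypot(state[0] - self.x, state[1] - self.y) <= self.tol

class OrientedWaypoint(Waypoint):
    """Descendant: also requires a heading within an angular tolerance."""
    def __init__(self, x, y, heading, tol=0.5, ang_tol=0.2):
        super().__init__(x, y, tol)
        self.heading, self.ang_tol = heading, ang_tol
    def reached(self, state):
        x, y, th = state  # state now carries a heading as well
        return (super().reached((x, y))
                and abs(th - self.heading) <= self.ang_tol)
```

A goal-line waypoint would be another descendant overriding `reached` with a half-plane test; the navigator only ever calls `reached`, so new goal conditions require no changes elsewhere.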
Coupled Layer Architecture for Robotic Autonomy
As part of CLARAty (Coupled-Layer Architecture for Robotic Autonomy), this framework shares the design aims of maximizing code reuse while maintaining an efficient and accessible implementation.
CLARAty is designed to ease the transition from research software to flight-ready software. It attempts to achieve this aim by developing a set of standard interfaces and a basic set of reusable components. CLARAty is being developed using object oriented design principles to enable code reuse and provide an avenue for extension. An open source development model is being used to allow collaborators to contribute extension components, which helps the architecture maintain a critical mass.
One novel feature of the CLARAty architecture is its two layer structure. The top layer, called the decision layer, provides a combination of an operational executive and a procedural planner. The lower layer, called the functional layer, provides a hierarchical interface to hardware components and rover services. The decision layer may access services in the functional layer at any point in the hierarchy, allowing it to plan at a granularity appropriate for a given task.
The motivation for developing a generic navigation framework comes from experience implementing navigation algorithms for a variety of robots. For instance, a combination of a local obstacle avoidance algorithm and a real time path planner has been used on a number of robotic platforms. The first implementation was developed for Ratler. It has since been used on a progression of robots including Nomad, an ATRV, and most recently Hyperion.
Every new implementation has made gains in performance and capabilities, but a major effort has been required to port the software, often involving a complete reimplementation. A target of this work is to simplify this process, allowing researchers to focus on developing and testing new capabilities rather than dealing with the mundane details of creating a platform-specific implementation of an existing algorithm.
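The "access at any point in the hierarchy" idea can be illustrated with a toy functional-layer tree that the decision layer addresses by path, at whatever depth suits the task. This is purely a sketch of the concept; the node names, the path syntax and the tiny hierarchy are all invented for illustration and do not reflect CLARAty's actual interfaces.

```python
# Illustrative sketch of the two-layer idea: the functional layer is a
# hierarchy, and the decision layer may call in at a coarse level
# ("rover") or a fine one ("locomotor/wheel"). Names are assumptions.

class FunctionalNode:
    """A node in a toy functional-layer hierarchy."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = {c.name: c for c in children}
    def find(self, path):
        """Resolve a '/'-separated path to a node in the hierarchy."""
        node = self
        for part in [p for p in path.split("/") if p]:
            node = node.children[part]
        return node

# A tiny hierarchy: rover -> locomotor -> wheel.
rover = FunctionalNode("rover", [
    FunctionalNode("locomotor", [FunctionalNode("wheel")]),
])
```

A coarse plan would issue commands to `rover.find("locomotor")`, while a fine-grained one could drive `rover.find("locomotor/wheel")` directly, which is the granularity flexibility the two-layer design is meant to provide.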
Deployment of Robotic Sensors
An important problem for wireless sensor systems is the effective deployment of the sensors within the target space S. The deployment has to satisfy some optimization criteria with respect to the space S. Static sensors are usually deployed by external means, either carefully or randomly; in the latter case the distribution of the sensors may not satisfy the desired optimization criteria.
If the sensing entities are mobile, as in the case of mobile sensor networks, robotic sensor networks and vehicular networks, they are potentially capable of positioning themselves in appropriate locations without the help of any external control or central coordination. However, achieving such an aim is a rather complex task, and designing localized algorithms for effective and efficient deployment of mobile sensors is a challenging research issue.
We are interested in a specific instance of the problem, called the Uniform Dispersal problem, where the sensors have to completely fill an unknown space S, entering through one or more designated entry points called doors. The sensors must avoid colliding with each other and have to terminate within finite time. The space S is assumed to be simply connected, without holes, and orthogonal, i.e. polygonal with sides either perpendicular or parallel to one another. Orthogonal spaces are interesting because they can be used to model indoor and urban environments.
We consider the problem within the context of robotic sensor networks: the mobile entities rely only on sensed local information within a restricted radius, called the visibility range; when active they operate in a sense-compute-move cycle; and they usually have no explicit means of communication.
A crucial difference between traditional wireless sensor networks and robotic sensor networks is in the determination of an entity's neighbors. In robotic sensor networks, the determination of one's neighbors is done by sensing capabilities, i.e. vision: any sensor in the sensing radius will be detected even if inactive.
Model and Definitions of Robotic Sensor
The space to be filled by the sensors is a simply connected orthogonal region S that is partitioned into square cells, each of size roughly equal to the area occupied by a sensor. Simply connected means that it is possible to reach any cell in the space from any other cell, and there are no obstacles completely surrounded by cells belonging to the space.
The system is composed of simple entities, called sensors, having locomotion and sensory capabilities. The entities can move and turn in any direction. The sensory devices on an entity allow it to have a vision of its surroundings: we assume each sensor has restricted vision, up to a fixed radius around it. The sensors do not have any explicit means of communicating with each other, even when two sensors see each other. Each sensor functions according to the algorithm preprogrammed into it. The sensors have O(1) bits of working memory, and they have a local sense of orientation.
If two sensors are in the same cell at the same time, there is a collision. The algorithms executed by the sensors have to avoid collisions. The sensors enter the space through special cells called doors. A door is simply a cell in the space that always contains a sensor: whenever the sensor at the door moves to a neighboring cell, a new sensor appears instantaneously at the door.
During each step taken by a sensor, the sensor first looks at its surroundings and then, based on the rules of the algorithm, either chooses one of the neighboring cells to move to or decides to remain stationary. Each step is atomic, and during one step a sensor can only move to a neighboring cell. However, since the sensors are asynchronous, an arbitrary amount of time may elapse between two steps taken by a sensor.
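One sense-compute-move step in this model can be sketched in a few lines. This is a deliberate simplification, not a dispersal algorithm from the literature: the move rule (enter the first free neighbor in a fixed local order) is an assumption used only to show the cycle structure and the collision-avoidance discipline.

```python
# Sketch of one sense-compute-move step for the grid model above:
# a sensor sees its four neighboring cells and moves into a free one
# if it exists. The fixed neighbor order is an illustrative assumption.

def step_sensor(cell, occupied):
    """One atomic step: return the cell the sensor occupies next.

    cell is an (x, y) grid cell; occupied is the set of cells the
    sensor perceives as holding other sensors or walls. Collisions
    are avoided by only ever entering a cell seen to be free.
    """
    x, y = cell
    # "Sense": inspect the four neighbors in a fixed local order.
    for nb in [(x + 1, y), (x, y + 1), (x - 1, y), (x, y - 1)]:
        if nb not in occupied:
            return nb       # "move": enter the first free neighbor
    return cell             # fully surrounded: remain stationary
```

Because steps are atomic but asynchronously scheduled, a real algorithm must also guarantee that two sensors never pick the same free cell in interleaved steps; handling that with only O(1) memory and no communication is exactly what makes Uniform Dispersal non-trivial.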
Human Tactile Sensing of Robots
Human dexterity is a marvelous thing: people can grasp a wide variety of sizes and shapes, perform complex tasks, and switch between grasps in response to changing task requirements. This is due to our sophisticated control capabilities. In large measure this control capability is founded on force and tactile sensing, especially the ability to sense conditions at the finger-object contact.
For the last two decades robotics researchers have worked to create an artificial sense of touch to give robots some of the same manipulation capabilities that humans possess. While vision has received the most attention in robot sensing research, touch is vital for many tasks. Dextrous manipulation requires control of motions and forces at the contact between the fingers and the environment, which can only be accomplished through touch. Tactile sensing is able to provide information about mechanical properties such as friction, compliance and mass. Knowledge of these parameters is essential if robots are to reliably handle unknown objects in unstructured environments.
Although touch sensing is the basis of dextrous manipulation, early work in tactile sensing research focused on the creation of sensor devices and object recognition algorithms. Particular attention was devoted to skin-like array sensors; researchers built a simple tactile array sensor and demonstrated recognition of flat objects such as washers. The creation of multi-fingered hands increased interest in tactile sensing for manipulation, beginning with preliminary work on incorporating tactile information in manipulation. In the last few years, studies on the use of tactile sensing in real time control of manipulation have begun to appear. Tactile sensors provided information that guided the execution of tasks including edge tracking, automatic grasping, and rolling manipulation. These experimental studies have begun to explain the ways that tactile sensing enhances manipulation capabilities, but many questions remain unanswered. Currently we lack a comprehensive theory that defines sensing requirements for various manipulation tasks.
Distributed Robotic Systems
While all fields of robotics have progressed based on rapid advances in information systems and computing, the field of distributed and cooperative robotics has been catalyzed in particular by new communication and networking technologies. In the robotics context, the linkage of information systems by wireless, wired, or optical channels and protocols facilitates interactions that quickly scale from two robots shaking hands to a swarm of micro-robots demonstrating cooperative behaviors. This article concerns the development of multiple robots and sensor communication links, and the control of those distributed systems in the laboratory and in application domains.
Network robotics has grown around the use of networked communication channels, including the internet, as a means for humans to remotely control robots as well as to support inter-robot interactions. Major challenges include understanding non-deterministic time delays in communications and formulating the control and architectural principles that will enable robust performance. Key examples of networked robotics include the remote tele-operation of space exploration robots and shared internet access to robotic experiments.
The multi-robot systems area has focused on understanding the physical interactions and constraints among robots, and between robots and the physical environment. Such physical interactions may be direct, as in the shared manipulation of a single object, or indirect, as in the cooperative navigation of autonomous vehicles. Interactions associated with the physical reconfiguration of robotic systems are also of great interest. The theoretical formulation of underlying principles has been critical for this field, and forms the basis for systematic approaches to practical applications.
The deployment of multiple robots in complex environments creates demands for distributed sensor networks that provide the information needed to guide actions and decisions. The distributed sensor network field has been driven by the capability to fabricate microsystems with low power, high functionality, and wireless communication capability. Deploying large numbers of such expendable devices would provide extraordinary access to information, and would also support the deployment of autonomous robotic systems in the field.
Using Coordination Costs to Adapt Communications
The coordination cost measure facilitates identifying which communication method is most suitable for a given environment. We model every robot's coordination cost, Ci, as a factor that impacts group productivity. We analyze two cost categories: first, costs relating to communication and, second, costs of reactive and/or proactive collision resolution behaviors. The measure focuses on the energy and time spent communicating and on the consequent resolution behaviors. We then combine these factors into a multi-attribute cost function based on the Simple Additive Weighting (SAW) method often used for multi-attribute utility functions.
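As a concrete illustration, the SAW combination above might be sketched as follows; the attribute names, values and weights are invented for the example and are not taken from the original study:

```python
# Sketch of a multi-attribute coordination cost C_i under the SAW
# method: a weighted sum of normalized cost attributes. Attribute
# names, values and weights below are illustrative assumptions.

def coordination_cost(attributes, weights):
    # SAW requires the attribute weights to sum to one.
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[name] * value for name, value in attributes.items())

# Communication costs plus collision-resolution costs for one robot.
cost = coordination_cost(
    {"comm_energy": 0.2, "comm_time": 0.5, "resolution_time": 1.0},
    {"comm_energy": 0.3, "comm_time": 0.3, "resolution_time": 0.4},
)
print(round(cost, 2))  # 0.61
```

Bandwidth-related factors such as retransmission latency could be added as extra attribute/weight pairs without changing the combining rule.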
Other communication issues, such as bandwidth limitations, can similarly be categorized as additional cost factors insofar as they impact a specific robot. For instance, if a robot needed to retransmit a message due to limited bandwidth, costs in terms of additional time latency and energy used in retransmission would likely result.
There are two types of adaptive methods: firstly, uniform communication adaptation; secondly, adaptive neighborhoods of communication.
Uniform switching between methods
In the first method, all robots simultaneously switch between mutually exclusive communication methods as needed. To facilitate this form of adaptation, each robot autonomously maintains a cost estimate, V, used to decide which communication method to use. When a robot detects no resource conflicts, it decreases this cost estimate, V, by an amount Wdown. When a robot senses that a conflict is occurring, the value of V is increased by an amount Wup. The values of V are then mapped to a set of communication schemes, ranging from those with little cost overhead, such as no communication, to more robust methods with higher overheads, such as the localized and centralized methods.
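The update rule can be sketched in a few lines; the threshold values, step sizes and method names here are illustrative assumptions:

```python
# Uniform adaptation sketch: each robot keeps a cost estimate V,
# increased by W_up on conflict and decreased by W_down otherwise,
# then mapped to a communication method. Thresholds are assumed.

W_UP, W_DOWN = 3.0, 1.0

def update_estimate(v, conflict):
    return v + W_UP if conflict else max(0.0, v - W_DOWN)

def select_method(v):
    if v < 5.0:
        return "none"        # cheapest: no communication
    elif v < 10.0:
        return "localized"   # moderate overhead
    return "centralized"     # most robust, highest overhead

v = 0.0
for conflict in [True, True, True, False]:  # observed conflict history
    v = update_estimate(v, conflict)
print(select_method(v))  # v = 3 + 3 + 3 - 1 = 8, prints "localized"
```

Because every robot runs the same rule on similar observations, the group tends to switch methods roughly in unison, which is what keeps the methods mutually exclusive in practice.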
Adaptive neighborhoods of communication
The advantage of the first adaptive approach lies in its simplicity. The uniform adaptive approach switches between existing coordination methods based on cost estimates. Assuming one analyzes a new domain with completely different communication methods, and can order those methods by their communication costs, this approach remains equally valid, as it implements existing methods and reaches the highest level of productivity among them.
Robotic Communication Approaches
Groups of robots are likely to accomplish certain tasks more robustly and quickly than single robots. Many robotic domains, such as robotic search and rescue, vacuuming, de-mining, and waste clean-up, are characterized by limited operating spaces where robots are likely to collide. Some type of information transfer is likely to be helpful in facilitating coherent behavior and maintaining group cohesion under such conditions, allowing robotic groups to better achieve their tasks. This is especially true as robotic domains are typically fraught with uncertainty and dynamics such as hardware failures, noisy sensors, and changing environmental conditions.
Communication should always seem advantageous: the more information a robot has, the better. However, assuming communication has a cost, one must also consider the resources consumed in communicating, and whether the communication cost appropriately matches the needs of the domain. We believe that different communication schemes are best suited to different environmental conditions. Because no single communication method is always most effective, one way to improve communication in coordination is to find a mechanism for switching between different communication protocols to match the given environment.
The model explicitly includes resources such as the energy and time spent communicating. In situations where conflicts between group members are common, more robust means of communication are most effective.
There are two novel, domain-independent adaptive communication methods that use communication cost estimates to alter their communication approach based on domain conditions. In the first approach, robots uniformly switch their communication scheme between differing communication approaches; each robot contains full implementations of several communication methods. The second approach represents a generalized communication scheme that allows each robot to adapt to its domain conditions independently: each robot sets its own communication radius, creating a sliding scale of communication from localized to centralized methods.
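A minimal sketch of the second approach, assuming invented bounds and step sizes for the communication radius:

```python
# Per-robot adaptive neighborhood: grow the communication radius when
# conflicts occur, shrink it otherwise, sliding between localized
# (small radius) and centralized (radius covering the whole arena).
# R_MIN, R_MAX and the step sizes are illustrative assumptions.

R_MIN, R_MAX = 1.0, 50.0  # assumed arena scale; R_MAX acts as centralized

def adapt_radius(radius, conflict, grow=5.0, shrink=1.0):
    if conflict:
        return min(R_MAX, radius + grow)
    return max(R_MIN, radius - shrink)

r = R_MIN
for conflict in [True, True, False]:  # this robot's conflict history
    r = adapt_radius(r, conflict)
print(r)  # 1 + 5 + 5 - 1 = 10.0
```

Unlike the uniform method, each robot runs this rule on its own observations, so robots in congested areas broadcast widely while isolated robots stay quiet.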
The Anthropomorphic Design of Robots
Robots are becoming available in a wide variety of roles. A recent report by the UN Economic Commission for Europe and the International Federation of Robotics predicts that 4.1 million robots will be working in homes by the end of 2007. The implication is that as they roll, crawl and walk out of the laboratory and into the real world, people in the real world will be using them: soldiers, families, nurses, and teachers. These users will more than likely not have a background in engineering, nor care about the intricacies of the control algorithms in their robot. To them, robots will be tools in the same way as a PC or DVD player.
However, robots differ significantly from most consumer electronics in two respects. First, robots are often designed to use human communication modalities, for example hearing and speech in place of LED displays and buttons. This is sometimes because these modalities are implied by the robot's anthropomorphic design, and sometimes for practical reasons: robots are usually mobile, and even a remote control may be of limited practical use. Second, due to their embodiment, robots can supply rich feedback in many forms: anthropomorphic ones such as gestures, speech and body language, and artificial ones such as music and lights.
Current consumer robots such as the Sony AIBO combine both of these aspects. Using real-time communication, a robot can engage the user in active social interaction and, importantly, even instigate interaction. Most consumer electronics are passive: there is interaction only when instigated by a human, and that interaction is largely in one direction, from the human to the machine.
A study of people's expectations of a robot companion indicated that a large proportion of the participants were in favour of a robot companion, especially one that could communicate like a human. Humanlike appearance and behavior were less important.
Robots with Omnidirectional Wheels
Omnidirectional wheels have become a popular choice for mobile robots, because they allow a robot to drive on a straight path from a given location on the floor to another without having to rotate first. Moreover, translational movement along any desired path can be combined with a rotation, so the robot arrives at its destination at the correct angle.
Most omnidirectional wheels are based on the same general principle: the wheel can slide freely in the direction of the motor axis, while the wheel proper provides traction in the direction normal to the motor axis. To achieve this, the wheel is built from smaller wheels attached along the periphery of the main wheel. This kind of wheel has been used on small-size and middle-size omnidirectional robots in RoboCup since 2002. The wheel is a variation of the Swedish wheel, which uses rollers whose rotation direction is neither parallel nor perpendicular to the motor axis.
Two or more omnidirectional wheels are used to drive a robot; each wheel provides traction in the direction parallel to the floor and normal to the motor axis. The forces add up to a translational and a rotational motion of the robot. If it were possible to mount two orthogonally oriented omnidirectional wheels right under the center of a robot with a circular base, then driving the robot in any desired direction would be trivial: to give the robot a speed with respect to a Cartesian coordinate system attached to the robot, each wheel would just have to provide one of the two speed components.
Since the motors and wheels need some space, this simple arrangement is not possible, and the wheels are usually mounted on the periphery of the chassis. This also makes it easier to cancel any rotational torque that could make it difficult to drive the robot on a straight path. The popular configurations of omnidirectional robots are three- and four-wheeled.
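For the three-wheeled configuration, the standard inverse kinematics can be sketched as follows; the wheel placement radius is an assumed example value:

```python
# Inverse kinematics sketch for a three-wheeled omnidirectional robot
# with wheels mounted 120 degrees apart at distance R from the center.
# Each wheel's drive speed is the projection of the desired body
# velocity (vx, vy, omega) onto that wheel's traction direction.

import math

R = 0.15  # center-to-wheel distance in meters (assumed value)
WHEEL_ANGLES = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]

def wheel_speeds(vx, vy, omega):
    # v_i = -sin(theta_i)*vx + cos(theta_i)*vy + R*omega
    return [-math.sin(t) * vx + math.cos(t) * vy + R * omega
            for t in WHEEL_ANGLES]

# Pure rotation: every wheel must turn at the same speed R*omega.
print(wheel_speeds(0.0, 0.0, 1.0))  # [0.15, 0.15, 0.15]
```

The same projection formula extends to the four-wheeled configuration by listing four mounting angles instead of three.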
R2: a Humanoid Robot Inspired by the R2D2 Droid
We think the first important step towards having robots share the environment of human beings is to work out a set of basic behaviors that make a robot acceptable to humans. R2 is a PC-based robot capable of moving indoors, on one floor of a building. It has been designed as a research platform with a rich set of sensors but only basic movement abilities. The robot has a height comparable with that of a person, so it is not perceived as a toy by the people interacting with it. R2 was designed using the popular R2D2 droid from the Star Wars saga as an inspiration, though it is slightly bigger than the original.
The first application for R2 will be to stroll around the floor, helping visitors find rooms. It will also play the role of a bridge between the real world and the virtual world of the internet. During development it faced several problems common to all robots, such as robust navigation with collision avoidance, route finding, vision, and listening to the environment.
R2's sensors include:
• 6 ultrasonic sensors on the head.
• 24 infrared sensors distributed around the body.
• 6 light sensors on the head.
• 1 compass.
• 1 camera with a stereoscopic vision system.
• 2 stereo audio microphones.
• 2 PCs monitoring both software and hardware state.
• Wi-Fi signal reception.
• Bluetooth signal reception.
• A voltmeter to check the battery.
• Switches to sense body posture.
Actuators allow the robot to move, rotate its head and shift its shape.
The navigation system is structured into three layers. The first, reactive layer avoids collisions; when no collisions need to be avoided, a second layer relies on the compass to decide the direction to follow; a third layer uses Wi-Fi signals from different access points to triangulate an approximate robot position, which is used to check whether a desired location has been reached.
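The Wi-Fi positioning layer can be illustrated with textbook trilateration from three access points at known positions; the coordinates and measured distances below are invented for the example, and in practice the distances would be rough estimates derived from signal strength:

```python
# Trilateration sketch: given three access points at known positions
# and an estimated distance to each, subtracting the circle equations
# pairwise yields a 2x2 linear system for the robot position (x, y).

def trilaterate(p1, d1, p2, d2, p3, d3):
    ax, ay = p1; bx, by = p2; cx, cy = p3
    A = 2 * (bx - ax); B = 2 * (by - ay)
    C = d1**2 - d2**2 - ax**2 + bx**2 - ay**2 + by**2
    D = 2 * (cx - bx); E = 2 * (cy - by)
    F = d2**2 - d3**2 - bx**2 + cx**2 - by**2 + cy**2
    det = A * E - B * D
    x = (C * E - B * F) / det
    y = (A * F - C * D) / det
    return x, y

# Access points at (0,0), (10,0), (0,10); distances measured to (2, 3).
x, y = trilaterate((0, 0), 13**0.5, (10, 0), 73**0.5, (0, 10), 53**0.5)
print(round(x, 2), round(y, 2))  # 2.0 3.0
```

Real signal-strength distances are noisy, so a deployed system would smooth several such estimates rather than trust one fix.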
Navigation and Mission Planning for Military Robotics
There are three different aspects of military robotics that should be differentiated. Starting from the top level, there is mission planning, then path planning, and finally navigation. Mission planning is mission-specific and changes considerably according to the scenario.
First, with the data provided by mission planning, path planning produces paths and waypoints, taking into account the dynamic and kinematic capabilities of the robots involved in the mission. Navigation consists of following the initial paths while avoiding obstacles along them. In order to pass or avoid obstacles and to recover from small path changes, the robots must be equipped with position and distance sensors. Control architectures for navigation are generally reactive, meaning that there is a strong coupling between the sensor data and the motion commands sent to the robot actuators. These architectures are based on simple behaviors that are combined using a Behavior Coordination Mechanism (BCM). The systems have to provide at least the following capabilities: sensor information distribution, and distributed behavior communication and coordination mechanisms in order to implement coordination.
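A behavior coordination mechanism of this kind can be sketched as a weighted blend of simple behaviors; the behavior set, weights and blending rule here are illustrative assumptions rather than a description of any specific fielded system:

```python
# BCM sketch: each behavior proposes a motion vector, and the
# coordinator blends them, letting collision avoidance kick in only
# when an obstacle is close. All names and weights are invented.

def follow_path(waypoint, position):
    # Head toward the next waypoint.
    return (waypoint[0] - position[0], waypoint[1] - position[1])

def avoid_obstacle(obstacle, position):
    # Push directly away from the nearest obstacle.
    return (position[0] - obstacle[0], position[1] - obstacle[1])

def coordinate(position, waypoint, obstacle, obstacle_dist, danger=2.0):
    w_avoid = 1.0 if obstacle_dist < danger else 0.0
    fx, fy = follow_path(waypoint, position)
    ax, ay = avoid_obstacle(obstacle, position)
    return (fx + w_avoid * ax, fy + w_avoid * ay)

# Far from the obstacle, the command is pure path following.
print(coordinate((0, 0), (5, 0), (0, 10), obstacle_dist=10.0))  # (5.0, 0.0)
```

Distributing these behaviors across robots then only requires exchanging the per-behavior vectors or weights, which is the sensor-distribution and coordination capability mentioned above.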
When analyzing the five selected military tasks, several navigation aspects were found to be vital for military purposes but not yet foreseen to be solved at the current development speed. The following vital gaps were found in the field of navigation and mission planning:
• Autonomous road following.
• Autonomous driving in mixed traffic.
• Moving in all terrains in all weather conditions.
• Following the leader, manned or autonomous.
To achieve the roadmap, the following has to be done:
• Prioritize different driving conditions and concepts, and agree on a real target scenario.
• Decide and develop experimental systems.
• Organize the trials.
• Define the performance measurements.
• Improve the navigation technology through experimental systems.
• Manage the navigation technology group.
• Develop coordination and interaction within the technology group.
Three Sections of Robot Programming
Programming the Robot
Programming the robot can be broken down into three sections: first the development board, second the walking program, and finally the vision program. The main challenge when programming the robot was learning the programs, which included applying theory teammates had learned as well as communicating with others who had done similar programming.
The development board
You can search the internet for a development board capable of executing the actions of the motors. The main selection criteria for the development board were its features, its size, how recent the technology was, and whether it would have the capability to eventually add more advanced features.
Walking program
The walking program was intended to have all fifteen motors working simultaneously to allow the robot to walk. The main walking program would coordinate the walking motion of the legs with the movement of the arms in order to better allow the robot to maintain its balance. The other aspect of the walking program was allowing the robot to correct its hip placement before walking. This was all written in the C programming language and controlled by the development board.
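The leg and arm coordination described above can be sketched as a phase-offset gait; the original program was written in C, but for brevity this illustration uses Python, with invented joint names, amplitudes and phase offsets:

```python
# Phase-based gait sketch: each joint follows a sine wave offset
# within the gait cycle so that arm swing opposes leg swing for
# balance. Joint names and parameters are illustrative assumptions.

import math

# Phase offsets as fractions of the gait cycle; arms oppose legs.
PHASE = {"left_leg": 0.0, "right_leg": 0.5,
         "left_arm": 0.5, "right_arm": 0.0}

def joint_angle(joint, t, amplitude=20.0, period=1.0):
    # Commanded angle in degrees at time t seconds into the cycle.
    phase = 2 * math.pi * (t / period + PHASE[joint])
    return amplitude * math.sin(phase)

# At t = 0.25 s the left leg is at peak swing and the left arm fully back.
print(round(joint_angle("left_leg", 0.25), 1),
      round(joint_angle("left_arm", 0.25), 1))  # 20.0 -20.0
```

A real fifteen-motor controller would evaluate such a table for every joint each control tick and send the angles to the development board's servo outputs.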
Vision program
The main function of the vision program is to take the images the camera gathers and process them. A key aspect of the robot is having the correct type of camera. The camera has to be able to communicate with the development board and to output uncompressed data, which makes it easier to program against. Other essential factors are the speed and accuracy of the image processing. To help in learning this kind of robot programming, you can consult an expert at your university. The vision program is the last of the three sections, intended to incorporate both walking and image processing.
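A minimal version of the processing step might look like the following, assuming the uncompressed frame arrives as rows of RGB tuples; the target-color threshold and frame layout are invented for the example:

```python
# Vision sketch: threshold a raw RGB frame for a target color and
# report the blob centroid in pixel coordinates. The color test and
# frame contents below are illustrative assumptions.

def find_target(frame, is_target):
    # frame: rows of (r, g, b) tuples; returns the centroid or None.
    hits = [(x, y) for y, row in enumerate(frame)
            for x, px in enumerate(row) if is_target(px)]
    if not hits:
        return None
    n = len(hits)
    return (sum(x for x, _ in hits) / n, sum(y for _, y in hits) / n)

orange = lambda p: p[0] > 200 and p[1] > 80 and p[2] < 80  # "ball" color
frame = [[(0, 0, 0)] * 4 for _ in range(4)]
frame[1][2] = frame[2][2] = (255, 120, 40)  # two target pixels, column 2
print(find_target(frame, orange))  # (2.0, 1.5)
```

Working directly on uncompressed pixels like this avoids a decode step, which is one reason the text favors a camera with uncompressed output.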
Current uses of Humanoid Robots
Currently, humanoid robots are being implemented in a wide range of industries. The most common place to find humanoid robots is in the entertainment industry. One of the popular attractions that uses these robots is the Hall of Presidents at the Walt Disney World theme park in Florida, America. This hall contains robots created to imitate past and current presidents. Their life-like mannerisms and appearance add an element of humanity to the attraction, while still being technologically fascinating. In terms of products available to customers, Sony developed a robot named Qrio which runs, dances, recognizes faces, maintains its balance, and can get up if knocked over.
Humanoid robots currently have a couple of popular uses that will eventually be expanded upon in the work force. These robots are being used as receptionists in large companies as well as at some universities. Some of their capabilities include greeting people when they enter, giving directions and transferring phone calls. Security is another area in which humanoid robots are being introduced into the work force. Task, a Japanese company, created a robot named Robo-Guard whose capabilities include patrolling around the clock, using an elevator, replacing its own battery and wielding a fire extinguisher.
The robot designed by the Huazhong University of Science & Technology (HUST) Robot Club had several limitations; the ones that stood out were the lack of a torso, arms and head. Lacking these features, the robot did not fit the definition of a humanoid robot. Another problematic feature was the unusual design of its feet, which were unnecessarily large and conflicted with one another while in motion.
Regardless of its flaws, the HUST robot was able to walk. However, the walking motion was not steady, due to a poorly assembled leg structure and inadequate motors. It was capable of correcting its leg placement, aligning both legs straight ahead, before being given the instruction to walk forward.
Robot Platforms in the Military
The robotic platform is the glue that holds together all the other aspects of a fieldable tactical military unmanned ground vehicle (UGV). Unless the platform exhibits outstanding ruggedness and mobility, it will fail to reach its target location, and if the UGV cannot deploy its sensors at the correct location, the mission is useless.
The platform should integrate a drive system, a power supply sufficient for the required mission duration, an advanced communication system capable of returning real-time information to the user, and a human-machine interface (HMI) that allows long-duration, high-stress operations. The platform must have very high immunity to electromagnetic interference, and logically any tactical UGV must not impose a heavy load on available manpower or systems.
Since all these functions rely on the platform or chassis to hold the system together, any new tactical platform must be modular in concept. Like the scientific and aircraft industries, which have standards for shapes and hole-mounting patterns, we should strive to arrive at a common standard such that any type of sensor pack could be incorporated into a UGV of a given size. Standardizing the connectors and power supplies joining platform and equipment would also be an advantage.
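The modular idea can be sketched in software terms: if every sensor pack conforms to one common interface (the software analogue of a standard mounting pattern and connector), the chassis never needs to know which packs are attached. The class and method names below are hypothetical illustrations, not an existing standard:

```python
from abc import ABC, abstractmethod

class SensorPack(ABC):
    """Hypothetical common interface: any pack that meets the
    standard can be mounted on a UGV of a given size."""

    @abstractmethod
    def reading(self) -> dict:
        """Return the pack's latest measurements as a dict."""

class UGVPlatform:
    """Chassis that accepts any standard-compliant sensor pack."""

    def __init__(self):
        self.packs = []

    def mount(self, pack: SensorPack):
        # Mounting requires only conformance to the interface,
        # mirroring a common physical mounting/connector standard.
        self.packs.append(pack)

    def poll(self) -> dict:
        # Merge readings from every mounted pack into one report.
        merged = {}
        for pack in self.packs:
            merged.update(pack.reading())
        return merged

# Two example packs (illustrative values only):
class ThermalPack(SensorPack):
    def reading(self):
        return {"thermal_c": 36.5}

class GasPack(SensorPack):
    def reading(self):
        return {"co_ppm": 4.2}

ugv = UGVPlatform()
ugv.mount(ThermalPack())
ugv.mount(GasPack())
print(ugv.poll())  # both packs report through one chassis interface
```

The design choice here is the same one the paragraph argues for in hardware: the platform depends only on the standard, so a new pack type can be fielded without modifying the chassis.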
The following tactical robotic platform developments are needed:
1. Develop and integrate the latest power-cell technology into the UGV.
2. Adopt the latest very-high-efficiency power train and motor drive systems to provide very high mobility even when damaged.
3. Refine track transmission, wheel, and suspension systems, since near-term fieldable walking remote-control vehicles will not be possible.
4. Use new materials and building methods to reduce mass while retaining performance, including ballistic protection.
5. Develop an HMI system to ensure that tactical robots do not impose a high workload on the user.
6. Develop an EMC hardening program to ensure that C3 systems operate in the real world.