The concept of autonomous ground robots came to the military effort from a decade of experience working on planetary rovers for NASA. Much of DARPA's previous work had involved large vehicles, some the size of tanks, and had successfully addressed technical specifications and requirements to the point where they could be spun off to the development community.
The planetary rover technology was promising and NASA was doing what it could, but DARPA was the place to develop the smaller vehicle ideas for military applications. It spent considerable time formulating a new program that would ultimately be called TMR (Tactical Mobile Robotics). TMR's primary goals were a robot mobile enough to fit into a backpack, to operate in urban terrain, and to travel 100 meters per operator intervention. So it was an order of magnitude smaller and an order of magnitude smarter. We came through on the mobility part pretty well and got things moving in the right direction on autonomy.
The military was interested in small robots for air, ground, and underwater use, in portable formats that could fit into areas a human could not, areas that would be left unguarded by the enemy in an urban environment. The operational focus of the program was to revolutionize urban warfare by penetrating denied areas with robots.
There are six primary imperatives, or general objectives, crucial to making a tactical robot functional:
1. Response to lost communication.
2. Tumble recovery: the robot must be invertible.
3. Anti-handling: the robot must feature a method of keeping the enemy from picking it up while not endangering innocent civilians, especially children.
4. Self-location: the robot should fuse GPS, odometry, and visual inputs to determine its location (a minimal fusion sketch follows this list).
5. Complex obstacle negotiation.
6. Self-cleaning: the robot must have the ability to clear off dust, mud, etc.
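As a rough illustration of imperative 4, here is a minimal sketch of fusing two position estimates by inverse-variance weighting; the sensor names and noise figures are assumptions for the example, not part of any fielded TMR system.

```python
# Minimal self-location sketch: inverse-variance fusion of two position
# estimates (e.g. GPS and odometry). Noise figures are illustrative.

def fuse(est_a, var_a, est_b, var_b):
    """Fuse two scalar estimates with known variances."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Example: GPS says x = 12.0 m (variance 4.0), odometry says x = 10.5 m
# (variance 1.0); the fused estimate leans toward the lower-variance source.
x, v = fuse(12.0, 4.0, 10.5, 1.0)
print(f"fused x = {x:.2f} m, variance = {v:.2f}")  # fused x = 10.80 m
```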
DARPA Killer Robots
While Asimo’s ability to walk up and down stairs was a breakthrough in robot mobility, DARPA has been pushing the envelope to develop robots that are able to autonomously and safely navigate a battlefield in combat.
Robots have been part of DARPA culture almost from day one. The agency worked to improve UAVs (Unmanned Aerial Vehicles), then known as remotely piloted vehicles, used for low altitude tactical reconnaissance over enemy territory.
DARPA work in AI (Artificial Intelligence) technologies in the 1970s spurred an interest in robotics applications, especially in academia, but the policy at DARPA itself at the time was not to emphasize robotics. Even so, the agency continued to support robotics research at Stanford and MIT through its IPTO (Information Processing Techniques Office).
DARPA took a more direct role as it began work on a family of autonomous ground, air, and sea vehicles known as the killer robots in the early 1980s. While never achieving program status, Killer Robots laid the groundwork for DARPA's future work, as did the Strategic Computing program, a major program created in the same time frame to fund DARPA work in the then rapidly evolving field of computers.
The Strategic Computing Initiative of the 1980s produced its share of disruptive technology in developing reduced instruction set processors, RAID disks, robotics, specialized graphics engines, and AI tools that are now mainstream. DARPA's investments in these technologies in their early and formative years have paid rich dividends.
Various DARPA efforts (including Killer Robots) became part of the new TTO (Tactical Technology Office) Smart Weapons Program (SWP) to develop a modular family of smart, autonomous weapons. Advances in computer technology and software such as ATR (Automatic Target Recognition) were believed to have finally made a true autonomous loitering weapon possible. The size, power demands, weight, and capability of computers had been perhaps the greatest limitation on making this a reality.
The Robot Soldier Is Coming
The Pentagon predicts that robots will be a major fighting force in the American military in less than a decade, hunting and killing enemies in war. Robots are a crucial part of the Army's effort to rebuild itself as a 21st-century fighting force, and a $127 billion project called Future Combat Systems is the biggest military contract in American history.
Robot soldiers will increasingly think, see, and react like humans. At the start, they will be remote-controlled, looking and acting like lethal toy trucks. They may take many shapes as the technology develops. And as their intelligence grows, so will their autonomy.
The robot soldier has been planned for 30 years, and the engineers involved in this project say it may take at least 30 more years to realize in full. The military has to answer tough questions if it intends to trust robots with the responsibility of distinguishing friend from foe.
Robots in combat, as envisioned by their builders, may look and move like hummingbirds, tractors or tanks, cockroaches or crickets, even like humans. They may become swarms of smart dust with the development of nanotechnology. Military robots are intended to gather intelligence, haul munitions, search buildings, or blow them up.
Several hundred robots are scouring caves in Afghanistan, digging up roadside bombs in Iraq, and serving as armed sentries at weapons depots. An armed version of the bomb disposal robot is in Baghdad, capable of firing 1,000 rounds a minute. Though controlled by a soldier with a laptop, the robot will be the first thinking machine of its kind to take up a front-line infantry position. Within a decade, a third of the military's ground vehicles and deep strike aircraft may become robotic. The United States Congress has mandated this, and many billions of dollars will be spent on military robots.
Developing Military Robotic Vehicles
Autonomous machines in the military became possible in the mid-1980s, when computer processors became faster and faster. The development of improved sensor technology in the 1990s allowed machines to pick up more information about their environment. Autonomous systems can now keep track of their whereabouts using global-positioning satellite links, and talk to commanders and comrades through wireless links that shut off automatically if the signal is in danger of being intercepted.
The first unmanned military vehicles, built in the early 1980s by the U.S. Defense Department, were huge vans the size of UPS delivery trucks, filled with hundreds of pounds of clunky computers, that could barely navigate at 5 miles an hour in relatively flat terrain. By comparison, Stryker can navigate through desert and forest environments, or drive on the road at top speeds of 60 miles an hour.
Now that the basic functioning is down, the developers are trying to make the robots smarter. They have tested a four-wheeled robot called MDARS (short for Mobile Detection Assessment and Response System), a robotic watchdog that patrols the Westminster lab's snow-covered back yard looking for intruders. It drives several feet, apparently puzzled, eyes a parking sign and halts, until a human attendant reprograms MDARS to move on.
Compared with a human, MDARS is not really that smart. Developing a robot is like raising children. Even Stryker's most rudimentary movement requires complex calculations that must be taught to its brain, using hundreds of thousands of mathematical algorithms and lines of programming code. When it hits a fork in the road, it selects the gravel route instead of the dirt road. When it finds itself trapped in a cul-de-sac, it backs up to re-evaluate alternative paths. In the future Stryker will learn more tactical behaviors mimicking a human's, like running and hiding behind hills or trees in the presence of enemies.
Types of Robot Based on Function
Just as there are multiple tool boxes, multiple robotic formats exist that are suited to different application requirements:
SCARA robots are best used in dispensing, pick-and-place, and gang-picking applications for assembly and packaging where loads are moderate and high-precision accuracy is not the top priority – for instance, in cell phone assembly, placing covers or buttons in the right location. SCARA robots are appropriate for plane-to-plane moves and have a small footprint, making them an ideal choice for manufacturers with space constraints.
Delta robots also are useful in pick-and-place applications for assembly and packaging when the load is light – typically less than one kilogram – like candy or lids for jars, and they are capable of operating at very high speeds. Delta robots are ideal for plane-to-plane moves. However, they are only able to move up and down relatively short distances in the Z axis – typically less than 100 millimeters.
Articulated arm robots are ideal for applications with a large work envelope and heavier payloads. In addition to plane-to-plane moves, they are also well suited to painting or welding applications where movement over and under objects is necessary.
Cartesian robots are frequently used in everything from life sciences applications to cartoning, dispensing, palletizing, and large assembly projects. A Cartesian robot is a good choice for any system that has clearly defined x, y, and z axes.
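The selection guidance above can be summarized in a small table-driven helper. A sketch, with payload thresholds that are illustrative assumptions rather than vendor specifications:

```python
# Illustrative lookup capturing the selection guidance above; the numbers
# are assumptions for the sketch, not vendor specifications.

ROBOT_TYPES = {
    "SCARA":       {"max_payload_kg": 10.0,  "note": "plane-to-plane, small footprint"},
    "Delta":       {"max_payload_kg": 1.0,   "note": "very high speed, short Z travel"},
    "Articulated": {"max_payload_kg": 100.0, "note": "large envelope, over/under moves"},
    "Cartesian":   {"max_payload_kg": 50.0,  "note": "clearly defined x, y, z axes"},
}

def candidates(payload_kg):
    """Return the robot types whose assumed payload limit covers the load."""
    return [name for name, spec in ROBOT_TYPES.items()
            if payload_kg <= spec["max_payload_kg"]]

print(candidates(0.5))   # all four types could carry it
print(candidates(25.0))  # only Articulated and Cartesian remain
```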
Technology and Market Trends Affecting Robots
For years, robots have represented the cutting edge in manufacturing. And until recently, they have been viable only for manufacturers with complex needs and big budgets. But today, robotic solutions are becoming more versatile and easier to attain. They are a practical solution for a broadening range of manufacturing applications – from vision-directed, high speed pick and place packaging, to high precision automotive assembly and semiconductor handling.
Technology and market trends are helping bring robots to the forefront of machine design, including:
• Flexibility demands – constantly shifting consumer preferences put increased pressure on manufacturers to offer a greater variety of product styles, shapes and sizes. A programmable robot that can perform different tasks with quick changeover helps end users produce this variety for less money and using less space.
• Worker safety imperative – manufacturers around the world are increasingly focused on corporate responsibility, including making sure manufacturing operations protect the company's most valuable asset – its workers.
• Declining hardware costs – over the past 10 years, more robot sourcing options have entered a competitive market, helping lower the cost of the hardware used for robots.
• Quality improvements – the increase in robot sourcing options also has helped drive improvements in the quality of the hardware and controls available.
History of the Robot
Robotics is based on two enabling technologies: telemanipulators and the numerical control of machines.
Telemanipulators are remotely controlled machines that usually consist of an arm and a gripper. The movements of the arm and gripper follow the instructions a human gives through a control device. The first telemanipulators were used to handle radioactive material.
Numerical control allows controlling machines very precisely in relation to a given coordinate system. It was first used in 1952 at MIT and led to the first programming language for machines, called APT (Automatically Programmed Tools).
The combination of these two techniques led to the first programmable telemanipulator. The first industrial robot using these principles was installed in 1961. These are the robots one knows from industrial facilities like car construction plants.
The development of mobile robots was driven by the desire to automate transportation in production processes and by autonomous transport systems. The former led to driverless transport systems used on factory floors to move objects between different points in the production process in the late seventies.
Humanoid robots have been developed since 1975, when Wabot-I was presented in Japan. The current Wabot-III already has some minor cognitive capabilities. Another humanoid robot is Cog, developed in the MIT AI Lab since 1994. Honda's humanoid robot became well known to the public when it was presented in 1999.
Plane to Plane Homography of the Robot
The robot exists in a planar environment. This is an approximation, as the robot has a finite height above the table on which it moves. Provided the distance of the camera from the robot is sufficiently large in comparison to this height, the error from this approximation is acceptable.
The camera views the scene from an arbitrary position. The frame grabbed from it is a second 2D environment. To infer the robot position from the frame buffer, it is necessary to know the transformation between the two planes. This transformation can be decomposed into three matrix operations on the homogeneous coordinates of the robot position.
Homogeneous coordinates are a method of representing points in n-space as (n+1)-dimensional vectors with arbitrary scale. They have two inherent advantages in this application, illustrated in the equations after this list:
1. To return from the homogeneous coordinate to the n-space point, it is necessary to divide the first n elements of the vector by the (n+1)th. This allows certain non-linear transformations, such as the projective transformation, to be represented by a matrix multiplication.
2. The second advantage is that an addition/subtraction operation in n-space can also be condensed into a single matrix multiplication.
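Both advantages can be made concrete in two lines. For a 2D point, recovery from homogeneous form is a division by the third element, and a translation (an addition in 2-space) becomes a single matrix multiplication:

```latex
\mathbf{p} = \begin{pmatrix} wx \\ wy \\ w \end{pmatrix}
\;\longmapsto\; \left(\tfrac{wx}{w},\, \tfrac{wy}{w}\right) = (x, y), \qquad w \neq 0,

\begin{pmatrix} x + t_x \\ y + t_y \\ 1 \end{pmatrix} =
\begin{pmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}.
```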
The first matrix multiplication is the rigid-body transformation from world-centred coordinates on the table to camera-centred coordinates. This is a transform from the 3-element homogeneous coordinate representing the 2D point on the ground plane into a 4-element coordinate reflecting its 3D position in camera-centred coordinates.
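Composing the three matrix operations yields a single 3×3 plane-to-plane homography. A minimal numpy sketch of applying it and inverting it; the matrix values here are invented for illustration, standing in for a calibrated camera:

```python
import numpy as np

# Plane-to-plane mapping: a single 3x3 homography H takes homogeneous
# ground-plane points to homogeneous image points. H is made up here.
H = np.array([[ 1.2,  0.1, 320.0],
              [-0.1,  1.1, 240.0],
              [ 1e-4, 2e-4,   1.0]])

def world_to_image(x, y, H):
    """Project a ground-plane point into the image."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]        # divide by the 3rd element

def image_to_world(u, v, H):
    """Recover the robot's ground-plane position from an image point."""
    p = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

u, v = world_to_image(2.0, 3.0, H)
print(image_to_world(u, v, H))  # ~ (2.0, 3.0), round-trip check
```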
The Motion of Mobile Robot in Unknown Environment
A robotic vehicle is an intelligent mobile machine capable of autonomous operation in structured and unstructured environments; it must be capable of sensing (perceiving its environment), thinking (planning and reasoning), and acting (moving and manipulating). Recent developments in autonomy requirements, intelligent components, multi-robot systems, and massively parallel computers have made the IAS (Intelligent Autonomous System) widely used, notably in planetary exploration, the mining industry, and highways. But current mobile robots do relatively little that is recognizable as intelligent thinking. This is because:
• Perception does not meet the necessary standards.
• Much of the intelligence is tied up in task specific behavior and has more to do with particular devices and missions than with the mobile robots in general.
• Much of the challenge of mobile robots requires intelligence at a subconscious level.
The motion of mobile robots in an unknown environment containing stationary, unknown obstacles requires algorithms that can solve the path and motion planning problem for these robots so that collisions are avoided. In order to execute the desired motion, the mobile robot must navigate intelligently and avoid obstacles so that the target is reached. The problem becomes more difficult when the parameters that describe the model and/or the workspace of the robot are not exactly known.
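One classical approach to this problem is the artificial potential field method: the target attracts the robot while obstacles repel it. A minimal sketch follows; the gains, influence radius, and map are illustrative assumptions, and real potential fields can get stuck in local minima:

```python
import math

# Artificial potential field step: attraction to the goal plus repulsion
# from obstacles within an influence radius. All constants are assumed.
def step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, influence=1.5, dt=0.1):
    fx = k_att * (goal[0] - pos[0])            # attraction toward the goal
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:                   # repulsion from near obstacles
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-6 < d < influence:
            mag = k_rep * (1.0 / d - 1.0 / influence) / d ** 2
            fx += mag * dx / d
            fy += mag * dy / d
    norm = math.hypot(fx, fy)                  # clamp the step for stability
    if norm > 1.0:
        fx, fy = fx / norm, fy / norm
    return pos[0] + fx * dt, pos[1] + fy * dt

pos, goal, obstacles = (0.0, 0.0), (5.0, 5.0), [(2.5, 2.0)]
for _ in range(300):
    pos = step(pos, goal, obstacles)
print(pos)  # close to (5, 5), having been deflected around the obstacle
```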
Navigation: the Key Issue in Autonomous Robots
Navigation is the ability to move about and to be self-sufficient. The IAS (Intelligent Autonomous System) must be able to avoid obstacles and make its way toward its target. In fact, recognition, learning, decision making, and action constitute the principal problems of navigation. One specific characteristic of mobile robots is the complexity of their environment. Therefore, one of the critical problems for mobile robots is path planning, which is still an open problem being studied extensively. Accordingly, one of the key issues in the design of an autonomous robot is navigation, for which navigation planning is one of the most vital aspects.
Several models of the environment have been applied where the principle of navigation is used to do path planning. For example, a grid model has been adopted by many researchers, where the robot environment is divided into a grid of squares, each indicating the presence or absence of an object.
Besides, the most important task of a mobile robot's navigation system is to move the robot to a named place in a known, unknown, or partially known environment. In most practical situations the mobile robot cannot take the most direct path from the start to the goal point.
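On the grid model just described, a shortest collision-free path (in number of steps) can be found with breadth-first search. A minimal sketch with an invented occupancy grid:

```python
from collections import deque

# Occupancy grid: 1 marks the presence of an object, 0 a free square.
GRID = [[0, 0, 0, 1, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]

def bfs(grid, start, goal):
    """Shortest path on a 4-connected grid, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # rebuild path by backtracking
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

print(bfs(GRID, (0, 0), (4, 4)))
```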
Intelligent Autonomous Robots in Industrial Fields
The theory and practice of IAS are currently among the most intensively studied and promising areas in computer science and engineering, and they will certainly play a primary role in the future. These theories and applications provide a source linking all fields in which intelligent control plays a dominant role. Cognition, perception, action, and learning are essential components of such systems, and their use is tending extensively towards challenging applications (service robots, micro robots, bio robots, guard robots, warehousing robots). Many traditional working machines already used, e.g., in agriculture, construction, or mining are going through changes to become remotely operated or even autonomous. Autonomous driving in certain conditions is then a realistic target in the near future.
Industrial robots used for manipulating goods typically consist of one or two arms and a controller. The term controller is used in at least two different ways; in this context we mean the computer system used to control the robot, often called a robot workstation controller. The controller may be programmed to operate the robot in a number of ways, thus distinguishing it from hard automation.
Most often, industrial robots are stationary, and work is transported to them by conveyors or robot carts, often called AGVs (autonomous guided vehicles). AGVs are becoming increasingly used in industry for material transport. Most frequently, these vehicles use a sensor to follow a wire embedded in the factory floor.
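The wire-following idea reduces to a simple feedback loop: measure the lateral offset from the wire and steer to cancel it. A sketch with a proportional controller; `read_offset`, `set_steering`, and the gain are hypothetical stand-ins for real AGV hardware interfaces:

```python
K_P = 2.0  # proportional gain (assumed)

def follow_wire_step(read_offset, set_steering):
    """One control cycle of a wire-following AGV."""
    offset_m = read_offset()            # +ve = wire is to the left
    steering = -K_P * offset_m          # steer back toward the wire
    set_steering(max(-1.0, min(1.0, steering)))  # clamp to actuator range

# Simulated run: the offset shrinks each cycle as the controller corrects.
offset = [0.30]
follow_wire_step(lambda: offset[0],
                 lambda s: offset.__setitem__(0, offset[0] + 0.1 * s))
print(offset[0])  # 0.24 after one cycle
```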
Autonomy Requirements of Autonomous Robots
Several autonomy requirements must be satisfied for the robot to perform its tasks well. Some of these requirements are:
Thermal
To carry out tasks in various environments, as in space applications, the thermal design must be taken into account, especially where the temperature can vary significantly. Temperature-sensitive electronic equipment on board, rated only for a limited range around ambient temperatures, must be placed in thermally insulated compartments.
Energy
An IAV must be able to operate autonomously for a specified period, and energy is one very limited resource in underwater and space applications. So an IAS usually carries a rechargeable energy system with appropriately sized batteries on board.
Communication Management
The components on board the vehicle and those of the surface station must be interconnected by a two-way communication link. In both underwater and space applications, a data management system is usually necessary to transfer data from the IAS to terrestrial storage and processing stations over this link. Indeed, the data management system must be split between the components of the vehicle and the surface station. Thus, the vehicle must be more autonomous and intelligent to perform and achieve its tasks. Due to limited resources and weight constraints, the major data processing and storage capacities must be on the surface station. Although individual vehicles may have wildly different external appearances, different mechanisms of locomotion, and different missions or goals, many of the underlying computational issues involved are related to sensing and sensor modeling, spatial data representation, and reasoning.
Materials to Build Modular Robots
Modular robot hardware such as PolyBot, CONRO, M-TRAN, Molecule, Crystal, I-Cube, and Molecubes generally has nodes that are greater than 10 cm in their smallest dimension and cost more than $50 per node to produce. This is because they are typically made using off-the-shelf, macro-scale electronic and mechanical components, and a large number of standardized components are required to produce a functional system.
To make modular robots a useful raw material for building products, the per-node cost must be substantially reduced. Over the past several years, modular robot design has moved toward systems with few or no moving parts in the nodes. Some of these systems utilize an external fluid bath and external agitation to provide the force and energy to make and break connections, controlling node-to-node adhesion to steer the structure toward the desired result. Kirby described 24 mm diameter cylindrical nodes capable of translating in a plane by rotating around one another, activating a radially positioned array of electromagnets.
Another strategy is employed by the Miche self-disassembling modular robot, which starts with all nodes connected and then releases magnetic latches to disconnect nodes that are not part of the structure. These systems have lower per-node cost and are more amenable to microfabrication than the previous generation of designs.
Scorpion Robot Vision
Robot vision is one of Scorpion Vision Software's focus areas. Scorpion gives the robot the ability to pick products with high precision in 2D or 3D. Flexible automation means robots, automation, and vision working together. This reduces cost and increases the flexibility and the possibility to produce several product variants in one production line at the same time – 24 hours a day – with profit. The vision system's ability to locate and identify objects is a critical element in making these systems work.
Scorpion Vision Software has been used in robot vision and inspection systems for many years. Scorpion has a complete toolbox of robust and reliable 2D and 3D image processing tools needed for robot vision, gauging, and assembly verification. Its high-accuracy, sub-pixel object location with 3DMaMa and PolygonMatch technology makes it a perfect companion to world-class imaging components.
Scorpion Vision Software is flexible and interfaces easily with standard robots. With Scorpion Vision Software it is easy to implement reliable communication with robots from any vendor. Scorpion is used with robots from ABB, Motoman, Kuka, Fanuc, Kawasaki, Sony, and Rexroth Bosch over serial and TCP/IP ports. Typical specifications cover 2D and 3D robot vision and robot guidance, with 100% inspection and an aim of zero defects.
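As an illustration of vision-to-robot communication over TCP/IP, here is a minimal sketch that sends a located pick pose as a text line. The host, port, and message format are invented for the example; a real integration would follow the robot vendor's documented protocol, not this one:

```python
import socket

def send_pick_pose(x_mm, y_mm, angle_deg, host="192.168.0.10", port=5000):
    """Send a located part pose to a robot controller as one text line."""
    msg = f"PICK {x_mm:.2f} {y_mm:.2f} {angle_deg:.2f}\n"
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(msg.encode("ascii"))
        reply = sock.recv(64).decode("ascii").strip()
    return reply  # e.g. an acknowledgement string from the controller

# send_pick_pose(123.45, 67.89, 15.0)  # requires a listening controller
```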
Behavior Language of TDL
Robot control architectures can be developed as three interacting layers. The behavior layer interacts with the physical world. The planning layer is used for defining how to achieve goals. The executive layer connects these two layers, issuing commands to the behavior layer that result from the plans and passing sensory data from the behavior layer up to the planning layer to enable planning that is reactive to the real world. The executive layer is thus responsible for expanding abstract goals into low-level commands, executing them, and handling exceptions.
The main motivation behind developing the Task Description Language (TDL) is that using conventional programming languages for defining such task-level control functions results in highly non-linear code that is also difficult to understand, debug, and maintain. TDL extends C++ with syntactic support for task-level control. A compiler is available to translate TDL code into C++ code that uses the Task Control Management (TCM) libraries.
The basic data type of TDL is the task tree. The leaves of a task tree are generally commands that will perform some physical action in the world. Other node types are goals, representing higher-level tasks, monitors, and exceptions. An action associated with such nodes can perform computations and change the structure of the task tree. The nodes of a task tree can be executed sequentially or in parallel. It is also possible to expand a sub-tree but wait for some synchronization constraints to hold before beginning to execute it.
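To make the task-tree idea concrete, here is a small conceptual mock-up in Python. Real TDL extends C++ and compiles against the TCM libraries; this sketch only illustrates command leaves and sequential versus parallel sub-trees:

```python
from concurrent.futures import ThreadPoolExecutor

class Node:
    """Task-tree node: a leaf carries an action; inner nodes group children."""
    def __init__(self, name, action=None, children=(), parallel=False):
        self.name, self.action = name, action
        self.children, self.parallel = list(children), parallel

    def run(self):
        if self.action:                       # command leaf
            self.action()
        if self.parallel:                     # concurrent sub-tasks
            with ThreadPoolExecutor() as pool:
                list(pool.map(lambda c: c.run(), self.children))
        else:                                 # sequential sub-tasks
            for child in self.children:
                child.run()

tree = Node("deliver", children=[
    Node("navigate", action=lambda: print("moving to goal")),
    Node("sense-and-grasp", parallel=True, children=[
        Node("watch", action=lambda: print("monitoring")),
        Node("grasp", action=lambda: print("grasping")),
    ]),
])
tree.run()
```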
Robotic Perception (Localization)
A robot receives raw sensor data from its sensors. It has to map those measurements into an internal representation to formalize the data. This process is called robotic perception. It is difficult because, in general, the sensors are noisy and the environment is partially observable, unpredictable, and often dynamic.
Good representations should meet three criteria: they should
• Contain enough information for the robot to make the right decisions.
• Be structured in a way that it can be updated efficiently.
• Be natural, meaning that internal variables correspond to natural state variables in the physical world.
Filtering and updating the belief state are not covered here, as they were covered in earlier presentations. Relevant topics are Kalman filters and dynamic Bayes nets.
A very generic perception task is localization: the problem of determining where things are. Localization is one of the most pervasive perception problems in robotics. Knowing the location of the objects in the environment that the robot has to deal with is the basis for any successful interaction with the physical world. There are three increasingly difficult flavors of localization problems (a minimal sketch of the second follows this list):
• Tracking – if the initial state of the object to be localized is known, you can simply track the object.
• Global localization – the initial location of the object is unknown; you first have to find the object.
• Kidnapping – the object is moved to an unknown location during operation; this is the most difficult task.
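A minimal sketch of global localization using a 1D histogram (Markov) filter over a known map; the map, sensor model, and noise-free motion are assumptions chosen for brevity:

```python
WORLD = ["door", "wall", "door", "wall", "wall"]   # known map

def sense(belief, measurement, p_hit=0.8, p_miss=0.2):
    """Bayes update: reweight cells by how well they match the sensor."""
    new = [b * (p_hit if WORLD[i] == measurement else p_miss)
           for i, b in enumerate(belief)]
    total = sum(new)
    return [b / total for b in new]

def move(belief, steps=1):
    """Shift the belief to follow a (noise-free, for brevity) motion."""
    n = len(belief)
    return [belief[(i - steps) % n] for i in range(n)]

belief = [1.0 / len(WORLD)] * len(WORLD)   # global localization: uniform
belief = sense(belief, "door")             # robot sees a door
belief = move(belief, 1)                   # robot moves one cell right
belief = sense(belief, "wall")             # now it sees a wall
print([round(b, 3) for b in belief])
# Mass concentrates on cells 1 and 3, the two positions consistent
# with the door-then-wall sequence.
```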
Robot Behavior Language of Charon
Charon is a language for the modular specification of interacting hybrid systems and can also be used for defining robot control strategies. The building blocks of the system are agents and modes.
An agent can communicate with its environment via shared variables and also communication channels. The language supports the operations of composition of agents for concurrency, hiding of variables for information encapsulation, and instantiation of agents for reuse. Therefore complex agents can be built from other agents to define hierarchical architectures.
Each atomic agent has a mode which represents a flow of control. Modes can contain sub-modes and transitions between them, so it is possible to connect modes to others with well-defined entry and exit points. There are also default entry and exit points. The former supports history retention: default entry transitions are allowed to restore the local state from the most recent exit. A default exit point can be used for group transitions, which apply to all sub-modes, to support exceptions.
Transitions can be labeled by guarded actions to allow discrete updates. In a discrete round only one atomic agent will be executed, and the execution will continue as long as there are enabled transitions. Since a mode can contain sub-modes, group transitions are examined only when there are no enabled transitions in the sub-modes.
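A toy sketch of the default-entry/history-retention idea in plain Python; this mirrors the concept described above, not Charon's actual syntax or semantics:

```python
class Mode:
    """A mode with sub-modes and a default entry that restores history."""
    def __init__(self, name, submodes):
        self.name = name
        self.submodes = submodes
        self.last_active = None           # remembered for default entry

    def enter(self):
        if self.last_active:              # history retention on re-entry
            return self.last_active
        self.last_active = next(iter(self.submodes))
        return self.last_active

navigate = Mode("navigate", ["cruise", "avoid"])
print(navigate.enter())         # first entry: initial sub-mode "cruise"
navigate.last_active = "avoid"  # suppose the mode was exited while avoiding
print(navigate.enter())         # default entry restores "avoid"
```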
Robot Programming Systems Review
A review of robot programming systems was conducted in 1983 by Tomas Lozano-Perez. At that time, robots were only common in industrial environments, the range of programming methods was very limited, and the review examined only industrial robot programming systems. A new review is necessary to determine what has been achieved in the intervening time, and what the next steps should be to provide convenient control for the general population as robots become ubiquitous in our lives.
Lozano-Perez divided programming systems into three categories: guiding systems, robot-level programming systems, and task-level programming systems. For guiding systems the robot was manually moved to each desired position and the joint positions recorded. For robot-level systems a programming language was provided with the robot. Finally, task-level systems specified the goals to be achieved (for example, the positions of the objects).
By contrast, this review divides the field of robot programming into automatic programming, manual programming, and software architectures. The first two distinguish programming according to the actual method used, which is the crucial distinction for users and programmers. In automatic programming systems the user or programmer has little or no direct control over the robot code. These include learning systems, programming by demonstration, and instructive systems.
Multi Object Localization – Mapping
So far the discussion has covered only the localization of a single object, but often one seeks to localize many objects. The classical example of this problem is robotic mapping.
In the localization algorithms above, we assumed that the robot knew the map of the environment a priori. But what if it does not? Then it has to generate such a map itself. Humans have already proven their mapping skills with maps of the whole planet. Here is a short introduction to how robots can do the same.
This problem is often referred to as SLAM (simultaneous localization and mapping). The robot must not only construct a map, it must do so without knowing where it is. A further problem is that the robot may not know in advance how large the map is going to be.
The most widely used method for the SLAM problem is the EKF (Extended Kalman Filter). It is usually combined with landmark sensing models and requires that the landmarks be distinguishable. Think of a map with several distinguishable landmarks of unknown location. The robot starts to move and discovers more and more landmarks. The uncertainty about the location of the landmarks and about its own position grows with time. When the robot re-observes a landmark it discovered earlier, the uncertainty of its position and of all landmarks decreases.
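A heavily simplified 1D sketch of the EKF behavior described above: motion steps grow the robot's uncertainty, and each landmark re-observation shrinks the uncertainty of both the robot and the landmark. The state, noise values, and noise-free measurements are assumptions chosen to keep the example short:

```python
import numpy as np

x = np.array([0.0, 5.0])     # state: [robot position, landmark position]
P = np.diag([0.01, 100.0])   # landmark initially very uncertain
Q, R = 0.1, 0.04             # motion and measurement noise (assumed)

def predict(x, P, u):
    """Robot moves by u; only its own uncertainty grows."""
    return x + np.array([u, 0.0]), P + np.diag([Q, 0.0])

def update(x, P, z):
    """Measure the landmark relative to the robot: z = m - r."""
    H = np.array([[-1.0, 1.0]])
    y = z - (x[1] - x[0])                  # innovation
    S = (H @ P @ H.T)[0, 0] + R
    K = P @ H.T / S                        # Kalman gain
    x = x + K.ravel() * y
    P = (np.eye(2) - K @ H) @ P
    return x, P

true_r, true_m = 0.0, 5.0
for _ in range(5):
    true_r += 1.0
    x, P = predict(x, P, u=1.0)
    x, P = update(x, P, z=true_m - true_r)  # noise-free z for brevity
print(np.round(P, 3))  # both variances shrink; off-diagonals show correlation
```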
CORBA in Robot Control Application
CORBA is an acronym for the Common Object Request Broker Architecture. It is an open architecture, specified by the Object Management Group. The OMG exists to provide vendor independent software standards for distributed systems.
To design a CORBA object to operate in the robot control application, it was first established what data needed to be passed from one process to another. This would be done before designing any object, whether in C++, Java, or CORBA. A CORBA object, however, contains no data. It may have data types providing application-specific data structures, but apart from this it will consist only of methods. The methods required for our application were decided to be the following (mirrored in the sketch after this list):
Frame – a function taking no variables but returning a large array containing the last bitmap image grabbed by the server.
Calibrate – this function sends image coordinates corresponding to the last frame sent over by Frame, allowing remote calibration.
Newsnake – instantiates a new B-Spline snake interpolating four or more image coordinates specified as an array in the parameters of the function.
Pos – returns the current snake coordinates.
Map – returns the topology of the environment the server is controlling. This allows low-bandwidth operation by eliminating the need to use the heavyweight Frame function.
Send – finally, a function which takes image coordinates (or map coordinates if in low-bandwidth mode) and asks the server to navigate the robot to the world coordinates to which they correspond.
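A real CORBA object would be specified in OMG IDL and compiled into language stubs; as a sketch, the interface above can be mirrored in a Python abstract class (the type names are assumptions):

```python
from abc import ABC, abstractmethod
from typing import List, Tuple

Coord = Tuple[float, float]  # assumed 2D coordinate type

class RobotControl(ABC):
    """Python mirror of the six methods described above."""

    @abstractmethod
    def frame(self) -> bytes: ...                       # last grabbed bitmap

    @abstractmethod
    def calibrate(self, points: List[Coord]) -> None: ...  # coords of last frame

    @abstractmethod
    def newsnake(self, points: List[Coord]) -> None: ...   # >= 4 coords to interpolate

    @abstractmethod
    def pos(self) -> List[Coord]: ...                   # current snake coordinates

    @abstractmethod
    def map(self) -> object: ...                        # environment topology

    @abstractmethod
    def send(self, target: Coord) -> None: ...          # navigate robot to target
```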
Robot Behavior Language of Signal
SIGNAL is a language designed for safe real-time system programming. It is based on semantics defined by mathematical modeling of multiple-clocked flows of data and events. Relations can be defined on such data and event signals to describe arbitrary dynamical systems, and constraints may then be used to develop real-time applications. There are operators to relate the clocks and values of the signals. SIGNAL can also be described as a synchronous data-flow language.
Although SIGNAL is not designed specifically for robotics, the characteristics of robot programming and SIGNAL's functionalities make it possible to use it for active vision-based robotic systems. Since vision data have a synchronous and continuous nature, they can be captured in signals, and the control functions between the sensory data and control outputs can then be defined. The SIGNAL-GTi extension of SIGNAL can be used for task sequencing at the discrete level.
SIGNAL-GTi enables the definition of time intervals related to the signals and also provides methods to specify hierarchical preemptive tasks. Combining the data-flow and multitasking paradigms yields the advantages of both automata-based and concurrent robot programming. Using these advantages, a hierarchy of parallel automata can be designed. The planning level of robot control does not have a counterpart in SIGNAL, but the task level can be used for this purpose.
Behavior Language for Robot Programming
Brooks is one of the first advocates of reactive, behavior-based methods for robot programming. His subsumption architecture is based on different layers, each of which works concurrently and asynchronously to achieve individual goals. In the earlier designs, the behaviors were represented by augmented finite state machines (AFSMs), so the Behavior Language still includes AFSMs as the low-level building blocks. The Behavior Language has a Lisp-like syntax, and a compiler is available that can even target programmable-array logic circuits.
An AFSM encapsulates a behavioral transformation function where the input to the function can be suppressed or the output can be inhibited by other components of the system. It is also possible to reset an AFSM to its initial state. Each layer in the subsumption architecture has a specific goal. The higher layers can use the output of the lower levels and also affect their input and output to achieve their goals, which are generally more abstract than the goals of the lower layers. It is argued that this kind of hierarchical interaction between layers prohibits designing higher levels independently.
When an AFSM is started, it waits for a specific triggering event and then its body executed. Such events can depend on time, a predicate about the state of the system, a message deposited to a specific internal register, or other components being enabled or disabled. In the body it is possible to perform primitive actions or to put messages in order to interact with other AFSMs.
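The suppression and inhibition wiring is easier to see in code. Below is a hedged Python sketch of an AFSM; the class layout and the "avoid" layer are illustrative inventions, not Behavior Language constructs.

    # Sketch of an AFSM with input suppression and output inhibition
    # (illustration only, not Behavior Language syntax).
    class AFSM:
        def __init__(self, name, transform):
            self.name = name
            self.transform = transform     # the behavioral transformation function
            self.suppressed_input = None   # a higher layer may override our input
            self.inhibited = False         # a higher layer may block our output

        def reset(self):
            self.suppressed_input = None   # return to the initial state
            self.inhibited = False

        def step(self, sensor_input):
            if self.suppressed_input is not None:
                sensor_input = self.suppressed_input   # suppression wins
                self.suppressed_input = None
            output = self.transform(sensor_input)
            return None if self.inhibited else output  # inhibition blocks output

    # A lower 'avoid' layer; a higher layer can inject a replacement reading.
    avoid = AFSM("avoid", lambda dist: "turn" if dist < 0.3 else "forward")
    avoid.suppressed_input = 0.5           # higher layer pretends the way is clear
    print(avoid.step(0.1))                 # -> 'forward': the suppressed input wins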
The Colbert Programming Language
Colbert is called a sequencer language by its developers. It is part of the Saphira architecture, an integrated sensing and control system for robotic applications. Complex operations such as visual tracking of humans, coordination of motor controls, and planning are integrated in the architecture using the concepts of coordination of behavior, coherence of modeling, and communication with other agents. The motion control layer of Saphira consists of a fuzzy controller, and Colbert is used for the middle execution level between motion control and planning.
Colbert programs are activities whose semantics are based on FSAs (finite state automata) and which are written in a subset of ANSI C. The behavior of the robot is controlled by activities such as:
• Sequencing the basic actions that the robot will perform.
• Monitoring the execution of basic actions and other activities.
• Executing activity subroutines.
• Checking and setting the values of internal variables.
Robot control in Colbert means defining such activities as activity schemas, each of which corresponds to a finite state automaton. The activity executive interprets the statements in an activity schema according to the associated FSA. The statements of a schema do not correspond one-to-one to the states of the FSA; a conditional or looping statement, for instance, will typically be represented by a set of nodes.
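As a rough illustration of how an activity schema maps onto an FSA, here is a hedged Python sketch (Colbert itself uses a C-like syntax; the state names and the robot interface used here are hypothetical):

    # Sketch of an activity schema executed as a finite state automaton
    # (illustrative Python, not Colbert's C subset; move_forward,
    # obstacle_ahead and turn are hypothetical robot calls).
    def patrol_activity(robot, max_steps=1000):
        state = "GO"
        for _ in range(max_steps):
            if state == "GO":
                robot.move_forward()       # sequencing a basic action
                state = "CHECK"
            elif state == "CHECK":
                # monitoring: a conditional becomes extra FSA nodes
                state = "AVOID" if robot.obstacle_ahead() else "GO"
            elif state == "AVOID":
                robot.turn(90)
                state = "GO"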
Monte Carlo Localization (MCL)
MCL is essentially a particle filter algorithm. Its requirements are a map of the environment showing the regions the robot can move to, an appropriate motion model, and a sensor model; it assumes that the robot uses range scanners for localization. First, the theory. The algorithm takes the given map and creates a population of N samples from a given probability distribution (if the robot has some prior knowledge about where it is, for example that it is in a particular room, the algorithm can take this into account and we would see particles only in that room). Then a continuous update cycle begins. Localization starts at time t = 0, and the update cycle is repeated for each time step:
• Each sample is propagated forward by sampling the next state value Xt+1 given the current value Xt for the sample, using the given transition model.
• Each sample is weighted by the likelihood it assigns to the new evidence, P(et+1 | Xt+1).
• The population is re-sampled to generate a new population of N samples. Each new sample is selected from the current population; the probability that a particular sample is selected is proportional to its weight.
So, as the robot gathers more knowledge about the environment by analyzing the range scanner data, it re-samples the population and the particles concentrate around one or more points. At some point in time all particles lie in a single cluster.
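The update cycle is compact enough to sketch directly. The following hedged Python fragment shows one MCL cycle, assuming motion_model and sensor_model are supplied by the user (they are stand-ins, not a particular library's API):

    # One Monte Carlo Localization cycle: propagate, weight, re-sample.
    # motion_model(x, u) samples X_{t+1} given X_t and control u;
    # sensor_model(e, x) returns the likelihood P(e | x). Both are
    # user-supplied stand-ins.
    import random

    def mcl_update(particles, control, evidence, motion_model, sensor_model):
        # 1. Propagate each sample forward with the transition model.
        proposed = [motion_model(x, control) for x in particles]
        # 2. Weight each sample by the likelihood of the new evidence.
        weights = [sensor_model(evidence, x) for x in proposed]
        # 3. Re-sample N particles, probability proportional to weight.
        if sum(weights) == 0:
            return proposed                # degenerate case: keep the samples
        return random.choices(proposed, weights=weights, k=len(proposed))

Repeated over many cycles, the re-sampling step is what makes the particles concentrate into the clusters described above.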
Mobile Robot Based on SLAM
A key component of a mobile robot system is the ability to localize itself accurately and, simultaneously, to build a map of the environment. Most existing algorithms are based on laser range finders, sonar sensors or artificial landmarks. Here we consider a vision-based mobile robot localization and mapping algorithm that uses scale-invariant image features as natural landmarks in unmodified environments. The invariance of these features to image translation, scaling and rotation makes them suitable landmarks for mobile robot localization and map building.
Mobile robot localization and mapping, the process of simultaneously tracking the position of a mobile robot relative to its environment and building a map of that environment, has been a central research topic in mobile robotics. Accurate localization is a prerequisite for building a good map, and having an accurate map is essential for good localization. Therefore, simultaneous localization and map building (SLAM) is a critical underlying capability for successful mobile robot navigation in a large environment, irrespective of the high-level goals or applications.
To achieve SLAM, different sensor modalities can be used, such as sonar, laser range finders and vision. Sonar is fast and cheap but usually very crude, whereas a laser scanning system is active and accurate but slow. Vision systems are passive and offer high resolution.
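The scale-invariant features referred to here are the kind produced by detectors such as SIFT. As a hedged sketch of the landmark-extraction step only (assuming the opencv-python package is available; a full SLAM pipeline would add stereo depth and a filter over the robot pose, which is omitted here):

    # Extract SIFT features as candidate natural landmarks and match them
    # between two frames (illustration only; assumes opencv-python >= 4.4).
    import cv2

    def match_landmarks(img_prev, img_curr, ratio=0.75):
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img_prev, None)
        kp2, des2 = sift.detectAndCompute(img_curr, None)

        matcher = cv2.BFMatcher(cv2.NORM_L2)
        # Lowe's ratio test keeps only distinctive matches, which is what
        # makes these features usable as repeatable natural landmarks.
        good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                if m.distance < ratio * n.distance]
        return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]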
Cell Decomposition for Free Space Mapping
One approach to planning a path in configuration space is simply to map the free space onto a discrete raster. Since each free cell is completely traversable, the path planning problem within a cell is trivial (move in a straight line), and we end up with a discrete graph search problem, which is relatively easy to solve. The simplest cell decomposition maps the free space onto a regularly spaced grid, but more complex methods are possible.
Cell decomposition has the advantage of easy implementation, but it also suffers from some deficiencies:
• The number of grid cells increases exponentially with the dimension of the configuration space.
• Mixed cells (cells which contain both free and occupied space) are problematic. If they are included, the planner might produce solutions that are in reality blocked; if they are omitted, some valid solutions will be missed.
The latter problem can be addressed by further subdividing the affected cells, or by an exact cell decomposition that allows irregularly shaped cells. These new cells would still need an 'easy' way to compute a traversal through them, which leads to complex geometric problems.
Another problem is that a path through such a cell grid can contain arbitrarily sharp corners, which real-life robots cannot execute. Such a path can also lead very close to an obstacle, rendering even the slightest motion error fatal. A simple grid planner along these lines is sketched below.
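Here is a minimal sketch of the regular-grid variant, assuming a boolean occupancy grid in which mixed cells have already been marked occupied (so the planner is conservative, as discussed above):

    # Grid-based cell decomposition planner: rasterize free space into a
    # boolean grid, then search the cell graph with breadth-first search.
    from collections import deque

    def grid_path(grid, start, goal):
        """grid[r][c] is True where the cell is completely free."""
        rows, cols = len(grid), len(grid[0])
        parent = {start: None}            # also serves as the visited set
        frontier = deque([start])
        while frontier:
            cell = frontier.popleft()
            if cell == goal:              # reconstruct the path to the goal
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = parent[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols \
                        and grid[nr][nc] and (nr, nc) not in parent:
                    parent[(nr, nc)] = cell
                    frontier.append((nr, nc))
        return None                       # no path through completely free cells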
Robot Functionality at Home or Office
The idea of a mobile robot that provides assistance in the home, the office or in more hostile environments (e.g. bomb disposal or nuclear reactors) has existed for many years, and such systems are available today. Unfortunately, they are typically expensive and by no means as ubiquitous as 1950s and 60s science fiction would have had us believe.
The major limitation to deploying robots in homes and offices is the infrastructure changes they require. Computer vision, however, means that robots can be monitored from just a few inexpensive cameras, and the recent availability of wireless network solutions has drastically reduced the infrastructure they demand.
The final piece of the package is how humans are to interact with such a system. People may wish to interact with their robot "face to face", via a home or office workstation, or even using their mobile telephone on the train. Operations may then be invoked on the robot from anywhere in the world, and it is this functionality that increases the possible applications by orders of magnitude.
Consider, for example, a household robot. Imagine that you have a dog at home. Many people's pets are able to alert them if there is an intruder in the home, but how many can do so while you are at work? And how many can pass a message to the kids that you're going to be late home?
Types of Robots Based on Their Work
Now let us look at what robots are used for nowadays.
Hardworking Robots
Traditionally, robots have been used to replace human workers in areas of difficult labor that are structured enough for automation, such as assembly line work in the automobile industry or harvesting machines in the agricultural sector. Some existing examples, apart from assembly robots, are:
• Melon harvester robot.
• Ore transport robot for mines.
• A robot that removes paint from larger ships.
• A robot that generates high precision sewer maps.
If employed in a suitable environment, robots can work faster, more cheaply and more precisely than humans.
Transporter
Although most autonomous transport robots still need environmental markers to find their way, they are already widely in use, and building a robot that can navigate using natural landmarks is probably no longer science fiction. Examples of currently available transporters are:
• Container transporters used to load and unload cargo ships.
• Medication and food transport systems in hospitals.
• Autonomous helicopters, to deliver goods to remote areas.
Insensible Steel Giants
Since robots can easily be shielded against hazardous environments and are, to some extent, replaceable, they are used in dangerous, toxic or nuclear environments. Some places where robots have helped to clean up a mess:
• In Chernobyl robots have helped to clean up nuclear waste.
• Robots have entered dangerous areas in the remains of the WTC.
• Robots are used to clear ammunition and mines all around the world.
Model based Rescue Robot Control
Rescue robot competitions have been held since 2000 to foster robot autonomy in completely unknown and unstructured environments, and to promote the use of robots in high-risk areas, helping human rescue teams in the aftermath of disastrous events. In rescue competitions the task is not to prevent calamitous events but to support operators in rescuing people where human access is limited or, most probably, forbidden. Such tasks are crucial when the environment cannot be reached by rescue operators, and robots endowed with good perceptual abilities can help to save human lives.
Autonomous robots have to accomplish these tasks in complete autonomy: producing and exploring a map of the environment, recognizing victims via different perceptual skills, and correctly labeling the map with each victim's position and, possibly, status and condition.
The DORO cognitive architecture is purposely designed for the autonomous exploration and victim-finding tasks required in rescue competitions; here we focus on how it exploits the ECLiPSe framework to implement its model-based executive controller.
A model-based monitoring system is meant to enhance the system's safety, pro-activity and flexibility. In this approach, the monitoring system is endowed with a declarative representation of the temporal and causal properties of the controlled processes. Given this explicit model, executive control is provided by a reactive planning engine which harmonizes the mission goals, the reactive activity of the functional modules, and the operator's interventions. The execution state of the robot can be compared continuously with a declarative model of the system behavior: the executive controller can track the relevant parallel activities, integrating them into a global view, so that time constraint violations and subtle resource conflicts can be detected. The planning system then compensates for these misalignments and failures by generating recovery sequences on the fly. These features are designed and implemented using a high-level agent programming paradigm.
Model based Monitoring with ECLiPSe Framework
A model-based monitoring system enhances both the operator's situation awareness and the system's safety. Given a declarative representation of the system's temporal and causal properties, flexible executive control is provided by a reactive planning engine which harmonizes the operator's activity (tasks, commands, etc.) with the mission goals and the reactive activity of the functional modules. Since the execution state of the robot is compared continuously with a declarative model of the system, all the main parallel activities are integrated into a global view.
To obtain these features, the system deploys high-level agent programming in TCGolog (Temporal Concurrent Golog), which provides both a declarative language for representing the system properties and a planning engine for generating the control sequences.
Temporal Concurrent Situation Calculus: the SC (Situation Calculus) is a sorted first-order language that represents dynamic domains by means of situations, actions (and sequences of actions) and fluents. TCSC (Temporal Concurrent Situation Calculus) extends the SC with concurrent actions and time. Concurrent durative processes can be represented by fluent properties that are started and ended by durationless actions. For instance, the process going(p1,p2) is started by the action startGo(p1,t) and ended by endGo(p2,t').
The main DORO processes and states are represented explicitly by a declarative dynamic temporal model specified in the Temporal Concurrent Situation Calculus. This model captures the cause-effect relations and the temporal constraints among the activities: the system is modeled as a set of components whose state changes over time. Each component is a concurrent thread, and its history over time is described as a sequence of states and activities. For instance, in the rescue domain some components are: slam, navigation, pan-tilt, visualPerception, etc. Each of these is associated with a set of processes; e.g. navigation can run nav_Wand (wandering the arena), nav_GoTo (navigating to a given position), etc.
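To make the fluent/action distinction concrete, a successor-state-style axiom for the going process can be written as follows. This is a hedged illustration following the standard Situation Calculus pattern, not DORO's actual axiomatization:

    % Illustrative successor-state axiom for the 'going' process
    % (standard Situation Calculus pattern, not DORO's actual axioms).
    going(p_1, p_2, do(a, s)) \equiv
        \exists t \,.\; a = startGo(p_1, t)
        \;\lor\; \bigl( going(p_1, p_2, s) \land
                        \neg \exists t' \,.\; a = endGo(p_2, t') \bigr)

Read: going(p1,p2) holds after action a exactly when a is the corresponding startGo action, or the process was already running and a is not the terminating endGo action.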
Control Architecture of the DORO Robot
Several activities need to be controlled and coordinated during a mission. An execution model is thus a formal framework that allows a consistent description of the correct timing of every behavior the system has to perform to successfully conclude a mission. However, as the domain is uncertain, the result of any action can be unexpected, and the resources and time needed cannot be rigidly scheduled. It is necessary to allow for flexible behavior, which means managing dynamic changes in time and resource allocation.
A model-based executive control system integrates and supervises both the activities of the robot's modules and the operator's interventions. Following this approach, the main robot and operator processes, such as laser scanning, mapping and navigation, are represented explicitly in a declarative temporal context. A reactive planner monitors the system status and generates the control on the fly, continuously performing sense-plan-act cycles; in each cycle it generates the robot's activities and manages failures. This short-range planning activity can balance reactivity and goal-oriented behavior: short-term goals and internal/external events can be combined while the reactive planner tries to resolve conflicts.
The physical layer is composed of all the robot devices, e.g. sonars, motors and payload. The DORO payload consists of two stereo cameras, two microphones, one laser telemeter and the pan-tilt unit. The robot's embedded components are accessible through the vendor-provided ActivMedia Robotics Interface Application (ARIA) libraries, while the payload software is custom.
The functional layer collects all the robot's basic capabilities. Its modules carry out actions to accomplish basic tasks, e.g. acquiring images from the arena, collecting sensor measurements and similar activities. In particular, the DORO system is endowed with the following modules: acquisition, navigation, laser, joypad and PTU. The navigation module controls the movements of the robot.
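The sense-plan-act cycle described above can be hedged in a few lines of Python. The monitor, planner and modules interfaces below are hypothetical stand-ins for DORO's actual components, meant only to show the shape of the loop:

    # Sketch of the executive's sense-plan-act cycle (illustration only;
    # monitor, planner and modules are hypothetical stand-ins).
    def executive_loop(monitor, planner, modules, mission_goals):
        while not monitor.mission_complete():
            state = monitor.sense(modules)           # current execution state
            conflicts = monitor.check_model(state)   # compare with declarative model
            # short-range planning: recover from failures and constraint
            # violations on the fly, then dispatch the next activities
            for activity in planner.plan(state, mission_goals, conflicts):
                modules.dispatch(activity)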