As actuators, sensors, microprocessors, and wireless networks become cheaper and more ubiquitous, it has become increasingly attractive to employ teams of small robots to tackle various manipulation and sensing tasks. To exploit the full capabilities of these teams, we need to develop effective methods and models for programming distributed ensembles of actuators and sensors.
Applications for distributed, dynamic robotic teams require a different programming model than the one employed for most traditional robotic applications. In the traditional model, the programmer develops software for a single processor interacting with a prescribed set of actuators and sensors, and can typically assume that the configuration of the system is completely specified before the first line of code is written. When developing code for dynamic multi-robot teams, we must account for the fact that the type and number of robots available at runtime cannot be predicted. We expect to operate in environments where robots are added and removed continuously and unpredictably, and where the robots have heterogeneous capabilities: some may be equipped with camera systems, others with range sensors or specialized actuators; some agents may be stationary, while others may offer specialized computational resources. This implies that the program must be able to automatically identify and marshal all of the resources required to carry out the specified task.
Remote Objects Control Interface (ROCI) is a self-describing, strongly typed, object-oriented programming framework that allows the development of robust applications for dynamic multi-robot teams. The building blocks of ROCI applications are self-contained, reusable modules. Fundamentally, a module encapsulates a process that acts on data available on the module's inputs and presents its results as outputs. Complex tasks can be built by connecting the inputs and outputs of specific modules.
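To make the composition model concrete, here is a minimal Python sketch of a module pipeline. The class and pin names are invented for illustration and ROCI's actual API differs; the point is the idea of typed, self-contained modules wired together into a task.

```python
from typing import Callable, Dict, List, Tuple

class Module:
    """A self-contained, reusable processing unit: it encapsulates a
    process that acts on data at its inputs and presents results at
    its outputs. Inputs and outputs are typed so wiring can be checked."""
    def __init__(self, name: str,
                 process: Callable[[Dict[str, object]], Dict[str, object]],
                 input_types: Dict[str, type],
                 output_types: Dict[str, type]):
        self.name = name
        self.process = process
        self.input_types, self.output_types = input_types, output_types
        self.inputs: Dict[str, object] = {}
        self.outputs: Dict[str, object] = {}

    def step(self) -> None:
        self.outputs = self.process(self.inputs)

class Pipeline:
    """Builds a complex task by connecting module outputs to inputs."""
    def __init__(self):
        self.modules: List[Module] = []
        self.wires: List[Tuple[Module, str, Module, str]] = []

    def add(self, m: Module) -> Module:
        self.modules.append(m)
        return m

    def connect(self, src: Module, out: str, dst: Module, inp: str) -> None:
        # Strong typing: refuse mismatched connections at wiring time.
        assert src.output_types[out] is dst.input_types[inp]
        self.wires.append((src, out, dst, inp))

    def tick(self) -> None:
        for m in self.modules:            # assumes modules added in dataflow order
            m.step()
            for s, o, d, i in self.wires: # propagate this module's fresh outputs
                if s is m:
                    d.inputs[i] = m.outputs[o]

# Example: a (fake) camera module feeding a blob detector module.
pipe = Pipeline()
camera = pipe.add(Module("camera", lambda _: {"frame": [0, 1, 0]},
                         {}, {"frame": list}))
detector = pipe.add(Module("detector",
                           lambda ins: {"blobs": [i for i, v in
                                                  enumerate(ins.get("frame", [])) if v]},
                           {"frame": list}, {"blobs": list}))
pipe.connect(camera, "frame", detector, "frame")
pipe.tick()
print(detector.outputs["blobs"])   # -> [1]
```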
The Integrated Telerobotic Surgery System
The total system responds to the human surgeon as it would if there were no encumbering delay. Because the simulator through which the surgeon operates runs in real time, the surgeon sees reactions to the inceptor movements much more quickly than would be the case if she or he were required to wait while the signals made a complete round trip over the long-haul network.
The signal path from the surgeon's inceptor proceeds simultaneously to the simulator and to the intelligent controller, which commands the robot movement. The simulator is like any other in that it calculates all of the system dynamics in real time, and from these computations come changes to the system states, which alter the visual scene observed by the physician. The visual scene is generated by high-speed computer graphics engines not unlike those employed by modern flight simulators. A unique aspect of the proposed embodiment, however, is that the graphics image is updated periodically by the video image transmitted over the long-haul network. This approach ensures that the visual scenes at the simulator and at the patient are never allowed to deviate perceptibly. The update is generated by a complex scheme of image decoding, image format transformation, and texture extraction.
The intelligent controller performs the dual role of optimizing robot performance and preventing inadvertent incisions. The research will investigate two general approaches to its design. One approach will use optimal control theory; the other will utilize a hybrid of soft computing techniques such as neural networks, fuzzy control, and genetic algorithms. Both techniques have been used successfully to control autonomous aircraft.
The simulator also calculates the appropriate inceptor forces. The math model for the haptic drive signal will be essentially the same as that in the actual robot, although it will rely on a sophisticated organ dynamics model to compute the appropriate forces of the organ interacting with the robot end effectors.
The Intelligent Controller for Surgical Robotics
The intelligent controller is located on the robot/patient side of the communication link and performs two critical roles. In the ultimate system for use on actual patients, the intelligent controller will be necessary to provide both an improved level of efficiency and an added measure of safety in the presence of time delay. Both the safety role and the efficiency-enhancement role require intelligent behavior.
The requirement for an added element of safety in the presence of time delay is quite obvious. Even when the surgeon and the various other components of the system are performing perfectly, the existence of time delay prevents 100% certainty as to where various tissues will be in relation to the surgical instruments at any given instant in the future. Because the intelligent controller will be proximate to the robot, it will interact with the robot without significant delay, and thereby has the potential to control all robot movement instantaneously. The intelligent controller will ultimately play a critical role as a last line of defense against accidental collisions between surgical instruments and the patient's vital organs.
The requirement for improving the level of efficiency over what it would otherwise be in the presence of time delay is also clear. Finishing surgery in a timely manner and preventing unnecessary frustration for the surgeon are always important aims. While the delays associated with telerobotic surgery may never allow it to be quite as efficient as proximate robotic surgery, the aim must at least be to make it as efficient as possible.
In the course of the research we will attempt to apply a variety of advanced approaches to machine intelligence in designing effective intelligent controller prototypes, although some basic aspects need not be particularly complex. A fairly effective controller could be based on nothing more than a three-dimensional geometric model of the surgical field combined with a production-rule system.
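As an illustration, here is a minimal sketch of such a controller, combining a crude geometric model of the surgical field (forbidden spheres around vital structures) with three production rules. All geometry, thresholds, and names are hypothetical, not the project's actual design.

```python
import math

# Forbidden zones: spheres around vital structures, in field coordinates.
# Each entry is (center_xyz, radius_m); the values are purely illustrative.
FORBIDDEN = [((0.02, 0.00, 0.05), 0.015),
             ((-0.01, 0.03, 0.04), 0.010)]

SAFETY_MARGIN = 0.005   # start braking 5 mm before a zone boundary

def distance_to_zone(tip, zone):
    (cx, cy, cz), r = zone
    return math.dist(tip, (cx, cy, cz)) - r   # negative means inside the zone

def rule_based_filter(tip, commanded_step):
    """Production rules applied to each commanded instrument step.
    Returns the (possibly scaled or vetoed) step."""
    margin = min(distance_to_zone(tip, z) for z in FORBIDDEN)
    if margin <= 0:
        return (0.0, 0.0, 0.0)                      # RULE 1: inside a zone -> halt
    if margin < SAFETY_MARGIN:
        s = margin / SAFETY_MARGIN                  # RULE 2: near a zone -> slow down
        return tuple(c * s for c in commanded_step)
    return commanded_step                           # RULE 3: clear -> pass through

# Example: a step commanded 3 mm from a vital structure gets scaled down.
print(rule_based_filter((0.02, 0.0, 0.068), (0.001, 0.0, -0.001)))
```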
Organ Dynamics Modeling
Surgery simulation involves biological tissues; it is therefore essential to determine the deformation of the tissue while it is in contact with a surgical instrument, so that a realistic picture of the tissue can be conveyed to the surgeon in real time. The problem is how to determine the interaction between the tissue and the tool. This requires a rigorous model of the material properties, an accurate simulation method that reflects the actual behavior of the tissue, and fast simulation results that enable real-time interaction.
The organ dynamics model must be accurate, responsive, and timely. Realistic visual deformation and haptic output from the simulator are essential to the success of the telerobotic system. The surgeon must see, and ideally feel, the organs presented by the simulator as if they were the actual organs of the patient. As an aside, it should be noted that the da Vinci robot does not provide significant haptic feel of tissue. As part of this project, however, that capability will be added, since haptic feedback has been demonstrated to mitigate some of the effects of delay by virtue of the fact that the proprioceptive system responds faster than vision. The simulator's output depends on the organ dynamics models.
The first requirement of the organ dynamics model of the biological system is accuracy: the force and deformation feedback of the simulated organ on which the surgeon is operating must be as close as possible to that of the actual patient organ. Second, the computation required to determine the simulated forces and deformations must be performed in a time interval that appears to the surgeon to be real time. The organ dynamics model must be capable of being quickly updated using the decompressed video feedback, applying corrections without creating haptic or visual discontinuities that could disturb the surgeon.
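One common way to satisfy the real-time constraint is to approximate the tissue with a coarse mass-spring model rather than a full finite-element solve. The sketch below, with purely illustrative parameters, advances such a model one step and yields the net force at the tool contact node that a haptic device could render.

```python
import numpy as np

def step_mass_spring(x, v, edges, rest_len, k, c, m, f_ext, dt):
    """One explicit-Euler step of a mass-spring tissue model.
    x, v: (n,3) node positions/velocities; edges: list of (i, j) springs;
    f_ext: (n,3) external forces (e.g. from the tool). Returns new x, v
    and the net force on each node."""
    f = -c * v + f_ext                       # damping + external load
    for (i, j), L0 in zip(edges, rest_len):
        d = x[j] - x[i]
        L = np.linalg.norm(d) + 1e-12
        fs = k * (L - L0) * d / L            # Hooke's law along the spring
        f[i] += fs
        f[j] -= fs
    v = v + dt * f / m
    x = x + dt * v
    return x, v, f

# Tiny example: two nodes, the tool pushes node 1 toward node 0.
x = np.array([[0.0, 0.0, 0.0], [0.01, 0.0, 0.0]])
v = np.zeros((2, 3))
tool_force = np.array([[0.0, 0.0, 0.0], [-0.5, 0.0, 0.0]])
x, v, f = step_mass_spring(x, v, [(0, 1)], [0.01], k=300.0, c=0.5,
                           m=0.005, f_ext=tool_force, dt=1e-3)
print(f[1])   # net force at the contact node (spring + damping + tool)
```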
Performing Robotic Surgery
Robotics is one of the key technologies with strong potential to change how we live in the 21st century. We have already seen robots exploring the depths of the ocean and the surfaces of distant planets, and streamlining and speeding up assembly lines in the manufacturing industry. Robotic lawn mowers, vacuum cleaners, and even pets have found their way into our homes. Among the medical applications of robotics, minimally invasive surgery was the first to demonstrate real benefits and advantages of introducing robotic devices into the operating room over conventional surgical methods. These machines have been used to position endoscopes, perform gallbladder surgery, and correct gastroesophageal reflux and heartburn.
After its inception in the late 1980s, the utilization of laparoscopic cholecystectomy grew rapidly. However, minimally invasive surgery (MIS) for other operations has not experienced the same pattern of growth. The reason is that laparoscopic procedures are in general hard to learn, perform, and master. This is a consequence of the fact that the camera platform is unstable, the instruments have a restricted number of degrees of freedom, and the imagery presented to the surgeon does not offer sufficient depth information. A solution seems to be at hand with the significant growth of robotic surgery, in which the surgeon operates through a robot. The robot is a telemanipulator under the control of the surgeon. The robotic system provides a stable video platform, added dexterity, and in some cases a stereoscopic view of the surgical field.
Surgical robots use technology that allows the human surgeon to get closer to the surgical site than human vision alone would allow, and to work at a smaller scale than conventional surgery permits. A robotic surgery system consists of two primary components:
• A surgical arm unit (the robot)
• A viewing and control console (the operating station)
Navigation Framework Implementation
The Morphin algorithm provides traversability data to the navigation system through the callback interface described earlier. Three classes register to receive this data:
• Goodness Callback: this class updates a local traversability map used by the local cost function.
• D* Goodness Callback: this class transfers the traversability analysis into the map used by the D* global cost function.
• Goodness Streamer Callback: a debugging class used to broadcast the data to an off-board user interface.
The global and local cost functions are used by an implementation of the action selector interface, which searches for the best arc to traverse.
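The shape of this arrangement can be sketched as follows. The Python class and method names are stand-ins (the framework itself is C++); the point is that one traversability update fans out to the local cost map, the D* global cost map, and a debugging stream.

```python
class GoodnessCallback:
    """Feeds Morphin traversability values into the local cost map."""
    def __init__(self, local_cost_map):
        self.map = local_cost_map
    def on_goodness(self, cell, goodness):
        self.map[cell] = 1.0 - goodness        # low goodness -> high local cost

class DstarGoodnessCallback:
    """Transfers the same analysis into the D* global cost map."""
    def __init__(self, dstar_map):
        self.map = dstar_map
    def on_goodness(self, cell, goodness):
        self.map[cell] = 1.0 - goodness        # D* replans from updated costs

class GoodnessStreamerCallback:
    """Debugging sink: broadcasts updates to an off-board user interface."""
    def __init__(self, send):
        self.send = send
    def on_goodness(self, cell, goodness):
        self.send(f"cell={cell} goodness={goodness:.2f}")

class MorphinAnalyzer:
    """Traversability analyzer: pushes results to whoever registered."""
    def __init__(self):
        self.callbacks = []
    def register(self, cb):
        self.callbacks.append(cb)
    def publish(self, cell, goodness):
        for cb in self.callbacks:
            cb.on_goodness(cell, goodness)

local_map, global_map = {}, {}
morphin = MorphinAnalyzer()
morphin.register(GoodnessCallback(local_map))
morphin.register(DstarGoodnessCallback(global_map))
morphin.register(GoodnessStreamerCallback(print))
morphin.publish((4, 7), 0.85)    # one analyzed cell fans out to all three
```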
This framework addresses many of the problems associated with prior algorithm implementations. The model used to describe the motion of the robot is centralized in this framework, so all cost evaluations are consistent. Furthermore, the scaling data in the cost functions allows arbitrary cost functions to be used while still producing reasonable summations.
Each component, e.g. the D* cost function, the action selector, the Morphin traversability analyzer, and the locomotion model, is independent of the others. Changing the types of motion the robot performs therefore does not require changes to any module other than the action selector. For instance, to have a robot move along straight line segments connected by point turns would require only the implementation of a new action selector. Similarly, by replacing the Morphin traversability analyzer with a port of the GESTALT terrain analyzer, the two traversability analyzers could be compared directly, without the need to change any other software.
The fundamental nature of the algorithm can also be changed easily. For instance, by removing the D* global cost function and replacing the action selector with a new module, a new navigation system resembling the Rover Bug algorithm could be implemented quickly.
Action Selection and Robot Locomotion
The action selector class is where the specifics of a navigation algorithm are implemented. Basically, the role of the action selector is to determine the appropriate next action for the robot to perform given its current state and information from the global and local cost functions.
By decoupling action selection from traversability analysis, it is straightforward to modify the set of trajectories over which the navigation algorithm searches. The generic action selector interface provides accessor functions to get and set the current waypoint, and a function that returns the next action the robot should take.
To enable generic algorithm implementations, action selectors are provided with a model of the locomotion capabilities of the rover. At minimum, the model provides kinematic properties, such as the number of wheels and the wheelbase of the robot. Action selectors may use this information to generically integrate terrain costs over the expected path of the robot.
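A sketch of a generic arc-based action selector under these assumptions follows; the names and the constant-curvature arc model are illustrative, not the framework's actual classes. The selector scores each candidate arc by integrating local terrain cost along the path predicted by the locomotion model, then adds the global cost-to-go at the arc's endpoint.

```python
import math

class LocomotionModel:
    """Minimal kinematic description an action selector can rely on."""
    def __init__(self, num_wheels, wheelbase):
        self.num_wheels = num_wheels
        self.wheelbase = wheelbase

    def predict_arc(self, x, y, th, curvature, length, steps=10):
        """Poses along a constant-curvature arc from (x, y, th)."""
        poses, ds = [], length / steps
        for _ in range(steps):
            th += curvature * ds
            x += ds * math.cos(th)
            y += ds * math.sin(th)
            poses.append((x, y, th))
        return poses

class ArcActionSelector:
    """Picks the lowest-cost arc given local and global cost functions."""
    def __init__(self, model, local_cost, global_cost, curvatures):
        self.model, self.local, self.global_ = model, local_cost, global_cost
        self.curvatures = curvatures
        self.waypoint = None

    def set_waypoint(self, wp): self.waypoint = wp
    def get_waypoint(self):     return self.waypoint

    def next_action(self, pose):
        best = None
        for kappa in self.curvatures:
            path = self.model.predict_arc(*pose, kappa, length=1.0)
            cost = sum(self.local(x, y) for x, y, _ in path)   # terrain along arc
            cost += self.global_(*path[-1][:2])                # cost-to-go at arc end
            if best is None or cost < best[0]:
                best = (cost, kappa)
        return best[1]    # curvature of the arc to drive

# Flat local cost; global cost is distance to a goal at (5, 0).
sel = ArcActionSelector(LocomotionModel(6, 0.8),
                        local_cost=lambda x, y: 0.0,
                        global_cost=lambda x, y: math.hypot(5 - x, -y),
                        curvatures=[-0.5, 0.0, 0.5])
print(sel.next_action((0.0, 0.0, 0.0)))   # -> 0.0 (drive straight at the goal)
```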
The locomotor classes provide an abstract interface to the underlying robot locomotion mechanism. The locomotion framework is structured as a double bridge, allowing independent specialization of the kinematic configuration and the control interface. The wheel locomotor class provides the interface used by other components to maneuver the vehicle. The wheel locomotor interface defines functions that descendants must provide to encapsulate the specifics of the protocol used to command the locomotion mechanism.
The wheel locomotor model describes the robot's kinematics. This structure allows for maximal code reuse, which is particularly important in a research environment where changes to the robot may occur incrementally, e.g. the control system may be redeveloped while the mechanical robot remains the same, or vice versa.
Locomotor interface classes provide an abstraction over the different control interfaces potentially available on a robot, e.g. bank control, high-level arc control, and independent motor control. Drive commands are used as a high-level interface to the locomotors.
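The double-bridge structure might be sketched as follows, again with invented Python stand-ins for the C++ classes: the kinematic model and the command protocol specialize independently behind the maneuvering interface, so either can be swapped without touching callers.

```python
from abc import ABC, abstractmethod

class LocomotorInterface(ABC):
    """Abstraction over the control interface a robot actually exposes."""
    @abstractmethod
    def send(self, left_rate, right_rate): ...

class SerialMotorInterface(LocomotorInterface):
    def send(self, left_rate, right_rate):
        print(f"MOTOR L={left_rate:.2f} R={right_rate:.2f}")  # stand-in for a bus write

class WheelLocomotorModel:
    """Kinematics only: how far apart the drive wheels are."""
    def __init__(self, wheelbase):
        self.wheelbase = wheelbase

class WheelLocomotor:
    """Interface other components use to maneuver the vehicle. Bridges
    independently to a model (kinematics) and an interface (protocol)."""
    def __init__(self, model, iface):
        self.model, self.iface = model, iface

    def drive_arc(self, speed, curvature):
        # Differential-drive mapping from (speed, curvature) to wheel rates.
        half = 0.5 * self.model.wheelbase
        self.iface.send(speed * (1 - curvature * half),
                        speed * (1 + curvature * half))

# Swap the interface or the model without changing any caller:
robot = WheelLocomotor(WheelLocomotorModel(wheelbase=0.6), SerialMotorInterface())
robot.drive_arc(speed=0.4, curvature=0.5)
```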
Robot Programming Languages Meld and LDP
Modular robot programming can be substantially more challenging than normal robot programming due to:
• Scale/number of modules.
• Concurrency and asynchronicity, both in physical interactions and potentially at the software level.
• The local scope of information naturally available at each module.
Recent declarative approaches such as P2 and Saphira have shown promise in other domains that share some of these characteristics. Inspired by those results, we have been developing two modular-robot-specific declarative programming languages, Meld and LDP. Both languages provide the illusion of executing a single program across an ensemble, while the runtime system of each language automatically distributes the computation and applies a variety of optimizations to reduce the net computational and messaging overhead.
We previously described a hole-motion-based shape planning algorithm that exhibits constant planning complexity per module and requires information linear in the complexity of the target shape. Subsequently we generalized this approach to extend its functioning to other local metamorphic systems. The latter, generalized algorithm operates on sub-ensembles to accomplish both shape control and resource allocation while maintaining global connectivity of the ensemble.
Meld has proven more effective at many of the global coordination aspects of this algorithm, at efficiently tracking persistent operating conditions, and at coping with nonlinear topologies. LDP has proven more effective at local coordination, sophisticated temporal conditions, detecting local topological configurations, and, more generally, at expressing variable-length conditional expressions.
A Navigation Architecture for Terrain Evaluation and Locomotion
The role of a navigation algorithm is to generate paths through terrain while achieving specific aims. The navigator uses sensor data to evaluate terrain, and uses locomotion components to execute robot trajectories and motor commands.
The navigation architecture is divided into modules roughly along sense-think-act lines: traversability analysis components, cost functions, and action selection. In the traversability analysis components, sensor data is converted into a model of the world. Cost functions transform these models into a form that can be used for planning. Action selectors use this planning space to determine how the robot should move. Once a course of action is determined, the resulting trajectory is passed to the locomotion system for execution. The navigator provides the basic interface between decision-layer processes and the locomotion and navigation systems. Each component provides a standardized set of interface functions that can be overridden in descendant classes to provide specific behavior.
Navigation aims are expressed as waypoints. Waypoints provide a simple interface that returns whether or not a state is in a set of desired states. The basic implementation is a two-dimensional goal location specified with some error tolerance. Descendant waypoint classes may provide more complicated goal conditions, such as a goal line to cross, or achieving a position with a desired orientation.
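Expressed as code, the waypoint abstraction reduces to a membership test over robot states. The following sketch (illustrative names) shows the basic tolerance-circle goal and a descendant that adds a desired orientation.

```python
import math

class Waypoint:
    """A waypoint answers one question: is this state a desired state?"""
    def satisfied_by(self, x, y, theta):
        raise NotImplementedError

class GoalPoint(Waypoint):
    """Basic form: a 2-D goal location with an error tolerance."""
    def __init__(self, gx, gy, tol):
        self.gx, self.gy, self.tol = gx, gy, tol
    def satisfied_by(self, x, y, theta):
        return math.hypot(x - self.gx, y - self.gy) <= self.tol

class OrientedGoal(GoalPoint):
    """Descendant: also requires arriving with a desired heading."""
    def __init__(self, gx, gy, tol, heading, ang_tol):
        super().__init__(gx, gy, tol)
        self.heading, self.ang_tol = heading, ang_tol
    def satisfied_by(self, x, y, theta):
        # Wrapped angular error in (-pi, pi].
        err = abs((theta - self.heading + math.pi) % (2 * math.pi) - math.pi)
        return super().satisfied_by(x, y, theta) and err <= self.ang_tol

wp = OrientedGoal(2.0, 3.0, tol=0.1, heading=math.pi / 2, ang_tol=0.2)
print(wp.satisfied_by(2.05, 3.0, 1.55))   # True: inside tolerance, right heading
```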
The navigator tracks the robot's progress as it moves towards a waypoint. Through this interface, the decision layer may queue a list of waypoints for the robot to pass through. The navigator also provides an interface through which callbacks can be registered. The callbacks can be used to monitor the progress of the navigator or, in a more complex way, to trigger non-navigation tasks to execute (e.g. opportunistic science).
Traversability analysis involves the conversion of sensor data into a model of the world. This may be as simple as a binary occupancy grid or as complicated as a statistical evaluation of the terrain.
Coupled Layer Architecture for Robotic Autonomy
As part of CLARAty (Coupled-Layer Architecture for Robotic Autonomy), this framework shares the design aims of maximizing code reuse while maintaining an efficient and accessible implementation.
CLARAty is designed to ease the transition from research software to flight-ready software. It attempts to achieve this aim by developing a set of standard interfaces and a basic set of reusable components. CLARAty is being developed using object-oriented design principles to enable code reuse and to provide an avenue for extension. An open-source development model is being used to allow collaborators to contribute component extensions, which helps the architecture maintain a critical mass and achieve acceptance.
One novel feature of the CLARAty architecture is its two-layer structure. The top layer, called the decision layer, provides a combination of an operational executive and a procedural planner. The lower layer, called the functional layer, provides a hierarchical interface to hardware components and rover services. The decision layer may access services in the functional layer at any point in the hierarchy, allowing it to plan at a granularity appropriate for a given task.
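A toy sketch of this access pattern, with hypothetical node names rather than CLARAty's actual classes: the decision layer can command an aggregate node or reach directly into a leaf, choosing its planning granularity.

```python
class FunctionalNode:
    """One node in the functional layer's hierarchical interface."""
    def __init__(self, name, children=(), action=None):
        self.name, self.children, self.action = name, list(children), action

    def execute(self):
        if self.action:
            self.action()
        for child in self.children:   # aggregate nodes delegate downward
            child.execute()

# Hierarchy: rover -> locomotor -> {motor1, motor2}
motor1 = FunctionalNode("motor1", action=lambda: print("spin motor 1"))
motor2 = FunctionalNode("motor2", action=lambda: print("spin motor 2"))
locomotor = FunctionalNode("locomotor", [motor1, motor2])
rover = FunctionalNode("rover", [locomotor])

# The decision layer may plan coarsely (command the whole locomotor)
# or finely (command one motor), entering the hierarchy at any level.
locomotor.execute()   # coarse-grained access
motor2.execute()      # fine-grained access
```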
The motivation for developing a generic navigation framework comes from our experiences implementing navigation algorithms for a variety of robots. For instance, a combination of a local obstacle avoidance algorithm and a real-time path planner has been used on a number of robotic platforms. The first implementation was developed for the Ratler. It has since been used on a progression of robots including Nomad, an ATRV, and most recently Hyperion.
Each new implementation has made gains in performance and capabilities, but a major effort has been required to port the software, often involving a complete reimplementation. A goal of this work is to simplify this process, allowing researchers to focus on developing and testing new capabilities rather than dealing with the mundane details of creating a platform-specific implementation of an existing algorithm.
Deployment of Robotic Sensors
An important problem for wireless sensor systems is the effective deployment of the sensors within the target space S. The deployment has to satisfy some optimization criteria with respect to the space S. Static sensors are usually deployed by external means, either carefully or randomly; in the latter case, the distribution of the sensors may not satisfy the desired optimization criteria.
If the sensing entities are mobile, as in mobile sensor networks, robotic sensor networks, and vehicular networks, they are potentially capable of positioning themselves in appropriate locations without the help of any external control or central coordination. Achieving such an aim is, however, a rather complex task, and designing localized algorithms for effective and efficient deployment of mobile sensors is a challenging research issue.
We are interested in a specific instance of the problem, called the Uniform Dispersal problem, in which the sensors have to completely fill an unknown space S, entering through one or more designated entry points called doors. The sensors must avoid colliding with each other and have to terminate within finite time. The space S is assumed to be simply connected (without holes) and orthogonal, i.e. polygonal with sides either perpendicular or parallel to one another. Orthogonal spaces are interesting because they can be used to model indoor and urban environments.
We consider the problem within the context of robotic sensor networks: the mobile entities rely only on sensed local information within a restricted radius, called the visibility range; when active, they operate in a sense-compute-move cycle; and they usually have no explicit means of communication.
A crucial difference between traditional wireless sensor networks and robotic sensor networks is in the determination of an entity's neighbors. In robotic sensor networks, neighbors are determined by sensing capabilities, i.e. vision: any sensor within the sensing radius will be detected, even if inactive.
Robotic Sensor Model and Definitions
The space to be filled by the sensors is a simply connected orthogonal region S that is partitioned into square cells, each of size roughly equal to the area occupied by a sensor. Simply connected means that it is possible to reach any cell in the space from any other cell, and there are no obstacles completely surrounded by cells belonging to the space.
The system is composed of simple entities, called sensors, having locomotion and sensory capabilities. The entities can move and turn in any direction. The sensory devices on an entity allow it to see its surroundings: we assume each sensor has restricted vision, up to a fixed radius around it. The sensors have no explicit means of communicating with each other, even when two sensors see each other. Each sensor functions according to an algorithm preprogrammed into it. The sensors have O(1) bits of working memory, and they have a local sense of orientation.
If two sensors are in the same cell at the same time, there is a collision. The algorithms executed by the sensors have to avoid collisions. The sensors enter the space through special cells called doors. A door is simply a cell in the space that always has a sensor: whenever the sensor in the door moves to a neighboring cell, a new sensor appears instantaneously in the door.
During each step taken by a sensor, the sensor first looks at its surroundings and then, based on the rules of the algorithm, either chooses one of the neighboring cells to move to or decides to remain stationary. Each step is atomic, and during one step a sensor can only move to a neighboring cell. However, since the sensors are asynchronous, an arbitrary amount of time may elapse between two steps taken by a sensor.
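The cycle can be sketched as follows. For readability the sketch runs the sensors in sequential rounds and uses a naive random-walk rule rather than any published dispersal algorithm; only the sense-compute-move structure, the collision check, and the door-respawn behavior mirror the model described above.

```python
import random

FREE = "."
GRID = ["#######",
        "#.....#",
        "#.###.#",
        "#.....#",
        "#######"]
DOOR = (1, 1)   # the door cell respawns a sensor whenever it is vacated

def neighbors(cell):
    r, c = cell
    return [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]

def round_of_steps(occupied):
    """One round of sense-compute-move steps (sequential here for clarity;
    the model itself is asynchronous)."""
    for cell in list(occupied):
        # Sense: local view of the four neighboring cells only.
        free = [n for n in neighbors(cell)
                if GRID[n[0]][n[1]] == FREE and n not in occupied]
        # Compute + move: step into a free cell, or stay put. Checking
        # occupancy before moving keeps the execution collision-free.
        if free:
            occupied.remove(cell)
            occupied.add(random.choice(free))
    if DOOR not in occupied:
        occupied.add(DOOR)    # a new sensor appears instantaneously at the door

occupied = {DOOR}
for _ in range(60):
    round_of_steps(occupied)
print(f"{len(occupied)} of 12 free cells occupied")
```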
Human Tactile Sensing and Robots
Human dexterity is a marvelous thing: people can grasp a wide variety of sizes and shapes, perform complex tasks, and switch between grasps in response to changing task requirements. This is due in large measure to our sophisticated control capabilities, which are founded on force and tactile sensing, especially the ability to sense conditions at the finger-object contact.
For the last two decades, robotics researchers have worked to create an artificial sense of touch to give robots some of the same manipulation capabilities that humans possess. While vision has received the most attention in robot sensing research, touch is vital for many tasks. Dextrous manipulation requires control of motions and forces at the contacts between the fingers and the environment, which can only be accomplished through touch. Tactile sensing can also provide information about mechanical properties such as friction, compliance, and mass. Knowledge of these parameters is essential if robots are to reliably handle unknown objects in unstructured environments.
Although touch sensing is the basis of dextrous manipulation, early work in tactile sensing research focused on the creation of sensor devices and object recognition algorithms. Particular attention was devoted to skin-like array sensors; early work built a simple tactile array sensor and demonstrated recognition of flat objects such as washers. The creation of multi-fingered hands increased interest in tactile sensing for manipulation, beginning with preliminary work on incorporating tactile information into manipulation. In the last few years, studies on the use of tactile sensing in real-time control of manipulation have begun to appear. Tactile sensors have provided information that guided the execution of tasks including edge tracking, automatic grasping, and rolling manipulation. These experimental studies have begun to explain the ways that tactile sensing enhances manipulation capabilities, but many questions remain unanswered. Currently we lack a comprehensive theory that defines sensing requirements for various manipulation tasks.
Distributed Robotic Systems
While all fields of robotics have progressed based on rapid advances in information systems and computing, the field of distributed and cooperative robotics has been catalyzed in particular by new communication and network technologies. In the robotics context, the linkage of information systems by wireless, wired, or optical channels and protocols facilitates interactions that quickly scale from two robots shaking hands to a swarm of micro-robots demonstrating cooperative behaviors. The development of communication links among multiple robots and sensors, and the control of those distributed systems in the laboratory and in application domains, is the subject of this article.
Network robotics has grown around the use of networked communication channels, including the internet, as a means for humans to remotely control robots, as well as to support inter-robot interactions. Major challenges include the understanding of non-deterministic time delays in communications, and the formulation of control and architectural principles that will enable robust performance. Key examples of network robotics include the remote tele-operation of space exploration robots and shared internet access to robotic experiments.
The multi-robot systems area has focused on the understanding of physical interactions and constraints among the robots, and between the robots and the physical environment. Such physical interactions may be direct, as in the shared manipulation of a single object, or indirect, as in the cooperative navigation of autonomous vehicles. Interactions associated with the physical reconfiguration of robotic systems are also of great interest. The theoretical formulation of underlying principles has been critical for this field, and forms the basis for systematic approaches to practical applications.
The deployment of multiple robots in complex environments creates demands for distributed sensor networks to provide information and to guide actions and decisions. The distributed sensor network field has been driven by the capability to fabricate microsystems with low power, high functionality, and wireless communication capability. The deployment of large numbers of such expendable devices would provide extraordinary access to information, and would also support the deployment of autonomous robotic systems in the field.
Using Coordination Costs to Adapt Communications
The coordination cost measure facilitates identifying which communication method is most suitable for a given environment. We model every robot's coordination cost, Ci, as a factor that impacts the group's productivity. We analyze two cost categories: first, costs relating to communication, and second, costs of reactive and/or proactive collision resolution behaviors. The focus is on the energy and time spent communicating and on the consequent resolution behaviors. We then combine these factors to create a multi-attribute cost function based on the Simple Additive Weighting (SAW) method often used for multi-attribute utility functions.
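Concretely, the SAW combination is a normalized weighted sum of cost attributes. In the sketch below, the attribute names and weights are placeholders, not values from any particular study.

```python
# Simple Additive Weighting over a robot's coordination-cost attributes.
# Attribute values are first normalized against the worst case observed
# in the group, then combined with importance weights.

WEIGHTS = {                      # placeholder importance weights, summing to 1
    "comm_energy": 0.3,
    "comm_time": 0.2,
    "resolution_energy": 0.25,
    "resolution_time": 0.25,
}

def coordination_cost(attrs, worst_case):
    """C_i = sum_k w_k * (a_k / worst_k): SAW multi-attribute cost."""
    c = 0.0
    for k, w in WEIGHTS.items():
        norm = attrs[k] / worst_case[k] if worst_case[k] else 0.0
        c += w * norm
    return c

robot_i = {"comm_energy": 2.1, "comm_time": 0.8,
           "resolution_energy": 1.5, "resolution_time": 3.0}
worst   = {"comm_energy": 4.0, "comm_time": 2.0,
           "resolution_energy": 3.0, "resolution_time": 5.0}
print(round(coordination_cost(robot_i, worst), 3))
```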
Other communication issues, such as bandwidth limitations, can similarly be categorized as additional cost factors insofar as they impact a specific robot. For instance, if a robot needs to retransmit a message due to limited bandwidth, costs in terms of additional time latency and energy used in retransmission are likely to result.
There are two types of adaptive methods: first, uniform communication adaptation; second, adaptive communication neighborhoods.
Uniform switching between methods
In the first method, all robots simultaneously switch between mutually exclusive communication methods as needed. To facilitate this form of adaptation, each robot autonomously maintains a cost estimate, V, used to decide which communication method to use. When a robot detects no resource conflicts, it decreases this cost estimate, V, by an amount Wdown. When a robot senses that a conflict is occurring, the value of V is increased by an amount Wup. The values of V are then mapped to a set of communication schemes ranging from methods with little cost overhead, such as no communication, to more robust methods with higher overheads, such as the localized and centralized methods.
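The mechanism amounts to a clamped counter mapped onto an ordered list of methods, as in the following sketch (the thresholds, weights, and method names are illustrative):

```python
METHODS = ["none", "localized", "centralized"]   # ordered by cost overhead
W_DOWN, W_UP = 0.05, 0.5                         # decay vs. penalty (illustrative)
THRESHOLDS = [0.3, 0.7]   # V < 0.3 -> none; < 0.7 -> localized; else centralized

class UniformAdapter:
    """Each robot keeps a cost estimate V and switches methods with it."""
    def __init__(self):
        self.V = 0.0

    def update(self, conflict_detected: bool) -> str:
        if conflict_detected:
            self.V = min(1.0, self.V + W_UP)     # conflicts push toward robust methods
        else:
            self.V = max(0.0, self.V - W_DOWN)   # calm periods decay toward cheap ones
        for threshold, method in zip(THRESHOLDS, METHODS):
            if self.V < threshold:
                return method
        return METHODS[-1]

adapter = UniformAdapter()
for conflict in [False, False, True, True, False]:
    print(adapter.update(conflict), round(adapter.V, 2))
```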
Adaptive neighborhoods of communication
The advantage of the first adaptive approach lies in its simplicity. The uniform adaptive approach switches between existing coordination methods based on cost estimates. Assuming one analyzes a new domain with completely different communication methods, and can order those methods by their communication costs, this approach remains equally valid, as it implements existing methods and reaches the highest level of productivity.
Robotic Communication Approaches
Groups of robots are likely to accomplish certain tasks more robustly and quickly than single robots. Many robotic domains, such as robotic search and rescue, vacuuming, de-mining, and waste clean-up, are characterized by limited operating spaces where robots are likely to collide. Under such conditions, some type of information transfer is likely to be helpful in facilitating coherent group behavior and maintaining group cohesion, allowing the robots to better achieve their tasks. This is especially true as robotic domains are typically fraught with uncertainty and dynamics such as hardware failures, noisy sensors, and changing environmental conditions.
Intuitively, communication should always be advantageous: the more information a robot has, the better. However, since communication has a cost, one must also consider the resources consumed in communicating, and whether the communication cost appropriately matches the needs of the domain. We believe that different communication schemes are best suited to different environmental conditions. Because no single communication method is always most effective, one way to improve communication in coordination is to find a mechanism for switching between different communication protocols to match the given environment.
The model explicitly includes resources such as the energy and time spent communicating. In situations where conflicts between group members are common, more robust means of communication are most effective.
We present two novel, domain-independent adaptive communication methods that use communication cost estimates to alter their communication approach based on domain conditions. In the first approach, robots uniformly switch their communication scheme between differing communication approaches; each robot contains full implementations of several communication methods. The second approach represents a generalized communication scheme that allows each robot to adapt to its domain conditions independently: each robot adjusts its own communication radius, creating a sliding scale of communication between localized and centralized methods.
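A sketch of the second approach, with placeholder constants: each robot multiplicatively grows its communication radius when conflicts are sensed and shrinks it back toward the localized minimum when they subside.

```python
R_MIN, R_MAX = 1.0, 50.0     # localized ... effectively centralized (meters, illustrative)
GROW, SHRINK = 1.5, 0.95     # multiplicative update factors

class RadiusAdapter:
    """Per-robot sliding scale between localized and centralized communication."""
    def __init__(self):
        self.radius = R_MIN

    def update(self, conflict_detected: bool) -> float:
        if conflict_detected:            # conflicts: widen the neighborhood
            self.radius = min(R_MAX, self.radius * GROW)
        else:                            # quiet: shrink back toward localized
            self.radius = max(R_MIN, self.radius * SHRINK)
        return self.radius

    def neighbors(self, me, others):
        """Robots this one would currently share state with."""
        return [o for o in others
                if (o[0] - me[0]) ** 2 + (o[1] - me[1]) ** 2 <= self.radius ** 2]

a = RadiusAdapter()
for c in [True, True, False]:
    print(round(a.update(c), 2))               # 1.5, 2.25, 2.14
print(a.neighbors((0, 0), [(1, 1), (5, 5)]))   # only the nearby robot
```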
The Anthropomorphic Design of Robots
Robots are becoming available in a wide variety of roles. A recent report by the UN Economic Commission for Europe and the International Federation of Robotics predicts that 4.1 million robots will be working in homes by the end of 2007. The implication is that as they roll, crawl, and walk out of the laboratory and into the real world, people in the real world will be using them: soldiers, families, nurses, and teachers. These users will more than likely not have a background in engineering, nor care about the intricacies of the control algorithms in their robot; to them, robots will be tools in the same way as a PC or DVD player.
However, robots differ significantly from most consumer electronics in two respects. First, robots are often designed to use human communication modalities, for example hearing and speech in place of LED displays and buttons. This is sometimes because these modalities are implied by the robot's anthropomorphic design, and sometimes for practical reasons: robots are usually mobile, and even a remote control may be of limited practical use. Secondly, due to their embodiment, robots have the capability to supply rich feedback in many forms: anthropomorphic ones such as gestures, speech, and body language, and artificial ones such as music and lights.
Current consumer robots such as the Sony AIBO combine both of these aspects. Using real-time communication, a robot can engage the user in active social interaction and, importantly, even instigate interaction. Most consumer electronics are passive: there is interaction only when instigated by a human, and that interaction is largely in one direction, from the human to the machine.
A study of people's expectations of a robot companion indicated that a large proportion of the participants were in favour of a robot companion, especially one that could communicate like a human. Humanlike appearance and behavior were less important.
Robots with Omnidirectional Wheels
Omnidirectional wheels have become a popular choice in mobile robot development, because they allow a robot to drive on a straight path from a given location on the floor to another without having to rotate first. Moreover, translational movement along any desired path can be combined with a rotation, so that the robot arrives at its destination at the correct angle.
Most omnidirectional wheels are based on the same general principle: the wheel can slide freely in the direction of the motor axis, while the wheel proper provides traction in the direction normal to the motor axis. To achieve this, the wheel is built using smaller wheels attached along the periphery of the main wheel. This kind of wheel has been used on small-size and middle-size omnidirectional robots in RoboCup since 2002. It is a variation of the Swedish wheel, which uses rollers whose rotation direction is neither parallel nor perpendicular to the motor axis.
Two or more omnidirectional wheels are used to drive a robot; each wheel provides traction in the direction parallel to the floor and normal to the motor axis. These forces add up to produce a translational and a rotational motion of the robot. If it were possible to mount two orthogonally oriented omnidirectional wheels right under the center of a robot with a circular base, then driving the robot in any desired direction would be trivial: to give the robot a speed with respect to a Cartesian coordinate system attached to the robot, each wheel would just have to provide one of the two speed components.
Since the motors and wheels need some space, this simple arrangement is not possible, and the wheels are usually mounted on the periphery of the chassis. This also makes it easier to cancel any rotational torque that could make it difficult to drive the robot on a straight path. The popular configurations of omnidirectional robots are three- and four-wheeled.
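To make the kinematics concrete, the following C sketch computes the inverse kinematics of the popular three-wheeled configuration: each wheel's drive speed is the projection of the desired body velocity onto that wheel's traction direction, plus the contribution of the rotation. The mounting angles and radius are illustrative assumptions.

    #include <math.h>
    #include <stdio.h>

    #define NWHEELS 3
    #define PI 3.14159265358979
    #define R 0.15 /* wheel mounting radius in meters (assumed) */

    /* Wheel i sits on the periphery at angle theta[i] from the robot's
       x axis; its traction direction is tangential to the chassis. */
    void wheel_speeds(double vx, double vy, double omega, double v[NWHEELS]) {
        static const double theta[NWHEELS] =
            { PI / 2.0, 7.0 * PI / 6.0, 11.0 * PI / 6.0 };
        for (int i = 0; i < NWHEELS; i++)
            v[i] = -sin(theta[i]) * vx + cos(theta[i]) * vy + R * omega;
    }

    int main(void) {
        double v[NWHEELS];
        wheel_speeds(1.0, 0.0, 0.0, v); /* drive straight along x, no spin */
        for (int i = 0; i < NWHEELS; i++)
            printf("wheel %d: %.3f m/s\n", i, v[i]);
        return 0;
    }

For vx = 1 m/s and no rotation, the wheel mounted at 90 degrees turns at -1 m/s while the other two turn at 0.5 m/s each, the expected symmetric drive pattern.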
R2: A Humanoid Robot Inspired by the R2D2 Droid
We think that the first important step towards having robots share the same environment as human beings is to figure out a set of basic behaviors that allow a robot to be acceptable to humans. R2 is a PC-based robot capable of moving indoors, on one floor of a building. This robot has been designed as a research platform with a rich set of sensors and only basic movement abilities. The robot has been designed to have a height comparable with people, so that it is not perceived as a toy by people interacting with it. R2 was designed using the popular R2D2 droid from the Star Wars saga as inspiration, though it is slightly bigger than the original.
The first application for R2 will be to stroll around the floor, helping visitors to find rooms. It will also play the role of a bridge between the real world and the virtual world of the internet. Its development faces the problems faced by all robots, such as robust navigation with collision avoidance, route finding, vision, and listening to the environment.
R2 sensors include:
• 6 ultrasonic sensors on the head.
• 24 infrared sensors distributed around the body.
• 6 light sensors on the head.
• 1 compass.
• 1 camera with a stereoscopic vision system.
• 2 stereo audio microphones.
• State of the 2 on-board PCs, both software and hardware.
• Wi-Fi signal strength.
• Bluetooth signals.
• A voltmeter to monitor the battery.
• Switches to sense body posture.
Actuators allow robot movement, head rotation, and shape shifting.
The navigation system is structured into three layers. The first, reactive layer has the goal of avoiding collisions. When no collisions need to be avoided, the second layer relies on the compass to decide the direction to follow. The third layer uses Wi-Fi signals from different access points to triangulate an approximate robot position, which is used to check whether a desired location has been reached.
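A minimal C sketch of this three-layer structure follows. The thresholds, the signal-strength-weighted position estimate, and all function names are assumptions made for illustration, not R2's actual implementation.

    #include <math.h>
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct { double x, y; } pos_t;

    /* Layer 1: reactive collision avoidance has the highest priority. */
    bool obstacle_ahead(double range_m) { return range_m < 0.5; }

    /* Layer 2: with no obstacle, steer by compass toward a goal heading. */
    double compass_correction(double heading, double goal_heading) {
        return goal_heading - heading; /* simple proportional steering term */
    }

    /* Layer 3: a coarse position from Wi-Fi, here a signal-strength-
       weighted average of the known access point positions, used only
       to check whether the destination has been reached. */
    pos_t wifi_estimate(const pos_t ap[], const double weight[], int n) {
        pos_t p = {0, 0};
        double total = 0;
        for (int i = 0; i < n; i++) {
            p.x += weight[i] * ap[i].x;
            p.y += weight[i] * ap[i].y;
            total += weight[i];
        }
        p.x /= total; p.y /= total;
        return p;
    }

    bool goal_reached(pos_t est, pos_t goal, double tol) {
        return hypot(est.x - goal.x, est.y - goal.y) < tol;
    }

    int main(void) {
        pos_t ap[3] = { {0, 0}, {10, 0}, {0, 10} };
        double w[3] = { 0.6, 0.3, 0.1 }; /* normalized signal strengths */
        pos_t goal = { 3.0, 1.0 };
        if (!obstacle_ahead(1.2)) {
            printf("steer by %.2f rad\n", compass_correction(0.0, 0.5));
            pos_t est = wifi_estimate(ap, w, 3);
            printf("goal reached: %s\n", goal_reached(est, goal, 0.5) ? "yes" : "no");
        }
        return 0;
    }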
Navigation and Mission Planning for Military Robotics
There are three different aspects of military robotics that should be differentiated. Starting from the top level, there is mission planning, then path planning, and finally navigation. Mission planning is mission specific and changes considerably according to the scenario.
With the data provided by mission planning, path planning produces paths and waypoints, taking into account the dynamic and kinematic capabilities of the robots involved in the mission. Navigation consists of following the initial paths while passing or avoiding obstacles; in order to do so, and to recover from small path changes, the robots must be equipped with position and distance sensors. Control architectures for navigation are generally reactive, meaning that there is a strong coupling between the sensor data and the motion commands sent to the robot actuators. These architectures are based on simple behaviors that are combined using a Behavior Coordination Mechanism (BCM). The systems have to provide at least the following capabilities: sensor information distribution, and distributed behavior communication and coordination mechanisms in order to implement coordination.
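As an illustration, a behavior coordination mechanism can be as simple as a weighted blend of the motion commands proposed by each behavior. The C sketch below shows one such scheme; the behavior set and weights are assumed, and real BCMs may use other arbitration rules such as strict priority or voting.

    #include <stdio.h>

    typedef struct { double v, w; } cmd_t;                /* linear, angular speed */
    typedef struct { cmd_t cmd; double weight; } vote_t;  /* one behavior's vote */

    /* Blend the commands proposed by the simple behaviors. */
    cmd_t coordinate(const vote_t votes[], int n) {
        cmd_t out = {0, 0};
        double total = 0;
        for (int i = 0; i < n; i++) {
            out.v += votes[i].weight * votes[i].cmd.v;
            out.w += votes[i].weight * votes[i].cmd.w;
            total += votes[i].weight;
        }
        if (total > 0) { out.v /= total; out.w /= total; }
        return out;
    }

    int main(void) {
        vote_t votes[] = {
            { {0.8, 0.0}, 1.0 }, /* follow-path: go straight */
            { {0.1, 0.9}, 2.0 }, /* avoid-obstacle: turn away, dominant */
        };
        cmd_t c = coordinate(votes, 2);
        printf("v = %.2f m/s, w = %.2f rad/s\n", c.v, c.w);
        return 0;
    }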
When analyzing the five selected military tasks, several navigation aspects were found vital for military purposes but are not foreseen to be solved at the current speed of development. The following vital gaps were found in the field of navigation and mission planning:
• Autonomous road following.
• Autonomous driving in mixed traffic.
• Moving in all terrains in all weather conditions.
• Following the leader, manned or autonomous.
To realize the roadmap, the following has to be done:
• Prioritize different driving conditions and concepts, and agree on a real target scenario.
• Decide on and develop experimental systems.
• Organize the trials.
• Define the performance measurements.
• Improve the navigation technology through the experimental systems.
• Manage the navigation technology group.
• Develop coordination and interaction within the technology group.
Three Sections of Robot Programming
Programming the Robot
Programming the robot can be broken down into three sections: first the development board, second the walking program, and finally the vision program. The main challenge when programming the robot was learning the programs, which included applying theory the teammates had learned as well as communicating with others who had done similar programming.
The development board
You can search the internet to find a development board capable of executing the actions of the motors. Some of the main criteria for the development board were its features, its size, how recent the technology was, and whether it would have the capability to eventually add more advanced features in the future.
Walking program
The walking program was intended to have all fifteen motors working simultaneously to allow the robot to walk. The main walking program coordinates the walking motion of the legs with the movement of the arms in order to better allow the robot to maintain its balance. The other aspect of the walking program was allowing the robot to correct its hip placement before walking. This was all written in the C programming language and controlled by the development board.
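The team's source is not reproduced here, but a walking program of this kind often reduces to driving every joint along a phase-shifted periodic trajectory so that arms and legs swing in opposition. The C sketch below illustrates the idea; the motor API, amplitudes, and phases are assumptions, not the actual code.

    #include <math.h>
    #include <stdio.h>

    #define NMOTORS 15
    #define PI 3.14159265358979

    typedef struct { double neutral, amplitude, phase; } joint_t;

    /* set_motor_angle() would come from the development board's library;
       it is stubbed out here for illustration. */
    static void set_motor_angle(int id, double deg) {
        printf("motor %2d -> %6.1f deg\n", id, deg);
    }

    /* One control tick: every joint follows a sine around its neutral
       angle; phase offsets keep legs and arms swinging in opposition. */
    void gait_step(const joint_t joints[NMOTORS], double t, double period) {
        double phase = 2.0 * PI * t / period;
        for (int i = 0; i < NMOTORS; i++)
            set_motor_angle(i, joints[i].neutral
                               + joints[i].amplitude * sin(phase + joints[i].phase));
    }

    int main(void) {
        joint_t joints[NMOTORS] = { { 90.0, 15.0, 0.0 } /* others tuned per limb */ };
        gait_step(joints, 0.25, 1.0); /* one sample of a 1-second gait cycle */
        return 0;
    }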
Vision program
The main function of the vision program is to take the images the camera gathers and process them. A key aspect of the robot is having the correct type of camera: it has to be able to communicate with the development board and to output uncompressed data, since uncompressed output makes the programming easier. Other essential qualities are the speed and accuracy of the image processing. To help in learning this kind of robot programming you can consult experts at your university. The vision program is the last of the three sections, intended to incorporate both the walking and the image processing.
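As a sketch of the kind of processing involved, the C code below scans one uncompressed RGB frame for a target colour and reports its centroid. The frame size, pixel format, and colour threshold are assumptions, not the project's actual vision code.

    #include <stdint.h>
    #include <stdio.h>

    #define W 320
    #define H 240

    typedef struct { uint8_t r, g, b; } pixel_t;

    /* Returns 1 and writes the centroid if enough target pixels match. */
    int find_target(pixel_t frame[H][W], int *cx, int *cy) {
        long sx = 0, sy = 0, count = 0;
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++) {
                pixel_t p = frame[y][x];
                if (p.r > 150 && p.g < 80 && p.b < 80) { /* "mostly red" pixel */
                    sx += x; sy += y; count++;
                }
            }
        if (count < 50) return 0; /* too few matching pixels: no target */
        *cx = (int)(sx / count);
        *cy = (int)(sy / count);
        return 1;
    }

    int main(void) {
        static pixel_t frame[H][W]; /* would be filled by the camera driver */
        for (int y = 100; y < 120; y++)      /* paint a fake red blob */
            for (int x = 150; x < 170; x++)
                frame[y][x] = (pixel_t){ 200, 30, 30 };
        int cx, cy;
        if (find_target(frame, &cx, &cy))
            printf("target at (%d, %d)\n", cx, cy);
        return 0;
    }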
Current Uses of Humanoid Robots
Currently humanoid robots are being implemented in a wide range of industries. The most common place to find humanoid robots is the entertainment industry. One of the popular attractions that uses these robots is the Hall of Presidents at the Walt Disney World theme park in Florida, USA. This hall contains robots created to imitate past and current presidents. Their life-like mannerisms and appearance add an element of humanity to the attraction, while still being technologically fascinating. In terms of products available to customers, Sony developed a robot named Qrio which runs, dances, recognizes faces, maintains its balance, and can get up if knocked over.
Humanoid robots currently have a couple of popular uses that will eventually be expanded upon in the work force. These robots are being used as receptionists in large companies as well as in some university technology departments; their capabilities include greeting people when they enter, giving directions, and transferring phone calls. Security is also a popular means by which humanoid robots are being introduced into the work force. Task, a Japanese company, created a robot named Robo-Guard whose capabilities include patrolling round the clock, using an elevator, replacing its own battery, and wielding a fire extinguisher.
The robot designed by the Huazhong University of Science & Technology (HUST) Robot Club had several limitations; the ones that stood out were the lack of a torso, arms, and head. Lacking these features, the robot did not fit the definition of a humanoid robot. Another problematic feature was the unusual design of its feet: they were unnecessarily large and conflicted with one another while in motion.
Regardless of its flaws, the HUST robot was able to walk. However, the walking motion was not steady, due to a poorly assembled leg structure and inadequate motors. It was capable of correcting its leg placement, setting both legs straight ahead, before being given the instruction to walk forward.
Robot Platforms in the Military
The robotic platform is the glue that holds together all the other aspects of a fieldable tactical military unmanned ground vehicle (UGV). Unless the platform exhibits an outstandingly high degree of ruggedness and mobility it will fail to reach its target location, and if the UGV cannot deploy its sensors at the correct location then the mission is useless.
The platform should merge the drive system, a power supply system sufficient for the required mission period, and an advanced communication system capable of returning real-time information to the user, together with a human machine interface (HMI) that allows long-term operation under the stress of fire. The platform must have very high immunity to electromagnetic interference, and logically any tactical UGV must not impose a heavy load on available manpower or systems.
As all these functions rely on the platform or chassis to hold the system together, any new tactical platform must be modular in concept. Like the scientific and aircraft industries, which have standards on shape and hole-mounting patterns, we should strive to arrive at a common standard such that any type of sensor pack could be incorporated into a UGV of a given size. Standardization of the connectors and power supplies joining platform and equipment together would be an advantage.
The following tactical robotic platform developments are needed:
1. Develop the latest power cell technology and integrate it into the UGV.
2. Adopt the latest very high efficiency power train and motor drive systems to give very high mobility even when damaged.
3. Refine tracked and wheeled transmission and suspension systems, since near-term fieldable walking remote-controlled vehicles will not be possible.
4. Use new materials and building methods to reduce mass yet retain performance, including ballistic protection.
5. Develop an HMI system to ensure tactical robots do not impose a high workload on the user.
6. Develop an EMC hardening program to ensure C3 systems operate in the real world.
Sensing and World Modeling in Military Robotics
The mission success of any robot depends highly on its sensors and world model. The quality of sensor-gathered information is important for tele-operated robots that pass this information directly to an operator, but even more so for autonomous robots that use their sensor information for autonomous navigation, as it is the robot's total view of the outside world and the robot's basis for coherent navigation and execution of mission tasks.
A good world model and sensors are essential for basic information on the robot's own location and movements, but also for tasks like route planning, automated detection, region observation, and recognition of typical targets. The right sensors to use depend on the actual tasks to be performed. Examples of current sensors are infrared sensors, CCD/HDTV sensors, acoustic and laser sensors, and even radar antennas or arrays, including mini SAR.
Because of the variation in conditions that robots will operate in, most sensors should be usable under all environmental and weather conditions and in all sorts of terrain. For many tasks the information should be processed on board the UGV, made efficient and effective by means of information compression and filtering for relevant information.
Concerning world modeling and sensors for military robots, the following gaps were identified:
1. Obstacle negotiation and avoidance, terrain modeling and classification, and transport in normal traffic, including unstructured terrain.
2. Mine detection, de-mining, and biological and chemical sensing; this gap is considered not vital but important.
3. Sensor fusion in limited visibility, environmental mapping, and situational awareness, as well as vehicle and human detection and recognition.
The greatest challenge will be in multi-sensor suites including sensor fusion, meaning that information from diverse sensors on the UGV is analyzed and then merged into a more robust and complete view of the robot's 'outside world' than can be achieved by any single sensor.
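As a small concrete example of the fusion idea, two independent estimates of the same quantity can be merged by inverse-variance weighting, giving a combined estimate more certain than either alone. The C sketch below shows this; the sensor values and variances are invented for illustration.

    #include <stdio.h>

    typedef struct { double value, variance; } estimate_t;

    /* Inverse-variance weighted fusion of two independent estimates. */
    estimate_t fuse(estimate_t a, estimate_t b) {
        double wa = 1.0 / a.variance, wb = 1.0 / b.variance;
        estimate_t out;
        out.value = (wa * a.value + wb * b.value) / (wa + wb);
        out.variance = 1.0 / (wa + wb); /* fused estimate is more certain */
        return out;
    }

    int main(void) {
        estimate_t laser = { 41.8, 0.04 }; /* precise, but fails in fog */
        estimate_t radar = { 43.1, 1.00 }; /* noisy, but all-weather */
        estimate_t both = fuse(laser, radar);
        printf("fused range: %.2f m (variance %.3f)\n", both.value, both.variance);
        return 0;
    }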
Communication Requirements in Military Robotics
Industry has a rather leading role in defining the military use of robotics. Often, more or less standard robots are introduced into military operations without thoroughly defined military requirements for functionality; these standard robots are then tested on how they function in the military operational environment.
Communication is essential for the use of all kinds of robot systems. In most cases, especially when using multi-robot systems where several robots deliberately cooperate in an autonomous manner, there is a demand for wireless communication to achieve high flexibility. In single-robot systems the communication system is usually used to get information from the system's sensors (radar, vision, et cetera) and to control the robot. Multi-robot systems combine the functionality of single-robot systems to achieve higher efficiency and to cope with scenarios that are more complex.
For example, in a surveillance scenario an object could be observed by a multi-robot system with different sensors and from different positions. Through the result of a sensor data fusion process, it would be possible to obtain a more exhaustive and complete situation awareness than is achievable with only one sensor or robot.
The following demands on the communication system were identified for military robotics:
• Wireless and mobile ad hoc communication.
• High communication ranges.
• High communication data rates.
• Adjustment to varying network availability.
• Compliance with Quality of Service requirements.
• Secure communication.
• Power awareness.
• Data prioritization.
A generic robot communication system should meet these requirements, but current technology does not support all of them at the same time. Satellite communication may allow high data rates over a long distance, but there is no solution for moving robot systems. Other technologies like HF, VHF, and UHF do support high ranges but lack high data rates. UMTS, GSM, and GPRS support medium and high data rates but need existing infrastructure that may be under foreign control, or may not even exist in the area of operation.
The Future of Humanoid Robots
The study of robotics originates back to ancient Egypt, where priests created masks that moved as a way to intimidate their worshippers. Robotics as we know it today originated a half century ago with the creation of a robot named "Unimate", created by Joseph Engelberger and George Devol. Unimate was created with the intention of being used in industry at a General Motors plant, working with heated die-casting machines.
Presently the development of humanoid robots has become a larger area of focus for the engineering community. Humanoid robots are precisely what their name would lead you to expect: robots designed to act and look like humans. While their current use is primarily within the entertainment industry, there are hopes that one day they will be able to be used in broader environments.
Modern investigations into humanoid robot development have led to the desire to create a robot that can not only walk from one destination to another, but can also discern objects in front of it and compensate by moving around them. This is where the present project comes into play: its purpose was to design and build a humanoid robot capable of walking smoothly.
Humanoid robots of the future will be capable of helping mankind by accomplishing tasks that may be too dirty, dull, or dangerous, or even physically impossible, such as exploring other planets, though there is still room for the locomotion of these robots to become more similar to a human's.
The project created a humanoid robot with a pair of legs, a pair of arms, a head, and a torso, able to walk in a manner similar to a human. The walking motion was controlled by a program written by the developer. For the head it used a camera that would eventually give the robot a vision capability and complete all the attributes required to be 'human'.
Software Architecture of an Interaction Robot
The software architecture for the interaction robot was developed by incorporating the obtained ideas as 'communicative units' into the previous architecture. The basic structure of the architecture is a network of 'situated modules'.
A network of situated modules is the basic structure of the architecture. To make the modules easy to develop, a situated module is defined as a program that performs a particular robot behavior in a particular situation. Because each module works in a particular situation, the developer can easily implement situated modules by concerning themselves only with that particular limited situation. A situated module is implemented by coupling communicative sensory-motor units with directly supplementing other sensory-motor units.
The robot autonomously behaves in its environment and interacts with humans by executing situated modules sequentially. The developer develops situated modules progressively and adds them into the network in order to achieve the pre-determined robot tasks.
The architecture has components for communication through computer networks. Several robots are able to execute behaviors synchronously by connecting to a communication server. Robots can give information to humans through natural communication, acting as a new information infrastructure. For instance, when a human and the robot talk about the weather, the robot can obtain weather information from the internet and then say "it will rain tomorrow".
The other components of the architecture can be explained briefly. Reactive modules realize very simple reactive behaviors such as avoidance. Internal status represents the current task, intention, and an emotional model. Module control plans the execution sequence of the situated modules according to the internal status. Inputs from sensors are pre-processed by sensor modules such as speech recognition. Actuator modules perform low-level control of the actuators according to the orders of the situated modules. Based on this architecture, interactive behaviors were implemented as situated modules and demonstrated on the 'Robovie' robot.
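Under this reading, a situated module can be pictured as a precondition ("is this module's situation active?") paired with a behavior, executed over a network of such modules. The C sketch below is a minimal illustration; the module names, sensor snapshot, and execution loop are assumptions, not the Robovie implementation.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct { bool touched; bool human_visible; } sensors_t;

    typedef struct {
        const char *name;
        bool (*situated)(const sensors_t *); /* is this situation active? */
        void (*behave)(void);                /* behavior for that situation */
    } module_t;

    static bool when_touched(const sensors_t *s) { return s->touched; }
    static void say_hello(void) { printf("Hello!\n"); }

    static bool when_seen(const sensors_t *s) { return s->human_visible; }
    static void wave_arm(void) { printf("(waves arm)\n"); }

    int main(void) {
        module_t net[] = {
            { "greet-on-touch", when_touched, say_hello },
            { "wave-at-human",  when_seen,    wave_arm  },
        };
        sensors_t now = { .touched = false, .human_visible = true };
        int n = (int)(sizeof net / sizeof net[0]);
        for (int i = 0; i < n; i++)
            if (net[i].situated(&now))
                net[i].behave(); /* matching modules run sequentially */
        return 0;
    }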
Artificial Intelligence and Robotics
Researchers in artificial intelligence (AI) feel that their work has suffered because of 'public discussion' (hype might be a better term) in the 1960s and 1980s, which adversely affected advances in the field: unlike the situation for nanotechnology, when the delivery did not live up to expectations the funding was dropped. Currently many researchers feel that the aim of mimicking the human ability to solve problems and achieve goals in the real world is neither likely nor desirable, because a long series of conceptual breakthroughs would be required.
The number of applications for weak AI is growing. AI-related patents in the US increased from 100 to 1,700 between 1989 and 1999, with a total of 3,900 patents mentioning related terms. Generally, AI systems are embedded within larger systems; applications can be found in speech recognition, video games, and data mining in the business sector. Full speech recognition, leading to voice-led internet access or recognition in security applications, is anticipated relatively soon; however, the ability to extract meaning from natural language remains a long way off. The data mining market uses software to extract general regularities from online data: patterns humans may not look for, particularly when dealing with large volumes.
The field of robotics is linked closely to that of AI, although definitional issues abound. Giving AI motor capability seems a reasonable definition, but most people would not regard a cruise missile as a robot, even though its control and navigation techniques draw heavily on robotics research.
Experts moved away from the idea of complete automation, as it was neither desirable nor feasible after the hype from the 1960s rebounded on investment. Instead, more practical applications have been found, such as in the military sphere, where Unmanned Combat Air Vehicles (UCAVs) are being developed with the hope of fielding them by 2008. Funding for AI is far harder to gauge than for nanotechnology, as there is no existing overview of the topic or information on the spending.
Developing Human Robot Interaction
There are two research directions in robotics development: one is to develop task-oriented robots that work in limited environments, and the other is to develop interaction-oriented robots that communicate with humans and will participate in human society. Pet and industrial robots are of the former kind: they work in limited areas and in factories, with particular tasks such as behaving like animals or assembling industrial parts. The purpose of the interaction-oriented robots being developed, in other words, is not to execute particular tasks; the aim is to develop a robot that exists as a partner in our daily life. These robots will be a new information and communication infrastructure.
Regarding robots that interact with humans, there is much research: mimicking human body motions, conveying intentionality through facial expressions and behavior, and developing mental commitment. However, these robots lack physical expression ability; for instance, some of them have only heads, and some look like animals.
Robovie is a robot that has sufficient physical expression ability. It can generate almost all the human-like behaviors required for human-robot interaction, and it communicates with humans by using rich sensory information.
To make the best use of this physical expression ability, a new collaboration between cognitive science and robotics was started. Cognitive science, especially its ideas about the practical use of the body's properties for communication, helps in designing more effective robot behaviors. To incorporate cognitive science's ideas, a new software architecture was devised that enables easy development and rich human interaction.
Further, the performance of the implemented interactive behaviors needs to be evaluated. For task-oriented robots, performance can be evaluated with physical measures such as accuracy and speed, and the measurements help to improve the performance. For robots that interact with humans, psychological measures also need to be applied, along with a discussion of how these robots influence humans.
Interactive Humanoid Robot “Robovie”
The humanoid robot "Robovie", which has a human-like appearance, is designed for communication with humans. It has various human-like senses, such as touch, vision, audition, and so on. With these sensors and its human-like body, the robot performs interactive behaviors that are meaningful to humans.
Size is important for an interactive robot. Robovie is 120 cm tall, the same as a junior school student; it weighs 40 kg and is 40 cm in diameter. The robot has a head (3 DOF), two eyes (2x2 DOF for gaze control), two arms (4x2 DOF), and a mobile platform (2 driving wheels and 1 free wheel). Robovie also has various sensors: 16 skin sensors covering the major parts of the robot, an omni-directional vision sensor, 10 tactile sensors around the mobile platform, 2 microphones to listen to human voices, and 24 ultrasonic sensors for detecting obstacles. The skin sensors are important for realizing interactive behaviors; sensitive skin sensors were developed using pressure-sensitive conductive rubber. Robovie can work for 4 hours and charges its battery by autonomously looking for battery charger stations. With these sensors and actuators, the robot can generate the behaviors required for communication with humans.
Robovie is a self-contained autonomous robot with a Pentium III PC on board for processing sensory data and generating behaviors. The operating system is Linux: since the Pentium III PC is sufficiently fast and Robovie does not require precise real-time control like legged robots, Linux is the best solution for quick and easy development of Robovie's software modules.
Mutually entrained gestures are important for smooth communication between a human and a robot, and an experiment was performed to confirm this. It focused on the interaction between a subject and the robot while the robot teaches a route direction. The relationship between the emergence of the subject's entrained gestures and the level of understanding of the robot's utterances was investigated by having the robot use several different gestures while teaching.
Human Robotic Interface Design based on Human Decision Making
The human-robot interface design can directly affect the operator's ability and desire to complete a task. The design also affects the operator's ability to understand the current situation and to make decisions, as well as to supervise and provide high-level commands to the robotic system. While it is possible to spend a significant amount of time discussing specific interaction techniques, there is also a wealth of human factors research that can affect all HRI designs. Such research relates to human decision making, vigilance, workload levels, situation awareness, and human error; these areas should be considered when developing a human-robotic interface.
The area of human decision making appears to be an untapped resource for the field of HRIs. Such decisions are made rapidly in dynamic environments under varied conditions, and may have dire consequences depending upon the human's current task: for example a pilot during take-off, a chemical process operator during a chemical leak, or an individual driving a car down a busy street. An understanding of the human decision process should be incorporated into the human-robotic interface design in order to support the processes humans employ. The field of human decision-making research covers individuals making decisions as well as teams of individuals.
Klein has studied human decision making with domain experts including pilots, firemen, nurses, and nuclear power plant operators since 1985. The intent of his work is to identify how humans make rapid and effective decisions in natural environments under difficult conditions.
Clint Bowers and Eduardo Salas are fundamental contributors to research regarding human decision making. They focus on how a system may or may not support the human decision-making process, and they look at how training can affect decision making when automation is used to support decision making in complex systems.
The naturalistic decision-making results from Klein's work may be applied to the development of cooperation techniques and decision making for robotic teams.