Originally, ARDev could not be configured at runtime: the program had to be terminated and reinitialized in order to change any setting. Ideally this tool should run continuously, helping to avoid the potential loss of time and important data.
To provide configuration and runtime control of ARDev, a graphical user interface library was added. AntTweakBar was chosen as a lightweight option. The main GUI window for ARDev is small and provides buttons that open further windows, giving control over Player connections, the environment and its data, display information about rendered objects, environment switching, and miscellaneous robot data.
The Display List Window can be used to show and hide objects of interest and to change the color or transparency of rendered objects, which is very useful at runtime when rendered objects overlap and obscure each other. Extended settings are provided for some objects; in the case of the grid, the minor and major grid lines can be changed quickly for more or less precision.
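To make the idea concrete, here is a minimal Python sketch of how such a display list might be structured; the names and structure are hypothetical, not the actual ARDev or AntTweakBar API.

```python
from dataclasses import dataclass

@dataclass
class DisplayItem:
    """One entry in a hypothetical display list: a rendered object the
    user can show/hide and restyle at runtime."""
    name: str
    visible: bool = True
    color: tuple = (1.0, 1.0, 1.0)  # RGB in [0, 1]
    alpha: float = 1.0              # 1.0 opaque, 0.0 fully transparent

class DisplayList:
    def __init__(self):
        self.items = {}

    def add(self, item):
        self.items[item.name] = item

    def toggle(self, name):
        self.items[name].visible = not self.items[name].visible

    def set_alpha(self, name, alpha):
        # Fading an object helps when overlapping renderings obscure each other.
        self.items[name].alpha = max(0.0, min(1.0, alpha))

dl = DisplayList()
dl.add(DisplayItem("laser_scan"))
dl.add(DisplayItem("grid", color=(0.5, 0.5, 0.5)))
dl.set_alpha("laser_scan", 0.4)  # make the scan semi-transparent
dl.toggle("grid")                # hide the grid
```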
With the GUI addition it is now easy to change runtime settings, manipulate the environment, and view the state of the system. Further configuration panels can easily be added for any type of object using the extensible architecture.

Developing mobile robots that interact with the real world is a significant challenge. The tools available to a developer for understanding complex robot data in a timely, useful way are limited, which extends the time needed to find correct solutions. The real world is dynamic and complex and cannot easily be understood from a robot's perspective without placing the robot's data in the context of the developer's world view. Visualizations of actuators and sensor readings are easily interpreted when overlaid onto an image of the real world, and any inconsistencies between real-time robot data and what the developer can actually see become obvious when presented visually.
Four Practices in Robot Control Architecture Design
Robot controller development platforms and their underlying methodologies are of great importance to industry and research laboratories, because there is increasing interest in future service robotics. Such platforms help developers in many of their activities, such as modeling, programming, model analysis, testing, and simulation, and should take into account concerns like the reuse of software components and the modularity of control architectures, as these correspond to two major issues.
The aim of this platform development is to provide a robot controller development methodology and its dedicated tools, in order to help developers overcome problems during all steps of the design process. We therefore investigate the creation of a software paradigm that deals specifically with controller development concerns. Four main practices in control architecture design approaches must be considered.
The first practice is structuring the control activities. There are different approaches, one of which decomposes the control architecture into hierarchical layers. Each layer within the robot controller has its own decision-making system, as each layer only ensures part of the control, from low-level control up to planning.
The second practice is decomposing the control architecture into subsystems, each incorporating control of a specific part of the robotic system. This practice is reified in the IDEA agent architecture and the Chimera development methodology. This organizational view is orthogonal to the hierarchical one: each subsystem can incorporate both reactive and long-term decision-making activities, and so can itself be layered.
The third practice is to separate, in the architecture description, the description of the robot's operative part from that of its control and decision-making part. This practice is often adopted only at the implementation phase, except in specific architectures like CLARATY, in which the real-world description is made by means of object hierarchies.
The fourth practice is to use notations to describe the controller's parts and to formalize their interactions. Model-based specifications are coupled with formal analysis techniques in order to follow a quality-oriented design process.
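As a rough illustration of the first two practices, the following Python sketch combines hierarchical layering with subsystem decomposition; all names are hypothetical and the sketch is structural only.

```python
class Layer:
    """One hierarchical control layer (practice 1): each layer owns part of
    the decision making, from low-level control up to planning."""
    def __init__(self, name, period_s):
        self.name, self.period_s = name, period_s

    def step(self, inputs):
        raise NotImplementedError

class Subsystem:
    """A subsystem (practice 2) groups the control of one physical part of
    the robot; orthogonally to practice 1, it can itself be layered."""
    def __init__(self, name, layers):
        self.name = name
        self.layers = layers  # ordered from planning down to servo control

    def tick(self, inputs):
        for layer in self.layers:
            inputs = layer.step(inputs)
        return inputs

class ServoLayer(Layer):
    def step(self, setpoint):
        return {"motor_cmd": setpoint}  # placeholder low-level control

arm = Subsystem("arm", [ServoLayer("arm_servo", period_s=0.005)])
base = Subsystem("base", [ServoLayer("base_servo", period_s=0.01)])
print(arm.tick(0.3), base.tick(0.1))
```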
The Application of Augmented Reality
Developing mobile robot applications in a real-world environment presents a number of challenges. These arise because the researcher developing the application is often remote from the robot, real environments are dynamic, and the information provided by sensors cannot easily be interpreted in real time with classical techniques. AR (Augmented Reality) provides a means to solve many of these problems by placing the sensor data geographically with the robot, in real time, within its physical surroundings.
The application domains for AR are diverse, ranging from medical displays to entertainment to treating psychological disorders. AR has been used to aid an operator maneuvering a vehicle in limited or no visibility; the design of that AR system is presented with a discussion of the hardware and the integration of a near-scene map-building algorithm. An AR system has also been used to aid in commissioning helicopter tasks for agriculture, with that work focusing on tracking the camera pose using natural features as markers and results given from a mock-up simulation.
It is generally agreed that it is the developer's lack of understanding of the robot's world view that makes it difficult to code new tasks and algorithms, debug problems in the resulting actions, and commission integrated systems for real-world work. This problem of understanding can be overcome with a shared perceptual space. AR provides this shared space between robots and developers, enabling the developer to view the world through the robot's sensors.
ARDev (the Augmented Reality Visualization Project), originally created in 2006, was designed to allow visual debugging of robot data. ARDev integrates with the open-source Player project, providing intuitive visualizations of Player robot data. The integration into the Player project makes ARDev very accessible, with easy access to existing sensors through the generic Player interfaces. The AR visualization in ARDev provides developers with a clear, visual representation of robot data, and can be used by the researcher to detect discrepancies between the real world and the robot.
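As an illustration of the kind of data ARDev overlays, the sketch below reads one laser scan through Player's Python client bindings. It assumes a Player server on localhost:6665 with a laser device at index 0; this is ordinary Player client code, not ARDev's internals, and the binding details may vary between Player versions.

```python
from playerc import *  # Player's SWIG-generated Python client bindings

# Connect to the Player server the robot (or Stage) is running.
client = playerc_client(None, 'localhost', 6665)
if client.connect() != 0:
    raise RuntimeError(playerc_error_str())

# Subscribe to the generic laser interface ARDev would visualize.
laser = playerc_laser(client, 0)
if laser.subscribe(PLAYERC_OPEN_MODE) != 0:
    raise RuntimeError(playerc_error_str())

client.read()  # fetch one round of sensor data
print('ranges:', [laser.ranges[i] for i in range(laser.scan_count)])

laser.unsubscribe()
client.disconnect()
```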
Autonomous Mental Development by Robots and Animals
How does one create an intelligent machine? This problem has proven difficult. Scientists have taken one of three approaches over the past years. In the first, the knowledge-based approach, an intelligent machine in a laboratory is programmed directly to perform a given task. In the second, the learning-based approach, a computer is "spoon-fed" human-edited sensory data while the machine is controlled by a task-specific learning program. The third is "genetic search", in which robots evolve through generations by the principle of survival of the fittest, mostly in a computer-simulated virtual world. Although notable, none of these approaches is powerful enough to lead to machines having the diverse, complex, and highly integrated capabilities of an adult brain, such as vision, language, and speech. These traditional approaches have, however, served as the incubator for the birth and growth of a new direction for machine intelligence: autonomous mental development.
What is autonomous mental development? A brain-like natural or artificial embodied system, under the control of its intrinsic developmental program, develops mental capabilities through autonomous real-time interactions with its environment, using its own sensors and effectors. Traditionally, a machine is not autonomous while it develops its skills, whereas a human is autonomous throughout lifelong mental development.
Current advances in neuroscience illustrate this principle. For instance, if the optic nerves originating from the eyes of an animal are connected into the auditory pathway early in life, the auditory cortex gradually takes on representations that are normally found in the visual cortex; further, the rewired animals successfully learn to perform vision tasks with the auditory cortex. This discovery suggests that the cortex is governed by developmental principles that work for both auditory and visual signals. Likewise, the developmental program of the monkey brain dynamically selects sensory input according to the actual sensory signals received, and this selection process remains active throughout adulthood.
The Development of Multifingered Robot Hands
Extensive research has been done over the past two decades on the development of multifingered robot hands, which are employed as prosthetic hands or in humanoid robots. Power grasping and precision grasping are the two main areas, with the former relating to applications where robots carry heavy loads. Many models and algorithms have been developed for manipulating objects with a multifingered robot hand. In recent years, research has focused on manipulation in which the robot fingers are made from soft material. Identifying a suitable soft material as a substitute for human skin is tedious, and the deformation of the soft finger and/or object is a common issue in the development of such robot hands.
The development of soft fingers is a related fundamental area in soft manipulation. On the modeling side, Xydas developed a contact model, studied soft fingertip contact mechanics using FEM, and validated the results by experiments. Byoung-Ho Kim analyzed the fundamental deformation effect of soft fingertips in two-fingered object manipulation.
Takahiro Inoue focused on formulating elastic force and potential energy equations for the deformation of fingers, which are represented as an infinite number of virtual springs standing vertically. Extensive research has been done on soft fingertip manipulation, but only a few attempts have been made at power grasping. An analytical model for force distribution in power grasps was developed by Mirza, who devised a method for calculating the additional grasping force required for stable power grasping of objects.
A simple contact model applied to power grasping has been developed, and the force-deformation relationship has been formulated. The geometrical relationship between deformation and contact width is proposed first. The total contact force, which is based on the contact parameters, material properties, and geometrical data, is then found using the compressional strain mechanism.
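As a rough numerical illustration (not the specific model formulated in the work described above), soft-fingertip contact is often approximated by a power-law force-deformation relationship F = K d^n, where the exponent n > 1 reflects the contact area growing as the fingertip compresses:

```python
def contact_force(d, K=2.0e5, n=2.0):
    """Power-law soft-finger contact: force in newtons from deformation d
    in meters. K and n are illustrative values; real parameters come from
    the material properties and the fingertip geometry."""
    return K * d**n

for d_mm in (0.5, 1.0, 2.0):
    d = d_mm * 1e-3
    print(f"deformation {d_mm} mm -> force {contact_force(d):.2f} N")
```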
Deployment Subsystem and Attitude Control Subsystem of STARS-I
The deployment subsystem is one of the important subsystems for the STARS-I mission to verify TSR technology on orbit, and it is mounted on the mother satellite. Its main objectives are to give the daughter satellite an initial velocity for deployment, and to deploy and retrieve the daughter satellite by tether control.
The eject unit gives the daughter satellite its initial velocity, while the mother satellite acquires a velocity in the opposite direction due to the reaction force. A hook attached to a motor compresses the spring; when the hook unlatches the spring, the spring extends under its stored potential energy. The bowl supported by the spring thus gives the daughter satellite its initial velocity.
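The ejection can be understood with spring energy and momentum conservation: the spring's stored energy 0.5 k x^2 is split between the two satellites such that total momentum remains zero. A small sketch with purely illustrative numbers (not STARS-I specifications):

```python
import math

def separation_velocities(k, x, m_d, m_m):
    """Split the spring energy 0.5*k*x**2 between daughter (m_d) and mother
    (m_m) satellites with zero net momentum: m_d*v_d = m_m*v_m."""
    v_d = math.sqrt(k * x**2 * m_m / (m_d * (m_d + m_m)))
    v_m = (m_d / m_m) * v_d
    return v_d, v_m

# Illustrative numbers only: 200 N/m spring compressed 5 cm, 4 kg satellites.
v_d, v_m = separation_velocities(k=200.0, x=0.05, m_d=4.0, m_m=4.0)
print(f"daughter: {v_d:.3f} m/s, mother: {v_m:.3f} m/s")  # 0.250 m/s each
```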
The deployment unit controls tether deployment and retrieval. It consists of a tether reel, a motor, and a torque transmission device that maintains constant torque, so the tether avoids excessive tension and sudden tension changes. Fundamentally, tether deployment and retrieval are controlled by motor velocity control. When excessive tension is applied to the tether, differential rotation between the tether reel and the motor occurs due to the constant torque of the transmission device.
The attitude control subsystem is essential to realize the attitude control function of the Tethered Space Robot (TSR). The attitude of the TSR is controlled through the tether tension and the position of the tether attachment relative to the mass center of the robot. Since the relation between the tether extension line and the line from the tether attachment point to the robot's mass center is changed by arm motion, a torque due to tether tension acts on the robot. Therefore, with two degrees of freedom of arm motion around the tether extension line axis, the TSR attitude can be controlled around the two axes perpendicular to the tether extension line.
Two motors are mounted on the daughter satellite for the attitude control subsystem. Motor 1 is mounted on the main body of the daughter satellite and actuates the bowl; Motor 2 is mounted on the bowl and actuates the arm. As a result, the arm end attached to the tether can be placed freely in space, and attitude control around the tether extension line becomes possible.
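The control principle reduces to a torque computation: arm motion shifts the tether attachment point relative to the mass center, and the tension F then produces a torque tau = r x F about the mass center. A minimal sketch with illustrative vectors:

```python
import numpy as np

def tether_torque(r_attach, tension_vec):
    """Torque about the robot's mass center from tether tension applied at
    the attachment point r_attach (both in the body frame, meters/newtons)."""
    return np.cross(r_attach, tension_vec)

# Arm moves the attachment point 5 cm off the tension line (illustrative):
r = np.array([0.05, 0.0, 0.10])
F = np.array([0.0, 0.0, 1.0])  # 1 N tension along the tether line (z axis)
# Note the torque appears only about axes perpendicular to the tether line.
print("torque [N*m]:", tether_torque(r, F))  # -> [0, -0.05, 0]
```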
Robotic System to Increase Production Efficiency and Capacity
Robotics, automation, and control play an important role in manufacturing across different industries. The art of robotics and intelligent control has been transferred into the development of automated systems. The most important requirement in controls and robotics is that a system works with maximum accuracy and minimum errors, and if an error occurs, the system should be capable of taking a significant corrective action against it.
Manufacturing lines usually consist of a number of interrelated processes which must work together coherently to result in efficient production. However, if there is a bottleneck at any stage, the whole production line runs at a lower efficiency and capacity. Improvements can be made to some manufacturing aspects using lean manufacturing, but these are small incremental changes and do not play a decisive role in enhancing production efficiency and capacity.
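The bottleneck argument is simple arithmetic: a serial line can run no faster than its slowest station, so incremental improvements elsewhere change nothing. A small sketch with hypothetical station rates:

```python
# Hypothetical station rates (units per hour) for a serial production line.
stations = {"print": 900, "cut": 850, "band_and_snap": 420, "pack": 800}

bottleneck = min(stations, key=stations.get)
print(f"line throughput = {stations[bottleneck]}/h, limited by '{bottleneck}'")

# Automating only the bottleneck station lifts the whole line:
stations["band_and_snap"] = 820
print(f"after automation: {min(stations.values())}/h")  # new limit: 'pack'
```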
This article describes an industry-collaborative project to investigate a manufacturing production line and identify its bottlenecks. The goal was to design and develop a robotic or automated system to overcome the bottleneck processes in order to increase manufacturing efficiency and reach the desired production capacity. Based on the investigation carried out by the company involved, available automated systems do not cover the whole production line, and custom-made ones are extremely expensive. The idea was therefore to come up with a solution within a reasonable budget for the company.
An automated, flexible robotic system for the transportation, manipulation, handling, banding, snapping, and sorting of general-purpose labels would substantially improve the overall production efficiency of the company concerned, where a large-scale investment in new equipment could not be made. The goal was to almost double production by automating a serious bottleneck in the manufacturing line. A typical range of labels of different sizes and shapes demonstrates the handling complexity required of the robotic system.
Space Tethered Autonomous Robotic Satellite
The main mission of STARS-I (Space Tethered Autonomous Robotic Satellite I) is to verify technology for the Tethered Space Robot (TSR). STARS-I consists of two subsatellites, called the daughter satellite and the mother satellite, respectively. The satellites are connected by a piece of tether, which is deployed to a length of 1 m to 10 m. The minimum success level is set as:
• Deployment and retrieval of daughter satellite from mother satellite
• Attitude control of daughter satellite by arm motion
The mother satellite has the function to deploy and retrieve the tether, and the daughter satellite has the Tethered Space Robot function, that is, attitude control by its own link motion under tether tension. The experimental mission is performed as follows:
• Mother satellite gives an initial velocity to daughter satellite
• Daughter satellite is deployed and retrieved under tether control by mother satellite
• Daughter satellite docks with mother satellite.
The daughter satellite and the mother satellite each have the following subsystems:
• Electrical power subsystem
• Data handling subsystem
• Camera subsystem
• Telecommunication subsystem
• Structure subsystem
In addition, the daughter and mother satellites have the specific subsystems "attitude control subsystem" and "deployment subsystem", respectively.
The functions of the electrical power subsystem are: battery charging control, delivering electrical power to the other subsystems through the data handling subsystem, and monitoring electrical current consumption and the temperature of the electrical circuit board. A charging control IC controls charging of the Li-ion battery. A regulator generates voltages of 4.2 V, 5.0 V, and 6.0 V to deliver to the subsystems. Ammeters and a voltmeter monitor the respective points, and a thermo sensor monitors the electrical circuit board temperature.
The data handling subsystem manages data among the other subsystems and delivers electrical power from the electrical power subsystem to the other subsystems. It controls sequences and monitors the condition of the satellite. Data from each subsystem is kept in the data handling subsystem and sent to the ground station through the telecommunication subsystem. Experimental commands, the reset command of the electrical power subsystem, picture-taking commands, and so on, sent from the ground station through the telecommunication subsystem, are delivered to each subsystem by the data handling subsystem.
The Software System of a Teleoperation Robot
The proposed teleoperation system is implemented as a distributed server system using CORBA. The distributed server system consists of an input device server, a stabilizer, a whole-body motion generator, and the I/O board of the robot.
The input device server is implemented on a remote Linux PC. The whole-body motion generator and the stabilizer are implemented on a real-time operating system, ART Linux, on the slave robot's board. Motor commands are sent to the I/O board every 5 ms, with all communication and processing between the servers completed within this control cycle.
A set of joystick operation rules was designed for whole-body manipulation and walking pattern generation of the slave humanoid robot. The input device server receives input from the joystick devices and interprets their axis and button states, registering them as parameters for target point manipulation and walking pattern generation. The parameters for each walking pattern and target point are registered as zero if there is no joystick input. The parameters are accessed every 5 ms by the real-time control software on the remote robot for whole-body motion generation. The input device server limits the maximum displacement of the wrists and torso in order to maintain standing stability.
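A minimal sketch of the input device server's role, with hypothetical structure and limits (the real implementation and values are robot-specific): read the joystick, scale the axes into displacement parameters, clamp them for standing stability, and leave them at zero when the stick is idle.

```python
import time

# Hypothetical per-cycle displacement limits (meters) chosen for standing
# stability; the real values depend on the robot.
WRIST_LIMIT = 0.002
TORSO_LIMIT = 0.001

def read_joystick():
    """Placeholder for the device driver: returns axis values in [-1, 1]."""
    return {"wrist_x": 0.0, "wrist_y": 0.0, "torso_pitch": 0.0}

def clamp(v, limit):
    return max(-limit, min(limit, v))

params = {"wrist_x": 0.0, "wrist_y": 0.0, "torso_pitch": 0.0}

while True:
    axes = read_joystick()
    # Zero when idle; otherwise scale and clamp the displacement parameters
    # that the motion generator reads each control cycle.
    params["wrist_x"] = clamp(axes["wrist_x"] * WRIST_LIMIT, WRIST_LIMIT)
    params["wrist_y"] = clamp(axes["wrist_y"] * WRIST_LIMIT, WRIST_LIMIT)
    params["torso_pitch"] = clamp(axes["torso_pitch"] * TORSO_LIMIT, TORSO_LIMIT)
    time.sleep(0.005)  # the described system runs a 5 ms control cycle
```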
For tasks that do not require changes in foot position, the joint configurations for both legs are generated first, in order to realize the target displacement of the torso's position and orientation. The joint angles for both arms, the head, and both hands are then calculated based on the target values provided by the input device server.
For instance, using the humanoid robot's right wrist to pick up a bottle on the floor in front of it requires simultaneous manipulation of the torso and the right wrist.
Whole Body Teleoperation of Humanoid Robots
The two technical challenges in developing a whole-body teleoperation system for a humanoid are the construction of an effective communication interface between the slave robot and the operator, and the establishment of an effective teleoperation method for manipulating the complex multi-joint humanoid robot. In order to interact with a remote environment by controlling a remote robot proxy, an effective communication interface, providing a two-way information link connecting the human operator with the remote robot, is of great importance. An effective interface should provide the following functions:
• Sensor Information Display: a function to display the state of the remote environment as sensed by the remote robot's sensors.
• Robot Information Display: a function to display information about the remote robot's condition.
• Robot Manipulation Command Input: a function to transmit physical actions to interact with the remote environment by manipulating the remote robot.
The effectiveness of these functions is significant, as they affect the operator's perception and performance during teleoperation. An effective interface should be able to extend the operator's sensory perception accurately into the remote environment and provide flexible manipulation for teleoperating the remote robot.
A boy immerses himself in a virtual game environment, feeling a sense of becoming one with the human-form game character simply by controlling his gamepad. He manipulates the game character through pre-defined motion generation rules using this simple input device. His perception of the motions of the game character in the virtual world matches the expected effects of the actions he performs with the gamepad. The existence of this continuing perception-action loop enables him to develop the sense of becoming one with the virtual character.
A humanoid robot is human-like in physical form and is expected to move in a human-like manner. Human operators should be able to manipulate the slave humanoid robot safely and stably if the teleoperation system is constructed with motion generation rules similar to those of a human.
Wheelchair Robotic Speech Recognition: Error Types
In a series of preliminary tests using both Sphinx-4 and HTK, three male students and one female student in their early twenties recorded between 179 and 183 commands each. Subjects were presented with a command script corresponding to each task and instructed to read the commands in order.
The results are reported in terms of substitutions (when the speech recognizer fails to recognize a word and substitutes another, incorrect word), insertions (words that were not spoken but were added by the recognizer), deletions (words that were spoken but missed by the recognizer), word error rate (the proportion of incorrectly recognized words, including substitutions, insertions, and deletions), and sentence error rate (the proportion of sentences in which one or more words are recognized incorrectly).
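These counts come from aligning the recognized word sequence against the reference transcript with edit distance; a minimal sketch of that computation (the standard algorithm, not the evaluation scripts actually used):

```python
def wer_counts(ref, hyp):
    """Align reference and hypothesis word lists with dynamic programming
    and return (substitutions, insertions, deletions)."""
    m, n = len(ref), len(hyp)
    # dp[i][j] = minimum edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j - 1] + cost,  # match / substitution
                           dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1)         # insertion
    # Backtrack to classify the errors.
    subs = ins = dels = 0
    i, j = m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i-1][j-1] + (ref[i-1] != hyp[j-1]):
            subs += ref[i-1] != hyp[j-1]
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i-1][j] + 1:
            dels += 1
            i -= 1
        else:
            ins += 1
            j -= 1
    return subs, ins, dels

ref = "ROLL BACK ONE METER".split()
hyp = "ROLL BACKWARD ONE METER".split()
s, i, d = wer_counts(ref, hyp)
print("WER = %.1f%%" % (100.0 * (s + i + d) / len(ref)))  # one substitution -> 25.0%
```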
Both speech recognition packages showed equivalent performance across the board. The mean sentence error rate was 46.7% with HTK and 45.2% with Sphinx-4, while the mean word error rate was 16.6% for HTK and 16.1% for Sphinx-4. This analysis suggests the performance of the two speech recognition systems is equal.
While the error rates are quite high, upon closer analysis the situation is not as discouraging as it seems. The observed errors can be classified into one of three types: design errors, semantic errors, and syntax errors.
Design errors
Errors of this type are introduced by design flaws in the task vocabulary. For instance, a number of substitutions occurred when Sphinx recognized METERS as METER. The task grammar within the speech recognizer was modified to avoid such minor errors.
Semantic errors
Errors of this type are introduced when the speech recognition system fails to recognize the correct command in a way that changes its meaning. For instance, when subject 3 said DESCEND THE CURB, Sphinx recognized it as ASCEND THE CURB. Handling such errors requires reasoning about the environment and the user's intent.
Syntax errors
Errors of this type are introduced because the task grammar contains many ways to say the same thing. For instance, when subject 1 said ROLL BACK ONE METER, HTK recognized it as ROLL BACKWARD ONE METER. This is counted as one substitution error, even though the intended command is preserved.
Teleoperating Humanoid Robots
The tremendous advances in robotic and network technologies have provided the infrastructure to transmit not only text, images, and sounds but also physical actions. Just as the telephone serves as a tool for extending the human voice, humanoid robots combined with network technologies could be powerful tools for extending human presence. Humanoid robots are potential tools for functioning in the real world, which is designed for humans. They can be proxies for humans, doing dirty or dangerous work that people would not do if given a choice, hence providing humans with more safety, time, and freedom.
Projects utilizing teleoperated humanoid robots have begun in the search for systems that can do the same work presently done by humans in critical environments, for instance executing space missions. There have been reports on humanoid robot teleoperation systems equipped with full master-slave manipulation interfaces, and on highly autonomous systems utilizing simpler teleoperation interfaces.
During one phase of the HRP (Humanoid Robotics Project), a full master-slave teleoperation platform was developed to control a humanoid robot. This system enables the operator to control the slave humanoid robot as if the operator has become one with the robot, using an exoskeleton master device and immersive displays. Although such full master-slave teleoperation systems allow flexible manipulation of the humanoid robot, they require a complex and large interface. It would not be very comfortable to operate inside an exoskeleton device fixed to the body all the time to accomplish every task.
There are also projects utilizing a GUI (Graphical User Interface) to teleoperate humanoid robots with higher autonomy. Although this kind of highly autonomous, supervisory-control-like teleoperation system requires only simple input devices, it is normally less flexible and can perform only pre-defined motions.
Considering the use of teleoperated humanoid robots to perform tasks in critical environments and during emergencies, a teleoperation system that offers flexible whole-body manipulation through simple input devices would have the greatest effectiveness.
Robot Control System and Interaction Manager
The robot control system can be implemented using different publicly available software packages, for instance the popular Player application or the Carmen robot navigation toolkit; both were used throughout our preliminary experiments. The aim of this component is to handle tasks such as mapping, path planning, and localization. We do not discuss this component further, as it is somewhat orthogonal to the main focus.
The Interaction Manager acts as the core decision-making unit in the robot architecture. This module is ultimately responsible for selecting the behavior of the robot throughout the interaction with the user.
The Interaction Manager can be seen as an input/output device, where information about the world is received via the grammar system and the low-level robot navigation system. The unit then outputs actions in the form of speech and display responses, or by issuing control commands to the navigation unit. These actions are processed through the behavior manager, to extract a preset sequence of low-level operations, before being sent to the respective modules (visuo-tactile unit, speech synthesis, robot control system).
The aim of the Interaction Manager is to provide a robust decision-making mechanism capable of handling the complexity of the environment. This is a challenging target due to the high degree of noise in the environment. While the semantic grammar can help handle some of the noise, even properly transcribed speech can contain ambiguities. Accounting for this set of possible outcomes is crucial for providing robust action selection.
The POMDP (Partially Observable Markov Decision Process) paradigm has been shown to be a powerful tool for modeling a wide range of robot-related applications featuring uncertainty, including dialogue management, robot navigation, and behavior tracking. One of the advantages of the POMDP model is its ability to capture the idea of partial observability, namely that the state of the world cannot be observed directly but instead must be inferred through noisy observations.
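A minimal sketch of the POMDP belief update this relies on: the new belief weights each state by the observation likelihood after propagating the old belief through the transition model. The two-state dialogue example below is purely illustrative, not the actual model.

```python
import numpy as np

# Toy two-state dialogue model: the hidden state is the command the user
# actually intends; the recognizer reports the right word 80% of the time.
STATES = ["wants_forward", "wants_back"]
T = {"ask_repeat": np.eye(2)}                 # intent persists across turns
O = {"ask_repeat": np.array([[0.8, 0.2],      # row o, column s'
                             [0.2, 0.8]])}

def belief_update(b, a, o, T, O):
    """b'(s') is proportional to O(o | s', a) * sum_s T(s' | s, a) * b(s)."""
    predicted = b @ T[a]            # predict through the transition model
    unnormalized = O[a][o] * predicted  # correct with observation likelihood
    return unnormalized / unnormalized.sum()

b = np.array([0.5, 0.5])                      # start undecided
b = belief_update(b, "ask_repeat", 0, T, O)   # heard "forward"
b = belief_update(b, "ask_repeat", 0, T, O)   # heard "forward" again
print(dict(zip(STATES, b.round(3))))          # belief concentrates on wants_forward
```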
The Development of Wheelchair Robots
For many people suffering from chronic mobility impairments, such as spinal cord injuries or multiple sclerosis, using a powered wheelchair to move around their environment can be difficult. According to one survey, 40% of patients found daily steering and maneuvering tasks to be difficult or impossible, and clinicians believe that between 61% and 91% of all wheelchair users would benefit from a smart wheelchair. Such numbers suggest that the deployment of intelligent wheelchairs catering to these patients' needs could have a deep social impact.
Robotics technology has made progress on a number of important issues pertaining to mobility over the last decade, and many of these developments can be transferred to the design of intelligent wheelchairs. Yet many challenges remain, both practical and technical, when it comes to the development of human-robot interaction components. The present survey of the literature on smart wheelchairs suggests that while voice control has often been used to control smart wheelchairs, it remains difficult to implement successfully.
This work addresses two main challenges pertaining to the development of voice-controlled assistive robots. Firstly, it tackles the problem of robust processing of speech commands. It proposes a complete architecture for handling speech signals, which includes not only signal processing but also semantic and syntactic processing, as well as probabilistic decision-making for response production.
Secondly, it tackles the issue of developing standards and tools for the formal testing of assistive robots. Standardized testing has been common currency in some sub-tasks pertaining to human-robot interaction, most notably speech recognition. However, few tools are available for the standardized and rigorous testing of fully integrated systems.
It proposes a novel environment and methodology for the standardized testing of smart wheelchairs. The procedure is inspired by one commonly used in the evaluation of conventional, non-intelligent wheelchairs.
Wheelchair Robotic Speech Recognition
A speech interface provides a natural and comfortable input modality for users with limited mobility. Speech requires little training and has relatively high bandwidth, thus allowing rich communication between the human and the robot. The recognition performance of speech systems is influenced by many aspects, including the vocabulary, the acoustic and language models, and the speaking mode; some of these aspects have to be taken into account when designing the speech interface.
Selecting a speech recognizer that performs well for the task at hand is important. Two open-source speech recognition systems were considered, HTK and CMU's Sphinx, to preserve flexibility in the development process. Both of these systems are speaker-independent, continuous speech recognition systems, which typically require less customization than commercial systems. Because customization is minimal, it is important that the system be pre-trained on a large speech corpus so that appropriate acoustic models can be pre-computed. Usually such corpora fall under one of two categories: those developed for acoustic-phonetic research and those developed for very specific tasks. SmartWheeler is still at an early stage of development, and domain-specific data is not yet available.
A small vocabulary makes speech recognition more accurate but requires the user to learn which words or phrases are allowed. While the recent focus is on building and validating an interaction platform for a specific set of tasks, the user should also be able to interact with the system in the same way as with any caregiver, and with very little prior training. Thus a fixed set of tasks is considered, but several possible commands are allowed for each task (see the grammar sketch after this list). For instance, if a user wants to drive forward two meters, possible commands include:
• ROLL TWO METERS FORWARD
• ROLL FORWARD TWO METERS
• DRIVE FORWARD TWO METERS FAST
• DRIVE FAST TWO METERS FORWARD
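A minimal sketch of how such alternate phrasings can be mapped onto one canonical task; the pattern below is hypothetical and far simpler than the actual SmartWheeler grammar:

```python
import re

# Map several surface phrasings onto one canonical (task, distance) pair.
PATTERNS = [
    (re.compile(r"(ROLL|DRIVE)( FAST)? (FORWARD )?(\w+) METERS?( FORWARD)?( FAST)?"),
     lambda m: ("DRIVE_FORWARD", m.group(4))),
]

def parse_command(utterance):
    for pattern, build in PATTERNS:
        m = pattern.fullmatch(utterance)
        if m:
            return build(m)
    return ("UNKNOWN", None)

for cmd in ["ROLL TWO METERS FORWARD",
            "ROLL FORWARD TWO METERS",
            "DRIVE FORWARD TWO METERS FAST",
            "DRIVE FAST TWO METERS FORWARD"]:
    print(cmd, "->", parse_command(cmd))  # all map to DRIVE_FORWARD, TWO
```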
Robust High Fidelity Sensors
We focus on two types of sensing especially important for health care and medicine: implantable/biocompatible sensors and tactile/force sensing. These sensors, along with perception algorithms, are often necessary to estimate the state of the physician or caregiver, the patient, and the environment.
Implantable/biocompatible sensors would be a great catalyst for major advancements in this field. The close physical interaction between patients and robots requires systems that will not harm biological tissues or cease to function when in contact with them. In surgery, mechanisms must be designed that will not unintentionally damage tissue, and sensors need to function appropriately in environments with debris, wetness, and variable temperature. For prosthetics, sensors and probes must access neurons, muscles, and brain tissue and maintain functionality over long periods without performance degradation. These devices and sensors must be designed with medical and health robotics applications in mind, in order to define performance requirements.
When robots work in unstructured environments, especially around and in contact with humans, the sense of touch is crucial to accurate, safe, and efficient operation. Force, tactile, and contact data are required for informed manipulation of soft materials, from human organs to blankets and other household objects. It is particularly challenging to acquire and interpret spatially distributed touch information, due to the large area and high resolution required of the sensors. Current sensors are limited in robustness, deformability, resolution, and size.
For systems ranging from ultra-minimally-invasive surgery robots to human-size prosthetic fingers, robots need very small actuators and mechanisms with high power-to-weight ratios. Such designs will allow us to build devices that are smaller, less costly, and use less power, enabling greater effectiveness as well as dissemination to populations in need. Below we highlight two examples of how advances in mechanisms and actuators could improve medicine.
Robotic Modeling, Simulation and Analysis
A variety of models is important for health and medical robotics applications. They fall into two main categories: models of people (from tissue biomechanics to human physical and cognitive behavior) and models of engineered systems (including information flow and integration, and open platforms and architectures). The models can describe physiology, biomechanics, environment, dynamics, geometry, state, interactions, tasks, cognition, and behavior. They can be used for many tasks, including optimal design, planning, control, task execution, testing and validation, diagnosis and prognosis, training, and social and cognitive interaction.
Now we provide some specific examples of models needed for health care and medicine. In tele-operated or remote surgery with time delays, models of the patient are required to allow natural interaction between the surgeon and the remote operating environment. Tissue models in general are needed for planning procedures, training simulators, and automated guidance systems. These are just beginning to be applied in needle based procedures, but more sophisticated models would enable planning and context appropriate guidance for a wider variety of procedures, such as cellular scale surgery and laparoscopic surgery. Models sufficiently realistic to be rendered in real time would enable high fidelity surgical simulations for general training and for patient specific practice by surgeons. We need models of human cognition and behavior in order to provide appropriate motivational assistance with assistive healthcare robots. Physical models of a patient’s whole body are also needed for a robot to provide physical assistance with tasks such as getting out of bed or eating.
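As a concrete example of the simplest tissue models used in such simulators, a one-dimensional Kelvin-Voigt (spring-damper) element approximates soft tissue reaction force as a sum of elastic and viscous terms. The sketch below is a minimal illustration only; the stiffness and damping values are hypothetical placeholders, not measured tissue properties.

```python
# Minimal sketch of a Kelvin-Voigt soft-tissue element, a common first
# approximation in surgical simulation: f = k*x + b*v.
# Stiffness and damping values below are hypothetical placeholders.

def tissue_force(displacement_m: float, velocity_m_s: float,
                 stiffness_n_per_m: float = 300.0,
                 damping_n_s_per_m: float = 2.0) -> float:
    """Reaction force (N) of a viscoelastic tissue element."""
    return stiffness_n_per_m * displacement_m + damping_n_s_per_m * velocity_m_s

# Example: an instrument tip indenting 5 mm while moving at 10 mm/s.
print(f"{tissue_force(0.005, 0.01):.2f} N")  # -> 1.52 N
```

Real-time simulators extend this idea to meshes of thousands of such elements, which is why the trade-off between realism and real-time rendering mentioned above matters.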
As another example, consider a rehabilitation system that uses robotic technology for early and accurate diagnosis. Such a system would need models of the patient and his or her deficit in order to design appropriate treatments and accurately assess outcomes. Ideally, the model of the patient would be updated after treatment. Such models are also needed for robotic technology to participate in and augment diagnosis.
Physical Human Robot Interaction
Physical human robot interaction is inherent in most medical applications. Such interactions require appropriate sensing, perception, and action. Sensing the human could use conventional robot sensors or implantable/biocompatible sensors such as brain machine interfaces. Such sensor data must be combined with modeling to enable perception. Simulation and/or modeling of human form and function are the basis for designing robots that come into physical contact with humans. Much work remains to be done in this area, since we do not yet fully understand which models of humans are useful for optimizing robot perception, design, control, and planning.
An important aspect of the physical contact between robots and humans is the technology of touch (haptics). When clinicians or patients use robots to interact with environments that are remote in distance or scale, the operator needs a natural interface that makes the robot seem transparent. The operator of a surgical robot, prosthesis, or rehabilitation robot should feel as if he or she is directly manipulating a real environment rather than interacting with a robot. Haptic (force and tactile) displays give the user feedback that is akin to what he or she would feel in the real world. This haptic feedback can improve performance in terms of efficiency, accuracy, and comfort.
Effective social interaction with a user is critically important if health and medical robotics are to improve health outcomes in convalescence, wellness, and rehabilitation applications. The user’s willingness to engage with a socially assistive robot, to accept its interaction and advice, and ultimately to alter behavior toward the desired improvements rests directly on the robot’s ability to obtain the user’s trust and sustain the user’s interest. Finally, user interfaces and input devices must be developed that are easy and intuitive for a range of users, including those with special needs.
The Types of Service Robotics
Service robotics is defined as those robotic systems that assist people in their daily lives at work, in their homes, for leisure, and as part of assistance to the elderly and disabled. In industrial robotics the task is typically to automate work to achieve a homogeneous quality of production or a high speed of execution. In contrast, service robotics tasks are performed in spaces occupied by humans and typically in direct collaboration with people. Service robotics is normally divided into personal and professional services.
Personal service robots are deployed to assist people in their daily lives in their homes, or to compensate for physical or mental limitations. By far the largest group of personal service robots consists of domestic vacuum cleaners; over 3 million iRobot Roombas alone have been sold worldwide, and the market is growing at more than 60% per year. A large number of robots have been deployed for leisure applications such as artificial pets (AIBO), dolls, etc. With more than 2 million units sold over the last 5 years, the market for such leisure robots is experiencing exponential growth and is expected to remain one of the most promising in robotics.
Professional service robotics includes emergency response, agriculture, pipelines and the national infrastructure, forestry, transportation, professional cleaning, and various other disciplines. These systems typically augment people in executing tasks in the workplace. According to the IFR/VDMA World Robotics report, more than 38,000 professional service robots are in use today, and the market is growing every year.
There was general agreement among those present at the meeting that we are still 10 to 15 years away from a wide variety of solutions and applications incorporating full scale, general autonomous functionality. Some of the key technology issues that need to be addressed to reach that point are discussed below. There was further agreement that the technology has progressed sufficiently to enable an increasing number of limited scale and/or semi autonomous solutions that are affordable, pragmatic, and provide real value.
Robotic Image Guided Intervention
Now we consider robotic image guided intervention, which concentrates on visualization of the internal structures of a patient in order to guide a robotic device and/or its human operator. This is usually associated with interventional radiology and surgery, although the concepts described here apply more broadly to any health care need in which the patient cannot be visualized naturally. Whatever the application, such interventions require advances in image acquisition and analysis, robot designs that are compatible with imaging environments, and methods for the robots and their human operators to use the image data.
Sensor data are essential for building models and acquiring real time information during interventional radiology and surgery. Real time medical imaging techniques such as MRI (Magnetic Resonance Imaging), spectroscopy, ultrasound, and optical coherence tomography (OCT) can provide significant benefits when they enable the physician to see subsurface structures and/or tissue properties. Images acquired pre-operatively can be used for simulation and planning. New techniques such as elastography, which non-invasively quantifies tissue compliance, are needed in order to produce images that carry useful, quantitative physical information. The speed and resolution an imager needs for robot control are not yet understood. We must determine how to integrate these imaging modalities with robotic systems so that they provide useful information to the surgeon and allow the robot to react to the patient in real time.
One of the most useful forms of imaging is MRI. Designing MRI compatible robots is especially challenging because MRI relies on a strong magnetic field and RF (Radio Frequency) pulses, so it is not possible to use components that can interfere with, or be susceptible to, these physical effects. This rules out most components used in typical robots, such as ferromagnetic materials and electric motors. Interventional radiology or surgery inside an imager also places severe constraints on robot size and geometry, as well as on the nature of the clinician robot interaction.
Robotic Deployment Issues
Deployment of complete health robotics systems raises practical issues of reliable, safe, and continuous operation in human environments. The systems must be private and secure, and interoperable with other systems in the home. To move from incremental progress to system level impact, the medical and health robotics field needs new, principled measurement tools and methods for efficient demonstration, evaluation, and certification.
The challenge of system evaluation is compounded by the nature of the problem: human function and behavior must be evaluated as part of the system itself. Quantitative characterization of pathology is an open problem in medicine; robotics has the potential to contribute by enabling methods for collecting and analyzing quantitative data about human function and behavior. Some health care delivery is inherently qualitative in nature, having to do with motivation and social interaction as part of therapy; while methods for studying such qualities are standard in the social sciences, they are not yet recognized or accepted by the medical community. Because medical and health robotics must work with both trained specialists and lay users, it is necessary to gain acceptance from both communities.
This necessitates reproducibility of experiments, code reuse, standards, hardware platform re-use and sharing, sufficient data to support claims of efficacy, clinical trials, and moving robots from the lab to the real world. As systems become increasingly intelligent and autonomous, it is also necessary to develop methods for measuring and evaluating adaptive technologies that change through interaction with the user.
Affordability of robotic technology must be addressed at several different levels. The hospital pays a significant capital cost to acquire a robot, maintenance costs are high, and the cost of developing robots is immense, given their complexity and the stringent performance requirements for medical applications. Policies are needed to address regulatory barriers, licensure and state by state certification, rules for proctoring and teaching with robots, and reimbursement via insurance companies.
Socially Assistive Robotics
Convalescence, rehabilitation, and management of life long cognitive, social, and physical disorders require ongoing behavioral therapy, consisting of physical and/or cognitive exercises that must be sustained at the appropriate frequency and with the appropriate correctness. Intensity of practice and self-efficacy have been shown to be the keys to recovery and to minimizing disability. Because of the fast growing demographics of many of the affected populations, the health care workforce needed to provide coaching and supervision for such behavioral therapy is already lacking and in recognized steady decline.
SAR (Socially Assistive Robotics) is a comparatively new field of robotics that focuses on developing robots aimed at addressing precisely this growing need. SAR develops systems capable of assisting users through social rather than physical interaction. The robot’s physical embodiment is at the heart of SAR’s assistive effectiveness, as it leverages the inherently human tendency to engage with lifelike social behavior. People readily ascribe intention, personality, and emotion to even the simplest robots, from LEGO toys to iRobot Roomba vacuum cleaners. SAR channels this engagement into the development of socially interactive robots capable of motivating, monitoring, encouraging, and sustaining user activities and improving human performance.
SAR has the potential to enhance the quality of life for large populations of users, including the elderly, individuals with cognitive impairments, those rehabilitating from stroke and other neuromotor disabilities, and children with socio-developmental disorders such as autism. Robots can help to improve the function of a wide variety of people, and can do so not just functionally but also socially, by building on the emotional connection between robot and human.
HRI (Human Robot Interaction) for SAR is a growing research area at the intersection of engineering, psychology, social science, health science, and cognitive science. An effective socially assistive robot must understand and interact with its environment, focus its attention and communication on the user, exhibit social behavior, sustain engagement with the user, and achieve specific assistive goals.
High Dexterity Robotic Manipulation
Device design and control are key to the operation of all medical and health robots, since they interact physically with their environment. One of the most important technical challenges is in the area of mechanisms. For instance, in surgical applications, the smaller a robot is, the less invasive the procedure is for the patient. And in most procedures, increased dexterity results in more efficient and accurate surgery. We can also consider the possibility of cellular scale surgery; a proof of concept has already been demonstrated in the laboratory. Another example is rehabilitation; current rehabilitation robots are large and relegated to the clinic. Similarly, human physical therapists have limited availability. For many patients, effective long term therapy clearly calls for longer and more frequent training sessions than are affordable or practical in the clinic.
Human scale wearable devices, or at least devices that can be easily carried home, would allow rehabilitative therapies to be applied in unprecedented ways. Finally, consider a dexterous prosthetic hand. Fully replicating the joints of a real hand using current mechanism, power source, and actuator designs would make the hand too heavy or too large for a human to use naturally. Small, dexterous mechanisms would make great strides toward more life-like prosthetic limbs.
Miniaturization is challenging in large part because current electromechanical actuators are relatively large. Biological analogs are far superior to engineered systems in terms of compactness, low impedance, energy efficiency, and high force output. These biological systems often combine actuation and mechanism into an integrated, inseparable system. Novel mechanism design will go hand in hand with actuator development. Every actuator and mechanism combination will need to be controlled for it to achieve its full potential, especially when dexterity is required. Models need to be developed in order to optimize control strategies; this may even motivate the design of mechanisms that are especially straightforward to model, as the sketch below suggests.
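To make the role of such models concrete, the sketch below pairs a deliberately simple second-order actuator model (inertia plus viscous damping) with a PD controller whose gains were chosen against that model. All parameter values are hypothetical, picked only to make the example run.

```python
# Minimal sketch: PD control of a modeled actuator (inertia + viscous
# damping), integrated with explicit Euler. All values are hypothetical.
import math

J, B = 0.01, 0.1         # inertia (kg*m^2) and damping (N*m*s/rad)
KP, KD = 4.0, 0.4        # PD gains tuned against the model above
DT = 0.001               # integration step (s)

theta, omega = 0.0, 0.0  # joint angle (rad) and velocity (rad/s)
target = math.pi / 4     # 45-degree step command

for _ in range(2000):    # simulate 2 seconds
    torque = KP * (target - theta) - KD * omega  # PD control law
    alpha = (torque - B * omega) / J             # model dynamics
    omega += alpha * DT
    theta += omega * DT

print(f"angle after 2 s: {theta:.3f} rad (target {target:.3f})")
```

A mechanism engineered to behave like this simple model is exactly the kind of "straightforward to model" design the paragraph above argues for.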
Quantitative Robotic Diagnosis and Assessment
Robots coupled to information systems can acquire data from patients in unprecedented ways. They can use sensors to assess the physiologic status of the patient; engage the patient in physical interaction in order to acquire external measures of health, such as strength; and interact with the patient socially to acquire behavioral data, such as eye gaze, gesture, joint attention, etc., more repeatably and objectively than a human observer could. The robot can be made aware of the history of a particular health condition and its treatment, and be informed by sensors of the interactions that occur between the patient and the physician or caregivers. Quantitative diagnosis and assessment require sensing the patient, applying stimuli to gauge responses, and the intelligence to use the acquired data for assessment and diagnosis. When a diagnosis or assessment is uncertain, the robot can be directed to acquire more appropriate data. The robot should be able to interact with the caregiver or physician, drawing on sophisticated domain knowledge, to help them make a diagnosis or assessment. As robots facilitate aging in place, e.g. in the home, automated assessment becomes more important as a means to alert a caregiver, who may not always be present, about potential health problems; a minimal sketch of such an automated alert appears below.
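As one minimal illustration of turning repeated robot-acquired measurements into such an alert, the sketch below fits a linear trend to a series of grip-strength readings and flags a sustained decline. The readings and the decline threshold are hypothetical example values, not clinical cutoffs.

```python
# Minimal sketch: flag a sustained decline in repeated, robot-acquired
# grip-strength readings. Data and threshold are hypothetical examples.
from statistics import linear_regression  # Python 3.10+

days = list(range(14))
grip_n = [182, 180, 181, 178, 176, 177, 173,
          172, 170, 171, 168, 166, 165, 163]  # daily readings (N)

slope, _intercept = linear_regression(days, grip_n)

DECLINE_THRESHOLD = -1.0  # N per day; hypothetical cutoff
if slope < DECLINE_THRESHOLD:
    print(f"ALERT caregiver: strength declining {slope:.2f} N/day")
else:
    print(f"No alert: trend is {slope:.2f} N/day")
```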
Many technological components related to assessment and diagnosis, such as micro electromechanical lab-on-a-chip sensors for chemical analysis and smart clothing that records heart rate and other physiologic phenomena, borrow from ideas in robotics or have been used by robots in assessment and diagnosis. Others, such as using intelligent socially assistive robots to quantify behavioral data, are entirely novel and present new ways of treating data that had, to date, been only qualitative. The myriad steps in assessment and diagnosis each need to be improved and then combined into a seamless process.
Intuitive Physical Human Robot Interaction and Interfaces
The use of robotics in medicine inherently involves physical interaction between patients, caregivers, and robots in all combinations. Developing intuitive physical interfaces between humans and robots requires all the classic elements of a robotic system: sensing, perception, and action. A great variety of sensing and perception tasks are required, including mapping the recorded forces and motions of a surgeon to his or her intent, determining the mechanical parameters of human tissue, and estimating the force between a rehabilitation robot and a moving stroke patient. The reciprocal nature of interaction means that the robot will also need to provide useful feedback to the human operator, whether that person is a patient or a caregiver. We need to consider systems that involve many human senses, the most common of which are vision, sound, and haptics (tactile and force).
A major reason why systems involving physical collaboration between humans and robots are so difficult to design well is that, from the perspective of a robot, humans are extremely uncertain. Unlike a passive, static environment, humans change their motion, strength, and immediate purpose on a regular basis. This can be as simple as physiologic movement, or as complex as the motions of a surgeon suturing during surgery. The human is an integral part of a closed loop feedback system during physical interaction with a robot, simultaneously exchanging energy and information with the robotic system, and cannot simply be thought of as an external input.
The loop is often closed with both visual feedback and human force, each with its own delays and errors; this can potentially cause instabilities in the human robot system, as the sketch below illustrates. There are several approaches to solving these problems, which can be used in parallel: modeling the human in as much detail as possible, sensing the human’s physical behavior in a very large number of dimensions, and developing robot behaviors that ensure appropriate interaction no matter what the human does.
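To see how feedback delay alone can destabilize an otherwise well-behaved loop, the toy simulation below applies the same proportional correction with and without a reaction delay. The gain and delay are hypothetical values chosen only to make the effect visible.

```python
# Toy sketch: identical proportional feedback with and without a reaction
# delay, showing delay-induced instability. Values are hypothetical.

def final_error(delay_steps: int, gain: float = 0.9, steps: int = 60) -> float:
    """Magnitude of the tracking error after `steps` feedback iterations."""
    errors = [1.0] * (delay_steps + 1)  # start from unit error
    for _ in range(steps):
        # The correction acts on a stale observation of the error.
        correction = gain * errors[-1 - delay_steps]
        errors.append(errors[-1] - correction)
    return abs(errors[-1])

print(f"no delay:     |error| = {final_error(0):.4f}")  # decays toward 0
print(f"3-step delay: |error| = {final_error(3):.1f}")  # grows without bound
```

The same gain that converges quickly on fresh measurements diverges once the observation is three steps stale, which is why the human's sensorimotor delays must be modeled or compensated.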
Adaptation to Robotic User’s Changing Needs
The need for system learning and adaptation is especially evident in human robot interaction domains. Each user has specific characteristics, preferences, and needs that can change over time as the user gets accustomed to the system and as the user’s health state changes, over the short, medium, and long term. To be accepted as usable and effective, robot systems interacting with human users must be able to adapt and learn in new contexts and over extended time scales, in a variety of environments.
Challenges in long term learning include the integration of multimodal information about the user over time, in light of consistencies and changes in behavior, and of unexpected experiences. Machine learning, including robot learning, has been adopting increasingly principled statistical methods. However, this work has not addressed the complexities of real world data about a user: noisy, inconsistent, incomplete, and multimodal, ranging from signal level information from tests, electrodes, probes, and wearable devices to long term data.
The ability to interact with the user through intuitive interfaces such as gestures, speech, and wands, and to learn from demonstration and imitation, have been topics of active research for some time. They present a novel challenge for in-home, long term interaction, where the system is subject to user learning and habituation, as well as to diminishing patience and novelty effects. Robotic learning systems have not yet been tested in truly long term studies, and life long learning is not yet more than a concept.
Finally, because learning systems are typically difficult to assess and analyze, it is particularly important that such personalized, adaptive technologies be equipped with intuitive tools for visualizing their system state as well as the health state of the user. Taking these challenges into account, an ideal adaptive, learning health care robot system would be able to predict changes in the health state of the user or patient and adjust the delivery of its services accordingly; a minimal sketch of such online adaptation follows.
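One of the simplest forms such adaptation can take is an exponentially weighted estimate of the user's recent performance that drives a service parameter, here an exercise difficulty level. The smoothing factor, thresholds, and session scores below are all hypothetical.

```python
# Minimal sketch of online adaptation: an exponentially weighted moving
# average (EWMA) of user performance drives exercise difficulty.
# Smoothing factor, thresholds, and scores are hypothetical.

ALPHA = 0.3        # weight on the newest observation
difficulty = 3     # current difficulty, levels 1..5
estimate = 0.5     # running estimate of the user's success rate

session_scores = [0.7, 0.85, 0.9, 0.9, 0.4, 0.3, 0.35]  # example data

for score in session_scores:
    estimate = ALPHA * score + (1 - ALPHA) * estimate  # EWMA update
    if estimate > 0.75 and difficulty < 5:
        difficulty += 1   # user succeeding: raise the challenge
    elif estimate < 0.55 and difficulty > 1:
        difficulty -= 1   # user struggling: ease off
    print(f"score={score:.2f}  estimate={estimate:.2f}  difficulty={difficulty}")
```

The running estimate is also exactly the kind of internal state the paragraph above argues should be made visible to caregivers.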
Robot Assisted Recovery and Rehabilitation
Patients suffering from diseases or neuromuscular injuries, such as the after-effects of stroke, often benefit from neuro-rehabilitation. This process exploits the use-dependent plasticity of the human neuromuscular system, in which use alters the properties of neurons and muscles, including the pattern of their connectivity and their function. Sensory motor therapy, in which the patient makes upper or lower extremity movements physically assisted by a human therapist or a robot, helps people re-learn how to move. This process is time consuming and labor intensive, but pays large dividends in terms of patient health, care cost, and return to labor productivity. As an alternative to human-only therapy, a robot has several key advantages for intervention:
• Once set up, the robot can provide consistent, lengthy, and personalized therapy without tiring.
• The robot can use its sensors to acquire data that objectively quantify recovery (see the sketch after this list).
• The robot can implement therapy exercises not possible for a human therapist.
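As a minimal illustration of the second point in the list above, movement smoothness is a widely used objective marker of motor recovery; the sketch below estimates it from a sampled reach trajectory via mean squared jerk. The sampling rate and trajectories are hypothetical.

```python
# Minimal sketch: quantify movement smoothness from a sampled reach
# trajectory via mean squared jerk (lower = smoother). Data are hypothetical.
import math

DT = 0.01  # sampling interval (s), assumed 100 Hz

def mean_squared_jerk(positions):
    """Finite-difference jerk (3rd derivative of position), mean squared."""
    vel  = [(b - a) / DT for a, b in zip(positions, positions[1:])]
    acc  = [(b - a) / DT for a, b in zip(vel, vel[1:])]
    jerk = [(b - a) / DT for a, b in zip(acc, acc[1:])]
    return sum(j * j for j in jerk) / len(jerk)

# A smooth reach from 0 to 1 m vs. the same reach with a small tremor.
t = [i * DT for i in range(101)]
smooth = [0.5 * (1 - math.cos(math.pi * x)) for x in t]
tremor = [s + 0.01 * math.sin(40 * math.pi * x) for s, x in zip(smooth, t)]

print(f"smooth reach: {mean_squared_jerk(smooth):.0f}")
print(f"tremor reach: {mean_squared_jerk(tremor):.0f}")  # much larger
```

Tracking such a metric across sessions gives the objective quantification of recovery the list above describes.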
There are already significant clinical results from the use of robots to retrain lower and upper limb movement abilities of individuals who have had a neurological injury such as a stroke. These rehabilitation robots provide many different forms of mechanical input, such as resisting, assisting, stretching, and perturbing, based on the subject’s real time response. For instance, the commercially available MIT-Manus rehabilitation robot has shown improved recovery in both chronic and acute stroke patients.
Another exciting implication of sensory motor therapy with robots is that they can help neuroscientists improve their general understanding of brain function. Using knowledge of robot-applied perturbations and the quantified responses of patients with damage to particular areas of the brain, robots can make unprecedented stimulus-response recordings. Understanding these relationships also gives neuroscientists and neurologists insight into brain function, which can contribute to basic research in those fields.
Robotic Surgery Makes Recovery Times Shorter
Robots have become routine in the manufacturing world and in other repetitive labor. While industrial robots were developed primarily to automate dull, dirty, and dangerous tasks, medical and health robots are designed for entirely different environments and tasks: those that involve direct interaction with human users, in the surgical theater, the rehabilitation center, and the family room.
Robotics is already beginning to affect healthcare. Telerobotic systems such as the da Vinci Surgical System are being used to perform surgery, resulting in shorter recovery times and more reliable outcomes in some procedures. Using robotics as part of a computer integrated surgery system enables accurate, targeted medical interventions. It has been hypothesized that interventional radiology and surgery will be transformed through computer integration and robotics in much the way that manufacturing was revolutionized by automation several decades ago. Haptic devices, a form of robot, are already used in simulations to train medical personnel.
Robotic systems such as MIT-Manus are successfully delivering physical and occupational therapy. Robots enable a greater treatment intensity that is continuously adaptable to a patient’s needs. They have already proven more effective than conventional approaches, especially in assisting recovery after stroke, the leading cause of permanent disability in the US. In the future, robotic systems could provide therapy oversight, motivation, and coaching that supplement human care with little or no supervision by a human therapist, and could continue long term therapy in the home after hospitalization.
Robotic technology also has a role in augmenting basic research into human health. Creating robotic systems that mimic biology is one way to study and test how the human body and brain function. Furthermore, robots can be used to acquire data from biological systems with unprecedented accuracy, enabling us to gain quantitative insights into both social and physical behaviors.