Many years of human factors research have shown that developing efficient, effective and usable interfaces requires including the user’s perspective throughout the entire design and development process. Too often, interfaces are developed late in that process with minimal user input. The result tends to be an interface that the actual users cannot employ to complete the required tasks, or a technology that users are unwilling to accept. Many of these issues also apply to HRI (Human-Robot Interface) development. Although Johnson’s book concentrates on graphical user interfaces, it lists the following principles:
• Consider function first, presentation later.
• Focus on the users and their tasks, not the technology.
• Conform to the users’ view of the task.
• Do not complicate the users’ task.
• Promote learning.
• Design for responsiveness.
• Deliver information, not just data.
• Try it out on users, then fix it.
Incorporating the user into the design process has for many years been termed UCD (User-Centered Design). In addition to work on user-centered design, human factors research has concentrated on complex man-machine systems. Such domains include cockpit design, air traffic control, chemical processing plants and nuclear power plants. Although these domains differ from robotics, they offer many theories and results related to vigilance, operator workload, situation awareness, and human error that can also be applied to HRI development.
As an example of the parallels between human-robot systems and the above-mentioned domains, consider air traffic control. Air traffic controllers monitor a particular airspace. While monitoring all the aircraft within that airspace, the controllers act in a supervisory role. One could consider the controller as an operator monitoring a large team of robots, with each aircraft as an individual robot.
Robot Prototyping Framework
Depending on the state of the robot development process, the cardboard prototyping framework consists of three kinds of parameters: CPs (Constant Parameters), APs (Additional Parameters), and VPs (Variable Parameters). CPs are factors that remain unchanged throughout the process, such as the styling guidelines, the styling concept, and the information format. CPs are the main factors linking each prototype into a systematic framework; thanks to them, robot engineers and designers can efficiently approach any type of robot platform during development. VPs are factors that change from earlier stages to the following stages; tools, materials and detailed information are included among the VPs, and prototyping fidelity increases as the VPs are exchanged. APs are new supporting factors introduced when the next stage starts, such as actuators for motion and other standard components for structuring; these factors are necessary for manufacturing and prototype construction.
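As a rough way to picture how CPs stay fixed across stages while VPs are swapped and APs are added, here is a small Python sketch. The field names and example values are assumptions made purely for illustration and are not part of the published framework.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative sketch of the three parameter groups (names and values assumed).
@dataclass
class PrototypeStage:
    constant: Dict[str, str]                              # CPs: fixed across all stages
    variable: Dict[str, str]                              # VPs: swapped stage to stage
    additional: List[str] = field(default_factory=list)   # APs: new supporting factors

cps = {"styling_guideline": "rounded, friendly", "information_format": "A3 sheet"}

concept_stage = PrototypeStage(
    constant=cps,
    variable={"material": "cardboard", "tool": "craft knife"},
)

kinematic_stage = PrototypeStage(
    constant=cps,                                          # the same CPs link the stages
    variable={"material": "corrugated board", "tool": "NC cutter"},
    additional=["servo actuator", "standard joint brackets"],  # APs appear at this stage
)

for name, stage in [("concept", concept_stage), ("kinematic", kinematic_stage)]:
    print(name, stage.variable, stage.additional)
```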
The three types of parameters are the working factors for operating the prototyping platform, using design parameters (DPs) to satisfy the mechanical functional requirements (FRs). Through the Robot Cardboard Prototyping System, the designer evolves a prototype from a concept to a final solution.
There are two cases of implementing kinematic cardboard prototyping. In the first, a high-fidelity styling prototype is suitable between the conceptual design stage and the hardware platform construction stage. The prototype was comparatively easy and fast to produce, and the designer can further speed up production by connecting numerical control (NC) equipment. The robot was built with a focus on appearance; its appearance design follows the mechanical drawings, and its component frames were designed with engineering bolting and jointing in mind, although gluing replaces bolting in this prototyping.
The physical prototypes were used to verify real volumes and sizes, interference during motion, and the arrangement of internal wiring and components. These factors are not easy to evaluate in a virtual model, whereas in this prototype it is easy to modify the forms: components can be cut and replaced easily, as in a full-sized clay model.
Constraints on the Development of Robot Prototyping
Recently, the general prototyping method for robot platform development has been a pre-designed kit consisting of component-format modules. The current trend in robot development is to build the physical prototype from the virtual prototype after computer simulation: developers can quickly build dynamic locomotion patterns using a digital mock-up. Robot prototypes therefore aim at materializing a verified concept, moving from virtual space to the real world. Developers readily use ready-made robots or pre-designed kits to save time and cost, and these savings pay for the development of new hardware components. Pre-designed kits, however, limit the developers’ creativity, which is a disadvantage for form development; when using a commercial tool it is hard to connect mechanisms, form design and creative structural design.
The robot prototyping environment consists of several subsystems, such as simulation, design, control, hardware selection, monitoring, part ordering, CAD/CAM modeling, physical assembly and testing. Robot prototyping has to satisfy engineering needs, so the design factors should be linked closely with the prototype. The general design process includes the styling process, while the engineering design process has a spiral evaluation structure consisting of design, build and test. The results of the engineering evaluation process accumulate into the next cycle, but the styling result is renewed at every stage as the engineering functional requirements change.
The concept of a styling framework is a prototyping system that connects the prototyping at each stage of robot design to produce the final model. Each prototyping link in the evolution system is combined with the manufacturing method and the material information.
Despite the advantages of cardboard prototyping, some improvements are needed for more effective robot design. Cardboard prototyping has weaknesses, starting with engineering testing in robot development: it lacks accuracy because the model is built by hand, and the material limits stiffness. Accuracy can be increased by carrying the kinematic cardboard prototype into the later stages of robot design.
Traditional Robot Prototyping
Form prototyping is common practice in the early stages of the mechanical engineering development process. It is quick, driven by the developers’ new ideas, and lets developers construct a full-scale form. It is an effective way to communicate ideas among developers; examples include doll armatures, the skeletal frames used for stop-motion animation models and characters.
Pre-supplied and pre-designed mechanical kits are used universally in robot prototyping, starting from a virtual concept model. These kits are also used in mechanical design development that incorporates hands-on experience. The de facto worldwide standard among robot mechanical kits is LEGO’s Mindstorms range. In the earlier stages of mechanical concept development, simpler mechanical design tools such as Hornby’s were used, and mechanical kit toys such as Meccano are appropriate. LEGO, K’NEX, Erector Set, fischertechnik, Robotix and Capsela have advantages in training robot developers in structure. In recent years a simple assembly kit driven by a notebook computer has been developed. These robot kits consist of functional modules, and by mixing the modules the robot’s functions can be extended.
Ready-made robots are used as service robots in artificial intelligence (AI) research and in software development. Some robot developers can fine-tune a ready-made robot to meet their goals. Robot kits are ideal for developers who are not interested in the construction aspects of robotics but instead want to concentrate on programming or electronics. These developers prefer OWIKITS and MOVITS, which are precision-made miniature robots in kit form.
A cardboard prototype body is an effective method for fast robot prototyping, though it does not produce a sturdy prototype. Its advantage is that parts can constantly be ripped out and replaced with slightly improved alternatives, and it is possible to make cardboard parts with sufficient compressive strength to carry structural loads.
Configuration and Control System of ASIMO
The robot’s size was chosen to allow it to operate freely in human living spaces and to make it people-friendly. When designing ASIMO, Honda engineers studied the reach of the robot’s hand and its squatting posture for accessing things such as light switches, doorknobs and electrical outlets in the daily living environment. The locations of the robot’s elbows and shoulders are dictated by the normal height of workbenches and desks. The current ASIMO is Honda’s first prototype robot capable of tackling both indoor and off-road environments.
ASIMO’s shoulders are positioned at a height of 910 mm, giving it an upper reach as high as 1290 mm. Its legs are 610 mm long, which allows it to go up and down stairs. ASIMO is 1200 mm tall in total, 450 mm wide and 440 mm deep, so it can pass easily through narrow corridors and doorways. ASIMO’s joint-link configuration is partly modified from the previous prototypes: the leg configuration is the same as P3’s, the shoulder angle has been extended to give the arms a wider operational space, and while the degrees of freedom in its wrists have been reduced, it is fitted with motor-driven five-finger hands.
To support ASIMO’s control and movement, around 20 CPUs and a large number of sensors are installed in the robot. Several of these are used in each of the robot’s subsystems, including audio-visual sensing and recognition, the actuation of its arms and legs, communication with the operator, and power management, and they are installed in various places around its body. The present ASIMO has greatly improved its interaction and communication with both its environment and its operators, and its walking functions allow it to walk smoothly, stably and flexibly.
Honda’s Robot Development from E4 to ASIMO
From 1991 to 1993, Honda’s research focused on completing the basic functions of two-legged walking and establishing technology for stable walking; the robots of this period were E4, E5 and E6. The final robot in the series, E6, was the last of Honda’s legs-only robots and benefited from the integration of all the autonomous walking functions developed thus far into one self-contained system. Environment maps were installed at this stage to aid its navigation.
In 1993, work began on developing a completely independent humanoid robot, and in December 1996 Honda announced the world’s first self-regulating, two-legged humanoid robot, P2 (Prototype 2). It was 182 cm tall and weighed 210 kg.
The torso contained a computer, 32 motors, a battery, a wireless radio and other necessary devices, all of which were built into the robot. In these early prototypes, walking up and down stairs, independent walking, trolley pushing and other simple operations were also achieved wirelessly, allowing the robot to operate independently. These robots also benefited from the introduction of a number of sensors, including fiber-optic gyros, ground reaction force sensors, inclination sensors and four cameras.
P2’s degrees of freedom were as follows: arms, 14 (7 x 2); legs, 12 (6 x 2); hands, 4 (2 x 2); and cameras, 4 (2 x 2). While P2 was a major advance in Honda’s robotics research, there were still many areas that needed improvement: P2 was far too heavy and large, it had an operational time of only 15 minutes, and its maintenance and reliability had to be improved.
P2 led to the development of P3 (Prototype 3). The major difference between P3 and the previous prototypes was its reduced size and weight: 160 cm tall and 130 kg. It also replaced the existing motors with brushless DC motors to improve reliability.
Building on the P2 and P3 experience, research began on new technology for practical use. ASIMO, which stands for Advanced Step in Innovative Mobility, represents the fruition of this pursuit and is the latest biped robot. ASIMO was conceived to function in real human living environments in the near future.
Evolution of Honda Humanoid Robot E0 to E3
Honda’s research into bipedal humanoid robots began in 1986. The first milestone was to develop a bipedal prototype that could walk statically in a straight line. It was from this early progress that the next key stage of development could be reached: a more stable and dynamic form of walking. Coupled with this was the need to master walking over uneven surfaces and then stairs. A torso and two arms were successfully added to complete the first truly humanoid robot in 1993.
The next step was to modify the robot so that it could operate and adapt in real-world environments. This development stage saw the robot’s structure and operating systems become lighter and smaller. It was also in this phase that communication aids were introduced, along with the early stages of intelligence, allowing the robot to recognize and interact with people.
Honda’s research continues and will bring further improvements, moving ASIMO closer to becoming a viable, real assistant for people in the human environment.
E0 was Honda’s first robot (E stands for Electronics), and with it Honda took on the challenge of creating a two-legged robot that could walk. Walking by placing one leg before the other was successfully achieved, assisted by the application of linear actuators to its joints. However, it took about 30 s between steps, so it walked very slowly in a straight line. To allow the robot to walk on slopes or uneven surfaces, faster walking speeds needed to be achieved.
In the next stages, from E1 through E3, human walking was thoroughly researched and analyzed, and through these studies a faster walking program was created and loaded into the robot. In the history of Honda’s robot development, E1 saw the introduction of a basic joint structure, E2 achieved the first dynamic walking and could also go up and down stairs, and E3 increased the walking speed to 4.7 km/h and could carry a payload of 70 kg.
The AAS Platform-Independent Robotic Model
A central objective of the Model-Driven Development (MDD) approach is to separate the design and architecture from concrete realizations. The developer should be able to design the application at an abstract level without being confronted with platform-specific implementation details. The conceptual design realizing the functional requirements is specified in the Platform Independent Model (PIM), which is transformed into one or more Platform Specific Models (PSMs) that provide the basis for the actual implementation.
In SPICA, the developer focuses on the overall robotic system architecture in terms of modules and the communication channels between them. The platform-independent model is called the AAS (Abstract Architecture Specification). A transformation process leads from the AAS to the final resulting source code in one or more programming languages.
The main AAS modeling entities are messages, protocols, and modules. Modules represent the building blocks of the architecture, messages are structured data exchanged between the modules, and protocols describe the transmission behavior.
The AAS is defined using three domain-specific modeling languages: the Message Description Language (MDL), the Protocol Description Language (PDL), and the Data Flow Description Language (DFDL). They are tailored to different aspects of the architecture and are used to cleanly separate the orthogonal concerns of data flow, protocol state transitions, and message structure.
The AAS can be considered the overall system model representing the layout of the target software architecture. Currently it is prepared in a text format, but the developers are working on a graphical composer incorporating UML 2.0.
The AASTra (AAS Transformer) is the central part of SPICA. It performs the model transformations and integrity checks from the AAS down to the concrete implementation. The AIR (AAS Intermediate Representation), a tree representation generated from the AAS model, forms the data pool for the final code transformation step and represents the PSM.
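To make the idea of modules, messages, and channels in a platform-independent model more concrete, here is a minimal, hypothetical sketch in Python. It is not SPICA’s actual MDL/PDL/DFDL syntax nor the AASTra tool; the class names and the toy code generator are assumptions used only to illustrate how an abstract architecture description could be transformed into platform-specific stub code.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical, simplified stand-ins for AAS-style modeling entities.
@dataclass
class Message:          # structured data exchanged between modules
    name: str
    fields: List[str]

@dataclass
class Module:           # an architectural building block
    name: str
    publishes: List[str] = field(default_factory=list)
    subscribes: List[str] = field(default_factory=list)

@dataclass
class Architecture:     # the platform-independent "overall model"
    messages: List[Message]
    modules: List[Module]

def generate_stub(arch: Architecture) -> str:
    """Toy 'transformation' from the abstract model to Python stub code."""
    lines = []
    for msg in arch.messages:
        lines.append(f"class {msg.name}:  # fields: {', '.join(msg.fields)}")
        lines.append("    pass")
    for mod in arch.modules:
        lines.append(f"class {mod.name}Module:")
        lines.append(f"    publishes = {mod.publishes!r}")
        lines.append(f"    subscribes = {mod.subscribes!r}")
    return "\n".join(lines)

if __name__ == "__main__":
    arch = Architecture(
        messages=[Message("BallPosition", ["x", "y", "confidence"])],
        modules=[Module("Vision", publishes=["BallPosition"]),
                 Module("Behavior", subscribes=["BallPosition"])],
    )
    print(generate_stub(arch))
```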
Central Control Hub of the Robot
The central control of the robot derives from a hub of three heterogeneous microprocessors that provide coordination between joints, integrate sensor information, and process the vision input. This hub also provides communication to the outside world through user interfaces and communication peripherals.
The primary component of the central controller is an iPAQ Pocket PC from Compaq. The iPAQ features a 208 MHz StrongARM processor, 32 MB of RAM and a 320 x 240 color screen. The screen is touch-sensitive, allowing stylus input of text and graphics. The iPAQ has 16 MB of flash ROM to store the operating system; the iPAQ in the GuRoo runs Windows CE. As well as the touch-screen interface, the iPAQ is equipped with a speaker and microphone, a joypad, and four push buttons, and it has an infrared interface for external communication.
The second component of the central hub is a TMS320F243 microcontroller that acts as an adapter and filter for the robot’s internal CAN network. The microcontroller communicates with the robot’s distributed control system through the CAN network, and with the iPAQ through the iPAQ’s USB serial communication port. The microcontroller also manages the power supply, providing centralized control of the robot’s power in the event of a system failure.
Bipedal Walking Robot
Research into bipedal walking robots can be split into two categories: active and passive. The passive, or unpowered, category is of interest because it illustrates that walking is fundamentally a dynamic problem. Passive walkers do not require actuators, sensors, or computers to make them move; they walk down gentle slopes, generating motion purely from their hardware geometry. Passive walkers also illustrate that walking can be performed with very little power input.
Active walkers can further be split into two categories: those that employ the natural dynamics of specialized actuators, and those that are fully power-operated. The former have been shown to achieve robust and stable performance from relatively simple control mechanisms.
The alternative approach is to drive the joints through pre-specified trajectories corresponding to a known “good” gait pattern. This approach is simple but lacks robustness to disturbances, and it becomes more complex when additional layers are added to adjust the gait for disturbances. Controlling a fully powered biped in a way that depends on the dynamic model is complicated by the complexity of the dynamic equations of the robot’s motion. One such robot moved a dynamic torso with significant mass through 2 DOF to keep the Zero Moment Point (ZMP) within the support polygon of the stance foot.
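As a point of reference for the ZMP criterion mentioned above, here is a small Python sketch that evaluates the classic point-mass ZMP formula and checks whether it falls inside a rectangular support region. This is the textbook multi-body approximation, not the controller of any particular robot; the masses, accelerations and foot dimensions are made-up illustration values.

```python
# Zero Moment Point (ZMP) for a set of point masses (sagittal x-axis only):
#   x_zmp = sum_i m_i * ((z_ddot_i + g) * x_i - x_ddot_i * z_i)
#           / sum_i m_i * (z_ddot_i + g)
# Illustrative values only; not taken from a real robot model.

G = 9.81  # gravitational acceleration, m/s^2

def zmp_x(masses, xs, zs, x_acc, z_acc):
    """Sagittal ZMP of a collection of point masses."""
    num = sum(m * ((az + G) * x - ax * z)
              for m, x, z, ax, az in zip(masses, xs, zs, x_acc, z_acc))
    den = sum(m * (az + G) for m, az in zip(masses, z_acc))
    return num / den

# Two-mass toy model: torso and swing leg (hypothetical numbers).
masses = [20.0, 5.0]          # kg
xs     = [0.02, 0.10]         # m, horizontal positions of the masses
zs     = [0.80, 0.40]         # m, heights of the masses
x_acc  = [0.3, 1.2]           # m/s^2, horizontal accelerations
z_acc  = [0.0, -0.5]          # m/s^2, vertical accelerations

x = zmp_x(masses, xs, zs, x_acc, z_acc)
foot_front, foot_back = 0.12, -0.08   # support polygon edges along x, m
print(f"ZMP x = {x:.3f} m, inside support polygon: {foot_back <= x <= foot_front}")
```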
Reasons to Build a Humanoid Robot
There are several reasons to build a robot with humanoid form. It has been argued that to build a machine with human-like intelligence, it must be embodied in a human-like body. Others argue that it is easier for humans to interact naturally with a robot if that robot has humanoid form. A third, and perhaps more concrete, reason for building a humanoid robot is to develop a machine that interacts naturally with human spaces. The architectural constraints on our working and living environments are based on the form and dimensions of the human body; consider the design of stairs, cupboards and chairs. A robot that lives and works with humans in an unmodified environment must have a form that can function with everyday objects, and the only form guaranteed to work in all cases is the humanoid form.
One example of a humanoid robot is the GuRoo project from the robotics laboratory at the University of Queensland. It is a 1.2 m tall humanoid robot capable of balancing, walking, turning, crouching, and standing from a prostrate position. The target mass for the robot is 30 kg, including on-board power and computation, and the robot will have active, monocular, color vision and on-board vision processing.
The intended challenge task for the robot is to play a game of soccer with or against human players or other humanoid robots. To complete this challenge, the robot must be able to move freely on its two legs, and it requires a vision sense that can detect the objects in a soccer game, such as the ball, the players from both teams, the goals and the field boundaries.
Scorpion 3D Robot Vision Features and Tools
Application areas for 3D machine vision are:
• 3D robot vision
• Volume measurements
• Automotive part measurements
The following parameters are calculated in such a system: the x, y, and z position with the corresponding angles, and the volume in terms of height, depth, and width.
Scorpion 3D can be used with all digital FireWire, GigE and USB cameras, and with the entire range of Sony Smart Cameras.
The most important features and tools are:
• Simple two-step 3D camera calibration using External Reference 3D – an accurate and easy-to-use 3D camera calibration. This tool is the basis for all 3D tools.
• Full 3D visualization and 3D types in Scorpion, including a 3D geometry method set.
• Measuring object size independently of its position.
• 3DMaMa – an extremely powerful tool for finding multiple objects in a 3D point cloud.
• ChangeReference3D – moves a 2D plane using the 3D camera calibration. In robot vision this removes the need for multiple plane calibrations.
• Locate3D – fast and accurate location of objects in space (x, y, z) using one, two, three or four cameras.
• ObjectPosition3D – easy location of unknown 3D objects by combining information from multiple cameras.
• MonoPose3D – locates an object in 3D with one camera.
• Retrofit3D – retrofits robot vision onto existing solutions without any hardware change; applies when the camera is mounted on the robot.
Applications of the Scorpion Robot
Some applications that Scorpion robot vision can handle are the following:
Pick and place system
One successful high-precision Scorpion robot system was the pick-and-place system for mounting Ericsson’s logo on a mobile telephone. The system was purchased by Mikron’s factory outside Oslo, Norway.
Robot vision inspection
At Electrolux Motor in Sarpsborg, Norway, Scorpion works together with a Rexroth Bosch SCARA robot. The system verifies the gap of a chainsaw sword (guide bar) with a precision of 0.01 mm; more than 400 measurements are performed on each sword, and each year 1.3 million swords are automatically inspected.
One cycle consists of the following operations: pick up a sword, present side one to Scorpion, present side two to Scorpion, then sort the swords: defective swords are placed on the left-hand side and accepted swords on the right-hand side of the robot.
Picking products from a pallet is easy
Scorpion is able to pick rings from a pallet. The green cross (in the accompanying image) indicates the center of the ring. The rings are randomly placed on the pallet, and the center of each ring is located to within a couple of millimeters. The solution is built with Scorpion Premium and a VGA FireWire camera.
Generating Facts with the Meld Robot Programming Language
Meld is a logic-based declarative programming language that operates on facts using a collection of production rules. A Meld program is a collection of rules for deriving, or proving, new facts by combining existing ones. Using a process called forward chaining, Meld starts with a set of base facts, checks them against the rules, and sees if any new facts can be generated. These are then added to the collection of facts and the process continues iteratively until all provable facts under the given system of rules and base facts are generated. This forward chaining and generation of facts constitutes the execution of a Meld program.
The Meld logic itself makes no presumptions about the meaning of facts, leaving this to the programmer. However, in practice it is useful to maintain certain conventions. Furthermore, to make the language useful in a robotics context, the generation of some facts can have side effects, permitting robots to move, perform actions, or otherwise affect the physical world.
Base facts reflect physical state, and facts with side effects correspond to the sensing and actuation primitives available on the system. Changes to base facts, due to actuation, for example, will trigger the generation of new facts as well as the deletion of old facts that can no longer be proved.
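To illustrate the forward-chaining execution model described above, here is a minimal sketch in Python rather than in Meld itself; the rule format and the toy "neighbor/connected" facts are assumptions chosen only to show how new facts are derived from base facts until a fixed point is reached.

```python
# Minimal forward-chaining sketch (illustrative only, not Meld syntax).
# A fact is a tuple like ("neighbor", "a", "b"); a rule maps existing
# facts to new facts. Derivation repeats until no new facts appear.

def forward_chain(base_facts, rules):
    facts = set(base_facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for new_fact in rule(facts):
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True
    return facts

# Rule 1: neighbor(X, Y) -> connected(X, Y)
def neighbors_are_connected(facts):
    return {("connected", x, y) for (p, x, y) in facts if p == "neighbor"}

# Rule 2: connected(X, Y) and connected(Y, Z) -> connected(X, Z)
def connectivity_is_transitive(facts):
    conn = {(x, y) for (p, x, y) in facts if p == "connected"}
    return {("connected", x, z) for (x, y) in conn for (y2, z) in conn if y == y2}

base = {("neighbor", "a", "b"), ("neighbor", "b", "c")}
derived = forward_chain(base, [neighbors_are_connected, connectivity_is_transitive])
print(sorted(derived))   # includes ("connected", "a", "c")
```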
The Advantages of a Robot Controlled by the Logix Platform
A few of the many advantages of having an entire manufacturing line, including the robot, controlled by the Logix platform:
• Streamlined design – fewer control components lower overall system costs and help optimize floor space for end users. Further, end users benefit from a more synchronized and efficient production line when machine builders leverage the ability to use common Rockwell Automation products throughout the line, such as I/O, servo drives, motors and safety products.
• Simplified programming – one software package is used to program and configure the robot and the rest of the control system. An extensive library of robotic application add-on instructions (AOIs) within the RSLogix software makes robot integration fast and easy.
• Enhanced performance – with all system control elements residing on the same hardware chassis and the same control architecture, faster communication and data manipulation are possible than with a system that employs multiple controllers. This highly synchronized and efficient production line helps companies manufacture with greater accuracy. Also, integrated control and the ability to use one common hardware and software architecture help improve machine flexibility, can reduce changeover time, and help end users more easily meet certifications and industry regulations concerning software validation and traceability.
Robot Integration: Dedicated or Multiple-Discipline Control?
Consider the complexity and potential process integration issues that can occur when using a dedicated, special purpose robot controller. A dedicated controller requires an extensive amount of peripheral technology and hardware to operate, including:
• Separate control cabinets consuming valuable floor space.
• Complex and costly network and/or discrete interface.
• Servo drives, motors and cables that are different from those used on the rest of the line.
• Separate programming and configuration software from that used for the line controller.
Interfacing the robot controller to the main line controller using a discrete or network interface can result in lower system performance, additional program complexity, and increased solution cost. Other issues include additional integration and training time and costs, extra spare parts, limited or inconsistent safety solutions, and a limited ability to select best-of-breed components.
Machine builders can leverage a more practical control solution for robotics by incorporating robotic control directly into the main system’s programmable automation controller (PAC). Rockwell Automation addressed the industry’s need for one common hardware and software architecture to support multiple control disciplines with the Rockwell Automation Integrated Architecture framework. The Allen-Bradley ControlLogix family of PACs and the Rockwell Software RSLogix 5000 programming software allow manufacturers to integrate simple, multi-axis robot control into the main Logix control platform.
Robot Programming through Touch
Many people consider robots to be sophisticated machines that are too complex for the layperson to learn to control. If robots are to be accepted in homes and offices, this has to change. People need to be able to interact with robots in a natural and familiar manner; they must not find them intimidating. Touch-based interactions are instinctive for humans and have an important role in learning, so it is important to study how humans can interact physically with robots.
This section describes a method for programming a robot through touch. It is aimed at allowing children to quickly and easily program exciting robot behavior, and it can also support collaborative programming of a robot, involving some users at a computer and others using the robot itself as an interface. The robotic platform used is the PaPeRo, developed by NEC.
PaPeRo has sensors embedded underneath its hard exterior at certain locations on the body, and these sensors detect the touch of a user’s hand. When a sensor is touched, the robot performs an action that has been mapped to that sensor. The user can string sequences of simple actions together to make the robot perform higher-level tasks, such as moving in a circle. This approach avoids involving the user in confusing low-level details. Moreover, the user can learn to program complex actions simply by playing with the robot; this method of programming does not require the user to learn the syntax of a programming language or even to use a computer.
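To make the idea of mapping touch sensors to actions and chaining them into sequences concrete, here is a minimal Python sketch. It is not the actual PaPeRo API; the sensor names, action names and the record/run loop are hypothetical, intended only to illustrate the programming-by-touch concept.

```python
# Hypothetical touch-to-action programming sketch (not the real PaPeRo API).

# Each touch sensor is mapped to one primitive action.
SENSOR_ACTIONS = {
    "head":       "turn_left",
    "left_hand":  "move_forward",
    "right_hand": "turn_right",
    "belly":      "beep",
}

def record_sequence(touch_events):
    """Translate a stream of touch events into a stored action sequence."""
    return [SENSOR_ACTIONS[s] for s in touch_events if s in SENSOR_ACTIONS]

def run_sequence(actions, execute):
    """Replay the recorded actions using a robot-specific execute() callback."""
    for action in actions:
        execute(action)

# Example: the user taps sensors to 'program' a rough circle, then replays it.
touches = ["left_hand", "head", "left_hand", "head", "left_hand", "head"]
program = record_sequence(touches)
run_sequence(program, execute=lambda a: print("robot does:", a))
```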
Robot Programming by Demonstration
This is the most common method of automatic programming. Programming by Demonstration (PbD) systems may use touch or teach pendants for the demonstration, or they may use other, more natural communication methods such as gestures and voice.
A traditional PbD system uses a teach pendant to demonstrate the movements the robot should perform; this technique has been used for industrial manipulators for many years. The demonstrator performs the task using the teach pendant, the position of the pendant is recorded, and the results are used to generate a robot program that will move the robot arm through the same motions. Alternatively, the demonstrator may move the robot arm through the required motions either physically or using a controller. Though simple, this type of system has been effective at rapidly creating assembly programs.
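The record-and-replay idea behind a traditional teach-pendant system can be sketched in a few lines of Python. This is a toy illustration under assumed names (record_waypoint, move_to), not any real manipulator API; a real system would also handle timing, interpolation and safety limits.

```python
# Toy record-and-replay sketch of teach-pendant programming (hypothetical API).

class TeachPendantRecorder:
    """Records joint-space waypoints during a demonstration."""
    def __init__(self):
        self.waypoints = []

    def record_waypoint(self, joint_angles):
        # Called whenever the demonstrator jogs the arm to a new pose.
        self.waypoints.append(tuple(joint_angles))

    def generate_program(self):
        # The 'program' is simply the ordered list of demonstrated poses.
        return list(self.waypoints)

def replay(program, move_to):
    """Replay a demonstrated program using a robot-specific move_to() callback."""
    for pose in program:
        move_to(pose)

# Demonstration: three poses jogged with the pendant, then replayed.
rec = TeachPendantRecorder()
rec.record_waypoint([0.0, -1.2, 0.8])
rec.record_waypoint([0.4, -1.0, 0.9])
rec.record_waypoint([0.8, -0.8, 1.0])
replay(rec.generate_program(), move_to=lambda p: print("move to", p))
```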
There are two current PbD research directions. The first is to produce better robot programs from the demonstrations. The second is to enhance demonstration through the use of multi-modal communications systems.
Significant work has been conducted in recent years to develop PbD systems that are able to take the information produced from a demonstration, such as sensor and joint data, and extract more useful information from it, particularly for industrial tasks. Traditional PbD systems simply record and play back a single demonstration with no variation to account for changes or errors in the world. Much current research aims to introduce some intelligence to PbD systems to allow for flexible task execution rather than pure imitation.
Robot Automatic Programming Systems
Automatic programming systems provide little or no direct control over the program code the robot will run. Instead, robot code is generated from information entered into the system in a variety of indirect ways. Often a robot system must be running while automatic “programming” is performed, and these systems have been referred to as “online” programming systems. However, automatic programming may also be performed on simulated or virtual robots, for example in industrial robotic CAD systems. In this case the real robot is off-line but the virtual robot is online. For example, the IGRIP (2003) system provides full simulation capabilities for creating and verifying robot programs.
Automatic systems can be placed into three categories: learning systems, programming by demonstration (PbD) and instructive systems. Learning systems create a program by inductive inference from user-provided examples and from self-exploration by the robot. In the long run it will be crucial for a robot to improve its performance in these ways.
Examples include a hierarchy of neural networks developed for learning the motion of a human arm in 3D (Billard and Schaal, 2001), and a robot that can learn simple behaviours and chain these together to form larger behaviours. Smart and Kaelbling (2002) propose reinforcement learning for programming mobile robots: in the first phase the robot watches as the task is performed, and in the second phase the robot attempts to perform the task on its own.
Robot Behaviour-based Languages
Behaviour-based languages provide an alternative to procedural languages. They typically specify how the robot should react to different conditions, rather than providing a procedural description. A behavioural system is more likely to be used by a robot developer than by the end user; the developer would use it to define functionality that the end user would then use to perform tasks.
Functional Reactive Programming (FRP) is a good example of a behavioural programming paradigm. In FRP, both continuous values and discrete events can be used to trigger actions. Recently, there have been two notable language extensions based on a functional language. These systems allow the programmer to specify how the robot reacts using very little code compared with procedural languages; the descriptions are based on behaviours and events.
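To give a flavour of specifying reactions to continuous values and discrete events, here is a small reactive-style sketch in Python rather than in Haskell, Frob or the C++ system cited below; the sensor names, thresholds and behaviours are invented for illustration, and this is only a loose, imperative approximation of the FRP idea.

```python
# Loose, imperative approximation of behaviour/event-style robot control.
# 'Behaviours' are functions of the current sensor state; 'events' fire
# when a condition becomes true. Names and thresholds are illustrative.

def too_close(state):             # discrete event: obstacle closer than 0.3 m
    return state["front_range"] < 0.3

def cruise(state):                # continuous behaviour: drive forward
    return {"linear": 0.5, "angular": 0.0}

def avoid(state):                 # behaviour triggered by the too_close event
    return {"linear": 0.0, "angular": 1.0}

def step(state):
    """Pick the active behaviour for one control step."""
    return avoid(state) if too_close(state) else cruise(state)

# Simulated sensor readings over a few control steps.
for front in [1.0, 0.6, 0.25, 0.2, 0.8]:
    cmd = step({"front_range": front})
    print(f"front={front:.2f} m -> command {cmd}")
```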
FRP is not limited to languages such as Haskell; Dai et al. (2002) have implemented an FRP system in C++. It provides functionality similar to Frob, but also allows the use of existing C++ code. One obvious trend is the move away from simple, command-based languages towards higher-level languages that provide more support to the user, illustrated by the increasing popularity of behavioural languages. With more intelligent programming systems, the programmer does less work to achieve the same results, increasing productivity.
Robot Generic Procedural Languages
Generic languages provide an alternative to controller-specific languages for programming robots. “Generic” here means a high-level, multi-purpose language, for example C++, that has been extended in some way to provide robot-specific functionality. This is particularly common in research environments, where generic languages are extended to meet the needs of a research project. The choice of base language varies, depending on what the researchers are trying to achieve. A language developed in this way may be aimed at system programming or at application-level programming.
The most common extension to a multi-purpose language is a robot abstraction: a set of classes, methods, or similar constructs that provides access to common robot functions in a simple way. Abstractions remove the need to handle low-level details such as setting an output port high to turn on a motor or translating raw sensor data, and they may also provide higher-level operations, such as a method that makes the robot move to a point using path planning. It is now common for manufacturers of research robots to provide such a system with their robots.
To improve this situation, many researchers have developed their own robot abstraction systems. Player/Stage is a commonly used robot programming system that provides drivers for many robots and abstractions for controlling them. Other systems use Java classes to provide common abstractions and programming interfaces, so that the abstractions are not limited to one robot architecture.
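A minimal sketch of what such an abstraction layer might look like is given below (hypothetical class and method names, with a fake driver standing in for real hardware; this is not the Player/Stage API): the user calls high-level methods such as move_to, while the class hides port writes and raw-sensor conversion.

import math

class FakeDriver:
    """Stand-in for a vendor backend so the sketch runs without hardware."""
    def __init__(self):
        self.x = self.y = self.heading = 0.0
    def write_motor_ports(self, linear, angular, dt=0.1):
        self.heading += angular * dt
        self.x += linear * dt * math.cos(self.heading)
        self.y += linear * dt * math.sin(self.heading)
    def read_raw_sonar(self):
        return 1500                                      # raw sensor counts
    def read_odometry(self):
        return self.x, self.y, self.heading

class RobotAbstraction:
    """High-level calls hide port writes and raw sensor conversion."""
    def __init__(self, driver):
        self.driver = driver
        self.x = self.y = self.heading = 0.0
    def set_speed(self, linear, angular):
        self.driver.write_motor_ports(linear, angular)   # low-level detail hidden
    def read_range(self):
        return self.driver.read_raw_sonar() * 0.001      # counts -> metres
    def move_to(self, gx, gy, tolerance=0.05, max_steps=500):
        # Higher-level operation: steer towards the goal until close enough.
        for _ in range(max_steps):
            if math.hypot(gx - self.x, gy - self.y) <= tolerance:
                break
            bearing = math.atan2(gy - self.y, gx - self.x) - self.heading
            self.set_speed(0.2, 2.0 * bearing)
            self.x, self.y, self.heading = self.driver.read_odometry()

robot = RobotAbstraction(FakeDriver())
robot.move_to(1.0, 0.5)
print(robot.read_range(), robot.x, robot.y)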
The Roomba and Simpler Robots
Robots are complex machines, and significant technical knowledge and skill are needed to control them. Simpler robots do exist, for example the Roomba vacuuming robot from iRobot, but such robots are designed for a single application and their control methods reflect that simplicity. The Roomba's control panel allows a user to select a room size and start the vacuuming process with a single button push.
However, most robots do not have simple interfaces and are not targeted at a single, simple function such as vacuuming floors. Most have complex interfaces, usually involving a text-based programming language with few high-level abstractions. While the average user will not want to program their robot at a low level, a system is needed that still provides the required level of user control over the robot's tasks.
Robots are becoming more powerful, with more sensors, more intelligence, and cheaper components. As a result they are moving out of controlled industrial environments and into homes, hospitals, and workplaces, where they perform tasks ranging from delivery services to entertainment. It is this increased exposure of robots to unskilled people that requires robots to become easier to program and manage.
Robot Controller-Specific Languages
Controller-specific languages were the original method of controlling industrial robots, and they are still the most common method today. Every robot controller has some form of machine language, and there is usually a programming language to go with it that can be used to create programs for that robot. These programming languages are usually very simple, with a BASIC-like syntax and simple commands for controlling the robot and the program flow. A good example is the language provided by KUKA for its industrial robots. Programs written in this language can be run on a suitable KUKA robot or tested in the simulation system provided by KUKA.
Despite having existed for as long as industrial robots have been in use, controller-specific languages have seen only minor advances. In one case, Freund and Luedemann-Ravit (2002) have created a system that allows industrial robot programs to be generalised around some aspect of a task, with a customised version of the program generated as necessary before being downloaded into the robot controller. The system uses a "generation plan" to provide the basic program for a task. For example, a task to cut shaped pieces of metal could be customised by the shape of the final result. While such a system can help reduce the time needed to produce programs for related products, it does not reduce the initial time to develop the robot program.
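The flavour of such a system can be sketched as follows (a hypothetical Python example with a made-up BASIC-like command set, not Freund and Luedemann-Ravit's actual generation plan): a template program is customised by task parameters, here the outline of the part to be cut, and the generated program text is what would be downloaded to the controller.

# Hypothetical "generation plan": a parameterised template that emits a
# controller-specific cutting program for a given part outline.
HEADER = "DEF cut_part()\n  TOOL_ON\n"
FOOTER = "  TOOL_OFF\nEND\n"

def generate_cut_program(outline):
    """Return program text that traces the given (x, y) outline."""
    lines = [HEADER]
    for x, y in outline:
        lines.append(f"  MOVE_LIN X={x:.1f} Y={y:.1f}\n")
    lines.append(FOOTER)
    return "".join(lines)

# Two related products need only different parameters, not new hand-written programs.
square = [(0, 0), (100, 0), (100, 100), (0, 100), (0, 0)]
triangle = [(0, 0), (120, 0), (60, 90), (0, 0)]
print(generate_cut_program(square))
print(generate_cut_program(triangle))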
Robot Manual Programming Systems
Users of a manual programming system create the robot program by hand, typically without the robot present; the finished program is loaded into the robot afterwards. These are usually off-line programming systems. It is conceivable for a manual system to control a robot online, for example through an interpreted language, where there are no safety concerns.
Manual programming systems can be divided into text-based and graphical (also known as icon-based) systems. Graphical programming is still not considered automatic programming, because the user must construct the program by hand before running it on the robotic system; there is a direct correspondence between the graphical icons and the program statements.
A text-based system uses a traditional programming language approach and is one of the most common methods, particularly in industrial environments, where it is often used in conjunction with Programming by Demonstration. Text-based systems can be distinguished by the type of language used, that is, by the kind of programming the user performs.
A manual programming system may use a text-based or graphical interface for entering the program. Text-based systems include controller-specific languages, generic procedural languages, and behaviour-based languages; graphical systems include graph, flowchart, and diagrammatic systems.
Mixed Societies of Robots and Animals
Interaction between robots and animals in mixed societies is a major challenge and a genuinely new research field. This kind of research is the basis for further work that can be applied in agriculture and, perhaps one day, to better interaction with the most sophisticated animal of all: the human.
The first challenge is building very small robots that are compatible with animals. The second is studying perception and sensors for bio-interaction. Finally, the behavioral aspects are very important for collective robotics. The exact goals are:
• Behavioral model. To propose a formal behavioral model that applies to mixed societies and to study its properties. The behavior will be formalized in a programming language (see the sketch after this list).
• Interpretation and the real world: mixed societies. To validate the behavioral model, i.e. to show that it gives an understanding of the computational capabilities of animal societies.
• Controlling the global behavior of the society. To control mixed societies.
• Towards a general methodology. To provide a general methodology for the study and control of mixed societies, answering questions such as “are there typical configuration patterns that support an a priori behavioral organization of mixed societies?”
• Relevance of the results to quality of life and management of living resources. Evidence of the relevance of the results to other configurations of mixed societies will be provided.
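As a purely illustrative sketch of what formalizing such a behavioral model in a programming language could look like (a made-up Python example, not the project's actual model), each agent, robot or animal, can be described by a small probabilistic state machine whose transition probabilities depend on how many of its neighbours share its state:

import random

# Illustrative behavioral model for a mixed society: each agent is either
# "resting" or "moving", and its chance of switching depends on how many of
# its neighbours are resting (a simple aggregation-style rule).
def transition(state, resting_neighbours, n_neighbours):
    frac = resting_neighbours / n_neighbours if n_neighbours else 0.0
    if state == "resting":
        return "moving" if random.random() < 0.1 * (1 - frac) else "resting"
    return "resting" if random.random() < 0.1 + 0.5 * frac else "moving"

# One simulation step for a small society where every agent sees every other.
def step(states):
    new = []
    for i, s in enumerate(states):
        others = states[:i] + states[i + 1:]
        new.append(transition(s, others.count("resting"), len(others)))
    return new

society = ["moving"] * 8 + ["resting"] * 2     # 8 "animals" and 2 seeded "robots"
for _ in range(50):
    society = step(society)
print(society.count("resting"), "of", len(society), "agents are now resting")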
Robots and Animals in Mixed Societies
Interaction between robots and animals in mixed societies is a major challenge and a genuinely new research field. For many years researchers around the world have developed robots that are mechanically inspired by animals, or robots that use biological actuators, but there are only a few robots that interact with animals and none that tries to be accepted in an animal society as another animal.
This kind of research forms the basis for further work that can be applied in agriculture and, perhaps one day, to better interaction with the most sophisticated animal of all: the human.
The exact goals are described below:
• Behavioral model. It will propose a formal behavioral model, which applies to mixed societies, and study its properties.
• Interpretation and the “real” world: mixed societies. It will provide a validation of the behavioral model, i.e. show that it gives an understanding of the computational capabilities of animal societies.
• Controlling the global behavior of the society. It will control mixed societies. We will show that it is actually feasible to change the global behavior of a mixed society, and a demonstration will be provided on a “real” mixed society.
• Toward a general methodology. It will provide a general methodology for the study and control of mixed societies.
• Relevance of the results to quality of life and management of living resources. Evidence of the relevance of the results to other configurations of mixed societies will be provided.
Robot Hardware for Remote Control Vehicles
Beobot is the product of the emerging power of open-source software as well as the entry into the market of consumer-grade robotic devices that previously existed only in industrial and scientific applications. For instance, servomotors are now widely used in remote control (RC) hobby vehicles. Given the nature of RC racing, these servos must be cheap, durable, and have ample torque for their size. Additionally, the motors used to run RC cars have become more powerful, allowing the construction of larger, lower-cost RC vehicles. The Beobot is based on such a vehicle. The Traxxas E-Maxx RC car was one of the largest electric RC cars on the market; it is a four-wheel-drive truck able to reach speeds over 35 MPH, and it is servo controlled. This provides an easy interface to computer control.
Additionally, it should be noted that many RC cars, including gasoline-powered ones, are also servo controlled. Thus, with some creative judgment, the base vehicle could take on many forms. At this point we should also note the limitations of using off-the-shelf servos.
The design of a robot is of course more than taking a computer and dropping it onto a drive train; the choice of computer is also important. The server computer market has produced an ideal form factor for our robot, called PICMG.
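To give a feel for how simple the computer-to-vehicle interface is, the sketch below (hypothetical channel numbers and helper names, not Beobot's actual control code) maps normalised steering and throttle commands onto the standard 1 to 2 millisecond pulse widths that hobby servos and electronic speed controllers expect.

# Hobby servos expect a pulse of roughly 1.0-2.0 ms repeated every 20 ms;
# 1.5 ms is centre.  Map normalised commands in [-1, 1] onto that range.
def command_to_pulse_us(command):
    command = max(-1.0, min(1.0, command))
    return int(1500 + 500 * command)              # pulse width in microseconds

def drive(steering, throttle):
    """Return (channel, pulse) pairs to send to a servo controller board."""
    return [(0, command_to_pulse_us(steering)),   # channel 0: steering servo
            (1, command_to_pulse_us(throttle))]   # channel 1: speed controller

print(drive(steering=-0.5, throttle=0.25))        # -> [(0, 1250), (1, 1625)]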
Miniature Water Strider Robot
Adapting highly efficient, multi-functional, and sub-optimal biological system working principles to synthetic technologies is one of the current challenges of engineering design. Biologically inspired systems and robots can enable us to understand nature in more depth, and also provide alternative means of developing smart and advanced novel robotic mechanisms.
Conventional macro-scale locomotion systems on water rely on the buoyancy force, which is proportional to the volume submerged under the water surface. However, when a floating object is scaled down towards millimetre size by a factor of 1/L, the buoyancy force decreases as 1/L^3, while surface forces such as the repulsive surface tension force decrease only as 1/L, so surface tension starts to dominate buoyancy. Water striders exploit this scaling effect to stay on and walk on water without breaking the water surface. This unique locomotion mechanism therefore has very little drag and enables highly manoeuvrable and fast motion.
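To make the scaling argument concrete, here is a back-of-the-envelope comparison in Python (illustrative round numbers for water, not measurements from the robot): buoyancy grows with L cubed while the surface-tension force grows only with L, so at the decimetre scale buoyancy dominates, but at the millimetre scale surface tension is already the larger force.

# Rough scaling comparison: buoyancy ~ rho * g * L^3, surface tension ~ sigma * L.
rho, g = 1000.0, 9.81          # water density (kg/m^3), gravity (m/s^2)
sigma = 0.072                  # surface tension of water (N/m)

for L in (0.1, 0.01, 0.001):   # characteristic body length: 10 cm, 1 cm, 1 mm
    buoyancy = rho * g * L**3  # ~ weight of water displaced by an L-sized body
    surface = sigma * L        # ~ force along a contact line of length ~ L
    print(f"L = {L*1000:6.1f} mm   buoyancy ~ {buoyancy:.2e} N   surface tension ~ {surface:.2e} N")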
Recently, the unique characteristics of the water strider have been studied and understood, including the super-hydrophobicity of its legs and its static and dynamic locomotion behaviours. These features suggest a new mechanism that will enable miniature robots to walk on water. Another advantage of using surface tension as the primary means of support and locomotion is the added mobility in, and accessibility to, shallow water, where boat-like designs are limited because the device must displace water beneath the surface in order to move.
Effective Robot Home Applications for Hobbyists
The evolution of robotics seems in many ways to mirror the evolution of the computer. Today robots can be found in many businesses and at practically every major research institution. However, the common robot envisioned by many prognosticators and authors, living in the homes and lives of the average person, has yet to be realized in the way the personal computer has become as ubiquitous as the refrigerator. We believe the next logical step in the evolution of robotics is to place robots in the hands of hobbyists, and these robots must be powerful and flexible enough to spur that next step. We have therefore designed a powerful, durable, yet relatively low-cost robot that relies almost exclusively on off-the-shelf parts.
While the use of off-the-shelf parts and an open-source design principle makes assembling the hardware easier, the software must also be easy to understand and effective enough that hobbyists can create robotic applications that are useful in the home. This is analogous to computer programmers writing simple programs to balance their checkbooks in the early days of home computing: while such applications may not have been efficient for their time, they laid the foundation for the development that led to the spreadsheet and the useful home applications that came later. We are therefore developing a comprehensive open-source toolkit based on biological principles, bringing powerful software to the end-user hobbyist to experiment with, in the hope of creating highly useful home and real-world applications.
Biological Designs for Motor Control
We would like to emulate animal design features in an autonomous robot. Significant physical design features include the energy source and its density; the sensors and their density; and the density, robustness, and flexibility of neuronal axons or wires. The computational design is inspiring because of its unrivalled flexibility, fault tolerance, and power to manage vast arrays of sensory information and novel tasks. Where the physical design of biological systems falls short of current technology is in the communication speed between computing elements: neural axons conduct their digital signals, or action potentials, at less than 120 meters per second, the maximum rate of action potentials on an axon is low (less than 500 Hz), and there are substantial delays. These biological shortcomings lead to long delays in any neuronal control loop.
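A back-of-the-envelope calculation, using round numbers consistent with the figures above rather than any measured values, shows how these limits translate into loop delays:

axon_speed = 100.0      # m/s, below the ~120 m/s upper bound quoted above
distance = 1.0          # m, roughly the length of a long limb-to-brain pathway
max_rate = 500.0        # Hz, upper bound on the action-potential rate

conduction_delay = distance / axon_speed       # one-way travel time: 10 ms
encoding_delay = 1.0 / max_rate                # minimum interval between spikes: 2 ms
round_trip = 2 * conduction_delay + encoding_delay

print(f"one-way conduction : {conduction_delay * 1e3:.1f} ms")
print(f"minimum spike gap  : {encoding_delay * 1e3:.1f} ms")
print(f"sense-act loop     : >= {round_trip * 1e3:.1f} ms before any neural processing")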
However, biological systems show that an enormously powerful, robust, and adaptive system can be constructed despite neurons' inherent delays. The human brain is unparalleled for flexible motion control, planning, and abstract cognition. The vast number of sensors, the number of neurons processing information in parallel, individual neurons' complex processing capabilities, and a highly evolved architecture all compensate for the delays. Together these assets form a highly distributed, adaptable, and robust system of computational elements for internal model-based prediction, control, and communication.
Robot Design Based on Biology and Neurobiology
Biological designs and neurobiological controls continue to inspire technological development. Biology provides working examples and conceptual proofs that push the engineering envelope. The human form and its augmentation are major sources of technological innovation, and engineering helps us interpret the biological adaptations we observe in nature. Many of our major historical advances, such as tools, telescopes, and writing, derive from technologies that improve action, augment perception, and provide cognitive aids to the individual. These advances have increased the speed, power, spatial range, appropriateness, and precision of human actions and perceptions.
During the last two decades, biologically inspired robotics has developed into a burgeoning field exploring ideas of artificial life and adaptive behaviors. It will not be long before robotic lobsters, cockroaches, flies, lampreys, and tuna enter the commercial market. The hope is that these efforts lead to the ultimate robot, able to mimic aspects of human action, perception, and cognition in remote or hazardous environments such as deep space or radiation spills.
These approaches reveal the inner workings of the most flexible and sophisticated motor controllers in existence. They also provide novel and important insights into biological organization that can be translated into engineering designs. The framework provides a common language for neuroscientists, engineers, and computer scientists to collaborate on the co-development of robotic, neuroprosthetic, and neural-network components.