Artificial Intelligence Pervasive Robotics

Pervasive robotics will need, in the near future, small, light, and cheap robots that exhibit complex behaviors. These requirements led to the development of the M2-M4 Macaco project, a robotic active vision head. Macaco is a portable system capable of emulating the heads of different creatures both functionally and aesthetically. It integrates mechanisms for autonomous navigation, social interaction, and object analysis. One AI approach is the development of robots whose embodiment and situatedness in the world evoke behaviors that obviate constant human supervision.

Implementation of Pervasive Robotics

Security is one possible operational scenario for this active head. For this class of applications, the Macaco robot was equipped with a behavioral system capable of searching for people or faces and then recognizing them. In addition, human gaze direction might reveal security threats, so a head-gaze detection algorithm was developed. Probable targets of such gazes are other people and, most importantly, explosives and/or guns. Therefore, salient objects situated in the world are processed for 3D information extraction and texture/color analysis. Work is also underway on object and scene recognition from contextual cues.
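The saliency-driven color analysis mentioned above can be illustrated with a toy sketch. This is not Macaco's actual algorithm (which is not specified here); it is a hypothetical simplification in which pixels whose color deviates strongly from the mean image color are flagged as salient.

```python
def saliency_map(pixels):
    """Toy color-contrast saliency: score each pixel by how far its
    color lies from the mean image color. `pixels` is a list of
    (r, g, b) tuples. A hypothetical simplification, not Macaco's
    actual texture/color analysis."""
    n = len(pixels)
    mean = [sum(p[c] for p in pixels) / n for c in range(3)]
    # Euclidean distance of each pixel's color from the mean color
    return [sum((p[c] - mean[c]) ** 2 for c in range(3)) ** 0.5 for p in pixels]

def salient_indices(pixels, thresh=0.5):
    """Return indices of pixels whose saliency exceeds a fraction
    of the maximum score (threshold choice is illustrative)."""
    scores = saliency_map(pixels)
    cutoff = thresh * max(scores)
    return [i for i, s in enumerate(scores) if s > cutoff]
```

For example, in a mostly gray scene, a single red patch stands out as the only salient region.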

EMARO - European Master in Advanced Robotics

The goal of this partnership application is to consolidate the European Consortium of the EMARO master course with Asian partners. The partnership runs for three years, and English is the working language at all partner institutions.

RTA Systems Engineering in Robotics

Many advances in robotics and autonomy depend on increased computational power. Advances in high-performance, low-power onboard computers for space are therefore central to more capable robots. Current efforts in this direction include exploiting high-performance field-programmable gate arrays (FPGAs) and multi-core processors, and enabling the use of commercial-grade computer components in space through shielding, hardware redundancy, and fault-tolerant software design.
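One classic hardware-redundancy technique alluded to above is triple modular redundancy (TMR), where three units compute the same result and a voter takes the majority. A minimal sketch (illustrative only; real flight systems vote in hardware or at much finer granularity):

```python
def tmr_vote(a, b, c):
    """Triple modular redundancy voter: return (majority value,
    fault_detected). Tolerates a single faulty unit; raises when
    all three disagree and no majority exists."""
    if a == b or a == c:
        # `a` agrees with at least one other unit, so it is the majority
        return a, (a != b or a != c)
    if b == c:
        # unit `a` is the odd one out
        return b, True
    raise RuntimeError("no majority: multiple units disagree")
```

A single radiation-induced bit flip in one unit is masked by the other two, while the fault flag lets the system schedule a scrub or reconfiguration.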

Further pushes in these or other directions to achieve greater in-space computing power are needed. Modular interfaces are needed to enable tool change-out for arms on rovers and for in-space robotic assembly and servicing. When robots and humans must work in close proximity, the robots' sensing, planning, and autonomous control systems, as well as the overall operational procedures for robots and humans, will have to be designed to ensure human safety around the robots. Developing modular robotic interfaces will also allow multiple robots to operate together. These interfaces can support structural, mechanical, electrical, data, fluid, pneumatic, and other interactions. Tools and end effectors can likewise be developed in a modular manner, allowing interchangeability and a reduced logistics footprint.
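In software terms, a modular tool interface means every end effector exposes the same contract so an arm can swap tools at runtime. The sketch below is hypothetical (the class and method names are illustrative, not from any NASA interface standard):

```python
from abc import ABC, abstractmethod

class ToolInterface(ABC):
    """Hypothetical common contract for interchangeable end effectors:
    every tool must support latching to the arm and taking commands."""
    @abstractmethod
    def latch(self): ...
    @abstractmethod
    def operate(self, command): ...

class Gripper(ToolInterface):
    def __init__(self):
        self.latched = False
    def latch(self):
        self.latched = True
    def operate(self, command):
        return f"gripper: {command}"

class Drill(ToolInterface):
    def __init__(self):
        self.latched = False
    def latch(self):
        self.latched = True
    def operate(self, command):
        return f"drill: {command}"

def change_out(old_tool, new_tool):
    """Swap tools through the common interface; the arm never needs
    tool-specific change-out logic."""
    new_tool.latch()
    return new_tool
```

Because the arm only depends on `ToolInterface`, adding a new tool requires no change to the arm's control code, which is the interchangeability and reduced logistics footprint the text describes.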

Modular interfaces will be the building block for modular self-replicating robots and self-assembling robotic systems. Reconfigurable system design offers the ability to reconfigure mechanical, electrical, and computing assets in response to system failures. Reconfigurable computing offers the ability to internally reconfigure in response to chip-level failures caused by the environment (e.g., space radiation), life limitations, or fabrication errors. System verification will be a new challenge for human-rated spacecraft bound for deep space. New V&V approaches and techniques will be required, and in-flight re-verification following a repair may be necessary.
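At the system level, reconfiguration after a failure amounts to remapping work from failed assets to healthy ones. A minimal sketch of such a reconfiguration step (the unit and task names are hypothetical):

```python
def reconfigure(assignment, healthy):
    """Remap tasks from failed units onto healthy ones, round-robin.
    `assignment` maps task -> unit; `healthy` is the set of live units.
    Tasks already on healthy units keep their placement."""
    if not healthy:
        raise RuntimeError("no healthy units remain")
    pool = sorted(healthy)
    new_assignment, i = {}, 0
    for task, unit in assignment.items():
        if unit in healthy:
            new_assignment[task] = unit
        else:
            # displaced task: assign to the next healthy unit in rotation
            new_assignment[task] = pool[i % len(pool)]
            i += 1
    return new_assignment
```

Real flight software would weigh task priority and unit load, but the core idea (detect failure, recompute the mapping, keep operating) is the same.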

Autonomous Rendezvous and Docking of Robotic Spacecraft

AR&D is a capability requiring many vehicle subsystems to operate in concert. It is important to clarify that AR&D is not a system and cannot be purchased off the shelf. This strategy focuses on development of a certified, standardized capability suite of subsystems enabling AR&D for different mission classes and needs. This suite will be incrementally developed, tested and integrated over a span of several missions. This technology roadmap focuses on four specific subsystems required for any AR&D mission.
1. Relative Navigation Sensors – During the course of RPOD (rendezvous, proximity operations, and docking), varying accuracies of bearing, range, and relative attitude are needed for AR&D. Current implementations of optical, laser, and RF systems are mid-TRL (Technology Readiness Level) and require further development and flight experience to gain reliability and operational confidence. Giving cooperating AR&D pairs the ability to communicate directly can greatly improve the responsiveness and robustness of the system.
2. Robust AR&D GN&C Real-Time Flight Software (FSW) – AR&D GN&C algorithms are maturing; however, implementing these algorithms in FSW is an enormous challenge. A best-practice-based implementation of automated/autonomous GN&C algorithms in real-time FSW operating systems needs to be developed and tested.
3. Docking/Capture – NASA is planning for the imminent construction of a new low-impact docking mechanism built to an international standard for human spaceflight missions to the ISS. A smaller common docking system for robotic spacecraft is also needed to enable robotic AR&D within the capture envelopes of these systems. Assembly of the large vehicles and stages used for beyond-LEO exploration missions will require new mechanisms with capture envelopes beyond those of any docking system currently in use or in development. Development and testing of autonomous robotic capture of non-cooperative target vehicles, in which the target lacks capture aids such as grapple fixtures or docking mechanisms, is needed to support satellite servicing/rescue.
4. Mission/System Managers – A scalable spacecraft software executive that can be tailored for various mission applications, for the whole vehicle, and for various levels of autonomy and automation is needed to ensure safety and operational confidence in AR&D software execution. Numerous spacecraft software executives have been developed, but the missing piece is an Agency-wide open standard, which would minimize the cost of such architectures, allow them to evolve over time, and help overcome general fears about autonomy/automation.
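As a concrete illustration of the sensor measurements in item 1, a single bearing-plus-range observation can be converted into a Cartesian relative position of the target in the chaser's frame. This is a minimal sketch; real RPOD navigation filters fuse many such measurements over time and estimate relative attitude as well.

```python
import math

def bearing_range_to_position(azimuth, elevation, range_m):
    """Convert one bearing/range measurement to a Cartesian relative
    position in the sensor (chaser) frame. Angles in radians, range
    in meters; the frame convention here is illustrative."""
    x = range_m * math.cos(elevation) * math.cos(azimuth)  # boresight
    y = range_m * math.cos(elevation) * math.sin(azimuth)  # lateral
    z = range_m * math.sin(elevation)                      # vertical
    return x, y, z
```

A target dead ahead at 100 m maps to (100, 0, 0); a target 90 degrees off in azimuth at 50 m maps to (0, 50, 0).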

Robotic Autonomous System

Autonomy, in the context of a system (robotic, spacecraft, or aircraft), is the capability of the system to operate independently of external control. For NASA missions there is a spectrum of autonomy, from basic automation (mechanistic execution of actions or responses to stimuli) through to fully autonomous systems able to act independently in dynamic and uncertain environments. Two application areas of autonomy are:
(i) increased use of autonomy to enable an independently acting system, and
(ii) automation as an augmentation of human operation. Autonomy's fundamental benefits are:
increased system operational capability, cost savings via increased human labor efficiency and reduced staffing needs, and increased mission assurance or robustness in uncertain environments.

An “autonomous system” is a system that resolves choices on its own. The goals the system is trying to accomplish are provided by another entity; thus, the system is autonomous from the entity on whose behalf the goals are being achieved. The decision-making processes may in fact be simple, but the choices are made locally. In a non-autonomous system, by contrast, the selections have already been made and encoded in some way, or will be made externally to the system. Key attributes of such autonomy for a robotic system include the ability to make complex decisions, including autonomous mission planning and execution; the ability to self-adapt as the environment in which the system operates changes; and the ability to understand system state and react accordingly.

Variable (or mixed initiative) autonomy refers to systems in which a user can specify the degree of autonomous control that the system is allowed to take on, and in which this degree of autonomy can be varied from essentially none to near or complete autonomy. For example, in a human-robot system with mixed initiative, the operator may switch levels of autonomy onboard the robot. Controlling levels of autonomy is tantamount to controlling bounds on the robot's authority, response, and operational capabilities.
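The idea of operator-selectable autonomy levels can be sketched in a few lines. The level names and arbitration rule below are hypothetical, chosen only to illustrate how switching levels bounds the robot's authority:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Illustrative autonomy levels for a mixed-initiative system."""
    TELEOPERATED = 0   # operator commands every motion
    SUPERVISED = 1     # robot proposes actions, operator can override
    AUTONOMOUS = 2     # robot plans and acts on its own

class MixedInitiativeController:
    def __init__(self, level=AutonomyLevel.TELEOPERATED):
        self.level = level

    def set_level(self, level):
        """Operator bounds the robot's authority by switching levels."""
        self.level = AutonomyLevel(level)

    def decide(self, operator_cmd, planner_cmd):
        """Arbitrate between the human's and the planner's commands
        according to the current autonomy level."""
        if self.level == AutonomyLevel.TELEOPERATED:
            return operator_cmd
        if self.level == AutonomyLevel.SUPERVISED:
            # planner proposes; an explicit operator command overrides
            return operator_cmd if operator_cmd is not None else planner_cmd
        return planner_cmd
```

At TELEOPERATED the planner is ignored entirely; at AUTONOMOUS the operator's input is; SUPERVISED sits between, which is exactly the sliding bound on authority the paragraph describes.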

Robonaut 2 Mission to ISS

During FY11 the Robonaut 2 system will be launched on STS-133 and delivered to the ISS in what will become the Permanent Multipurpose Module (PMM). Robonaut 2 (R2) is the latest in a series of dexterous robots built by NASA as technology demonstrations, now evolving from Earth-based to in-space experiments. The main objectives are to explore dexterous manipulation in zero gravity, test human-robot safety systems, test remote supervision techniques for operation across time delays, and experiment with ISS equipment to begin offloading the crew's housekeeping and other chores. R2 was built in partnership with General Motors, with a shared vision of a capable but safe robot working near people.

R2 features state-of-the-art tactile sensing and perception, as well as depth-map sensors, stereo vision, and force sensing. It will initially be deployed on a fixed pedestal with no mobility, but future upgrades are planned to allow it to climb and reposition itself at different worksites. Robonaut 2's dexterous manipulators are state of the art, with three levels of force sensing for safety, high strength-to-weight ratios, compliant and back-drivable drive trains, soft and smooth coverings, fine force and position control, dual-arm coordination, and kinematic redundancy.
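The combination of compliance and force limits for human safety can be illustrated with a toy control law: a spring-like position controller whose output force is hard-capped. This is not R2's actual controller; the gains and limits below are purely illustrative.

```python
def safe_force_command(x_desired, x_actual, stiffness=200.0, f_limit=20.0):
    """Compliant position control with a hard force cap: command a
    spring force toward the desired position, but never exceed the
    safety limit. Units and gain values are illustrative, not R2's."""
    f = stiffness * (x_desired - x_actual)   # virtual spring (N/m * m)
    # clamp to the safety envelope so contact forces stay human-safe
    return max(-f_limit, min(f_limit, f))
```

If a person blocks the arm, the position error grows but the commanded force saturates at the cap, so the limb yields instead of pushing through, which is the essence of a compliant, force-limited manipulator.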

Human interfaces for R2 include direct force interaction, in which humans can manually position the limbs, as well as trajectory-design software tools and script engines. R2 is designed to be directly teleoperated, remotely supervised, or run in an automated manner. The modular design can be upgraded over time to extend Robonaut's capabilities with new limbs, backpacks, sensors, and software.
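A script engine in this sense is a small executive that runs named task steps in sequence. The sketch below is hypothetical (R2's actual scripting interface is not described here); the step names are invented for illustration.

```python
class ScriptEngine:
    """Minimal task-script engine: named steps execute in order and
    a log records progress. A hypothetical sketch of the scripted
    operating mode, not R2's real software."""
    def __init__(self):
        self.steps = []
        self.log = []

    def add_step(self, name, action):
        """Register a step: a human-readable name plus a callable."""
        self.steps.append((name, action))

    def run(self):
        """Execute all steps in order, logging each by name."""
        for name, action in self.steps:
            self.log.append(name)
            action()
        return self.log
```

An operator (or a remote supervisor across a time delay) composes steps offline, then lets the engine execute them without per-motion commanding.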

The Robotic Refueling Dexterous Demonstration (R2D2) is a multifaceted payload designed to perform representative tasks required to robotically refuel a spacecraft. Once mounted to the International Space Station, the demonstration will use the R2D2 payload complement, the Special Purpose Dexterous Manipulator (SPDM) robotic arms, and four customized, interchangeable tools to simulate the tasks needed to refuel a spacecraft through its standard ground fill-and-drain valve.