Implementation of Pervasive Robotics

Security is one possible operational scenario for this active head. For this class of applications, the Macaco robot was equipped with a behavioral system capable of searching for people or faces and then recognizing them. In addition, a person's gaze direction may reveal security threats, so a head-gaze detection algorithm was developed. Likely targets of such gazes are other people and, most importantly, explosives and/or guns. Salient objects in the world are therefore processed for 3D information extraction and texture/color analysis. Work is also underway on object and scene recognition from contextual cues.
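To make the face-search behaviour concrete, here is a minimal sketch (not the Macaco code itself) of such a detection loop using OpenCV's Haar cascade detector; the camera index and the cascade file path are assumptions about a typical install, and each detected face would be handed to a separate recognition module.

# Hedged sketch: a simple search-for-faces loop, assuming OpenCV and a camera at index 0.
import cv2

# Haar cascade shipped with OpenCV (the path is an assumption about the local install).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect candidate faces; each hit would be passed on for recognition.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()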


Visual Pre-Attentive System
A log-polar attentional system was developed to select relevant information from the cameras' output and combine it into a saliency map. The map is then segmented into three regions of stimulus saliency, which constitute the attentional focus.
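A minimal sketch of this fusion step, assuming a weighted sum of normalized feature maps and a simple inhibition-of-return scheme for picking the three foci (the specific features, weights and patch size are assumptions, not details taken from the post):

# Hedged sketch: fuse normalized feature maps into a saliency map and pick the
# most salient regions; the choice of features and weights is an assumption.
import numpy as np

def normalize(m):
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def saliency_map(feature_maps, weights):
    # Weighted sum of normalized feature maps (e.g. intensity, color, motion).
    s = sum(w * normalize(m) for m, w in zip(feature_maps, weights))
    return normalize(s)

def attentional_focus(sal, n_regions=3, patch=32):
    # Greedily pick the n most salient points, suppressing each winner.
    sal = sal.copy()
    foci = []
    for _ in range(n_regions):
        y, x = np.unravel_index(np.argmax(sal), sal.shape)
        foci.append((y, x))
        # Inhibition of return: zero out a patch around the winner.
        sal[max(0, y - patch):y + patch, max(0, x - patch):x + patch] = 0
    return foci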

The eyes of the Macaco robot perform a set of basic movements found in frontal-eyed, foveate animals: ballistic movements (saccades) are executed without visual feedback; vergence on a target is maintained by a disparity signal computed from the stereo images; and the vestibulo-ocular reflex (VOR) stabilizes the cameras using data from the inertial sensor.
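These three behaviours can be illustrated as simple control laws; the gains and the sensor/motor interfaces below are assumptions for illustration, not the actual Macaco controller:

# Hedged sketch of the three oculomotor behaviours as simple control laws.
K_VERGENCE = 0.5   # proportional gain on stereo disparity (assumed value)
K_VOR      = 1.0   # an ideal VOR counter-rotates the eyes against head motion

def saccade(target_angle):
    # Ballistic move: command the full displacement with no visual feedback.
    return target_angle

def vergence_step(disparity):
    # Drive the vergence angle so that stereo disparity on the target goes to zero.
    return K_VERGENCE * disparity

def vor_step(head_velocity):
    # Counter-rotate the cameras using the inertial (gyro) head-velocity signal.
    return -K_VOR * head_velocity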

Post-Attentional Vision
The brain of the M2-M4 Macaco robotic head is a flexible, modular and highly interconnected architecture that integrates social interaction, object analysis and functional navigation modules.

Object Analysis: Texture and color segmentation algorithms run in parallel and are integrated with 3D object reconstruction to obtain a rendered object model.

Social Mechanisms: For the robot to play a convincing social role, the vision system is equipped with face detection and recognition modules, together with an algorithm for detecting human gaze direction.
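As an illustration of the object-analysis path, colour segmentation of an attended image patch could look like the sketch below; the use of k-means clustering in Lab space and the value of k are assumptions, since the post does not name the algorithm, and texture cues would be fused with the result in a similar way.

# Hedged sketch: colour segmentation of an attended patch by k-means in Lab space.
import cv2
import numpy as np

def color_segments(patch_bgr, k=4):
    lab = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2LAB)
    samples = lab.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(samples, k, None, criteria, 5,
                              cv2.KMEANS_PP_CENTERS)
    # Label image: one region index per pixel, ready to be combined with
    # texture cues and the 3D reconstruction of the same patch.
    return labels.reshape(patch_bgr.shape[:2])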

Navigation: Although 3D information is lost under low-visibility conditions, the platform can still operate thanks to a thermal camera. A navigation algorithm based on monocular cues runs at frame rate for night navigation.
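One way such a monocular cue could be computed at frame rate is by balancing dense optical flow between consecutive thermal frames; this flow-balance steering strategy is an assumption made for illustration, not the algorithm described in the post.

# Hedged sketch: steer away from the image half with larger optical flow,
# since closer obstacles produce larger flow in a monocular stream.
import cv2
import numpy as np

def steering_command(prev_gray, curr_gray, gain=1.0):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    half = mag.shape[1] // 2
    left, right = mag[:, :half].mean(), mag[:, half:].mean()
    # Positive output -> turn right (more flow on the left half of the image).
    return gain * (left - right)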

Architecture
The software architecture includes, besides the Visual Attention system, releasers from body sensors and motivational drives that modulate attentional gains. Action is determined by competing behaviors, which also share resources to achieve multi-behavior tasking.
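A minimal sketch of this arbitration idea, with drive-modulated releasers and winner-take-all selection among behaviors (the class names and structure are illustrative assumptions, not the Macaco implementation):

# Hedged sketch: motivational drives scale the gain of each behaviour's
# releaser, and the behaviour with the highest activation wins the actuators.
class Behavior:
    def __init__(self, name, releaser, drive_gain=1.0):
        self.name = name
        self.releaser = releaser        # callable: sensor data -> stimulus in [0, 1]
        self.drive_gain = drive_gain    # modulated by the motivational system

    def activation(self, sensors):
        return self.drive_gain * self.releaser(sensors)

def arbitrate(behaviors, sensors):
    # Winner-take-all selection among competing behaviours.
    return max(behaviors, key=lambda b: b.activation(sensors))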
