Since its inception, Aldebaran Robotics has pursued its aim of making companion and personal assistant robots accessible to as many people as possible. Having developed a first generation of humanoid robot, Nao, the company has taken up a new challenge: developing a larger robot capable of integrating more effectively into a day-to-day environment, for example by opening a door or picking up objects placed on a table.
The Romeo robot is being developed in a number of phases.
The research on Romeo was initiated within the scope of an interministerial fund (FUI) project in January 2009. Accredited by the Cap Digital competitiveness cluster, the project was funded by DGCIS, the Ile de France Region and the Municipality of Paris. As a structural project for the French robotics sector, Romeo has brought together a dozen industrial and academic partners.
The Romeo project had four objectives covering a number of major aspects of robotics:
- An interactive, open and modular mechatronic and software platform
- Personal assistance, monitoring and human-machine interaction functions
- A robust research platform
- The foundations of an industrial robotics cluster
In four years, Romeo has moved on from an ambitious vision of service robotics to a 1.4 m robot known all over the world. The first models have been ordered by French and European laboratories. The pioneering dynamism of the French service robotics sector, generated by this FUI project, is already spreading to further national and European joint projects.
The Romeo 2 project was launched in November 2012. Supported by Bpifrance as a structuring project for competitiveness (PSPC) under the Future Investments programme, this 4-year project brings together 16 industrial and academic partners.
Romeo 2 builds on the foundations laid by the FUI project and focuses on areas that could not be covered then but which are nonetheless essential for the acceptability of a large humanoid robot in the homes of people with declining independence:
- IT and physical security
- The ability to learn the user's habits in order to better understand their needs and intentions
- Personal assistance applications
Romeo: from design to second prototype
During the first phase of the project (2009-2012), Romeo's physical platform was entirely assembled by Aldebaran. It went through two versions, as envisaged when the project was drafted. Between the two versions, the design of the spinal column changed, the definitive electronics were integrated into the head, the upper-body shells were remade in a stronger material, the electronic wiring of the legs was improved, the batteries were integrated, and so on.
However, these improvements were not deemed sufficient to reach a quality meeting the requirements of the laboratories that had ordered Romeo models. A third design cycle was therefore initiated after the end of the project, to finalise in particular the hands, the arms, and the internal architecture of the torso and head. It was also necessary to streamline the wiring and layout of the electronic boards. The cable-cylinder-based leg actuators devised by CEA LIST were optimised with Aldebaran to improve their reliability with a view to their use for walking; it was not possible to integrate them in the carbon exoskeleton during the project, but this has since been done. Aldebaran had not finished developing an operational actuation system for the spinal column, but LISV developed two designs for the column, one of which is actuated but had not been integrated in the complete prototype. The robot's wrist and hand, which were not among the project's priorities, were not operational on the prototype in late 2012 but have since been integrated.
Touch cap (CEA LIST)
In the robot's head, the CEA LIST touch cap had been integrated; an original eye actuation system, proposed by LPPA, was developed during the project and has since been optimised by Aldebaran. The moving eyes are coupled with a vestibular system intended to stabilise the robot's gaze, an essential capability for the more dynamic gait modes recommended by LPPA. The audio processing board devised by Aldebaran and Telecom Paris Tech to handle a 16-microphone array was not ready in late 2012, but has since been taken over by Telecom Paris Tech.
Eye actuation (LPPA, Aldebaran)
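Gaze stabilisation of this kind follows the logic of the vestibulo-ocular reflex: the eyes counter-rotate by the head rotation measured by the inertial (vestibular) sensor, so the gaze direction stays fixed. A minimal illustrative sketch, not the LPPA/Aldebaran implementation:

```python
def stabilise_gaze(head_angular_velocity, eye_angle, dt, gain=1.0):
    """Vestibulo-ocular-reflex-style compensation (illustrative sketch).

    Rotate the eye opposite to the head rotation measured over one control
    period, so that head angle + eye angle (the gaze direction) is constant.
    head_angular_velocity: rad/s from the vestibular (inertial) sensor.
    """
    return eye_angle - gain * head_angular_velocity * dt
```

With a unity gain, a head rotation of `head_angular_velocity * dt` in one period is exactly cancelled by the returned eye angle.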
The work on the auditory and vocal system, crucial for natural interaction with the robot, has nonetheless made considerable headway, with doctoral research conducted by Telecom Paris Tech on source separation and a first integration of the results on the microcontroller to be fitted in the audio processing board. For its part, LIMSI recorded and annotated a number of corpora used to develop the non-verbal recognition functions addressed by its two doctoral research projects: emotion detection and speaker recognition. These functions have been integrated into the architecture of Aldebaran robots in conjunction with Voxler. Acapela has improved its speech recognition function and designed the synthetic voice for Romeo. Voxler has developed a tool for modifying Romeo's voice and has equipped Nao with a musical game using its musical analysis components.
Corpus recording (LIMSI, Institut de la Vision)
CEA LIST and Aldebaran have worked on the robot's visual system to give it the ability to recognise objects and gestures and to navigate in its environment. On object instance recognition, i.e. recognising specific objects from pre-learned images, CEA LIST obtained performances ranking second in its field worldwide for precision and first for the speed/precision ratio; however, work remains to be done on reducing the computing time of this recognition. For object class recognition (recognising, say, a chair even though the learning base contains images of chairs different from the one to be recognised), CEA LIST has developed a method called Fast Shared Boosting which matches, or even surpasses, the best current performances in the world.
Object recognition (CEA LIST)
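Fast Shared Boosting itself is not detailed here; as an illustration of the boosting family it belongs to, the following is a minimal AdaBoost with decision stumps (the shared variant additionally shares weak learners across object classes, which is what makes it fast for multi-class recognition):

```python
# Minimal AdaBoost with decision stumps: an illustrative sketch of the
# boosting family, NOT the CEA LIST Fast Shared Boosting algorithm.
import math

def train_adaboost(X, y, n_rounds=10):
    """X: list of feature vectors; y: labels in {-1, +1}."""
    n, d = len(X), len(X[0])
    w = [1.0 / n] * n                       # sample weights
    ensemble = []                           # (alpha, feature, threshold, polarity)
    for _ in range(n_rounds):
        best = None
        # exhaustively search decision stumps: predict p if x[f] >= t else -p
        for f in range(d):
            for t in sorted(set(x[f] for x in X)):
                for p in (+1, -1):
                    err = sum(wi for wi, x, yi in zip(w, X, y)
                              if (p if x[f] >= t else -p) != yi)
                    if best is None or err < best[0]:
                        best = (err, f, t, p)
        err, f, t, p = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)   # weight of this stump
        ensemble.append((alpha, f, t, p))
        # re-weight: emphasise the samples this stump got wrong
        w = [wi * math.exp(-alpha * yi * (p if x[f] >= t else -p))
             for wi, x, yi in zip(w, X, y)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (p if x[f] >= t else -p) for a, f, t, p in ensemble)
    return 1 if score >= 0 else -1
```

The exhaustive stump search shown here is the naive version; production implementations sort features once and reuse structures across classes.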
For the perception of operator gestures, Aldebaran has focused on several 3D sensor technologies (IR pattern projection, time-of-flight, stereo vision) in order to conduct first gestural interaction trials, which confirmed the importance of this type of interaction. However, it was not possible to choose a 3D sensor technology by the end of the project: all those tested have advantages and drawbacks, and the final choice will require a delicate compromise between technical and industrial constraints. Likewise, for localisation and navigation, Aldebaran has explored several approaches (visual landmarks, laser) which achieved satisfactory results, but here again the definitive choice of technology has proven difficult and the experiments need to be extended before deciding on an industrial solution.
The partners working on the planning and sensory-motor control of Romeo's motion were unfortunately not able to work on the complete Romeo prototype, but they succeeded in developing their algorithms on realistic simulators of Romeo, on Nao, on other humanoid robots available in their labs, or on subassemblies of Romeo. LAAS has worked on generating stable full-body movements, enabling the robot to perform complex movements while keeping its balance. Building on LPPA research on the top-down control paradigm, LAAS has proposed an oculocentric control framework which should make it possible to obtain reactive gait modes of superior quality.
Romeo gait simulation (Aldebaran, LAAS, INRIA)
LAAS has also improved its motion planning algorithms to handle complex tasks such as opening a door. Research has also been conducted to draw further inspiration from human movements, in order to provide robots with "natural" gestures that make large humanoid robots easier to accept in our environment. INRIA has taken a further step towards gaze-guided walking: a reactive gait capable of absorbing, within one or more steps, a disturbance detected mid-gait. A new gait, making use of the flexibility of Romeo's toes and allowing vertical oscillation of the pelvis, has been developed. It enables the robot to walk not only more naturally but also at a quicker pace, thanks to larger steps, while limiting the speed and loads demanded of the motors. Furthermore, Aldebaran has used some of these principles to improve Nao's gait. LPPA has worked on a probabilistic balance model that handles the limited precision of the sensors on Nao (and on Romeo), in order to offer high resistance to abnormal situations such as external pushes.
Trajectory planning (LAAS)
Dynamic Romeo simulation (LAAS)
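Gait generators of this kind are commonly built on the linear inverted pendulum model (LIPM), which ties the centre of mass (CoM) to the zero-moment point (ZMP) under a constant-height assumption (an assumption that the vertical pelvic oscillation described above precisely relaxes). A minimal sketch of the standard model, not the LAAS/INRIA code:

```python
def lipm_com_trajectory(x0, v0, zmp, z_c=0.7, g=9.81, dt=0.01, T=0.5):
    """Integrate the linear inverted pendulum:  x'' = (g / z_c) * (x - zmp).

    x0, v0: initial CoM position and velocity (one horizontal axis);
    zmp: ZMP held fixed during the step; z_c: constant CoM height (the
    LIPM assumption). Returns the sampled CoM positions over T seconds.
    """
    omega2 = g / z_c
    x, v, traj = x0, v0, []
    t = 0.0
    while t < T:
        a = omega2 * (x - zmp)     # the CoM accelerates away from the ZMP
        v += a * dt
        x += v * dt
        traj.append(x)
        t += dt
    return traj
```

The instability of this model (any CoM offset from the ZMP grows exponentially) is exactly why the footstep placement and preview control studied by the partners are needed.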
This type of stabilisation will be crucial for a large robot such as Romeo. Finally, exploring the biomimetic field, LPPA has worked on a neural controller for biped locomotion using genetic algorithms. This controller has given promising results but requires further work before it can be integrated into a complete gait algorithm. For the partners' algorithms to transfer easily to Romeo, Aldebaran needs to provide a robot that behaves in the same way as the simulators used. This means offering satisfactory position servo control: the joints must reach the positions requested by the gait algorithms. The teams have therefore worked on a new joint control framework which obtains more precise joint movements by handling the robot's dynamics at a low level.
Neural Controller (LPPA, ISIR)
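Position servo control of the kind described is classically a per-joint feedback loop; a generic discrete PID sketch (illustrative only, not Aldebaran's joint controller, and without the dynamics compensation mentioned above):

```python
class PID:
    """Simple discrete PID position controller for one joint (illustrative)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, target, measured):
        err = target - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def simulate(pid, target=1.0, steps=1000):
    """Drive a joint modelled as a pure velocity integrator toward `target`."""
    pos = 0.0
    for _ in range(steps):
        pos += pid.step(target, pos) * pid.dt   # command interpreted as velocity
    return pos
```

The joint model here is deliberately trivial; the point of the partners' low-level dynamic handling is precisely that real joints (friction, inertia, cable elasticity) do not behave like this integrator.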
The work on behaviour, dialogue and emotions needed to draw on all the research results described above, so that the robot can adapt its actions to the context in which it finds itself. Spirops has implemented its decision-making tool in the Aldebaran robot development environment, offering the application developer simple means to describe why the robot decides to undertake one action rather than another according to the context. Voxler and Spirops have resumed the work on dialogue, left incomplete after the disappearance of As An Angel, and have developed a system specifically for "small talk". To feed this dialogue with content, Spirops has developed links with LinkedIn, Google Contacts and Google Calendar. Using these three applications, the robot can draw on information about its interlocutor's life, relationships and schedule to carry out discussions. Spirops has also developed an interlocutor knowledge management system that the robot can use to determine which emotion to express at a given time. LIMSI has worked on characterising the speaker, in emotional interaction with the robot, through vocal behaviour, in order to guide the robot's behaviour: the interlocutor's emotional profile is determined from the emotions detected in his or her voice, then implemented in the Spirops decision-making tool as rules governing how this profile influences the robot's choice of behaviour. Finally, Spirops and Aldebaran have integrated all these developments in the context of dialogue between the human and the robot. The integration remained partial, since the robot was not available to all the partners before the end of the project and was not tested with its end users; however, models on Nao were used to validate the feasibility of integrating the partners' developments on Romeo.
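A decision-making tool of this kind typically scores candidate behaviours against the current context and picks the highest-scoring one. The following utility-based sketch is purely illustrative: the behaviour names and rules are hypothetical, and this is not the Spirops engine:

```python
def select_behaviour(context, behaviours):
    """Pick the behaviour whose utility function scores highest here.

    `behaviours` maps a behaviour name to a utility function mapping a
    context dict to a float. Profile-derived rules (as in the LIMSI/Spirops
    work) would adjust these utilities per interlocutor.
    """
    scored = {name: utility(context) for name, utility in behaviours.items()}
    return max(scored, key=scored.get)

# Hypothetical rules mimicking an emotional-profile influence:
behaviours = {
    "comfort":   lambda c: 0.9 if c.get("emotion") == "sad" else 0.1,
    "chat":      lambda c: 0.6 if c.get("emotion") == "neutral" else 0.2,
    "celebrate": lambda c: 0.8 if c.get("emotion") == "happy" else 0.0,
}
```

Encoding the rules as scoring functions rather than hard if/else branches is what lets such a tool explain, and smoothly trade off, why one action was preferred over another.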
Institut de la Vision has tested, on Nao and with some of its patients, some of the basic applications developed. The results proved quite positive and aroused real interest among visually impaired subjects, for whom the robot would offer genuine assistance in their day-to-day lives. A list of recommendations for the actual implementation on Romeo has been drawn up.
Presentation of Nao to a visually impaired subject (Institut de la Vision)