Table of Contents

  • 1. Introduction
  • 2. Aldebaran
    • 2.1. Workshop and conference proceedings
    • 2.2. Journals
  • 3. Armines
    • 3.1. Workshop and conference proceedings
  • 4. INRIA
    • 4.1. Workshop and conference proceedings
    • 4.2. Journals
  • 5. LIMSI
    • 5.1. Workshop and conference proceedings
    • 5.2. Journals
  • 6. LAAS
    • 6.1. Workshop and conference proceedings
    • 6.2. Journals
  • 7. ISIR
    • 7.1. Workshop and conference proceedings
  • 8. Participation in workshops and round tables
    • 8.1. Workshops
    • 8.2. Invitations, Presentations and Keynotes

1. Introduction

During the second period of the Romeo2 project, from month 16 (April 2014) to month 32 (June 2015), the partners produced 28 scientific publications: 23 in conferences and workshops and 5 in journals.

Conferences:

  • IEEE International Symposium on Robot and Human Interactive Communication (Ro-Man) 2015
  • International Symposium on Robotics Research (ISRR) 2015
  • International Conference on Social Robotics (ICSR) 2015
  • IEEE International Conference on Humanoid Robots (Humanoids) 2014
  • IEEE International Conference on Intelligent Robots and Systems (IROS) 2014, 2015
  • IEEE International Conference on Robotics and Automation (ICRA) 2015
  • International Conference on Advanced Robotics (ICAR) 2015
  • Genetic and Evolutionary Computation Conference (GECCO) 2015
  • International Conference on Computer Vision Theory and Applications (VISAPP) 2015

Journals:

  • International Journal of Humanoid Robotics
  • International Journal of Social Robotics
  • Journal of Lovotics

Aldebaran and its partners were also invited to present their Romeo2 work at various international workshops. This deliverable details the scientific dissemination of the project.

2. Aldebaran

2.1. Workshop and conference proceedings

Lafaye, J.; Gouaillier, D.; Wieber, P.-B., « Linear model predictive control of the locomotion of Pepper, a humanoid robot with omnidirectional wheels, » in Humanoid Robots (Humanoids), 2014 14th IEEE-RAS International Conference on , vol., no., pp.336-341, 18-20 Nov. 2014
The goal of this paper is to present a new real-time controller based on linear model predictive control for an omnidirectional wheeled humanoid robot. It is able to control both the mobile base of the robot and its body, while taking into account dynamical constraints. It makes it possible to execute high-velocity and high-acceleration motions by predicting the future dynamic behavior of the robot. Experimental results are presented on the robot Pepper made by Aldebaran, showing good performance in terms of robustness and tracking control, efficiently managing kinematic and dynamical constraints.
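As a rough illustration of the linear-MPC idea (this is not Aldebaran's controller; the model, horizon and weights below are invented for the sketch), the snippet tracks a reference base velocity for a 1-D double-integrator model of the mobile base. Without inequality constraints, the MPC reduces to a regularized least-squares problem over the prediction horizon:

```python
import numpy as np

# Toy 1-D double-integrator model of the mobile base:
# state x = [position, velocity], input u = acceleration.
dt, N = 0.1, 20
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])

# Condensed prediction matrices: X = Phi x0 + G U over the horizon.
Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
G = np.zeros((2 * N, N))
for k in range(N):
    for j in range(k + 1):
        G[2 * k:2 * k + 2, j:j + 1] = np.linalg.matrix_power(A, k - j) @ B

x0 = np.zeros(2)
S_v = G[1::2, :]                 # rows selecting the predicted velocities
v_pred0 = (Phi @ x0)[1::2]
v_ref = np.full(N, 0.5)          # desired base velocity [m/s]
w = 1e-4                         # control-effort regularization

# Unconstrained MPC = least squares:
#   min_U ||S_v U + v_pred0 - v_ref||^2 + w ||U||^2
H = S_v.T @ S_v + w * np.eye(N)
U = np.linalg.solve(H, S_v.T @ (v_ref - v_pred0))
X = Phi @ x0 + G @ U             # predicted state trajectory
```

A real controller would add the kinematic and dynamical constraints mentioned in the abstract, turning this least-squares problem into a quadratic program solved at every control cycle.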

Lafaye, J.; Collette, C.; Wieber, P.-B., « Model predictive control for tilt recovery of an omnidirectional wheeled humanoid robot, » in Robotics and Automation (ICRA), 2015 IEEE International Conference on , vol., no., pp.5134-5139, 26-30 May 2015
The goal of this paper is to present a real-time controller for an omnidirectional wheeled humanoid robot which can be strongly disturbed and tilt around its wheels. It is based on two linear model predictive controllers, managed by a tilt supervisor, which detects changes of the dynamic model caused by the tilt of the robot. Experimental results are presented on the robot Pepper made by Aldebaran, showing good performance in terms of stability and robustness.

Amit Kumar Pandey, Rodolphe Gelin, Rachid Alami, Renaud Viry, Axel Buendia, Roland Meertens, Mohamed Chetouani, Laurence Devillers, Marie Tahon, David Filliat, Yves Grenier, Mounira Maazaoui, Abderrahmane Kheddar, Frederic Lerasle, and Laurent Fitte Duval, “Romeo2 Project: Humanoid Robot Assistant and Companion for Everyday Life: I. Situation Assessment for Social Intelligence”, International workshop on Artificial Intelligence and Cognition, Torino, Italy (AIC 2014), pp. 141-147. http://ceur-ws.org/Vol-1315/paper12.pdf
For a socially intelligent robot, different levels of situation assessment are required, ranging from basic processing of sensor input to high-level analysis of semantics and intention. However, the attempt to combine them all prompts new research challenges and the need of a coherent framework and architecture. This paper presents the situation assessment aspect of Romeo2, a unique project aiming to bring multi-modal and multi-layered perception on a single system and targeting for a unified theoretical and functional framework for a robot companion for everyday life. It also discusses some of the innovation potentials, which the combination of these various perception abilities adds into the robot’s socio-cognitive capabilities.

Amit Kumar Pandey, Rodolphe Gelin, “Human Robot Interaction can Boost Robot’s Affordance Learning: A Proof of Concept”, 17th International Conference on Advanced Robotics, ICAR 2015, Istanbul, Turkey.
Affordance, being one of the key building blocks behind how we interact with the environment, is also studied widely in robotics from different perspectives, for navigation, for task planning, etc. The study has therefore mostly focused on affordances of individual objects and on robot-environment interaction, and such affordances have mostly been perceived through vision and physical interaction. However, in a human-centered environment, for a robot to be socially intelligent and exhibit more natural interaction behavior, it should also be able to learn affordances through day-to-day verbal interaction, including what the presence of a specific set of objects affords. In this paper, we present the novel idea of verbal-interaction-based multi-object affordance learning and a framework to achieve it. Further, an instantiation of the framework on the real robot within an office context is analyzed. Some potential future works and applications, such as fusing with activity patterns and interaction grounding, are briefly discussed.

2.2. Journals

Amit Kumar Pandey, “Lovotics, the Uncanny Valley and the Grand Challenges”, Journal of Lovotics, Volume 1(1), 2014.
Lovotics, the relatively new direction of robotics research, aims to bring love, affection and friendship between the human and the robot. In this paper, we discuss the key aspects and raise some basic questions which must be addressed for designing a ‘lovotics robot’, expected to be capable of stimulating a mutual love-like bond between the human and the robot. We must also be careful not to fall into the uncanny valley.

Amit Kumar Pandey, Rachid Alami and Kazuhiko Kawamura, “Developmental Social Robotics: an Applied Perspective”, International Journal of Social Robotics, Editorial of the Special Issue, pp. 417-420, Volume 7(4), Aug. 2015.
For robots to coexist with us in harmony and be our companion, they should be able to explicitly reason about humans, their presence, the social and human-centered environment, and the social-cultural norms, to behave in socially expected and accepted manner. To develop such capabilities, from psychology, child developmental and human behavioral research, we can identify some of the key ingredients, such as the abilities to distinguish between self and others, and to reason about affordance, perspective taking, shared spaces, social signals, emotions, theory of mind, social situation, etc., and the capability to develop social intelligence through the process of social learning. Researchers across the world are working to equip robots with some of these aspects from diverse perspectives and developing various interesting and innovative applications. This special issue is intended to reflect some of those high-quality research works, results and potential applications.
http://link.springer.com/article/10.1007%2Fs12369-015-0312-0#

3. Armines

3.1. Workshop and conference proceedings

Ferland, F., Cruz-Maya, A., Tapus, A., “Adapting an Hybrid Behavior-Based Architecture with Episodic Memory to Different Humanoid Robots”, Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication, Kobe, Japan, pp. 797-802, 2015
A common goal of robot control architecture designers is to create systems that are sufficiently generic to be adapted to different robot hardware. Beyond code reuse from a software engineering standpoint, having a common architecture could lead to long-term experiments spanning multiple robots and research groups. This paper presents a first step toward this goal with HBBA, a Hybrid Behavior-Based Architecture first developed on the IRL-1 humanoid robot and integrating an Adaptive Resonance Theory-based episodic memory (EM-ART). This paper presents the first step of the adaptation of this architecture to two different robots, a Meka M-1 and a NAO from Aldebaran, with a simple scenario involving learning and sharing objects’ information between both robots. The experiment shows that episodes recorded as sequences of people and objects presented to one robot can be recalled in the future on either robot, enabling event anticipation and sharing of past experiences.

4. INRIA

4.1. Workshop and conference proceedings

Sherikov, A.; Dimitrov, D.; Wieber, P.-B., « Whole body motion controller with long-term balance constraints, » in Humanoid Robots (Humanoids), 2014 14th IEEE-RAS International Conference on , vol., no., pp.444-450, 18-20 Nov. 2014
The standard approach to real-time control of humanoid robots relies on approximate models to produce a motion plan, which is then used to control the whole body. Separation of the planning stage from the controller makes it difficult to account for the whole body motion objectives and constraints in the plan. For this reason, we propose to omit the planning stage and introduce long-term balance constraints in the whole body controller to compensate for this omission. The new controller allows for generation of whole body walking motions, which are automatically decided based on both the whole body motion objectives and balance preservation constraints. The validity of the proposed approach is demonstrated in simulation in a case where the walking motion is driven by a desired wrist position. This approach is general enough to allow handling seamlessly various whole body motion objectives, such as desired head motions, obstacle avoidance for all parts of the robot, etc.

Cazy, N.; Dune, C.; Wieber, P.-B.; Robuffo Giordano, P. , Pose error correction for visual features prediction, Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conference on, 14-18 Sept. 2014, pp. 791 – 796
Predicting the behavior of visual features on the image plane over a future time horizon is an important possibility in many different control problems. For example when dealing with occlusions (or other constraints such as joint limits) in a classical visual servoing loop, or also in the more advanced model predictive control schemes recently proposed in the literature. Several possibilities have been proposed to perform the initial correction step for then propagating the visual features by exploiting the measurements currently available by the camera. But the predictions proposed so far are inaccurate in situations where the depths of the tracked points are not correctly estimated. We then propose in this paper a new correction strategy which tries to directly correct the relative pose between the camera and the target instead of only adjusting the error on the image plane. This correction is then analyzed and compared by evaluating the corresponding improvements in the feature prediction phase.
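The depth sensitivity that motivates the paper's pose-level correction can be reproduced with the classical point-feature interaction matrix. The sketch below is an illustrative toy under strong assumptions (a single normalized image point, pure forward camera translation at constant twist), not the correction scheme proposed in the paper:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classical interaction matrix of a normalized image point (x, y)
    at depth Z, relating feature velocity to the camera twist."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def predict(s0, Z0, v, dt, steps):
    """Propagate the feature under a constant camera twist v."""
    s, Z = np.array(s0, dtype=float), float(Z0)
    for _ in range(steps):
        s = s + dt * interaction_matrix(s[0], s[1], Z) @ v
        Z = Z - dt * v[2]      # pure forward motion reduces the depth
    return s

v = np.array([0.0, 0.0, 0.1, 0.0, 0.0, 0.0])     # camera moves forward
s_good = predict([0.1, 0.1], 1.0, v, 0.01, 100)  # correct depth estimate
s_bad = predict([0.1, 0.1], 0.5, v, 0.01, 100)   # depth under-estimated
drift = float(np.linalg.norm(s_bad - s_good))    # prediction bias due to Z
```

Even over this short horizon the wrong depth visibly biases the predicted feature position, which is exactly the failure mode the paper's camera/target pose correction is designed to fix.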

Cazy, N.; Wieber, P.-B.; Giordano, P.R.; Chaumette, F., « Visual servoing when visual information is missing: Experimental comparison of visual feature prediction schemes, » in Robotics and Automation (ICRA), 2015 IEEE International Conference on , vol., no., pp.6031-6036, 26-30 May 2015
One way to deal with occlusions or loss of tracking of the visual features used for visual servoing tasks is to predict the feature behavior in the image plane when the measurements are missing. Different prediction and correction methods have already been proposed in the literature. The purpose of this paper is to compare and experimentally validate some of these methods for eye-in-hand and eye-to-hand configurations. In particular, we show that a correction based both on the image and the camera/target pose provides the best results.

G. Claudio, F. Spindler, F. Chaumette, “Grasping by Romeo with visual servoing” Journées Nationales de la Robotique Humanoïde, JNRH, Nantes, June 2015.
The purpose of this work is to develop a visual servoing framework for the upper body of Romeo. The main application is a manipulation task using one of its arms. The robot has to detect and track with its gaze a box placed on a table in front of it, estimate the pose of the box with respect to one of its eyes’ cameras, bring its arm near the box and then move the arm using visual feedback so that it is able to grasp the box accurately. Once this is achieved, it detects a human and delivers the box.
Link: http://www.irisa.fr/lagadic/pdf/2015_jnrh_claudio.pdf

4.2. Journals

Escande A., Mansard N., Wieber P.-B., Hierarchical quadratic programming: Fast online humanoid-robot motion generation, The International Journal of Robotics Research, June 2014, vol. 33, no. 7, pp. 1006-1028
Hierarchical least-square optimization is often used in robotics to invert a direct function when multiple incompatible objectives are involved. Typical examples are inverse kinematics or dynamics. The objectives can be given as equalities to be satisfied (e.g. a point-to-point task) or as areas of satisfaction (e.g. the joint range). This paper proposes a complete solution for solving multiple least-square quadratic problems with both equality and inequality constraints ordered into a strict hierarchy. Our method is able to solve a hierarchy of only equalities 10 times faster than iterative-projection hierarchical solvers and can consider inequalities at any level while running at the typical control frequency on whole-body-size problems. This generic solver is used to resolve the redundancy of humanoid robots while generating complex movements in constrained environments.
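The notion of a strict hierarchy of least-squares objectives can be sketched, for equality levels only, with a simple null-space recursion. The paper's contribution is a much faster active-set solver that also handles inequality levels; this toy does not attempt that:

```python
import numpy as np

def lexicographic_lsq(levels):
    """Solve a strict hierarchy of least-squares levels (A, b): each
    level is optimized only within the null space left over by the
    levels above it, so it can never degrade a higher-priority task."""
    n = levels[0][0].shape[1]
    x = np.zeros(n)
    Z = np.eye(n)                       # basis of the remaining freedom
    for A, b in levels:
        AZ = A @ Z
        if AZ.size == 0:                # no freedom left for lower levels
            break
        z = np.linalg.lstsq(AZ, b - A @ x, rcond=None)[0]
        x = x + Z @ z
        # restrict the remaining freedom to the null space of this level
        _, s, Vt = np.linalg.svd(AZ)
        rank = int(np.sum(s > 1e-10))
        Z = Z @ Vt[rank:].T
    return x

# Level 1 (higher priority) imposes x0 + x1 = 2; level 2 asks x0 = 5.
# Level 2 is satisfied only with the freedom that level 1 leaves over.
x = lexicographic_lsq([(np.array([[1.0, 1.0]]), np.array([2.0])),
                       (np.array([[1.0, 0.0]]), np.array([5.0]))])
```

Here both levels happen to be compatible, so the solver reaches x = (5, -3): the level-1 constraint holds exactly while level 2 is resolved inside its null space.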

5. LIMSI

5.1. Workshop and conference proceedings

Laurence Devillers, Marie Tahon, Mohamed Sehili, Agnès Delaborde, Détection des états affectifs lors d’interactions parlées: robustesse des indices non verbaux, TAL 55-2, special issue on spoken language processing, 2015, http://www.atala.org/IMG/pdf/5.Devillers-TAL55-2.pdf
In a Human-Machine Interaction context, automatic in-voice affective state detection systems have to be robust to variabilities and computationally efficient. This paper presents the performance that can be reached using para-verbal (non-verbal) cues. We propose a methodology to select robust parameters families, based on the study of three sets of descriptors, and tested on three different corpora of spontaneous data collected in Human-Machine Interaction contexts. The key finding of our study puts forward perceptive parameters linked to spectral energy, particularly energy on Bark bands, which yield the same performance on a four-emotion detection task as the reference set of descriptors used in the Interspeech 2009 challenge.
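To make the Bark-band energy features concrete, here is a minimal sketch of computing per-band spectral energy for one audio frame. It is not the paper's exact extractor; the frame size, sampling rate and the Traunmüller Hertz-to-Bark approximation are illustrative choices:

```python
import numpy as np

def hz_to_bark(f):
    # Traunmüller's approximation of the Hertz-to-Bark scale.
    return 26.81 * f / (1960.0 + f) - 0.53

def bark_band_energies(frame, sr, n_bands=24):
    """Sum the power spectrum of one frame into critical (Bark) bands."""
    spec = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    bands = np.clip(np.floor(hz_to_bark(freqs)).astype(int), 0, n_bands - 1)
    energies = np.zeros(n_bands)
    np.add.at(energies, bands, spec)   # accumulate FFT-bin energy per band
    return energies

sr = 16000
t = np.arange(1024) / sr
frame = np.sin(2 * np.pi * 440.0 * t)  # pure 440 Hz tone
e = bark_band_energies(frame, sr)      # energy concentrates in band 4
```

A real front-end would window the signal, normalize the energies and stack them with the other descriptor families compared in the paper.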

Mohamed A. Sehili, Fan Yang and Laurence Devillers, Attention Detection in Elderly People – Robot Spoken Interactions, ICMI, Istanbul, Turkey, 2014.
http://dl.acm.org/citation.cfm?doid=2666499.2666502
In many human-robot social interactions, where the robot is to interact with only one human throughout the interaction, the “human” side of the conversation is very likely to interact with other humans present in the same room and temporarily lose focus on the main interaction. These human-human interactions can be a very brief chat or a fairly long discussion. To effectively build a human-robot spoken interaction system, one should enable the robot to be aware of the situations where it is (or is not) the addressee. In many works, gaze tracking and audio localization techniques are used to detect the attention of the subject. In this work we used a combination of voice analysis and head-turning detection to detect whether the subject is addressing the robot or whether their attention is captured by talking to another person. A subset of the ROMEO2 project corpus is used for the experiment. The corpus is made up of 9 hours of social interaction between 27 elderly people and a humanoid robot. This work is done in the context of the ROMEO2 project, whose goal is to develop a humanoid robot that can act as a comprehensive assistant for persons suffering from loss of autonomy.

Mohamed Sehili, Fan Yang, Violaine Leynaert, Laurence Devillers, A corpus of social interaction between Nao and elderly people, Workshop on Emotion, Social Signals, Sentiment & Linked Open Data (ES3LOD 2014), LREC 2014, May 2014.
http://www.researchgate.net/publication/272999352_A_corpus_of_social_interaction_between_Nao_and_elderly_people
This paper presents a corpus featuring social interaction between elderly people in a retirement home and the humanoid robot Nao. This data collection is part of the French project ROMEO2 that follows the ROMEO project. The goal of the project is to develop a humanoid robot that can act as a comprehensive assistant for persons suffering from loss of autonomy. In this perspective, the robot is able to assist a person in their daily tasks when they are alone. The aim of this study is to design an affective interactive system driven by interactional, emotional and personality markers. In this paper we present the data collection protocol and the interaction scenarios designed for this purpose. We will then describe the collected corpus (27 subjects, average age: 85) and discuss the results obtained from the analysis of two questionnaires (a satisfaction questionnaire, and the Big-five questionnaire).

Marie Tahon, Laurence Devillers, Laughter and other affect bursts and emotion detection for online human-robot interaction, Workshop on laughter, 2015. https://laughterworkshop2015.files.wordpress.com/2014/11/tahon_laughter_detection_for_on-line_human-robot_interaction.pdf
This paper presents a study of laugh classification using a cross-corpus protocol. It aims at the automatic detection of laughs in a real-time human-machine interaction. Positive and negative laughs are tested with different classification tasks and different acoustic feature sets. F-measure results show an improvement on positive laugh classification from 59.5% to 64.5% and on negative laugh recognition from 10.3% to 28.5%. In the context of the Chist-Era JOKER project, positive and negative laugh detection drives the policies of the robot Nao. A measure of engagement will be provided, also using the number of positive laughs detected during the interaction.

5.2. Journals

Laurence Devillers, Marie Tahon, Mohamed Sehili, Agnès Delaborde, Inference of Human Beings’ Emotional States from Speech in Human-Robot Interactions, International Journal of Social Robotics, Special Issue Dev, 2015, http://link.springer.com/article/10.1007/s12369-015-0297-8?no-access=true
The challenge of this study is twofold: recognizing emotions from audio signals in a naturalistic Human–Robot Interaction (HRI) environment, and using cross-dataset recognition for robustness evaluation. The originality of this work lies in the use of six emotional models in parallel, generated using two training corpora and three acoustic feature sets. The models are obtained from two databases collected in different tasks, and a third independent real-life HRI corpus (collected within the ROMEO project—http://www.projetromeo.com/) is used for test. As primary results, for the task of four-emotion recognition, and by combining the probabilistic outputs of six different systems in a very simplistic way, we obtained better results compared to the best baseline system.
Moreover, to investigate the potential of fusing many systems’ outputs using a “perfect” fusion method, we calculate the oracle performance (the oracle considers a prediction correct if at least one of the systems outputs a correct prediction). The obtained oracle score is 73% while the auto-coherence score on the same corpus (i.e. the performance obtained by using the same data for training and for testing) is about 57%. We experiment with a reliability estimation protocol that makes use of the outputs of many systems. Such a reliability measurement of an emotion recognition system’s decision could help to construct a relevant emotional and interactional user profile, which could be used to drive the expressive behavior of the robot.

6. LAAS

6.1. Workshop and conference proceedings

M. Naveau, J. Carpentier, S. Barthelemy, O. Stasse and P. Soueres, METAPOD – Template META-Programming applied to Dynamics: COP-COM trajectory filtering, IEEE Int. Conference on Humanoid Robots, Madrid, Spain, November 2014.
In this contribution, Metapod, a novel C++ library that efficiently computes dynamics algorithms, is presented. It uses template-programming techniques together with code generation. The achieved performance shows some advantage over the state-of-the-art dynamics library RBDL, mostly on ATOM processors and for the inertia matrix computation, which are relevant for robotics applications. On recent desktop computers the gain is less clear-cut, and in general the times achieved by both libraries are not significantly different for inverse dynamics. The advantage of this library is that it is open-source and does not rely on any external symbolic computation software. A main drawback is the increased complexity of debugging the source code due to template programming. Additionally, we show how it can help in current control problems for humanoid robots, and more specifically for dynamic filtering of walking gait trajectories.

A. Del Prete, N. Mansard, F. Nori, Giorgio Metta and Lorenzo Natale, Partial Force Control of Constrained Floating-Base Robots, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2014
Legged robots are typically in rigid contact with the environment at multiple locations, which adds a degree of complexity to their control. We present a method to control the motion and a subset of the contact forces of a floating-base robot. We derive a new formulation of the lexicographic optimization problem typically arising in multi-task motion/force control frameworks. The structure of the constraints of the problem (i.e. the dynamics of the robot) allows us to find a sparse analytical solution. This leads to an equivalent optimization with reduced computational complexity, comparable to inverse-dynamics based approaches. At the same time, our method preserves the flexibility of optimization-based control frameworks. Simulations were carried out to achieve different multi-contact behaviors on a 23-degree-of-freedom humanoid robot, validating the presented approach. A comparison with another state-of-the-art control technique with similar computational complexity shows the benefits of our controller, which can eliminate force/torque discontinuities.

F. Lamiraux, J. Mirabel, HPP: a new software framework for manipulation planning, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, September 2015
We present a new open-source software framework (called Humanoid Path Planner, HPP) for path and manipulation planning. Some features, like built-in kinematic chains and the implementation of non-linear constraints, make the framework a good fit for humanoid robot applications. The implementation of kinematic chains takes into account the Lie-group structure (rotation in 3D space SO(3)) of robot configuration spaces. Robots and obstacles can be loaded from ROS-URDF files. At installation, a CORBA server is installed; it can be controlled through Python scripting to define and solve a problem. Manipulation problems are modeled by a graph of constraints.

A. Mifsud, M. Benallegue, F. Lamiraux. Estimation of Contact Forces and Floating Base Kinematics of a Humanoid Robot Using Only Inertial Measurement Units, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, September 2015
A humanoid robot is underactuated and relies only on contacts with the environment to move in space. The ability to measure contact forces and torques then makes it possible to predict the robot dynamics, including balance. In classical settings, a humanoid robot is considered as a multi-body system with rigid limbs and joints, and interactions with the environment are modeled as stiff contacts. Forces and torques at contacts are generally estimated with sensors which are expensive and sensitive to calibration errors. However, a robot is not perfectly rigid and contacts may have flexibilities. Therefore, external forces create geometric deformations of the body or its environment. These deformations may modify the robot dynamics and produce unwanted and unbalanced motions. Nonetheless, if we have a model of contact stiffness and are able to reliably reconstruct the geometric deformation, we can reconstruct forces and torques at contact. This study aims at estimating contact forces and torques and at observing the body kinematics of the robot with only an Inertial Measurement Unit (IMU). We show that we are able to efficiently reconstruct the position of the Center of Pressure (CoP) of the robot with only the IMU and proprioceptive data from the robot.

Benallegue Mehdi, Lamiraux Florent. Humanoid flexibility deformation can be efficiently estimated using only inertial measurement units and contact information. Humanoid Robots (Humanoids), 2014 14th IEEE-RAS International Conference on. IEEE, 2014, pp. 246-251.
Most robots are today controlled as being entirely rigid. But often, as for the HRP-2 robot, there are flexible parts, intended for example to absorb impacts. The deformation of this flexibility changes the configuration of the robot, particularly its orientation. Nevertheless, robots usually have inertial sensors (IMUs) to reconstruct their orientation based on gravity and inertial effects. Moreover, humanoids usually have to ensure a firm contact with the ground, which provides reliable information on the surrounding environment. We show in this study how important it is to take this information into account to improve IMU-based position/orientation reconstruction. We use an extended Kalman filter to rebuild the deformation, fusing IMU and contact information without making any assumption on the dynamics of the flexibility. We show how, with this simple setting, we are able to compensate for perturbations and to stabilize the end-effector’s position/orientation in the world reference frame.
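The fusion principle (integrate inertial rates, correct with the absolute information that a firm contact provides) can be illustrated with a scalar Kalman filter on a synthetic 1-D tilt angle. This is a drastic simplification of the paper's extended Kalman filter, with invented signals and noise levels:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.01, 2000
k = np.arange(T)
true_angle = 0.1 * np.sin(0.5 * dt * k)              # slow flexing motion
gyro = np.gradient(true_angle, dt) + rng.normal(0.0, 0.05, T)
ref = true_angle + rng.normal(0.0, 0.02, T)          # absolute reference
                                                     # (stand-in for contact info)
q = (0.05 * dt) ** 2     # variance injected per gyro-integration step
r = 0.02 ** 2            # variance of the absolute measurement
angle, P = 0.0, 1.0
sq_err = 0.0
for i in range(T):
    # predict: integrate the rate measurement
    angle += gyro[i] * dt
    P += q
    # update: fuse the absolute angle measurement
    K = P / (P + r)
    angle += K * (ref[i] - angle)
    P *= 1.0 - K
    sq_err += (angle - true_angle[i]) ** 2

rmse = float(np.sqrt(sq_err / T))   # well below either sensor's noise alone
```

The fused estimate tracks the deformation more accurately than raw integration (which drifts) or the absolute measurement alone (which is noisy), which is the intuition behind combining IMU and contact information.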

L.Fitte-Duval, A.Mekonnen, F.Lerasle, Upper body detection and feature set evaluation for body pose classification, Int. Conf. on Computer Vision Theory and Applications (VISAPP), Berlin, March 2015.
This work investigates some visual functionalities required in Human-Robot Interaction (HRI) to evaluate the intention of a person to interact with another agent (robot or human). Analyzing the upper part of the human body, which includes the head and the shoulders, we obtain essential cues on the person’s intention. We propose a fast and efficient upper body detector and an approach to estimate the upper body pose in 2D images. The upper body detector, derived from a state-of-the-art pedestrian detector, identifies people using Aggregated Channel Features (ACF) and fast feature pyramids, whereas the upper body pose classifier uses a sparse representation technique to recognize their shoulder orientation. The proposed detector exhibits state-of-the-art results on a public dataset in terms of both detection performance and frame rate. We also present an evaluation of different feature set combinations for pose classification using upper body images and report promising results despite the associated challenges.

L.Fitte-Duval, A.Mekonnen, F.Lerasle, Détection de bustes et évaluation de descripteurs combinés pour la classification d’orientations, Congrès francophone ORASIS, Amiens, June 2015.
This article presents visual modalities for assessing, in a human-robot interaction context, a person’s intention to interact with another agent (human or robot). Analyzing the upper part of the human body, comprising the head and shoulders, provides revealing cues about a person’s intention. We propose a fast and efficient approach for detecting a person’s upper body, together with a classifier of the possible upper-body orientations in 2D images. Our detector, derived from a high-performing pedestrian detector from the literature, relies on ACF descriptors, obtained by combining heterogeneous descriptors, and on a fast pyramidal representation of these descriptors during detection. Our orientation classifier uses a sparse representation to recognize the orientation of the upper body. The presented detector shows results comparable to those of the literature on a public dataset in terms of accuracy and CPU cost. We also evaluate different descriptor combinations for orientation classification, with promising results despite the associated challenges.

6.2. Journals

Benallegue Mehdi, Lamiraux Florent. Estimation and stabilization of humanoid flexibility deformation using only inertial measurement units and contact information. International Journal of Humanoid Robotics, May 2015, p. 1550025.
Most robots are today controlled as being entirely rigid. But often, as for the HRP-2 robot, there are flexible parts, intended for example to absorb impacts. The deformation of this flexibility modifies the orientation of the robot and endangers balance. Nevertheless, robots usually have inertial sensors (IMUs) to reconstruct their orientation based on gravity and inertial effects. Moreover, humanoids usually have to ensure a firm contact with the ground, which provides reliable information on the surrounding environment. We show in this study how important it is to take this information into account to improve IMU-based position/orientation reconstruction. We use an extended Kalman filter to rebuild the deformation, fusing IMU and contact information without making any assumption on the dynamics of the flexibility. We show how, with this simple setting, we are able to compensate for perturbations and to stabilize the end-effector’s position/orientation in the world reference frame. We show also that this estimation is reliable enough to enable closed-loop stabilization of the flexibility and control of the CoM position with the simplest possible model.

7. ISIR

7.1. Workshop and conference proceedings

Perrin, N., Lau, D. and Padois, V. “Effective Generation of Dynamically Balanced Locomotion with Multiple Non-coplanar Contacts”, ISRR 2015.
Studies of computationally and analytically convenient approximations of rigid body dynamics have brought valuable insight into the field of humanoid robotics. Additionally, they facilitate the design of effective walking pattern generators. Going further than the classical Zero Moment Point-based methods, this paper presents two simple and novel approaches to solve for 3D locomotion with multiple non-coplanar contacts. Both formulations use model predictive control to generate dynamically balanced trajectories with no restrictions on the center of mass height trajectory. The first formulation treats the balance criterion as an objective function, and solves the control problem using a sequence of alternating convex quadratic programs. The second formulation considers the criterion as constraints, and solves a succession of convex quadratically constrained quadratic programs.

Anis Najar, Olivier Sigaud, Mohamed Chetouani, “Socially Guided XCS: Using Teaching Signals to Boost Learning”, Genetic and Evolutionary Computation Conference (GECCO) (Companion) 2015: 1021-1028
In this paper, we show how we can improve task learning by using social interaction to guide the learning process of a robot in a Human-Robot Interaction scenario. We introduce a novel method that simultaneously learns a social reward function from the teaching signals provided by a human and uses it to bootstrap task learning. We propose a model we call the Socially Guided XCS, based on the XCS framework, and we evaluate it in simulation against the standard XCS algorithm. We show that our model improves the learning speed of XCS.
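The core idea of adding a socially derived reward on top of the task reward can be illustrated with a much simpler stand-in for XCS. The sketch below uses tabular Q-learning on a hypothetical 5-state chain task; the "teacher" signal (approving moves toward the goal) is an assumption made for illustration, not the paper's learned social reward model.

```python
import random

# Toy illustration of boosting learning with teaching signals (a Q-learning
# stand-in for the paper's XCS-based system; the chain task, rewards, and
# teacher model are all hypothetical). Only the rightmost state pays off;
# a social "teacher" signal approving rightward moves is added to the
# environment reward, shaping the agent toward the goal.

def train(use_teacher, episodes=200, alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
    for _ in range(episodes):
        s = 0
        for _ in range(10):
            # Epsilon-greedy action selection over the two moves.
            a = rng.choice((-1, 1)) if rng.random() < eps else \
                max((-1, 1), key=lambda act: Q[(s, act)])
            s2 = min(max(s + a, 0), 4)
            r = 1.0 if s2 == 4 else 0.0       # sparse task reward
            if use_teacher:
                r += 0.2 if a == 1 else -0.2  # social guidance signal
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, -1)], Q[(s2, 1)])
                                  - Q[(s, a)])
            s = s2
    # Does the greedy policy move right (toward the goal) in every state?
    return all(Q[(s, 1)] > Q[(s, -1)] for s in range(5))

print(train(use_teacher=True))
```

With the teaching signal, the shaped reward densifies an otherwise sparse problem, which is the mechanism by which social guidance speeds up learning in the paper's setting.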

Anis Najar, Olivier Sigaud, Mohamed Chetouani: Social-Task Learning for HRI. ICSR 2015: To appear
In this paper, we introduce a novel method for learning simultaneously a task and the related social interaction. We present an architecture based on Learning Classifier Systems that simultaneously learns a model of social interaction and uses it to bootstrap task learning, while minimizing the number of interactions with the human. We validate our method in simulation and we prove the feasibility of our approach on a real robot.

8. Participation in Workshops and Round Tables

The ROMEO2 project was also presented at several workshops to which the project participants were invited or which they organized.

8.1.Workshop

  • Workshop on Robot’s Social Intelligence and Natural Interaction Capabilities with End User Development, in European Robotics Forum (ERF) 2015, 11 Mar 2015 Vienna (Austria) (Aldebaran, LAAS, LIMSI)

8.2.Invitations, Presentations and Keynotes

  • Presentation at the International Symposium on Robotics Research (ISRR 2015) by Darwin Lau, September 2015 (ISIR)
  • “Development of Socially Intelligent Robots and the need of Learning: an industrial perspective and use cases”, in the workshop Bottom-up and Top-down Development of Robot Skills, International Conference on Advanced Robotics (ICAR), July 2015 (Amit Kumar Pandey, Aldebaran)
  • Presentation at the French National Day on Humanoid Robotics (JNRH 2015) by Darwin Lau, June 2015 (ISIR)
  • Presentation at the French National Day on Humanoid Robotics (JNRH 2015) by Vincent Padois, June 2015 (ISIR)
  • “The need of closing the loop of Natural Social Interaction, Perception and Emotion for Personal Robots”, in the workshop Robot Autonomy Revised, 6th International Work-Conference on the Interplay between Natural and Artificial Computation (IWINAC), Elche, Spain, June 2015. (Amit Kumar Pandey, Aldebaran)
  • “Des robots humanoïdes pour tout le monde” (“Humanoid robots for everyone”), at the international transdisciplinary colloquium “La robotique au quotidien” organized by the Université de Limoges, France, June 2015 (Rodolphe Gelin, Aldebaran).
  • “Cognition and AI in Robotics: A statement of interest from an industry perspective”, in the workshop Cognition in Robotics: Pre-programmed vs. Learning, European Robotics Forum (ERF), Vienna, Austria, March 2015. (Amit Kumar Pandey, Aldebaran)
  • “Humanoid robot for Interaction with everyone”, Keynote at the Human Interactive Conference, Goldsmith University of London, November 2014 (Rodolphe Gelin, Aldebaran).
  • Presentation at the French National Day on Interactive Robotics (JNRI 2014) by Anis Najar, November 2014 (ISIR)
  • “Socially Intelligent Robot: Towards Human-Robot Coexistence”, JIIT, Noida, India, 2014 (Amit Kumar Pandey, Aldebaran)
  • “Humanoid robots for autonomy of elderly people”, in the 9th World Conference of Gerontechnology (ISG2014), June 2014, Taipei, Taiwan. (Rodolphe Gelin, Aldebaran)

8.3.Contributions to Summer and Winter Schools

  • Lecture on “AI and Robotics: the Industrial Perspective” at the Lucia Winter School on “AI and Robotics 2014”, Örebro, Sweden, Dec. 2014. (Amit Kumar Pandey, Aldebaran)
  • Lecture on “Socially Intelligent Human-Robot Interaction – Applications and Needs from a commercial perspective”, 2nd International Summer School on Social Human-Robot Interaction, Åland, Finland, August 2015. (Amit Kumar Pandey, Aldebaran)

8.4.Participation in Scientific Exhibitions

  • Demonstration of INRIA’s work with Romeo during the PFIA 2015 conference in Rennes and JNRH 2015 in Nantes. (INRIA)

9. Patents

See Table 1 and Table 2.