Robot Cognition Laboratory

Peter Ford Dominey PhD, CNRS Research Director
INSERM U846 Stem Cell & Brain Research Institute

Integrative Neuroscience Department

Cortical Networks for Cognitive Interaction Team

18 ave Doyen Lépine

69675 Bron Cedex France

 
Telephone:  04 72 91 34 84
peter.dominey@inserm.fr

see also:  

http://www.sbri.fr/members/peter-ford-dominey.html

http://www.isc.cnrs.fr/dom/dommenu-en.htm

 

Current Funding and Projects:

We gratefully acknowledge funding from the EU FP7 Projects CHRIS and ORGANIC, and the ANR projects Amorces and Comprendre.

   CHRIS

Cooperative Human Robot Interaction Systems 

In cooperation with the Bristol Robotics Laboratory (Chris Melhuish, Director and FP7 Project Leader), CNRS LAAS Toulouse, the Max Planck Institute for Evolutionary Anthropology, and the Italian Institute of Technology, our group investigates the developmental foundations of human cooperation and implements these in humanoid robots for safe human-robot cooperation.


ORGANIC

http://www.reservoir-computing.org/organic

Current speech recognition technology is based on mathematical-statistical models of language. Although these models have become extremely refined over the last decades, progress in automated speech recognition has become very slow, and human-level speech recognition seems unreachable. The ORGANIC project ventures down an altogether different route toward automated speech recognition: not starting from statistical models of language, but from models of biological neural information processing, that is, from neurodynamical models. The overall approach is guided by the paradigm of Reservoir Computing, a biologically inspired perspective on how arbitrary computations can be learnt and performed in complex artificial neural networks.  INSERM contributes its significant background in recurrent neural network computation, including our founding work in this area.
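To make the Reservoir Computing idea concrete, here is a minimal echo state network sketch in Python/NumPy: a fixed, randomly connected recurrent "reservoir" is driven by the input stream, and only a linear readout is trained.  The dimensions, spectral radius and ridge-regression readout below are illustrative assumptions for exposition, not the ORGANIC or INSERM implementation.

    # Minimal echo state network sketch (reservoir computing); all sizes and
    # parameters are illustrative assumptions, not the ORGANIC implementation.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_res, n_out = 10, 200, 3                     # input / reservoir / output sizes (assumed)

    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))        # fixed random input weights
    W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))      # fixed random recurrent weights
    W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))   # spectral radius below 1 (echo state property)

    def run_reservoir(inputs):
        """Drive the reservoir with an input sequence and collect its states."""
        x = np.zeros(n_res)
        states = []
        for u in inputs:
            x = np.tanh(W_in @ u + W_res @ x)           # leaky integration omitted for brevity
            states.append(x.copy())
        return np.array(states)

    def train_readout(states, targets, ridge=1e-6):
        """Only the linear readout is trained, here by ridge regression."""
        return np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                               states.T @ targets)

    # Usage: the inputs could be per-frame acoustic features and the targets
    # phoneme or word labels; random data stand in for them here.
    inputs = rng.normal(size=(500, n_in))
    targets = rng.normal(size=(500, n_out))
    W_out = train_readout(run_reservoir(inputs), targets)
    predictions = run_reservoir(inputs) @ W_out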


ANR AMORCES

Algorithmes et MOdèles pour un Robot Collaboratif Eloquent et Social (Algorithms and Models for an Eloquent and Social Collaborative Robot)

In cooperation with CNRS LAAS Toulouse (Rachid Alami, ANR Project Leader), the GIPSA-LAB Grenoble, GREYC (Caen) and LAMIH (Valenciennes).  The objective of the AMORCES project is to study decisional and operational human-robot interaction, and more specifically the impact of verbal and non-verbal communication on the execution of collaborative tasks between a robot and a human partner.

ANR Comprendre


Comprendre will develop a neurophysiologically motivated hybrid model of comprehension (WP1), test this model using behavioral, fMRI and ERP methodologies (WP2), and finally develop neural network and robotic implementations of the resulting model (WP3).

Our Approach:

One of the long-term goals in the domain of human-robot interaction is that robots will approach these interactions equipped with some of the same fundamental cognitive capabilities that humans use.  This includes the ability to perceive and understand human action in terms of an ultimate goal, and more generally to represent shared intentional plans in which the goal-directed actions of the robot and the human are interlaced into a shared representation of how to achieve a common goal in a cooperative manner.  Our approach to robot cognition makes a serious commitment to cognitive neuroscience and child development as sources of knowledge about how cognition is, and can be, implemented.
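As a concrete illustration of what such a shared representation might look like, below is a minimal Python sketch in which a plan is an interleaved sequence of robot and human steps toward a common goal; the robot executes its own steps and monitors the human's.  The class and method names are hypothetical and do not reproduce our actual architecture.

    # Minimal sketch of a shared intentional plan: an interleaved sequence of
    # (agent, action) steps toward a common goal. Names are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Step:
        agent: str    # "robot" or "human"
        action: str   # e.g. "hold(tabletop)" or "attach(leg1)"

    class SharedPlan:
        def __init__(self, goal, steps):
            self.goal = goal
            self.steps = list(steps)

        def execute(self, do_robot_action, wait_for_human):
            """Interlace the two agents: act on own steps, monitor the partner's."""
            for step in self.steps:
                if step.agent == "robot":
                    do_robot_action(step.action)
                else:
                    wait_for_human(step.action)   # e.g. perceive that the action occurred

    # Usage: a small cooperative table-assembly plan.
    plan = SharedPlan("assemble table",
                      [Step("human", "hold(tabletop)"), Step("robot", "attach(leg1)"),
                       Step("human", "hold(tabletop)"), Step("robot", "attach(leg2)")])
    plan.execute(lambda a: print("robot does:", a),
                 lambda a: print("waiting for human to:", a))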

The cognitive neuroscience approach can be seen here.

The results of this research are exploited through the publication of research articles (see below), and also through the development of potential "real-world" applications.  An exciting new venue for these real-world applications is the At Home league of RoboCup.

RoboCup At Home

RoboCup@Home emphasises the aspects of robotics that will progressively allow robots to take their natural place at home, with people like you and me.  Recently we collaborated with Alfredo Weitzenfeld in a French-Mexican project supported by the LAFMI that involved human-robot interaction in the RoboCup At Home competition in Bremen, Germany, in June 2006.  The details, including some action-packed video, can be found at

(Radical Dudes RoboCup@Home)


For the 2007 competition in Atlanta, we introduced more articulated human-robot cooperation into the Open Competition, and qualified for the finals.
At the 2010 competition in Singapore we made it to Phase 2, as the first team to use the Nao.

Examples of some Current Research

Over the past several years we have made technical progress in providing spoken language, motor control and vision capabilities to robotic systems.  This begins to provide the basis for progressively more elaborate human-robot interaction.

  

Some of our current research takes specific experimental protocols from studies of cognitive development to define behavioral milestones for a perceptual-motor robotic system.  Based on a set of previously established principles for defining the “innate” functions available to such a system, a cognitive architecture is developed that allows the robot to perform cooperative tasks at the level of an 18-month-old human child.  At the interface of cognitive development and robotics, the results are interesting in that they (1) provide a concrete demonstration of how cognitive science can contribute to human-robot interaction fidelity, and (2) demonstrate how robots can be used to experiment with theories on the implementation of cognition in the developing human (see Dominey 2007 below).

 

In addition to looking at how relatively explicit communication can be used between humans and robots, we are also investigating how the robot can more autonomously discover the structure of its environment.  This work is being carried out by Jean-David Boucher, a PhD student who is financed by the Rhône-Alpes Region under the Presence project of the ISLE Cluster.  Some results from the first year can be seen in Boucher J-D, Dominey PF (2006).

 

Background on Research Projects and Collaborations:

The Robot Cognition Laboratory in Lyon benefits from a number of fruitful interactions in Europe and abroad that are partially defined by the Robot Cognition Working Group.  Here is some history of the Robot Cognition Working Group, including the Initial Project, which was supported by the French ACI Computational and Integrative Neuroscience Project.

Cooperation with the Ecole Centrale de Lyon

Over the last four years we have had several engineers from the Ecole Centrale de Lyon work with us over the summer, helping with the technical nuts and bolts of system integration, including Nicolas Dermine (2003), Marc Jeambrun, Bin Gao, Manuel Alvarez (2004), and Julien Lestavel and Joseph Pairraud (2006).  The 2007 ECL team was made up of Benoit Miniere, Oussama Abdoun and Frédéric Grandet.  Their project involved the development of a spoken-language-based posture and behavior editor for our Lynx two-arm system, with autonomous sequence learning.  This included the development of a Webots simulator (Grandet), vision-based inverse kinematics for object grasping (Abdoun), and the spoken-language-based editor for postures and behavioral sequences (Miniere).  Below you see a sequence of snapshots of a cooperative interaction in which the human and robot assemble a small "table" with screws for legs.  The learning system autonomously recognizes repeating behavior (in attaching the successive legs) and takes initiative based on its acquired experience.

Two-arm cooperation simulation
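The initiative taking described above depends on noticing that a behavioral pattern is repeating.  Here is a minimal sketch of one way this can be done, under the simplifying assumption that a pattern counts as repeating when it occurs twice back-to-back at the end of the action history; this is not the editor's actual algorithm.

    # Minimal sketch of repetition detection for initiative taking; the
    # back-to-back repetition rule is a simplifying assumption.
    def find_repeating_pattern(actions, min_len=2):
        """Return the shortest subsequence repeated back-to-back at the end, if any."""
        for length in range(min_len, len(actions) // 2 + 1):
            if actions[-length:] == actions[-2 * length:-length]:
                return actions[-length:]
        return None

    # Usage: after two demonstrated leg attachments, the robot proposes the third.
    history = ["grasp(leg)", "insert(leg)", "screw(leg)",
               "grasp(leg)", "insert(leg)", "screw(leg)"]
    pattern = find_repeating_pattern(history)
    if pattern:
        print("Repetition detected; offering to perform:", pattern)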



 

HRP2 Humanoid Robot – JRL Project

 

 

As part of the JRL, we have started cooperating on spoken language programming of the HRP2:

 

Spoken Language programming of the HRP2 in a cooperative construction task

Peter DOMINEY, Anthony MALLET, Eiichi YOSHIDA

The paper, accepted at the 2007 International Conference on Robotics and Automation (ICRA), is available below.

Here we demonstrate how spoken language can be used to pilot the HRP2 Humanoid during human-robot interaction, and more importantly, how language can be used to program the robot, i.e. to teach it new composite behaviours that can be used in the future.
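As a toy illustration of the general idea (not the actual interface used on the HRP2), spoken commands given between "learn <name>" and "OK" can be stored as a named macro and later replayed as a single composite behaviour.  The vocabulary and dispatcher below are illustrative assumptions.

    # Toy sketch of spoken-language "programming": commands spoken while teaching
    # are stored as a macro and replayed later. Names and vocabulary are assumed.
    class SpokenLanguageProgrammer:
        def __init__(self, primitives):
            self.primitives = primitives   # name -> callable primitive behaviour
            self.macros = {}               # learned composite behaviours
            self.recording = None          # (macro_name, [commands]) while teaching

        def hear(self, utterance):
            words = utterance.lower().split()
            if words[0] == "learn":                    # "learn give-leg" starts teaching
                self.recording = (words[1], [])
            elif words[0] == "ok" and self.recording:  # "OK" ends teaching
                name, commands = self.recording
                self.macros[name] = commands
                self.recording = None
            elif self.recording:                       # store commands while teaching
                self.recording[1].append(utterance)
            elif words[0] in self.macros:              # replay a learned composite behaviour
                for command in self.macros[words[0]]:
                    self.hear(command)
            else:                                      # otherwise execute a primitive
                self.primitives[words[0]](*words[1:])

    # Usage with toy primitives standing in for real robot behaviours:
    robot = SpokenLanguageProgrammer({"reach": lambda t: print("reaching", t),
                                      "grasp": lambda t: print("grasping", t)})
    robot.hear("learn give-leg")
    robot.hear("reach leg")
    robot.hear("grasp leg")
    robot.hear("OK")
    robot.hear("give-leg")   # replays the learned composite behaviour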

First we see how the robot is programmed:

http://dominey.perso.cegetel.net/JRL-HRP2/JRLSayingMacro.wmv

 

And now we see the learned program running on the HRP2:

http://dominey.perso.cegetel.net/JRL-HRP2/JRLRunningMacro.wmv

 

More recently (Spring 2007) we have worked together in the JRL with Anthony Mallet and Eiichi Yoshida to introduce vision and inverse kinematics so that the HRP2 can perform visually guided grasping of the table legs, thus increasing its behavioral autonomy and cooperation capability.
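At its core, visually guided grasping maps a visually localized target position to arm joint angles.  For illustration only, here is a closed-form inverse kinematics sketch for a 2-link planar arm; the link lengths and target are made up, and the HRP2 of course uses full whole-body inverse kinematics rather than this toy case.

    # Toy closed-form inverse kinematics for a 2-link planar arm; purely
    # illustrative, not the HRP2's whole-body IK.
    import math

    def two_link_ik(x, y, l1, l2):
        """Return (shoulder, elbow) joint angles reaching point (x, y), elbow-down."""
        d2 = x * x + y * y
        cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
        if abs(cos_elbow) > 1.0:
            raise ValueError("target out of reach")
        elbow = math.acos(cos_elbow)
        shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                                 l1 + l2 * math.cos(elbow))
        return shoulder, elbow

    # Usage: a stereo vision module would supply (x, y); here a target is hard-coded.
    print(two_link_ik(0.3, 0.2, 0.25, 0.25))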

Some recent videos including stereo vision localization and inverse kinematics planning can be seen here.

Human-robot Interaction


Video Demonstrations:

 

 

Cooperative Activity and Helping in Human Robot Interaction (October 12, 2006)

The following video demonstrates results from six experiments on spoken language and vision processing for robot command, imitation, learning a simple game, helping the human when he is stuck, learning a complex game, and helping the human again.

http://dominey.perso.cegetel.net/Lynx/DomineyCooperation.wmv

Here are some details of how it works: Dominey PF (2007).

 

 

Lynx Robot Arm in a Cooperative Construction Task

http://dominey.perso.cegetel.net/Lynx/LynxLearningCooperation.wmv

 

 

Lynx robot arm sequence learning

http://dominey.perso.cegetel.net/Humanoids2005/LynxSeq1.wmv

 

Lynx robot arm sentence based commanding

http://dominey.perso.cegetel.net/Humanoids2005/LynxGCshort.wmv

 

Khepera Robot sequence learning

http://dominey.perso.cegetel.net/Humanoids2005/KepSequence.wmv

 

Robot event describer

http://dominey.perso.cegetel.net/Humanoids2005/EventDescShort.wmv

 

Aibo Sequence learning

http://dominey.perso.cegetel.net/Humanoids2005/AiboSequence4.wmv

 

Aibo spoken telecommanding

http://dominey.perso.cegetel.net/Humanoids2005/AiboGoal.wmv

 

 

Selected Publications:

Dominey PF, Warneken F (2008) The Basis of Shared Intentions in Human and Robot Cognition, In Press, New Ideas in Psychology.

Dominey PF, Mallet A, Yoshida E (2007) Real-Time Cooperative Behavior Acquisition by a Humanoid Apprentice, Proceedings of the IEEE/RAS 2007 International Conference on Humanoid Robotics, Pittsburgh, Pennsylvania.

Yoshida E, Mallet A, Lamiraux F, Kanoun O, Stasse O, Poirier M, Dominey PF, Laumond J-P, Yokoi K (2008) "Give me the Purple Ball" -- he said to HRP-2 N.14, Proceedings of the IEEE/RAS 2007 International Conference on Humanoid Robotics, Pittsburgh, Pennsylvania.

Dominey PF (2007)  Sharing Intentional Plans for Imitation and Cooperation:  Integrating Clues from Child Developments and Neurophysiology into Robotics,  Proceedings of the AISB 2007 Workshop on Imitation.

Dominey PF, Mallet A, Yoshida E (2007) Progress in Programming the HRP-2 Humanoid Using Spoken Language, Proceedings of ICRA 2007, Rome.

Dominey PF, Hoen M, Inui T (2006) A Neurolinguistic Model of Grammatical Construction Processing, In Press, Journal of Cognitive Neuroscience.

Dominey PF, Hoen M (2006) Structure Mapping and Semantic Integration in a Construction-Based Neurolinguistic Model of Sentence Processing, Cortex, 42(4):476-9

Hoen M, Pachot-Clouard M, Segebarth C, Dominey P.F. (2006) When Broca experiences the Janus syndrome. An er-fMRI study comparing sentence comprehension and cognitive sequence processing. Cortex, 42(4):605-23

Boucher J-D,  Dominey PF (2006) Perceptual-Motor Sequence Learning Via Human-Robot Interaction, S. Nolfi et al. (Eds.): SAB 2006, LNAI 4095, pp. 224–235, 2006. Springer-Verlag Berlin Heidelberg 2006

Brunelliere A, Hoen M, Dominey PF. (2005) ERP correlates of lexical analysis: N280 reflects processing complexity rather than category or frequency effects. Neuroreport. Sep 8;16(13):1435-8.

Voegtlin T, Dominey PF. (2005) Linear recursive distributed representations. Neural Netw. Sep;18(7):878-95.

Dominey PF (2005a) From sensorimotor sequence to grammatical construction: Evidence from Simulation and Neurophysiology, Adaptive Behavior, 13, 4 : 347-362

Dominey PF  (2005b) Towards a Construction-Based Account of Shared Intentions in Social Cognition, Comment on Tomasello et al. Understanding and sharing intentions: The origins of cultural cognition, Behavioral and Brain Sciences, In press

Dominey PF (2005c) Emergence of Grammatical Constructions: Evidence from Simulation and Grounded Agent Experiments. Connection Science, 17(3-4) 289-306

Dominey PF, Boucher JD (2005) Learning To Talk About Events From Narrated Video in the Construction Grammar Framework,  Artificial Intelligence, 167 (2005) 31–61

Dominey PF, Boucher JD (2005) Developmental stages of perception and language acquisition in a perceptually grounded robot, Cognitive Systems Research. Volume 6, Issue 3, September 2005, Pages 243-259

Dominey PF (2005)  Aspects of Descriptive, Referential and Information Structure in Phrasal Semantics: A Construction Based Model ; Interaction Studies: Social Behavior and Communication in Biological and Artificial Systems 6(2) 287–310

Dominey PF (1995) Complex Sensory-Motor Sequence Learning Based on Recurrent State-Representation and Reinforcement Learning, Biological Cybernetics, 73, 265-274

Complete List (click Here)