
Multimodal interaction cycle (eye tracking, voice) for learning English for French and Japanese speakers (M/F)


Application deadline: Monday, July 11, 2022


General information

Reference : UMR9015-IOAVAS-008
Workplace : ST AUBIN
Date of publication : Monday, June 20, 2022
Type of Contract : FTC Scientist
Contract Period : 12 months
Expected date of employment : 1 September 2022
Proportion of work : Full time
Remuneration : between €2,743 and €3,896 gross per month, depending on experience
Desired level of education : Higher than 5-year university degree
Experience required : 1 to 4 years


Missions
Within the framework of the LeCycl project, several multimodal corpora have been or are being acquired. They concern in particular the learning of a second language, combining speech and eye-movement data. Interactional corpora corresponding to the knowledge-transfer phase are in progress. The postdoctoral work will focus on automatically modelling learning cues in order to predict optimal learning strategies from individual profiles.


Activities
- Participation in the acquisition of a multimodal corpus (speech, image, eye tracking) corresponding to the transfer phase
- Participation in and supervision of corpus pre-processing (orthographic transcription, annotation of meta-cognitive states)
- Extraction and modelling of multimodal cues
- Implementation of automatic models for detecting emotional/cognitive states during the learning task, taking into account the impact of the "nudges" used for learning
- Integration into a system that detects learning performance and meta-cognitive states during a learning task, with real-time adaptation
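As a rough illustration of the kind of pipeline these missions describe, the sketch below fuses speech and eye-tracking features into a single vector and trains a simple classifier of learner state. The feature names, dimensions, synthetic data, and the binary "engaged/disengaged" label set are all illustrative assumptions, not specifics of the LeCycl project.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-utterance features (dimensions are illustrative):
# speech: pitch mean, energy, speech rate;
# eye tracking: fixation duration, saccade amplitude, pupil dilation.
n_samples = 200
speech_feats = rng.normal(size=(n_samples, 3))
gaze_feats = rng.normal(size=(n_samples, 3))

# Early fusion: concatenate the two modalities into one feature vector.
X = np.hstack([speech_feats, gaze_feats])

# Toy binary label ("engaged" vs "disengaged"), made roughly linearly
# separable so the classifier has a signal to learn from.
y = (X[:, 0] + X[:, 3] + 0.1 * rng.normal(size=n_samples) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```

In a real system, the synthetic features would be replaced by cues extracted from the annotated corpus, and the classifier would likely be a sequence model rather than a per-sample logistic regression.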


Skills
PhD in computer science
Knowledge: machine learning, computational linguistics, multimodality (eye tracking)
Deep-learning modelling, use of multimodal data, dialogue strategies
Ability to work in an interdisciplinary (engineering sciences, linguistics) and international research team
Good command of English

Work Context

This postdoctoral work is proposed in the framework of the trilateral ANR project Learning Cyclotron (LeCycl), which involves researchers from Germany, Japan, and France (including LISN). The objective of the project is to amplify human learning capacity with the help of artificial intelligence. In particular, it proposes to implement a learning cycle that goes from individual acquisition of knowledge (e.g. a reading task) to transfer from one learner to another (e.g. interactional tasks in which new words are transmitted and interest-type behaviour for a type of learning is encouraged with "nudging" strategies). These different learning contexts and phases are recorded and estimated from multimodal cues (speech, eye movements, and other biological sensors), to be modelled automatically.

Constraints and risks

