General information
Job title: Postdoctoral researcher - EventSpike - Asynchronous computer vision from event cameras (M/F)
Reference: UMR9189-IOABIL-002
Number of positions: 2
Workplace: VILLENEUVE D ASCQ
Publication date: Friday, 25 April 2025
Type of contract: Researcher on a fixed-term contract (CDD)
Contract duration: 13 months
Expected start date: 1 June 2025
Working time: Full time
Remuneration: approx. €2,400 net, before income tax
Desired level of education: PhD
Desired experience: 1 to 4 years
CN section(s): 06 - Information sciences: foundations of computer science, computation, algorithms, representations, applications
Missions
Video analysis is one of the fundamental tasks in computer vision. The dominant approach relies on deep neural networks applied to RGB images. These models have several disadvantages: a) the need for large quantities of annotated data, which requires significant human effort; b) the high computational, and therefore energy, cost of these approaches; and c) the redundancy of visual information between two successive frames. Spiking neural networks can offer a solution to these problems, through the use of unsupervised learning rules inspired by biological learning and the possibility of implementing them on ultra-low-power hardware. Event cameras, which communicate only changes in light intensity, are positioned as an alternative for capturing a scene when efficient processing on hardware with limited computing capability is required. The objective of this project is to address these issues jointly by proposing weakly supervised learning methods, based on spiking learning mechanisms, that directly exploit the stream of spikes generated by an event camera.
The main objective is to develop new models of spiking neural networks (SNN) capable of directly processing visual information in the form of spike trains. The proposed models must be validated experimentally on dynamic vision databases, following standard protocols and best practices.
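To give a concrete picture of what "directly processing visual information in the form of spike trains" means, here is a minimal, illustrative sketch (not the project's model) of a single leaky integrate-and-fire (LIF) neuron driven by DVS-style events. The `Event` record and all parameter values are assumptions for illustration only:

```python
import math
from dataclasses import dataclass

@dataclass
class Event:
    # A DVS-style event: pixel coordinates, polarity (+1/-1), timestamp (µs).
    x: int
    y: int
    polarity: int
    t: float

def lif_response(events, tau=10_000.0, threshold=3.0):
    """Minimal leaky integrate-and-fire neuron driven by an event stream.

    The membrane potential decays exponentially with time constant `tau`
    (microseconds) and is incremented by 1 for each incoming event; when it
    crosses `threshold`, the neuron emits a spike and resets to 0.
    `events` must be sorted by timestamp.
    """
    v = 0.0          # membrane potential
    last_t = None    # timestamp of the previous event
    spikes = []      # output spike times
    for ev in events:
        if last_t is not None:
            v *= math.exp(-(ev.t - last_t) / tau)  # leak since last event
        v += 1.0                                   # integrate the event
        last_t = ev.t
        if v >= threshold:
            spikes.append(ev.t)                    # fire...
            v = 0.0                                # ...and reset
    return spikes
```

The key property this sketch illustrates is asynchrony: computation happens only when an event arrives, so a dense burst of events (fast motion) drives the neuron to fire, while temporally isolated events leak away without producing output.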
Activities
The work will consist of:
• identifying the barriers to spatiotemporal modeling of sparse and low-intensity movements using spiking neural networks, and the tools that could help overcome these barriers;
• developing models capable of separating sparse and low-intensity movements from measurement noise in an unsupervised manner and that can be implemented using ultra-low-power hardware;
• validating these models on standard facial expression analysis datasets;
• jointly recording a dataset containing both standard (RGB) and event-based (DVS) data.
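As a point of reference for the second activity (separating sparse, low-intensity motion from measurement noise), a common non-learned baseline is a background-activity filter that keeps an event only if a spatial neighbor fired recently. The sketch below is illustrative only and assumes events as `(x, y, polarity, t)` tuples sorted by timestamp; it is not the unsupervised method the project aims to develop:

```python
def filter_background_activity(events, window=2_000.0, radius=1):
    """Background-activity noise filter for DVS events (illustrative sketch).

    An event is kept only if at least one other pixel in its spatial
    neighborhood produced an event within the last `window` microseconds;
    isolated events, which are likely sensor noise, are discarded.
    `events` is an iterable of (x, y, polarity, t) tuples sorted by t.
    """
    last_seen = {}   # (x, y) -> timestamp of the most recent event there
    kept = []
    for x, y, p, t in events:
        supported = any(
            (nx, ny) != (x, y)
            and t - last_seen.get((nx, ny), float("-inf")) <= window
            for nx in range(x - radius, x + radius + 1)
            for ny in range(y - radius, y + radius + 1)
        )
        if supported:
            kept.append((x, y, p, t))
        last_seen[(x, y)] = t
    return kept
```

Such hand-tuned filters tend to suppress genuinely sparse motion along with noise, which is precisely the barrier the proposed unsupervised spiking models are meant to overcome.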
Skills
Experience in one or more of the following is a plus:
• image processing, computer vision;
• machine learning;
• bio-inspired computing;
• research methodology (literature review, experimentation…).
Candidates should have the following skills:
• good proficiency in English, both spoken and written;
• scientific writing;
• programming (experience in C++ is a plus, but not mandatory).
Work context
The FOX research group is part of the CRIStAL laboratory (University of Lille, CNRS), located in Lille, France. We focus on video analysis for human behavior understanding. Specifically, we develop spatio-temporal models of motion for tasks such as abnormal event detection, emotion recognition, and face alignment. We are also involved in IRCICA (CNRS), a research institute promoting multidisciplinary research. At IRCICA, we collaborate with computer scientists and experts in electronics engineering to create new models of neural networks that can be implemented on low-power hardware architectures. Recently, we designed state-of-the-art models for image recognition with single- and multi-layer unsupervised spiking neural networks. We were among the first to successfully apply unsupervised SNNs to modern computer vision datasets. We also developed our own SNN simulator to support experiments with SNNs on computer vision problems. Our work is published in major journals (Pattern Recognition, IEEE Trans. on Affective Computing) and conferences (NeurIPS, WACV, IJCNN) in the field.
The PR (Robotic Perception) team specializes in mobile robotics (perception), 3D reconstruction, and unconventional vision. The PR team is leading the e-Cathedral program and is currently involved in three projects dealing with event cameras: the ANR CERBERE project (2022-2025), the ANR DEVIN project (2024-2028), and the international (France-Austria) ANR-FWF EVELOC project (2024-2028). The PR team wishes to further strengthen this area of research and improve its expertise in AI by collaborating with the CRIStAL laboratory.
The position is in a sector covered by the protection of scientific and technical potential (PPST) and therefore requires, in accordance with the regulations, that your arrival be authorized by the competent authority of the MESR.