PhD in event vision for robotics M/F
- FTC PhD student / Offer for thesis
- 36 months
- BAC+5 (Master's level)
Offer at a glance
The Unit
Heuristique et Diagnostic des Systèmes Complexes
Contract Type
FTC PhD student / Offer for thesis
Working Hours
Full Time
Workplace
60203 COMPIEGNE
Contract Duration
36 months
Date of Hire
01/09/2026
Remuneration
2300 € gross monthly
Application Deadline: 01 June 2026, 23:59
Job Description
Thesis Subject
This PhD thesis aims at designing a new generation of binarized or highly quantized neural networks for processing event camera data on board embedded systems. Unlike conventional cameras, event cameras output asynchronous and sparse streams representing local luminance variations, providing minimal latency, robustness to very fast motions, and a high dynamic range. These properties make them particularly well suited to robotics, drones, autonomous vehicles, and low-power perception systems.
The goal of this PhD thesis is to go beyond approaches that convert events into pseudo-images processed by conventional networks. It will involve co-designing event representations, ultra-low-precision neural architectures, binarized (BNN) or quantized (QNN), learning strategies, and hardware constraints. The studied models may combine binary layers, quantization, temporal memory, recurrence, sparse attention, and partial inference.
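To make the notion of a binarization-compatible event representation concrete, the sketch below (a hypothetical illustration, not part of the offer; the function name, event layout, and binning scheme are assumptions) accumulates an asynchronous event stream into a binary voxel grid with one {0,1} channel per (time bin, polarity) pair, a form that binary layers can consume directly:

```python
import numpy as np

def events_to_binary_voxel_grid(events, height, width, num_bins):
    """Accumulate an asynchronous event stream into a binary voxel grid.

    `events` is an (N, 4) array of (x, y, t, polarity) rows; the output is a
    {0,1} tensor of shape (2 * num_bins, height, width), one channel per
    (time bin, polarity) pair.
    """
    grid = np.zeros((2 * num_bins, height, width), dtype=np.uint8)
    if len(events) == 0:
        return grid
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2]
    p = events[:, 3]
    # Normalize timestamps into [0, num_bins) temporal bins.
    t0, t1 = t.min(), t.max()
    if t1 == t0:
        bins = np.zeros(len(t), dtype=int)
    else:
        bins = np.minimum(((t - t0) / (t1 - t0) * num_bins).astype(int),
                          num_bins - 1)
    # Binary occupancy: a cell is 1 if at least one event fell into it.
    channel = bins * 2 + (p > 0).astype(int)
    grid[channel, y, x] = 1
    return grid
```

Binary occupancy discards event counts, which keeps the representation strictly {0,1} at the cost of some information; richer binarization-friendly encodings are exactly the kind of design question the thesis addresses.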
Expected contributions may include:
- event representations fully compatible with binarization,
- design of BNN/QNN blocks adapted to asynchronous flows,
- development of robust learning methods based on distillation, quantization-aware training, and progressive learning,
- joint evaluation of accuracy, latency, memory, power, and robustness,
- validation on public benchmarks and on-board robotics platforms.
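As a minimal illustration of the binarization and quantization-aware training techniques listed above, the sketch below (a hypothetical example; all names are illustrative) implements sign binarization with the straight-through estimator of Courbariaux et al. (2016) in NumPy:

```python
import numpy as np

def binarize_forward(w):
    """Forward pass: constrain weights to {-1, +1} via sign (0 maps to +1)."""
    return np.where(w >= 0, 1.0, -1.0)

def binarize_backward(grad_out, w, clip=1.0):
    """Straight-through estimator: pass the gradient through unchanged
    wherever |w| <= clip, and zero it elsewhere."""
    return grad_out * (np.abs(w) <= clip)

# One gradient step on a toy binary "layer" y = sign(w) . x:
rng = np.random.default_rng(0)
w = rng.normal(size=4)            # latent full-precision weights (training only)
x = rng.normal(size=4)
y = binarize_forward(w) @ x       # inference uses only {-1, +1} weights
grad_w = binarize_backward(x, w)  # dy/dw under the straight-through estimator
w -= 0.1 * grad_w                 # update the latent weights
```

The key point is that full-precision latent weights exist only during training; at inference, the network stores and multiplies only {-1, +1} values, which is what enables the memory and power gains targeted by the thesis.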
The proposed architectures will be validated on event-vision benchmarks for tasks such as classification, gesture recognition, object detection, and tracking. Emphasis will be placed on realistic benchmarks such as Gen1 Automotive Detection, MVSEC, or DSEC, and on comparisons with recent non-quantized event-based architectures.
This PhD will help bring together neuromorphic vision, frugal deep learning, and edge AI, with the ambition of producing models that handle the asynchronous nature of events while meeting strong real-time computing and power constraints.
References:
Event vision, neuromorphic, frugality and edge AI
- Gallego, G. et al. Event-based Vision: A Survey, IEEE TPAMI, 2022.
- Zheng, X. et al. Deep Learning for Event-based Vision: A Comprehensive Survey and Benchmarks, 2024.
- Rebecq, H. et al. Events-to-Video: Bringing Modern Computer Vision to Event Cameras, CVPR 2019.
- Gehrig, M., Scaramuzza, D. Recurrent Vision Transformers for Object Detection with Event Cameras, CVPR 2023.
- Peng, Y. et al. GET: Group Event Transformer for Event-Based Vision, ICCV 2023.
- Gehrig, D. et al. Low-latency automotive vision with event cameras, Nature, 2024.
- Cordova-Cardenas, R. et al. Edge AI in Practice: A Survey and Deployment Framework, 2025.
- Cazzato, D. et al. An Application-Driven Survey on Event-Based Neuromorphic Vision, 2024.
- Cimarelli, C. et al. Hardware, Algorithms, and Applications of the Neuromorphic Vision Sensor: a Review.
Binarization, quantization and compression
- Courbariaux, M. et al. Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1, 2016.
- Rastegari, M. et al. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks, ECCV 2016.
- Liu, Z. et al. Bi-Real Net: Enhancing the Performance of 1-bit CNNs With Improved Representational Capability and Advanced Training Algorithm, ECCV 2018.
- Liu, Z. et al. ReActNet: Towards Precise Binary Neural Network with Generalized Activation Functions, ECCV 2020.
- Jacob, B. et al. Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference, CVPR 2018.
Your Work Environment
Our laboratory specializes in the control of mobile robots, such as intelligent cars and drones. Our work covers control, localization, communication, and perception, as well as virtual reality. The laboratory has robotized vehicles equipped with various sensors, a test track, simulators, and an aviary. Our team is involved in SIVALab, a joint laboratory between UTC, CNRS, and Renault (Ampere).
Our team has been building strong expertise in event-based vision since 2000, with results in applications such as calibration, optical flow, depth estimation, and moving-object segmentation.
Compensation and benefits
Compensation
2300 € gross monthly
Annual leave and RTT
44 days
Remote Working practice and compensation
Remote working is practiced and compensated
Transport
75% coverage of transport costs, plus a sustainable mobility allowance of up to €300
About the offer
Offer Reference
UMR7253-JULMOR-006
About the CNRS
The CNRS is a major player in fundamental research on a global scale. The CNRS is the only French organization active in all scientific fields. Its unique position as a multi-specialist allows it to bring together different disciplines to address the most important challenges of the contemporary world, in connection with the actors of change.