
PhD position M/F in Verification of probabilistic properties for Artificial Intelligence


Application deadline: Friday 9 May 2025, 00:00 Paris time


General information

Offer title: PhD position M/F in Verification of probabilistic properties for Artificial Intelligence
Reference: UMR7161-SYLPUT-004
Number of positions: 1
Workplace: PALAISEAU
Publication date: Friday 7 February 2025
Contract type: Fixed-term doctoral contract (CDD Doctorant)
Contract duration: 36 months
Thesis start date: 1 October 2025
Working time: Full-time
Remuneration: €2,135.00 gross monthly
CN Section(s): 06 - Information sciences: foundations of computer science, computation, algorithms, representations, applications

Thesis topic description

AI is now embedded in many everyday applications. Increasingly, we rely on neural networks to make decisions even in critical situations, such as control and motion planning for autonomous cars, and it is of primary importance to be able to verify their correct behavior.
For some applications, e.g. perception in autonomous systems through classifiers, we can only hope for probabilistic safety. Moreover, in real-world systems, precise models representative of the data are not always available. For instance, several probabilistic models may be plausible for describing a problem, or a probabilistic model may be known but with uncertain parameters. We therefore need to consider both probabilistic information and epistemic uncertainty. An approach to the quantitative verification of neural networks based on imprecise probabilities, or probability boxes, which generalize probabilistic and epistemic uncertainties by defining sets of probability distributions, was recently introduced in [1]. This approach provides quantitative measures of how likely the outputs of a neural network are to exhibit a certain behavior, given assumptions on the inputs specified as imprecise probabilities. Based on an abstraction and propagation of sets of probability distributions through neural networks, the probability that a property is satisfied can be bounded, and the regions of the input space most likely to lead to a property violation can be identified.
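To make the idea concrete, here is a minimal sketch (an illustration under simplifying assumptions, not the zonotopic method of [1]): a scalar input p-box is discretized into weighted focal intervals, each interval is propagated through a tiny ReLU network with interval arithmetic, and lower/upper bounds on an output probability are read off. The network, the discretization, and all names are hypothetical.

    # Sketch: bound P(output <= t) for a tiny ReLU network whose scalar input
    # is given by a finite Dempster-Shafer structure (weighted focal intervals,
    # one way to discretize a probability box). Illustrative only.
    import numpy as np
    from scipy.stats import norm

    def interval_affine(lo, hi, W, b):
        # Exact range of x -> W @ x + b over an axis-aligned box [lo, hi].
        c, r = (lo + hi) / 2.0, (hi - lo) / 2.0
        center = W @ c + b
        radius = np.abs(W) @ r
        return center - radius, center + radius

    def propagate(focals, layers):
        # Push each focal interval through the network, keeping its weight.
        out = []
        for (lo, hi), w in focals:
            for i, (W, b) in enumerate(layers):
                lo, hi = interval_affine(lo, hi, W, b)
                if i < len(layers) - 1:  # ReLU on hidden layers only
                    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
            out.append(((lo, hi), w))
        return out

    def prob_leq_bounds(focals, t):
        # Belief / plausibility of the event {output <= t}.
        lower = sum(w for (lo, hi), w in focals if hi[0] <= t)  # surely <= t
        upper = sum(w for (lo, hi), w in focals if lo[0] <= t)  # possibly <= t
        return lower, upper

    # Constant-stepsize staircase discretization of a N(0,1) input into n
    # focal intervals between equally spaced quantiles (the 0.2% of mass
    # left in the tails is ignored here for simplicity).
    n = 10
    q = norm.ppf(np.linspace(0.001, 0.999, n + 1))
    focals = [((np.array([q[i]]), np.array([q[i + 1]])), 0.998 / n)
              for i in range(n)]

    layers = [(np.array([[1.0], [-1.0]]), np.zeros(2)),  # hypothetical 1-2-1 net
              (np.array([[1.0, 1.0]]), np.array([-0.5]))]
    print("P(output <= 1) in", prob_leq_bounds(propagate(focals, layers), 1.0))

The gap between the two bounds reflects both the epistemic uncertainty in the input and the coarseness of the discretization; refining the staircase tightens the bounds at a cost that grows quickly with the input dimension, which is precisely the scalability issue raised below.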

The approach of [1] proved to be both more general and more computationally efficient than the state of the art. However, many challenges remain before it can be applied to real-world problems. As a starting point, the following extensions can be studied:
- the current abstraction of probability boxes relies on a constant-stepsize staircase discretization, which will hardly scale with the input dimension of the networks; we would like to investigate other abstractions;
- independence is currently assumed between the inputs of the network; we would also like to handle multivariate input distributions, for instance using copulas [4] as in e.g. [2] (a toy illustration follows this list).
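As a rough illustration of the second item, the sketch below builds a dependent joint distribution over two network inputs by sampling a copula and mapping through the marginals' inverse CDFs. The Gaussian copula, the correlation parameter, and the marginals are all illustrative assumptions; [4] discusses a much broader setting.

    # Sketch: a Gaussian copula coupling two given marginals to model
    # dependent network inputs; all numerical values are illustrative.
    import numpy as np
    from scipy.stats import norm, expon

    rng = np.random.default_rng(0)
    rho = 0.7  # illustrative dependence parameter
    cov = np.array([[1.0, rho], [rho, 1.0]])

    # Correlated Gaussians mapped to uniforms: a sample from the copula...
    z = rng.multivariate_normal(np.zeros(2), cov, size=10_000)
    u = norm.cdf(z)

    # ...then through the inverse CDFs of the chosen marginals.
    x1 = norm.ppf(u[:, 0])              # first input ~ N(0, 1)
    x2 = expon.ppf(u[:, 1], scale=2.0)  # second input ~ Exp(rate 1/2)

    inputs = np.column_stack([x1, x2])  # dependent inputs for the network
    print("sample correlation of inputs:", np.corrcoef(x1, x2)[0, 1])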

A longer-term objective is to study the extension of the approach to the analysis of Bayesian neural networks, where the weights and biases of the network are also defined by multivariate (imprecise) probability distributions. This extends the case of multivariate input distributions, but in much higher dimension, and will probably require a fresh view on the approach (a toy sketch follows the list below). Finally, the project will investigate the application to one or both of the following:
- the safety of autonomous systems, such as robustness of perception and decision in drone vision with imprecise probabilistic information on trajectories;
- fairness analysis and, more generally, the explainability of network behavior [3].
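To fix ideas on the Bayesian case mentioned above, here is a toy Monte Carlo sketch; it is an assumption for illustration, not the project's method, since sampling gives only statistical estimates where the project targets guaranteed bounds. The network architecture and weight distributions are hypothetical.

    # Sketch: Monte Carlo estimate of P(output <= t) for a 1-2-1 ReLU network
    # whose weights are random; a verification approach would replace the
    # sampling below with guaranteed probability bounds. Values illustrative.
    import numpy as np

    rng = np.random.default_rng(1)

    def sample_outputs(x, n_samples=10_000):
        outs = np.empty(n_samples)
        for i in range(n_samples):
            W1 = rng.normal([[1.0], [-1.0]], 0.1)  # hidden weights ~ N(mean, 0.1)
            W2 = rng.normal([[1.0, 1.0]], 0.1)     # output weights ~ N(mean, 0.1)
            h = np.maximum(W1 @ x, 0.0)            # ReLU hidden layer
            outs[i] = (W2 @ h)[0]
        return outs

    outs = sample_outputs(np.array([0.3]))
    print("estimated P(output <= 1):", (outs <= 1.0).mean())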

[1] Eric Goubault and Sylvie Putot. A zonotopic Dempster-Shafer approach to the quantitative verification of neural networks. In Formal Methods: 26th International Symposium, FM 2024, Milan, Italy, September 9–13, 2024, Proceedings, Part I, pages 324–342, Berlin, Heidelberg, 2024. Springer-Verlag.
[2] Ander Gray, Marcelo Forets, Christian Schilling, Scott Ferson, and Luis Benet. Verified propagation of imprecise probabilities in non-linear ODEs. International Journal of Approximate Reasoning, 164, 2024.
[3] Rabia Saleem, Bo Yuan, Fatih Kurugollu, Ashiq Anjum, and Lu Liu. Explaining deep neural networks: A survey on the global interpretation methods. Neurocomputing, 513:165–180, 2022.
[4] Bernhard Schmelzer. Random sets, copulas and related sets of probability measures. International Journal of Approximate Reasoning, 160:108952, 2023.

Work context

LIX (Laboratoire d'Informatique de l'École Polytechnique) is a joint research unit with two supervisory institutions: École Polytechnique, a member of the Institut Polytechnique de Paris (a cluster of universities composed of École Polytechnique, Télécom Paris, ENSTA Paris, Télécom Sud Paris, and ENSAE), and the Centre National de la Recherche Scientifique (CNRS). It also has one partner, Inria Saclay, with shared buildings and mixed teams.

LIX is organized in five poles: “Computer Mathematics”, “Data Analytics and Machine Learning”, “Efficient and Secure Communications”, “Modeling, Simulation and Learning”, and “Proofs and Algorithms”. The Ph.D. student will be part of the Cosynus team in the “Proofs and Algorithms” pole. The members of the Cosynus team work on the semantics and static analysis of software systems (sequential, concurrent, or distributed), as well as hybrid/control systems and cyber-physical systems.

The Ph.D. student will benefit from the exciting environment of LIX, specifically the Computer Science Department of École Polytechnique (DIX), in which he or she will be able to teach, and the Department of Computer Science, Data and Artificial Intelligence of Institut Polytechnique de Paris (IDIA). He or she will also interact with members of the SAIF project (Safe Artificial Intelligence through Formal Methods) of the French national research programme on artificial intelligence, PEPR IA.

The position is in a sector covered by the regulations on the protection of scientific and technical potential (PPST), and therefore requires, in accordance with the regulations, that your arrival be authorized by the competent authority of the MESR (French Ministry of Higher Education and Research).

Constraints and risks

No particular risks or constraints beyond those inherent to research work.