
Post-doc (M/F): Explanations of AI Systems via Causal Abstraction


Application deadline: Friday 12 December 2025, 23:59:00 Paris time

Make sure your candidate profile is filled in correctly before applying

General information

Offer title: Post-doc (M/F): Explanations of AI Systems via Causal Abstraction
Reference: UMR5217-MAXPEY-003
Number of positions: 1
Workplace: ST MARTIN D HERES
Publication date: Friday 21 November 2025
Contract type: Fixed-term researcher contract (CDD)
Contract duration: 24 months
Expected start date: 1 February 2026
Working time: Full-time
Remuneration: from €3,041 gross per month, depending on experience and according to the CNRS salary scale
Desired level of education: PhD
Desired experience: No preference
CNRS Section(s): 07 - Information sciences: processing, integrated hardware-software systems, robots, control, images, content, interaction, signals and languages

Missions

The post-doctoral researcher will contribute to the causal abstraction research direction, which aims to build rigorous benchmarks for evaluating AI interpretability within the framework of causal abstraction and to develop new interpretability methods. Their mission is to advance the theoretical foundations of the project, develop evaluation metrics, help establish a robust evaluation pipeline for interpretability methods, and design new interpretability algorithms.
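In brief, causal abstraction asks whether a high-level causal model faithfully summarizes a low-level model (e.g., a neural network) by checking that interchange interventions agree across an alignment between the two. The sketch below illustrates this evaluation primitive on a hand-built toy network in PyTorch; the model, variable names, and alignment are illustrative assumptions, not the project's actual code or benchmark.

# Minimal sketch of an interchange-intervention test for causal abstraction.
# The toy model and all names are illustrative assumptions, not project code.
import torch
import torch.nn as nn

class ToyNet(nn.Module):
    """Low-level model: computes o = (x + y) + z through a hidden layer
    whose first unit is, by construction, aligned with the high-level
    variable S = X + Y."""
    def __init__(self):
        super().__init__()
        self.w1 = nn.Linear(3, 2, bias=False)   # hidden h = [x + y, z]
        self.w2 = nn.Linear(2, 1, bias=False)   # output o = h[0] + h[1]
        with torch.no_grad():
            self.w1.weight.copy_(torch.tensor([[1., 1., 0.],
                                               [0., 0., 1.]]))
            self.w2.weight.copy_(torch.tensor([[1., 1.]]))

    def forward(self, x, patch_h0=None):
        h = self.w1(x)
        if patch_h0 is not None:                 # interchange intervention on h[0]
            h = torch.cat([patch_h0, h[:, 1:]], dim=1)
        return self.w2(h)

def high_level(x, s_override=None):
    """High-level causal model: S = X + Y, O = S + Z."""
    s = x[:, 0] + x[:, 1] if s_override is None else s_override
    return (s + x[:, 2]).unsqueeze(1)

net = ToyNet()
base = torch.tensor([[1., 2., 3.]])
source = torch.tensor([[10., 20., 30.]])

# Read the source's value of the aligned hidden unit, then patch it into
# the base run of the low-level model.
h_source = net.w1(source)[:, :1]
low = net(base, patch_h0=h_source)

# High-level counterpart: intervene S := X' + Y' taken from the source.
s_src = source[:, 0] + source[:, 1]
high = high_level(base, s_override=s_src)

assert torch.allclose(low, high)  # alignment holds under intervention (both 33.0)

In practice the low-level model would be a trained network, the aligned units would be found by search rather than fixed by construction, and a benchmark would score how often this agreement holds across many sampled base/source pairs.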

Activities

The post-doc will carry out theoretical work on causal abstraction and causal alignment, implement algorithms and experimental pipelines in Python/PyTorch, and run experiments on GPU clusters. They will collaborate closely with the PI and the PhD students of the team, interact with international partners, and participate in the supervision and coordination of Master's interns involved in the project. They will also regularly write up research results, contribute to conference submissions, and take part in project meetings.

Skills

The position requires a PhD in machine learning, NLP, causality, or a related discipline, with a strong command of deep learning and an interest in interpretability. Excellent programming skills in Python, familiarity with modern neural architectures, and the ability to conduct independent research are expected. Experience in causal modeling, representation learning, or mechanistic interpretability is a plus. The successful candidate should also demonstrate good scientific communication skills and the ability to collaborate within a research team.

Work context

The post-doc will join CNRS in the GetAlp team at the Laboratoire d'Informatique de Grenoble (LIG). GetAlp conducts research in NLP, machine learning, evaluation, and interpretability. The project will be supervised by Maxime Peyrard (CNRS), with collaboration from PhD students and external partners. The researcher will benefit from an active local community in AI and access to GPU computing infrastructure.

The position is in a sector covered by the regulations on the protection of scientific and technical potential (PPST) and therefore requires, in accordance with those regulations, that your appointment be authorized by the competent authority of the MESR.

Constraints and risks

N/A