General information
Reference : UMR7225-ALEBRI0-007
Workplace : PARIS 13
Date of publication : Monday, February 22, 2021
Scientific supervisors : Olivier COLLIOT / Ninon BURGOS
Type of Contract : PhD Student contract / Thesis offer
Contract Period : 36 months
Start date of the thesis : 1 October 2021
Proportion of work : Full time
Remuneration : €2,135.00 gross monthly
Description of the thesis topic
Deep learning offers great promise for improving the diagnosis of neurodegenerative disorders from brain imaging data. Although it can achieve impressive performance, its black-box nature is an impediment to its wide adoption. There is thus strong research interest in improving the interpretability of deep learning systems. Currently, the most widely used approaches for interpreting neural networks are based on visualization, namely: i) variations on the idea of saliency maps; ii) occlusion methods that modify the input to perturb the behavior of the model. Other methods include approximation by simpler models and the use of joint training. See Xie et al. for a recent review.
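To give a concrete flavor of the first family of visualization approaches, the sketch below computes a simple gradient-based saliency map for a 3D volume classifier. It is a minimal illustration under assumed names (`model`, `volume`, `saliency_map` are not part of any existing codebase), not the method to be developed in the thesis.

```python
import torch
import torch.nn as nn

def saliency_map(model: nn.Module, volume: torch.Tensor, target_class: int) -> torch.Tensor:
    """Voxel-wise |d(class score)/d(input)| for a single 3D volume (sketch)."""
    model.eval()
    x = volume.clone().requires_grad_(True)  # expected shape: (1, 1, D, H, W)
    score = model(x)[0, target_class]        # logit of the class of interest
    score.backward()                         # gradients flow back to the input
    return x.grad.detach().abs().squeeze()   # high values = influential voxels
```

Occlusion methods follow the complementary logic: instead of differentiating the score, they mask regions of the input and record how the prediction changes.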
The aim of this project is to design interpretable deep learning models for the analysis of brain imaging data. The main target application will be computer-aided diagnosis of neurodegenerative diseases. The project may start as a Master's internship.
We first propose to develop approaches based on the idea of joint training, i.e., simultaneously learning a classification task and interpretable medical characteristics (visual rating scales of abnormalities defined by neuroradiologists, volumetric features). We will also apply visualization techniques to interpret the different branches of the network. The considered datasets will include research and clinical-routine datasets of patients with Parkinson's disease, Alzheimer's disease and other neurodegenerative diseases (atypical parkinsonian syndromes, frontotemporal dementia).
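As a rough illustration of what such joint training can look like, the sketch below pairs a shared 3D encoder with a diagnostic head and a second head that regresses interpretable characteristics. The architecture, names and loss weighting are assumptions made for illustration only, not the design to be developed in the project.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointModel(nn.Module):
    """Shared 3D encoder with two heads: diagnosis and interpretable features."""
    def __init__(self, n_classes: int, n_features: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(8, n_classes)  # diagnostic label
        self.regressor = nn.Linear(8, n_features)  # e.g. rating scales, volumes

    def forward(self, x):
        z = self.encoder(x)
        return self.classifier(z), self.regressor(z)

def joint_loss(logits, labels, pred_feats, true_feats, alpha=0.5):
    # Weighted sum of the diagnostic loss and the interpretable-feature loss;
    # the regression head forces the shared representation to encode
    # clinically meaningful quantities.
    return F.cross_entropy(logits, labels) + alpha * F.mse_loss(pred_feats, true_feats)
```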
Several research directions can then be pursued, depending on the interests of the student. The first axis consists of extending the joint training approaches to more complex tasks (segmentation, approximate delineation…). A second axis could use interpretability to study robustness and identify potential failure modes of the system. A third idea consists of combining image analysis with natural language processing for the automatic generation of medical reports; for that specific aim, we have a dataset of around 100,000 participants with MRI and medical reports. Finally, if interested, the PhD student could also conduct research on conceptual aspects of interpretability, including the definition of what an interpretable system is and its impact on medicine (see, e.g., Lipton).
Work Context
You will work within the ARAMIS lab (www.aramislab.fr) at the Paris Brain Institute. The institute is ideally located at the heart of the Pitié-Salpêtrière hospital, in central Paris.
The ARAMIS lab, which is also part of Inria (the French National Institute for Research in Computer Science and Applied Mathematics), is dedicated to the development of new computational approaches for the analysis of large neuroimaging and clinical datasets. With about 35 people, the lab has a multidisciplinary composition, bringing together researchers in machine learning and statistics, and medical doctors (neurologists, neuroradiologists).
The research project will be carried out within the framework of the Olivier Colliot Chair at the Interdisciplinary Institute of Artificial Intelligence (3IA) PRAIRIE (http://prairie-institute.fr/), one of the four 3IA institutes created as part of the French plan for artificial intelligence.
We have access to a supercomputer with 1,044 NVIDIA V100 GPUs.
Constraints and risks
No specific constraint or risk