Resilient Haptic Shared Autonomy for Collaborative Mobile Manipulation in Degraded Environments (M/F)
- FTC PhD student / Offer for thesis
- 36 months
- BAC+5
Offer at a glance
The Unit
Institut de recherche en informatique et systèmes aléatoires
Contract Type
FTC PhD student / Offer for thesis
Working Hours
Full Time
Workplace
35042 RENNES
Contract Duration
36 months
Date of Hire
01/10/2026
Remuneration
2300 € gross monthly
Application Deadline: 02 April 2026, 23:59
Job Description
Thesis Subject
Develop a resilient shared-control architecture for mobile manipulators in hazardous or degraded environments. Integrate AI-driven adaptive role allocation with intuitive wearable sensory feedback to adjust autonomy based on real-time environmental uncertainty and operator cognitive state. Ensure mission success and safety in collaborative tasks while minimizing the operator's cognitive and physical load.
General Context and Operational Problem. In high-intensity operations or disaster response, operators must transport heavy loads (equipment, supplies, casualties) through unstructured and hazardous terrain. While autonomous logistic robots ("assistants") offer a solution, they currently lack the resilience to operate reliably in degraded environments (e.g., GNSS-denied zones, visual obscuration by smoke/dust, unstable ground). Conversely, pure teleoperation or manual guidance requires high cognitive attention, leaving the operator vulnerable and unable to maintain situational awareness. The critical scientific barrier is the rigidity of current human-robot collaboration. Existing systems use static role definitions, which fail when the robot loses perception confidence or the human becomes cognitively saturated. To be operationally viable, the robotic partner must possess adaptive decision-making capabilities, shifting seamlessly between autonomy and full human control based on the context.
State of the Art. Current shared-control methods typically rely on fixed impedance parameters or visual feedback interfaces. In highly demanding scenarios, however, the visual channel is often overloaded, leaving "silent" communication channels underutilized. Furthermore, most autonomous navigation stacks are brittle: they simply stop when uncertain rather than asking for specific, low-bandwidth help. Most existing Human-Robot Interaction (HRI) frameworks rely on static role allocation, which is dangerous and inefficient in dynamic environments: if perception degrades, the robot may execute unsafe maneuvers; conversely, if the operator is under cognitive stress, their inputs may become suboptimal and erratic. Current systems lack the adaptive decision-making autonomy required to handle these uncertainties, functioning as passive tools rather than intelligent partners capable of real-time planning under uncertainty. This project advances the state of the art by proposing a bi-directional, AI-based adaptive framework in which robot and human continuously negotiate authority through physical interaction and haptic cues.
Proposed Approach: AI-Driven Resilient Haptic Shared Autonomy. This PhD proposes a shift from "supervision" to "symbiotic partnership" driven by embedded AI. We aim to develop an adaptive haptic shared-control framework that ensures operational resilience, establishing a new paradigm of a robot-empowered operational unit. The research is structured around three scientific axes:
1. AI-Based Dynamic Authority Allocation. We will move beyond fixed impedance control to adjustable autonomy driven by learning-based estimators. The system will function as a real-time negotiation between two agents. Using sensor fusion (vision, proprioception, tactile), the robot will compute a real-time "confidence of autonomy" score. We will investigate Bayesian Neural Networks or Gaussian Processes to quantify perceptual uncertainty in degraded environments. Using Machine Learning (ML) techniques, the system will analyze interaction forces and motion dynamics to infer the operator's intent and physical state (fatigue, stress). An AI-driven policy will dynamically shift authority. If the robot is "confident" and the human is overloaded, the AI increases stiffness to enforce safety. If the robot is uncertain of the current state, it smoothly yields control, requesting human guidance.
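The negotiation loop described above can be sketched as a simple confidence-weighted blend of robot and human commands. This is an illustrative placeholder only: `robot_confidence` and `operator_load` stand in for the learning-based estimators (Bayesian Neural Networks, Gaussian Processes) the thesis actually proposes, and the blending law is a hand-tuned heuristic, not the project's policy.

```python
import numpy as np

def blend_authority(robot_cmd, human_cmd, robot_confidence, operator_load):
    """Blend robot and human velocity commands by a scalar authority weight.

    robot_confidence, operator_load: floats in [0, 1]. Both are illustrative
    stand-ins for the learned estimators described in the thesis proposal.
    """
    # Authority rises when the robot is confident or the human is overloaded,
    # and falls when perception degrades: a minimal hand-tuned heuristic.
    alpha = np.clip(0.5 * robot_confidence + 0.5 * operator_load, 0.0, 1.0)
    return alpha * np.asarray(robot_cmd) + (1.0 - alpha) * np.asarray(human_cmd)

# Confident robot, overloaded human -> the blended command follows the robot.
cmd = blend_authority([1.0, 0.0], [0.0, 1.0],
                      robot_confidence=0.9, operator_load=0.9)
```

In the full system, `alpha` would drive the rendered stiffness rather than a direct command blend, so yielding and taking authority both remain smooth.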
2. Haptic Language for Cognitive Unloading. To address the priority of optimizing mental workload, we will develop a "haptic vocabulary" for non-visual communication. The interface will render distinct physical cues (stiffness changes, rhythmic pulses) to convey the AI's decision state: for example, stiffening implies uncertainty or an obstacle, while active pulling indicates a high-confidence path to follow. This enables natural exchanges and eyes-up operation, where the operator understands the AI's intent through touch even when the visual channel is unavailable.
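Such a vocabulary can be pictured as a fixed mapping from AI decision states to physical cue parameters. The state names and all numeric values below are hypothetical, chosen only to show the idea of one distinct cue per state:

```python
from dataclasses import dataclass
from enum import Enum, auto

class AIState(Enum):
    CONFIDENT_PATH = auto()   # high-confidence plan: actively pull the operator
    UNCERTAIN = auto()        # degraded perception: request human guidance
    OBSTACLE = auto()         # hazard detected: stiffen and pulse strongly

@dataclass(frozen=True)
class HapticCue:
    stiffness: float   # rendered handle stiffness, N/m (placeholder values)
    pulse_hz: float    # rhythmic pulse frequency; 0 means no pulse
    pull_gain: float   # gain of active pulling toward the planned path

# A minimal "haptic vocabulary": each decision state maps to one
# distinct, non-visual cue the operator can read through touch alone.
HAPTIC_VOCABULARY = {
    AIState.CONFIDENT_PATH: HapticCue(stiffness=200.0, pulse_hz=0.0, pull_gain=1.0),
    AIState.UNCERTAIN:      HapticCue(stiffness=800.0, pulse_hz=2.0, pull_gain=0.0),
    AIState.OBSTACLE:       HapticCue(stiffness=1200.0, pulse_hz=5.0, pull_gain=0.0),
}

cue = HAPTIC_VOCABULARY[AIState.UNCERTAIN]  # stiffening signals uncertainty
```

Keeping the cues few and physically distinct is what makes the vocabulary readable without visual attention.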
3. Physical Resilience and Stability. The control laws must ensure resilient navigation and intuitive interaction. We will develop whole-body control strategies that mechanically absorb distracting cues (terrain disturbances, wind) without transmitting them to the operator. This physical filtering reduces physical fatigue and ensures mission endurance.
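As a minimal sketch of this physical filtering, a first-order low-pass filter on the forces fed back to the operator attenuates high-frequency terrain or wind disturbances while passing low-frequency, task-relevant forces. This is a stand-in for the whole-body control strategies the thesis targets, not their implementation; cutoff and rate below are arbitrary example values.

```python
import math

class DisturbanceFilter:
    """First-order low-pass filter on forces transmitted to the operator.

    High-frequency disturbances (terrain impacts, wind gusts) are absorbed;
    slow, task-relevant interaction forces pass through largely unchanged.
    """
    def __init__(self, cutoff_hz, dt):
        rc = 1.0 / (2.0 * math.pi * cutoff_hz)
        self.alpha = dt / (rc + dt)   # smoothing factor in (0, 1)
        self.state = 0.0

    def step(self, measured_force):
        # Exponential moving average of the measured force.
        self.state += self.alpha * (measured_force - self.state)
        return self.state

# Example: 2 Hz cutoff at a 1 kHz control rate. A steady push converges to
# the true force, while a 50 Hz vibration would be strongly attenuated.
f = DisturbanceFilter(cutoff_hz=2.0, dt=0.001)
```

In the real system this role would be played by the whole-body controller itself, which can absorb disturbances mechanically rather than only in the feedback signal.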
Operational Impact and Practical Relevance. This project directly targets the capability gap of deploying autonomous systems in hazardous environments (GNSS-denied zones, smoke, rubble), where current "black-box" autonomy fails. By validating a resilient haptic shared control, this research transforms the robotic agent from a passive object into a proactive teammate.
- Primary Fallouts: Ensures operational continuity in high-intensity scenarios (logistics, engineering) by smoothly blending human insight when robot perception degrades. Preserves the operator's situational awareness ("eyes-up" operation) and builds the trust required for casualty extraction and close-proximity collaboration.
- Industrial and Emergency Fallouts: Transfers directly to search and rescue in disaster zones and industrial logistics in hazardous areas where visibility is poor and teleoperation is cognitively demanding.
- Integration Time Scale: By the end of the thesis (Year 3), we will deliver a validated lab prototype. We estimate a 3 to 5-year timeline post-thesis to achieve a field-deployable system in collaboration with industrial partners.
Alignment with Strategic Priorities. This proposal is inherently transversal, addressing critical priorities across two primary domains, supported by a third methodological domain. It addresses autonomous decision-making and resilient navigation. It optimizes mental workload and masks complexity, reinforcing confidence and partnership with technological artifacts. AI serves as the core enabling technology to achieve these goals, utilizing learning-based estimators and Reinforcement Learning.
Your Work Environment
IRISA/CNRS - RAINBOW team
Compensation and benefits
Compensation
2300 € gross monthly
Annual leave and RTT
44 days
Remote Working practice and compensation
Remote working is practiced and compensated
Transport
75% of transport costs covered, plus a sustainable mobility allowance of up to €300
About the offer
| Offer reference | UMR6074-CLAPAC-013 |
|---|---|
| CN Section(s) / Research Area | Mathematics and mathematical interactions |
About the CNRS
The CNRS is a major player in fundamental research on a global scale. The CNRS is the only French organization active in all scientific fields. Its unique position as a multi-specialist allows it to bring together different disciplines to address the most important challenges of the contemporary world, in connection with the actors of change.