
PhD Position F/M: Mechanistic Interpretability and Problem-Space Adversarial Attacks for LLM-based Software Vulnerability Detection (Rennes, 35)

Job description

  • INRIA
  • Rennes - 35

  • Fixed-term contract (CDD)

  • Published 13 March 2026


About Inria

Inria is the French national research institute for digital science and technology. It employs 2,600 people. Its 215 agile project teams, generally run jointly with academic partners, involve more than 3,900 scientists in tackling the challenges of digital technology, often at the interface with other disciplines. The institute draws on a wide range of talent across more than forty professions. 900 research and innovation support staff help scientific and entrepreneurial projects emerge, grow, and make an impact on the world. Inria works with many companies and has supported the creation of more than 200 start-ups. In this way, the institute strives to meet the challenges of the digital transformation of science, society, and the economy.
The job description below is in English.
Contract type: fixed-term (CDD)

Required degree level: Master's (Bac+5) or equivalent

Role: PhD student

About the centre or functional department

The Inria Centre at Rennes University is one of Inria's eight centres and hosts more than thirty research teams. The centre is a major, recognized player in the digital sciences, at the heart of a rich R&D and innovation ecosystem: highly innovative SMEs, large industrial groups, competitiveness clusters, research and higher-education institutions, laboratories of excellence, a technological research institute, etc.

Context and assets of the position

This position is funded within the framework of the ANR PRCI project "SecLLM4SVD (Secured Large Language Models in Reliable Software Vulnerability Detection)". Principal Investigator: Dr. Yufei Han.

Assignment

Context and Motivation:

Large Language Models (LLMs) have demonstrated remarkable capabilities in automating software vulnerability detection (SVD) thanks to their ability to process both natural and programming languages. However, a critical reliability concern with state-of-the-art LLMs is their susceptibility to adversarial attacks. Subtle, problem-space modifications to source code, such as variable renaming or dead-code insertion, can mislead the model without changing the code's functionality or its underlying vulnerabilities. Furthermore, the opaque, "black-box" nature of LLMs makes it difficult to tell whether they truly grasp code semantics or merely recognize superficial statistical artifacts.
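One of the perturbations named above, consistent variable renaming, can be sketched in a few lines with Python's standard `ast` module. This is a minimal illustration, not the project's attack tooling: the snippet, the sample function, and the replacement names are hypothetical; semantics are preserved while the surface tokens a detector might key on ("buf", "tmp") disappear.

```python
import ast

class RenameVars(ast.NodeTransformer):
    """Consistently rename identifiers without changing behavior."""

    def __init__(self, mapping):
        self.mapping = mapping

    def visit_Name(self, node):  # variable uses
        node.id = self.mapping.get(node.id, node.id)
        return node

    def visit_arg(self, node):   # function parameters
        node.arg = self.mapping.get(node.arg, node.arg)
        return node

# Hypothetical vulnerable-looking snippet and an innocuous rename map.
src = "def copy_prefix(buf, n):\n    tmp = buf[:n]\n    return tmp\n"
tree = RenameVars({"buf": "validated_input", "tmp": "result"}).visit(ast.parse(src))
print(ast.unparse(tree))  # same behavior, different surface tokens
```

`ast.unparse` requires Python 3.9+; real problem-space attacks must additionally respect scoping and shadowing, which this sketch ignores.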

Collaboration:

The recruited person will collaborate with Dr. Yuejun Guo at the Luxembourg Institute of Science and Technology.

Responsibilities:
The person recruited will conduct full-time research activities centered on the theme of the thesis.

Steering/Management:
The person recruited will be supervised by Dr. Yufei Han.

Main activities

Thesis Objectives
This 36-month PhD position aims to bridge the gap between LLM transparency and adversarial robustness. The PhD candidate will spearhead research in two dedicated work packages: WP2 (Mechanistic Interpretability of LLM-based SVD) and WP3 (Problem-space Adversarial Attacks against LLM-based SVD).
Goal 1: Unveiling LLM Decision-Making
The first phase of the thesis will focus on a systematic analysis of how LLMs detect software vulnerabilities. The candidate will:

- Investigate the causal relationships encoded in LLMs' vulnerability detection mechanisms.
- Analyse how specific code properties (e.g., syntactic patterns, data flow structures) trigger vulnerability flags.
- Explore how the attention mechanisms in LLMs encode correlations between code properties and detection outputs, providing human-understandable insights into the LLM logic.
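The object such analyses inspect, a token-to-token attention matrix, can be illustrated with a toy scaled dot-product attention computation. Everything here is a stand-in: the tokens, embeddings, and projection matrices are random, whereas a real study would extract activations from the LLM under test.

```python
import numpy as np

# Toy scaled dot-product attention over a handful of code tokens.
tokens = ["strcpy", "(", "dst", ",", "src", ")"]
rng = np.random.default_rng(0)
d = 8
X = rng.normal(size=(len(tokens), d))   # stand-in token embeddings
Q = X @ rng.normal(size=(d, d))         # queries
K = X @ rng.normal(size=(d, d))         # keys
scores = Q @ K.T / np.sqrt(d)           # pairwise similarity
A = np.exp(scores - scores.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)       # row-wise softmax: rows sum to 1
# Row i shows how strongly token i attends to every other token;
# interpretability work asks whether such weights track code semantics
# (e.g., "strcpy" attending to its arguments) or spurious surface patterns.
print(A.shape)
```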

Goal 2: Assessing and Exploiting Vulnerabilities via Adversarial Attacks
Building upon the mechanistic understanding from WP2, the candidate will generate adversarially manipulated source code to systematically mislead LLM-based SVD systems. The candidate will:

- Design and propose advanced problem-space adversarial attacks that preserve code functionality and mimic real-world developer practices.
- Leverage heuristic optimization methods, such as multi-armed bandit programming and reinforcement learning, to craft these attacks.
- Develop innovative in-context learning techniques to overcome the limited input windows of LLMs, ensuring efficient and comprehensive evaluations of model robustness.
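One of the heuristic optimization methods listed above, multi-armed bandits, can be sketched with an epsilon-greedy loop that learns which transformation degrades a detector most. The transformation names and the mock reward function are placeholders invented for this sketch, not the project's actual attack components.

```python
import random

# Candidate problem-space transformations ("arms").
ARMS = ["rename_vars", "insert_dead_code", "reorder_stmts"]

def epsilon_greedy(reward_fn, rounds=300, eps=0.1, seed=0):
    """Pick the arm with the best running mean reward, exploring with prob eps."""
    rng = random.Random(seed)
    counts = {a: 0 for a in ARMS}
    values = {a: 0.0 for a in ARMS}
    for _ in range(rounds):
        if rng.random() < eps:
            arm = rng.choice(ARMS)            # explore
        else:
            arm = max(ARMS, key=lambda a: values[a])  # exploit
        r = reward_fn(arm, rng)
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]  # incremental mean
    return max(ARMS, key=lambda a: values[a])

def mock_reward(arm, rng):
    # Pretend reward = drop in detector confidence; dead-code insertion
    # is assumed most damaging purely for illustration.
    base = {"rename_vars": 0.2, "insert_dead_code": 0.6, "reorder_stmts": 0.3}
    return base[arm] + rng.gauss(0, 0.05)

print(epsilon_greedy(mock_reward))
```

In the real setting the reward would come from querying the LLM-based detector on the transformed code, which is exactly where the limited query budget makes bandit-style allocation attractive.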

Skills

Candidate Profile and Requirements

To successfully carry out the research objectives of WP2 and WP3, the ideal candidate should possess a strong foundational background in both artificial intelligence and software security. We are looking for candidates who meet the following requirements:

- Educational Background: A Master's degree or equivalent engineering degree in Computer Science, Artificial Intelligence, Cybersecurity, or a closely related discipline.
- Deep Learning Expertise: Solid knowledge and proven project experience in designing, training, and evaluating Deep Neural Network (DNN)-based classification models.
- Program Analysis Proficiency: Demonstrated understanding and practical experience in program analysis. Specifically, the candidate must be familiar with static analysis of source code using semantic representations such as Control Flow Graphs (CFG) and Data Flow Graphs (DFG).
- Programming Skills: Strong programming skills in Python and proficiency with standard deep learning frameworks (e.g., PyTorch, TensorFlow). Experience with code parsing and analysis tools (e.g., Tree-sitter, Joern) is highly desirable.
- Additional Assets: Prior exposure to Large Language Models (LLMs), Natural Language Processing (NLP), or Adversarial Machine Learning will be considered a significant plus.
- Soft Skills: Excellent analytical and problem-solving skills, an autonomous and rigorous work ethic, and good communication skills in English for scientific writing and presentation within an international consortium.

Benefits

- Subsidized meals
- Partial reimbursement of public transport costs
- Leave: 7 weeks of annual leave + 10 extra days off due to RTT (statutory reduction in working hours) + possibility of exceptional leave (sick children, moving home, etc.)
- Possibility of teleworking (after 6 months of employment) and flexible organization of working hours
- Professional equipment available (videoconferencing, loan of computer equipment, etc.)
- Social, cultural and sports events and activities
- Access to vocational training
- Social security coverage

Remuneration

Monthly gross salary: €2,300
