The position's missions


About Inria

Inria is the French national research institute for digital science and technology. It employs 2,600 people. Its 215 agile project teams, generally run jointly with academic partners, involve more than 3,900 scientists in tackling the challenges of digital technology, often at the interface with other disciplines. The institute draws on a wide range of talents across more than forty professions. 900 research and innovation support staff help scientific and entrepreneurial projects with worldwide impact to emerge and grow. Inria works with many companies and has supported the creation of more than 200 start-ups. In this way, the institute strives to meet the challenges of the digital transformation of science, society, and the economy.
PhD Position F/M Dynamic Approximate Computing for Energy-Efficient AI Hardware Accelerators
The job description below is in English.
Contract type: Fixed-term contract (CDD)

Required degree level: Master's degree or equivalent (Bac +5)

Role: PhD student

About the centre or functional department

The Inria center at the University of Rennes is one of eight Inria centers and has more than thirty research teams. The Inria center is a major and recognized player in the field of digital sciences. It is at the heart of a rich ecosystem of R&D and innovation, including highly innovative SMEs, large industrial groups, competitiveness clusters, research and higher education institutions, centers of excellence, and technological research institutes.

Context and assets of the position

Disclaimer

A PhD is not a continuation of coursework or a natural next step after a Master's degree. A PhD is a long-term, research-focused commitment that requires deep curiosity, self-motivation, resilience, and a certain degree of autonomy.

By research, we mean creating new knowledge, not just applying existing theories. Your task is to discover, design, or prove something that no one has done before: work that will become what future students study.

If you are mainly looking for structured classes, predefined assignments, or a repeat of your Master's experience, you will likely find this path unfulfilling. We welcome applications from candidates who are excited by uncertainty, driven to ask original questions, and eager to shape the frontier of their field.

Context and Background
As artificial intelligence expands across edge devices, data centers, and embedded systems, the demand for computational power has grown dramatically. Deep neural networks require billions of arithmetic operations and move vast amounts of data, which drives up both energy consumption and hardware complexity. Specialized accelerators like GPUs and TPUs have delivered significant improvements over general-purpose processors, yet the increasing scale of AI models continues to push power and thermal limits. This becomes particularly challenging for edge and battery-powered devices, where energy efficiency determines not just performance but practical viability and long-term sustainability.

Approximate computing offers a compelling approach to these energy constraints by trading some computational precision for improvements in power efficiency, performance, and chip area. The fundamental insight is that many AI applications, such as image recognition, speech processing, and recommendation systems, can tolerate a degree of error. Neural networks are particularly resilient, maintaining acceptable accuracy even when operations use reduced precision, simplified circuits, or probabilistic components. Leveraging this tolerance allows designers to reduce switching activity, memory bandwidth, and overall hardware complexity.

For AI hardware accelerators, approximate computing can be implemented across multiple design layers: algorithmic techniques like model pruning and quantization, architectural approaches such as approximate memory hierarchies, and circuit-level methods including approximate multipliers and adders. Together, these strategies can substantially reduce both dynamic and static power while keeping application performance within acceptable error margins. As AI deployment scales from resource-constrained edge devices to large data centers, approximate computing has become an essential research direction for achieving sustainable and energy-efficient acceleration.
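As a rough software illustration of the circuit-level idea (not part of the project description), the sketch below models a hypothetical truncation-based approximate multiplier, a common approximate-computing pattern in which low-order operand bits are discarded to reduce switching activity, and measures the relative error it introduces into a dot product, the core operation of neural-network inference. The function name and the `drop_bits` parameter are illustrative assumptions.

```python
import random

def approx_multiply(a: int, b: int, drop_bits: int = 3) -> int:
    """Software model of a truncation-based approximate multiplier:
    the low-order bits of each operand are zeroed before multiplying,
    which in hardware shrinks the partial-product array."""
    mask = ~((1 << drop_bits) - 1)
    return (a & mask) * (b & mask)

# A 64-element dot product with 8-bit operands, as in quantized inference.
random.seed(0)
weights = [random.randint(0, 255) for _ in range(64)]
acts = [random.randint(0, 255) for _ in range(64)]

exact = sum(w * x for w, x in zip(weights, acts))
approx = sum(approx_multiply(w, x) for w, x in zip(weights, acts))
rel_err = abs(exact - approx) / exact
print(f"exact={exact} approx={approx} relative error={rel_err:.3%}")
```

Because truncation only ever shrinks non-negative operands, the approximate result systematically underestimates the exact one; the application-level question, central to this thesis topic, is whether such errors stay within the network's accuracy tolerance.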
Assigned mission

The primary objective of this thesis is to investigate and advance the design of energy-efficient AI accelerators by dynamically applying approximate computing techniques, and to develop hardware-software co-design methodologies.

The research will build upon recent advancements in efficient domain-specific architectures for AI. The goal is to develop novel approaches that balance performance, energy efficiency, and accuracy, while addressing the unique challenges of implementing approximate computing in real-world AI systems.

Main activities

This research explores the principles and practical implications of approximate computing as a pathway toward more energy-efficient AI hardware accelerators. It examines how different forms of approximation affect computational efficiency, prediction accuracy, and overall system-level performance. Rather than treating these techniques in isolation, the study considers their combined impact across the computing stack, with particular attention to how accuracy-efficiency trade-offs can be characterized and controlled.

A central theme of the work is the integration of hardware and software perspectives through a co-design approach. By closely aligning algorithmic characteristics with architectural features, the research aims to uncover strategies for embedding approximation mechanisms directly into accelerator designs. Emphasis is placed on adaptive and context-aware approximation techniques that can dynamically balance energy savings and output quality, ensuring that efficiency gains do not compromise application-level requirements.
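The adaptive, context-aware idea above can be sketched, purely as a software analogy, as a precision selector that drops to the lowest quantization bit-width still meeting an output-quality budget. The helper names (`quantize`, `pick_bitwidth`), the candidate bit-widths, and the error budget are all hypothetical choices for illustration, not a method prescribed by the project.

```python
def quantize(xs, bits):
    """Uniform symmetric quantization of a list of floats to `bits` bits."""
    levels = (1 << (bits - 1)) - 1
    scale = (max(abs(x) for x in xs) or 1.0) / levels
    return [round(x / scale) * scale for x in xs]

def pick_bitwidth(xs, err_budget, candidates=(8, 6, 4, 2)):
    """Adaptive precision selection: scan bit-widths from high to low and
    keep the lowest one whose mean absolute quantization error still
    fits the quality budget (a stand-in for an accuracy requirement)."""
    chosen = candidates[0]
    for bits in candidates:
        q = quantize(xs, bits)
        mae = sum(abs(a - b) for a, b in zip(xs, q)) / len(xs)
        if mae <= err_budget:
            chosen = bits  # lower precision is still acceptable
    return chosen

# Example: pick a bit-width for one tensor of activations.
acts = [0.9, -0.3, 0.05, 0.47, -0.81, 0.12]
bw = pick_bitwidth(acts, err_budget=0.02)
print("selected bit-width:", bw)
```

In an actual accelerator, an analogous controller would steer hardware knobs (operand bit-widths, approximate functional units) at run time, using an application-level quality signal rather than a fixed per-tensor error metric.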

To ground these ideas in practice, the research involves modeling, simulation, and experimental prototyping using representative AI workloads, including deep learning inference and computer vision applications. Through systematic evaluation and validation, the study aims to assess the feasibility, robustness, and scalability of proposed approaches, contributing insights into the design of next-generation energy-efficient AI systems.

Skills

Required Skills

We seek highly motivated and passionate candidates. Autonomy is a highly appreciated quality.

Candidates should possess the following qualifications:

- Strong HW design skills: VHDL/Verilog, HW synthesis flow (design, simulation, synthesis, and deployment through commercial tools for FPGA or ASIC)
- Strong foundation in computer architecture and systems design. Knowledge of hardware architectures for neural network accelerators is a plus.
- Strong SW Programming/Scripting: C/C++, Python, Linux scripting
- Familiarity or Experience with machine learning fundamentals and Deep Neural Network development frameworks, e.g., PyTorch/TensorFlow
- Experience with approximate computing techniques (e.g., functional approximation, mixed-precision arithmetic, pruning) is a significant plus.

- Excellent analytical and problem-solving abilities, with an interest in optimizing for energy efficiency.
- Strong communication skills to articulate research findings clearly and effectively.
- Languages: proficiency in written English and fluency in spoken English are required.
- Interpersonal skills: the candidate will work in a research team with regular meetings, and must be able to present the progress of their work in a clear and detailed manner.
- Other valued qualities are open-mindedness, strong integration skills, and team spirit.

Candidates must have a Master's degree (or equivalent) in Computer Engineering or related areas relevant to the PhD topic.

Talented final-year Master's students may start as 6-month interns and continue as PhD researchers after graduation.

Benefits

- Subsidized meals
- Partial reimbursement of public transport costs
- Leave: 7 weeks of annual leave + 10 extra days off due to RTT (statutory reduction in working hours) + possibility of exceptional leave (sick children, moving home, etc.)
- Possibility of teleworking (after 6 months of employment) and flexible organization of working hours
- Professional equipment available (videoconferencing, loan of computer equipment, etc.)
- Social, cultural and sports events and activities

Remuneration

Monthly gross salary: €2,300

