Mitigating Bias in LLMs (Master's Internship)

30 November 2024


Start Date: February 2025

Associated Research Themes: Machine Learning, Logic, LLMs

Location: LaBRI, Laboratoire Bordelais de Recherche en Informatique, Talence, France

The internship is fully funded by the “Trustworthy AI” chair led by Laurent Simon and supported by the Bordeaux University Foundation.

Proposed Master’s Project Description

The impressive progress made in machine learning in recent years has enabled many applications that were previously out of reach. However, predictive (and/or recommendation) accuracy alone does not guarantee the adoption of these tools, especially in critical applications. Many barriers to their industrialization remain, since these tools must allow their decisions to be understood, explained, and justified. More generally, the question of trust in the recommendations computed by these tools is becoming increasingly crucial.

This observation has given rise to a new subfield of artificial intelligence focused on the guarantees offered by these systems and the trust that can be placed in them. This trust can take various forms and is strongly linked to explainability, which has also been at the forefront of the scientific scene in recent years.

In this M2 research internship, we propose to study the biases present in text generated by large language models. This problem is one of the current obstacles to deploying these models once they are no longer under human supervision. The internship will first establish the current state of the art in this field and identify its limitations. The proposed approach works directly on the predictive model rather than constraining it with external techniques. For example, one could specialize an available large language model of reasonable size so that, without any explicit mention of gender, its predictions do not reproduce the biases present in its training data. This in turn raises the question of how to evaluate the performance of a model designed not to reproduce what it was trained on. A first internship has already been conducted in this direction; the goal of this new internship is to continue that work and deliver a small bias-free LLM.
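To make the evaluation question above concrete, here is a minimal sketch of one common way to probe gender bias in a small causal language model: comparing the probabilities the model assigns to gendered pronouns after occupation prompts that make no explicit mention of gender. The model name (`gpt2`), the prompt templates, and the pronoun pair are illustrative assumptions, not part of the internship proposal itself.

```python
# Minimal sketch: probing occupational gender bias in a small causal LM.
# The model, templates, and pronouns below are illustrative choices only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for "a large language model of reasonable size"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def next_token_prob(prompt: str, continuation: str) -> float:
    """Probability the model assigns to `continuation` as the next token."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]   # logits at the last position
    probs = torch.softmax(logits, dim=-1)
    token_id = tokenizer.encode(continuation)[0]  # first sub-token only
    return probs[token_id].item()

# Occupation prompts with no explicit mention of gender:
templates = ["The doctor said that", "The nurse said that", "The engineer said that"]
for t in templates:
    p_he = next_token_prob(t, " he")
    p_she = next_token_prob(t, " she")
    print(f"{t!r}: P(he)={p_he:.4f}  P(she)={p_she:.4f}  ratio={p_he / p_she:.2f}")
```

On such templates, a probability ratio far from 1 is one simple signal that the model reproduces occupational gender stereotypes; established benchmarks such as WinoBias or StereoSet build on the same idea at scale.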

Candidate

The candidate should have strong knowledge of Machine Learning and symbolic AI. Very good programming skills are also expected.

Contact: Laurent Simon

(Last modified: 22 February 2025)