Mitigating Bias in LLMs (Master's Internship)

30 November 2024


Start Date: February 2025

Associated Research Themes: Machine Learning, Logic, LLMs

Location: LaBRI, Laboratoire Bordelais de Recherche en Informatique, Talence, France

The internship is fully funded by the “Trustworthy AI” chair led by Laurent Simon and supported by the Bordeaux University Foundation.

Proposed Master’s Project Description

The impressive progress made in machine learning in recent years has enabled many applications that were previously out of reach. However, the accuracy these tools achieve in prediction (and/or recommendation) does not by itself guarantee their adoption, especially in critical applications. Many barriers to their industrialization remain, because users of these tools need to be able to understand their decisions and to explain or justify them. More generally, the question of trust in the recommendations these tools compute is becoming increasingly crucial.

This observation has given rise to a new subfield of artificial intelligence, one that focuses on the guarantees offered by these systems and the trust that can be placed in them. This trust can take various forms and is closely linked to explainability, which has itself been at the forefront of the scientific scene in recent years.

In this M2 research internship, we propose to study the biases present in text generated by large language models. This problem is one of the current obstacles to deploying these models once they are no longer under human supervision. The internship will first establish the state of the art in this field and identify its limitations. The proposed approach works directly on the predictive model rather than constraining it with external techniques. For example, one could specialize an available large language model of reasonable size so that, without any explicit mention of gender, its predictions do not reproduce biases present in the training data. This also raises the question of how to evaluate the performance of a model that is designed not to reproduce what it was trained on. A first internship has already been conducted in this direction; the goal of this new internship is to continue that work and deliver a small bias-free LLM.
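To make the kind of probe this implies concrete, here is a minimal sketch, assuming the Hugging Face transformers library, that measures one simple form of gender bias: the gap between a small causal language model's next-token probabilities for "he" and "she" after prompts that mention an occupation but no gender. The model name gpt2, the prompt template, and the occupation list are illustrative placeholders, not part of the internship subject.

    # Minimal sketch of a next-token pronoun probe. "gpt2" stands in for
    # any reasonably small causal LM; prompts and occupations are illustrative.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_NAME = "gpt2"  # hypothetical placeholder for the model to specialize

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
    model.eval()

    def pronoun_gap(occupation: str) -> float:
        """Return log P(" he") - log P(" she") as the next token after a
        prompt that mentions an occupation but no gender."""
        prompt = f"The {occupation} said that"
        inputs = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            next_token_logits = model(**inputs).logits[0, -1]
        log_probs = torch.log_softmax(next_token_logits, dim=-1)
        # " he" and " she" are single tokens in the GPT-2 vocabulary; other
        # tokenizers may split them, in which case this lookup must be adapted.
        he_id = tokenizer.encode(" he")[0]
        she_id = tokenizer.encode(" she")[0]
        return (log_probs[he_id] - log_probs[she_id]).item()

    for job in ["nurse", "engineer", "teacher", "mechanic"]:
        print(f"{job:10s} gap = {pronoun_gap(job):+.3f}")  # > 0 leans male

A specialization step could then, for instance, fine-tune the model to push such gaps toward zero (e.g., on counterfactually gender-swapped text), while standard benchmarks check that general performance is preserved; this is precisely the evaluation tension described above.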

Candidate

The candidate should have strong knowledge of Machine Learning and symbolic AI. Very good programming skills are also expected.

Contact: Laurent Simon

(Last modified: 01 December 2024)