Start Date: February 2025
Associated Research Themes: Machine Learning, Logic, LLMs
Location: LaBRI, Laboratoire Bordelais de Recherche en Informatique, Talence, France
The internship is fully funded by the “Trustworthy AI” chair led by Laurent Simon and supported by the Bordeaux University Foundation.
Proposed Master’s Project Description
The impressive progress made in machine learning in recent years enables many applications that were previously out of reach. However, the accuracy these tools achieve in prediction (and/or recommendation) does not by itself guarantee their adoption, especially in critical applications. Many barriers to their industrialization remain, since deploying these tools requires being able to understand, explain, and justify their decisions. More generally, the question of trust in the recommendations computed by these tools is becoming increasingly crucial.
This observation has given rise to a new subfield of artificial intelligence focused on the guarantees these systems offer and the trust that can be placed in them. This trust can take various forms and is closely linked to questions of explainability, which have also received considerable attention in recent years.
In this M2 research internship, we propose to study the biases present in text generated by large language models. This problem is one of the current obstacles to deploying these models without human supervision. The internship will first establish the state of the art in this field and identify its limitations. The proposed approach works directly on the predictive model itself rather than constraining it with external techniques. For example, one could specialize an available large language model of reasonable size so that, even without any explicit mention of gender, its predictions do not reproduce the biases present in the training data. This also raises the question of how to evaluate the performance of a model designed not to reproduce what it was trained on. A first internship has already been conducted in this direction; the goal of this new internship is to continue that work and deliver a small bias-free LLM.
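To make the evaluation question concrete, here is a minimal sketch, purely illustrative and not part of the project statement, of one common way gender bias is probed in a small causal language model: comparing the log-probabilities the model assigns to gendered continuations of otherwise neutral prompts. It uses the Hugging Face transformers library; the model name "gpt2" and the prompt set are arbitrary assumptions, not choices made by the internship.

    # Illustrative bias probe (assumption: any small causal LM works the same way)
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # placeholder model, chosen only for illustration
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    def continuation_logprob(prompt: str, continuation: str) -> float:
        """Sum of log-probabilities the model assigns to `continuation` after `prompt`."""
        prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
        full_ids = tokenizer(prompt + continuation, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(full_ids).logits
        log_probs = torch.log_softmax(logits, dim=-1)
        total = 0.0
        # logits at position pos-1 predict the token at position pos,
        # so we score only the tokens belonging to the continuation
        for pos in range(prompt_ids.shape[1], full_ids.shape[1]):
            token_id = full_ids[0, pos]
            total += log_probs[0, pos - 1, token_id].item()
        return total

    # A biased model systematically favours one pronoun for stereotyped occupations.
    for occupation in ["nurse", "engineer", "teacher"]:
        prompt = f"The {occupation} said that"
        gap = continuation_logprob(prompt, " he") - continuation_logprob(prompt, " she")
        print(f"{occupation:>10}: log P(he) - log P(she) = {gap:+.3f}")

A debiased model would bring this gap close to zero across occupations; aggregating such gaps over a benchmark prompt set is one possible starting point for the evaluation question raised above.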
Candidate
The candidate should have strong knowledge of Machine Learning and symbolic AI. Very good programming skills are also expected.
Contact: Laurent Simon