Learning trustworthy systems, a neuro-symbolic approach (Master + PhD) (Closed Position, 2023)

Master’s Research Internship in Artificial Intelligence

This position has been closed since 2023. It is kept here for archival purposes.

Starting Feb. 2023 (a PhD grant is associated with this internship)

Disciplinary fields related to AI: Machine Learning, Reinforcement Learning, SAT, Logic, Constraint Programming, Trustworthy AI

Location: LaBRI, Laboratoire Bordelais de Recherche en Informatique, Talence, France

(This PhD will be fully funded by the chair “towards a trustworthy A.I.” led by Laurent Simon and supported by the Fondation Bordeaux Université)


Topic

The impressive progress achieved in machine learning in recent years has made it possible to envisage many applications that were previously out of reach. However, the accuracy these tools achieve in prediction (and/or recommendation) does not by itself guarantee their wide adoption in critical applications. Indeed, many barriers to their industrialization remain, for instance whenever their deployment requires the ability to understand their decisions and/or to explain and justify them. More generally, the question of trust arises: how can we trust the recommendations computed by these tools? This observation has given rise to a new sub-field of artificial intelligence, which studies the guarantees offered by these systems and the trust that can be placed in them. This trust can take different forms and is strongly linked to the problem of explainability, which has also been at the forefront of the A.I. scene in recent years.

In this M2 research internship, we propose to study the question of trust exclusively through the lens of a system's ability to admit formal proofs of specific properties. The progress observed in formal verification (driven in particular by the improvement of SMT/SAT solvers) gives hope that these tools can be used directly to ensure the trustworthiness of systems obtained by machine learning, such as deep neural networks. However, due to their architecture, the functions learned by these systems are so complex that the usual verification techniques cannot be applied directly. It is therefore necessary to study the trade-off between the computational cost of reasoning about the properties of the learned function and the predictive accuracy that function offers.
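
As a concrete illustration, here is a minimal sketch (assuming the z3-solver Python package; the network weights and the property are made up for the example) of how such a verification query can be posed to an SMT solver:

    # Minimal sketch: SMT-based verification of a tiny hand-written ReLU
    # network with the z3-solver package. Weights and property are illustrative.
    from z3 import Real, If, Solver, And, sat

    x1, x2 = Real("x1"), Real("x2")

    def relu(e):
        return If(e > 0, e, 0)

    # Hidden layer: two ReLU units with fixed, made-up weights.
    h1 = relu(x1 - x2)
    h2 = relu(x2 - x1)

    # Output: y = h1 + h2 (here, y equals |x1 - x2|).
    y = h1 + h2

    # Property: for all inputs in [0, 1]^2, y <= 1.
    # We check the NEGATION: does some input in the box violate the bound?
    s = Solver()
    s.add(And(0 <= x1, x1 <= 1, 0 <= x2, x2 <= 1))
    s.add(y > 1)

    if s.check() == sat:
        print("Property violated; counterexample:", s.model())
    else:
        print("Property holds on [0,1]^2")

Each ReLU contributes one If, i.e., one case split; for networks with thousands of units, the combinatorial explosion of these splits is precisely the barrier mentioned above.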

To this end, we propose to study a hybrid approach to machine learning that directly learns functions with structural or semantic properties making it possible, in practice, to apply automatic proof and/or logical reasoning methods. We will also study how prior knowledge can help learning converge more quickly, or guarantee that the learned functions respect properties known in advance.
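
As a toy illustration of the second point (a minimal sketch, using only NumPy and synthetic data), suppose prior knowledge tells us the prediction must be monotone non-decreasing in every feature; projecting the weights of a logistic model onto the non-negative orthant after each gradient step guarantees this property by construction, whatever the data:

    # Minimal sketch: injecting prior knowledge (monotonicity) into learning
    # via projected gradient descent. Data and hyperparameters are made up.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(200, 3))
    y = (X @ np.array([1.0, 2.0, 0.5]) + 0.1 * rng.normal(size=200) > 1.75).astype(float)

    w, b, lr = np.zeros(3), 0.0, 0.5
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # logistic predictions
        w -= lr * (X.T @ (p - y)) / len(y)       # gradient step on weights
        b -= lr * float(np.mean(p - y))          # gradient step on bias
        w = np.maximum(w, 0.0)                   # projection: weights stay >= 0

    # Non-negative weights make the model monotone in each feature, so the
    # prior holds by construction rather than being checked after the fact.
    print("learned weights (all >= 0):", w)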

The goal of the internship is to identify target languages that are well suited to the search for a good performance/trust trade-off. Our ambition is to propose learning methods that take the targeted guarantees into account from the very beginning of the learning process, thus allowing AI models to be more or less efficient or explainable depending on the context. For example, we will study how Knowledge Compilation (Darwiche, Marquis, 2001) allows us to characterize the trust we can place in a complex decision system through a series of questions about the reasons for its decisions, in relation to the computational cost of answering these questions. The capabilities of the different systems will be studied theoretically, yielding a map of the possible guarantees offered by different learning systems. The originality of the approach lies in a progressive, more or less strong hybridization, in order to obtain decision systems that remain realistic in the context of actual deployment.
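
To give a flavor of the Knowledge Compilation idea, here is a self-contained sketch (the decision function is illustrative; real compilers target languages such as OBDD or d-DNNF): we pay a possibly expensive compilation cost once, after which a query such as model counting runs in time linear in the size of the compiled structure, whereas the same query is #P-hard on an arbitrary CNF formula.

    # Minimal sketch of knowledge compilation: Shannon-expand a Boolean
    # function into a reduced decision-diagram-like structure, then answer
    # a model-counting query in time linear in the compiled size.
    VARS = ["x1", "x2", "x3"]

    def f(x1, x2, x3):
        # Decision function to compile (illustrative): (x1 AND x2) OR x3.
        return (x1 and x2) or x3

    def compile_dd(i=0, assign=()):
        """Build nested (var, low, high) nodes with True/False leaves,
        merging identical children (the classic reduction rule)."""
        if i == len(VARS):
            return f(*assign)
        low = compile_dd(i + 1, assign + (False,))
        high = compile_dd(i + 1, assign + (True,))
        return low if low == high else (VARS[i], low, high)

    def count_models(node, level=0):
        """Count satisfying assignments in one traversal of the diagram."""
        if node is True:
            return 2 ** (len(VARS) - level)
        if node is False:
            return 0
        var, low, high = node
        v = VARS.index(var)
        free = 2 ** (v - level)  # variables skipped by reduction are unconstrained
        return free * (count_models(low, v + 1) + count_models(high, v + 1))

    dd = compile_dd()
    print("satisfying assignments:", count_models(dd), "out of", 2 ** len(VARS))  # 5 out of 8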

The proposed work is thus positioned at the heart of so-called “hybrid” AI (mixing statistical learning and symbolic reasoning), a rapidly expanding field with a growing number of dedicated workshops and conferences. Beyond machine learning (neural networks, random forests, …), the research topics related to this subject include binary decision diagrams, knowledge base compilation (decomposable, deterministic representations), and causal model learning.

Continuation of the internship

Given the richness of the subject, the internship is associated with a 3-year thesis grant, funded by the Trustworthy AI Chair and the RobSYS project.

Candidate

The candidate should have an excellent scientific background and a good knowledge of logic-based reasoning methods. A very good level of programming is also expected.

This internship has been closed since 2023.

Contact: Laurent Simon

The PDF file describing this internship is available here