Learning trustworthy systems, a neuro-symbolic approach (Master + PhD)

Most recent news


Glucose is (finally) on GitHub

(17 May 2023)

Glucose is finally (also) distributed on GitHub. The idea is simply to allow official forks of Glucose versions. We used to distribute it only via the archives and the official...


Interview in the Regional Newspaper "Sud-Ouest" about Trustworthy AI

(03 May 2023)

A.I. has been the subject of many, many press articles in recent months. The exposure of tools such as ChatGPT-4 to the general public further accelerates the need for...


Invitation to the Simons Institute (Berkeley)

(17 April 2023)

As part of the Simons Institute’s SAT Program (SAT Reunion Semester), I had the pleasure of being invited for a long stay at this famous research institute devoted to theoretical computer...


Launch of the DIHNAMIC European Project

(20 March 2023)

The Dihnamic project has just been officially launched. It is a large-scale European and regional project in which I act as a member of the advisory board. The project...


Dataquitaine 2023

(02 March 2023)

The sixth “Dataquitaine” day (a regional scientific event organized in Nouvelle-Aquitaine) was held on Thursday 2 March 2023 at Kedge Business School, organized by Digital Aquitaine and especially the Domex...

Master’s Research Internship in Artificial Intelligence

Starting February 2023 (a PhD grant is associated with this internship)

Disciplinary fields related to AI: Machine Learning, Reinforcement Learning, SAT, Logic, Constraint Programming, Trustworthy AI

Location: LaBRI, Laboratoire Bordelais de Recherche en Informatique, Talence, France

(This PhD will be fully funded by the chair “towards a trustworthy A.I.” led by Laurent Simon and supported by the Fondation Bordeaux Université)


Topic

The impressive progress in machine learning in recent years has made it possible to envisage many applications that were previously out of reach. However, the accuracy achieved by these tools in prediction (and/or recommendation) does not by itself guarantee their wide adoption in critical applications. Many barriers to their industrialization remain, for instance when their deployment requires the ability to understand their decisions and/or to explain and justify them. More generally, the question of trust arises: how can we trust the recommendations computed by these tools? This observation has given rise to a new sub-field of artificial intelligence that studies the guarantees offered by these systems and the trust that can be placed in them. This trust can take on different aspects and has strong links with the problems of explainability, which have also been at the forefront of the A.I. scene for a few years.

In this M2 research internship, we propose to study the question of trust exclusively through the lens of the system’s ability to admit formal proofs with respect to specific questions. The progress observed in formal verification (thanks in particular to the improvement of SMT/SAT solvers) gives us hope that these tools can be used directly to ensure the trustworthiness of systems obtained by machine learning, such as deep neural networks. However, due to their architecture, the functions learned by these systems are so complex that it remains impossible to apply the usual verification techniques directly. It is therefore necessary to study the trade-off between the computational cost of reasoning about the properties of the learned function and the precision offered by that function.
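
To give a concrete flavour of the kind of query involved, here is a minimal Python sketch using the Z3 SMT solver: it asks whether a tiny, hand-written ReLU network can ever produce a negative output on a bounded input domain. The network, its weights and the property are purely illustrative and are not taken from the project.

```python
# Purely illustrative sketch: checking a property of a tiny ReLU network
# with the Z3 SMT solver (pip install z3-solver).  The weights and the
# property below are hypothetical.
from z3 import Real, If, Solver, sat

def relu(e):
    return If(e >= 0, e, 0)

x1, x2 = Real('x1'), Real('x2')

# A hand-written 2-2-1 network: h = ReLU(W1 x + b1), y = w2 . h
h1 = relu(1.0 * x1 - 1.0 * x2 + 0.5)
h2 = relu(0.5 * x1 + 2.0 * x2 - 1.0)
y = h1 + h2

# Property: on the input box [0,1]^2, the output is never negative.
# We ask the solver for a counter-example, i.e. the property's negation.
s = Solver()
s.add(0 <= x1, x1 <= 1, 0 <= x2, x2 <= 1)
s.add(y < 0)

if s.check() == sat:
    print("Property violated, counter-example:", s.model())
else:
    print("Property holds on the whole input box.")
```

On realistic networks, the case splits introduced by each ReLU make such direct encodings blow up quickly, which is exactly the cost/precision trade-off mentioned above.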

To this end, we propose to study a hybrid approach to machine learning that directly learns functions with structural or semantic properties making it possible, in practice, to apply automatic proof and/or logical reasoning methods. We will also study how prior knowledge can help the learning process converge more quickly, or guarantee that the learned functions respect properties known in advance, as sketched below.
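
One possible (and purely illustrative) way of injecting such prior knowledge is a semantic-loss-style penalty: a known rule is turned into a differentiable term added to the usual training loss, so that models violating the rule are penalized during learning. The sketch below assumes a toy model with two sigmoid outputs and the hypothetical rule “output 1 implies output 2”.

```python
# Purely illustrative sketch: adding prior knowledge to a learning objective
# as a semantic-loss-style penalty.  The rule "y1 implies y2" is hypothetical.
import numpy as np

def constraint_violation(p1, p2):
    """Probability of violating the rule y1 -> y2, i.e. P(y1=1, y2=0),
    assuming the two Bernoulli outputs are independent."""
    return p1 * (1.0 - p2)

def total_loss(p1, p2, t1, t2, lam=1.0):
    """Cross-entropy on both outputs plus the rule-violation penalty."""
    eps = 1e-9
    ce = -(t1 * np.log(p1 + eps) + (1 - t1) * np.log(1 - p1 + eps)
           + t2 * np.log(p2 + eps) + (1 - t2) * np.log(1 - p2 + eps))
    return ce.mean() + lam * constraint_violation(p1, p2).mean()

# Toy usage with hypothetical predictions (p) and targets (t).
p1, p2 = np.array([0.9, 0.2]), np.array([0.1, 0.8])
t1, t2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
print(total_loss(p1, p2, t1, t2))
```

Such a penalty only biases the model towards the rule; obtaining an actual guarantee requires either restricting the hypothesis space or verifying the learned function a posteriori, which is where the verification tools above come back into play.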

The goal of the internship is to identify target languages that are well suited to the search for a good performance/trust trade-off. Our ambition is to propose learning methods that take the targeted guarantees into account from the very beginning of the learning process, thus allowing AI models to be more or less efficient or explainable depending on the context. For example, we will study how Knowledge Compilation (Darwiche, Marquis, 2001) allows us to characterize the trust we can place in a complex decision system through a series of questions about the reasons for its decisions, in relation to the computational cost of answering these questions. A theoretical study of the capabilities of the different systems will be carried out, leading to a map of the guarantees offered by different learning systems. The originality of the approach lies in a progressive, more or less strong hybridization, in order to obtain decision systems that remain realistic in the context of an effective deployment.
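
For illustration, the sketch below shows one of the queries that Knowledge Compilation makes tractable: once a (toy) decision function has been compiled into a smooth, deterministic, decomposable circuit (d-DNNF), counting the inputs that lead to a given decision takes time linear in the size of the circuit. The function and its compiled form are hypothetical.

```python
# Purely illustrative sketch: model counting on a compiled (d-DNNF) circuit.
# Circuits are nested tuples ('and', l, r) / ('or', l, r); leaves are
# literals written as strings such as 'x' or '~x'.

def count_models(node):
    """Bottom-up model count; valid on smooth, deterministic,
    decomposable circuits, where it runs in linear time."""
    if isinstance(node, str):                  # a literal
        return 1
    op, left, right = node
    l, r = count_models(left), count_models(right)
    return l * r if op == 'and' else l + r     # and: product, or: disjoint sum

# Smooth d-DNNF encoding the toy decision function (x AND y) OR (NOT x AND z).
circuit = ('or',
           ('and', ('and', 'x', 'y'), ('or', 'z', '~z')),
           ('and', ('and', '~x', 'z'), ('or', 'y', '~y')))

print(count_models(circuit))   # 4 of the 2^3 assignments lead to the decision
```

Other queries of the same family (clausal entailment, conditioning, enumeration of implicants) come with similar polynomial-time guarantees on suitable compiled forms; these are precisely the guarantees that the map mentioned above aims to organize.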

The work we propose is thus positioned at the heart of so-called “hybrid” AI (mixing statistical learning and symbolic reasoning), a rapidly expanding field with more and more dedicated workshops and conferences. The research topics related to this subject, in addition to Machine Learning (neural networks, random forests, …), include Binary Decision Diagrams, Knowledge Compilation (decomposable, deterministic representations), and the learning of causal models.

Continuation of the internship

Given the richness of the subject, the internship is associated with a 3-year thesis grant, funded by the Trustworthy AI Chair and the RobSYS project.

Candidate

The candidate should have an excellent scientific background and a good knowledge of logic-based reasoning methods. In addition, very good programming skills are expected.

The internship can start as early as February 2023.

Contact: Laurent Simon

The PDF file describing this internship is available here