Trustworthy AI: Bounding Machine Learning Decisions by Proven Systems (Master + PhD)

Most recent news


Glucose is (finally) on GitHub

(17 May 2023)

Glucose is finally (also) distributed on GitHub. The idea is simply to allow official forks of Glucose versions. We used to distribute it only via the archives and the official...


Interview in the Regional Newspaper "Sud-Ouest" about Trustworthy AI

(03 May 2023)

A.I. has been the subject of many, many press articles in recent months. The exposure of tools such as ChatGPT-4 to the general public further accelerates the need for...


Invitation to the Simons Institute (Berkeley)

(17 April 2023)

As part of the Simons Institute’s SAT Program (SAT Reunion Semester), I had the pleasure of being invited for a long stay at this famous research center for theoretical computer...


Launch of the DIHNAMIC European Project

(20 March 2023)

The Dihnamic project has just been officially launched. It is a large-scale European and regional project in which I serve as a member of the advisory board. The project...


Dataquitaine 2023

(02 March 2023)

The sixth “Dataquitaine” day (a regional scientific event organized in Nouvelle-Aquitaine) was held on Thursday 2 March 2023 at Kedge Business School, organized by Digital Aquitaine and especially the Domex...

Master’s Research Internship in Artificial Intelligence

Starting February 2023 (a PhD grant is associated with this internship)

Disciplinary fields related to AI: Machine Learning, Reinforcement Learning, SAT, Logic, Constraint Programming, Trustworthy AI

Location: LaBRI, Laboratoire Bordelais de Recherche en Informatique, Talence, France

(This PhD will be fully funded by the chair “towards a trustworthy A.I.” led by Laurent Simon and supported by the Fondation Bordeaux Université)


Topic

In order to achieve the best possible accuracy, decision/recommendation systems built with machine learning act mainly on one lever: the computational complexity of the learned functions (e.g., the number of learned parameters). It is thus common practice to build recommendation systems in which millions of computations lead to a single decision, which makes it impossible, by construction, to inspect that decision in simple and understandable terms. Aiming solely at accuracy can be of immense practical interest, but as soon as an application requires understanding, justifying, explaining or guaranteeing certain decisions, accuracy alone is not enough: trust is needed.

How can we trust a decision whose computation is, by construction, impossible to summarize in comprehensible terms?

This observation has led to a proliferation of work in a new subfield of artificial intelligence devoted to the problem of trust in autonomous decisions (many specialized workshops are now organized alongside all the major AI conferences).

Depending on the application, trust can take different forms. In the approach we propose, trust is expressed in terms of proven guarantees on the final system. For this purpose, we will rely on the progress made in recent years by formal methods (in particular through the use of SMT/SAT solvers) to bound “black box” decision systems by simpler systems in which the targeted guarantees can be expressed in accessible languages. Although formal methods have made immense progress in recent years, attempting to apply them directly to systems with billions of parameters seems hopeless. We therefore propose to bound these systems by other, simpler ones, and to prove the desired properties on the bounding functions.

Thus, given for example a neural network RNNA predicting “Yes” or “No” on any input, we propose to build two logical formulas, Up and Down, that bound RNNA’s decisions as precisely as possible (Up for “Yes”, Down for “No”) while offering guarantees of explainability or allowing the proof of desired properties. The key point is that Up and Down will be defined in a formal language that supports formal proof (propositional logic, temporal logics, ...) while capturing as much of RNNA’s precision as possible.
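To fix ideas, here is a minimal, purely illustrative Python sketch (the “network” is a toy boolean function, and all names and formulas are hypothetical): Up must hold on every input classified “Yes” and Down on every input classified “No”, so that the two simple formulas over-approximate the two decision regions; any input on which exactly one of them holds is decided by the bounds alone, without querying the black box.

```python
from itertools import product

# Toy stand-in for the opaque classifier RNNA (hypothetical example):
# answers "Yes" iff at least two of the three boolean features are set.
def rnna(x1, x2, x3):
    return (x1 + x2 + x3) >= 2

# Hand-written bounding formulas (the internship is precisely about
# constructing such formulas automatically):
# Up must hold whenever RNNA answers "Yes" (necessary condition for "Yes"),
# Down must hold whenever RNNA answers "No" (necessary condition for "No").
def up(x1, x2, x3):
    return x1 or x2 or x3

def down(x1, x2, x3):
    return (not x1) or (not x2) or (not x3)

sound, decided, total = True, 0, 0
for x in product([False, True], repeat=3):
    total += 1
    yes = rnna(*x)
    # Soundness of the bounds: Yes => Up and No => Down.
    if (yes and not up(*x)) or (not yes and not down(*x)):
        sound = False
    # Precision: inputs on which exactly one bound holds are fully
    # determined by the simple formulas alone.
    if up(*x) != down(*x):
        decided += 1

print(f"bounds sound: {sound}, inputs decided by bounds alone: {decided}/{total}")
```

On this toy example the bounds are sound but decide only 2 of the 8 inputs on their own; the whole difficulty is to construct bounds that are both sound and far more precise than this.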

The main difficulty of the internship will probably be the construction of Up and Down from RNNA. It will therefore be necessary to study how the languages in which RNNA, Up and Down are expressed allow RNNA to be rewritten while minimizing the loss of precision of the Up and Down functions. Approaches based on Binarized Neural Networks, as well as approaches based on Knowledge Base Compilation, can be studied. Depending on the candidate’s background, it may also be possible to study how the learning itself can be modified to learn the RNNA, Up and Down functions together, rather than computing Up and Down a posteriori.
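As a small illustration of the “formal proof” side (a sketch only: the CNF encodings, the variable meanings and the property P below are hypothetical), a SAT solver such as Glucose, here accessed through the PySAT library, can prove a guarantee on the bounding formula itself. Since every “Yes” decision of RNNA satisfies Up, proving Up ⇒ P, i.e., checking that Up ∧ ¬P is unsatisfiable, transfers the guarantee P to all “Yes” decisions of the network without ever analyzing the network itself.

```python
# pip install python-sat
from pysat.solvers import Glucose3

# Up in CNF: (x1 or x2) and (not x1 or x3); variables 1..3 encode the features.
UP = [[1, 2], [-1, 3]]

# Desired guarantee P: (x2 or x3). Its negation is the unit clauses -2 and -3.
NOT_P = [[-2], [-3]]

# Up entails P iff (Up and not-P) is unsatisfiable.
with Glucose3(bootstrap_with=UP + NOT_P) as solver:
    if solver.solve():
        print("counter-example found:", solver.get_model())
    else:
        print("proved: every input classified 'Yes' by the network satisfies P")
```

Because the proof is carried out on Up rather than on the network, its cost depends only on the size of the bounding formula, which is exactly why Up and Down must live in languages for which such checks remain feasible.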

Continuation of the internship

Given the richness of the subject, the internship is associated with a 3-year PhD grant, co-funded by the Trustworthy AI Chair and the RobSYS project.

Candidate

The candidate should have an excellent scientific background and a good knowledge of logic-based reasoning methods. Very good programming skills are also expected.

The internship can start as early as February 2023.

Contact: Laurent Simon

The PDF file describing this internship is available here