Interview in Digiteco 11 - TV

02 April 2024


Once again, it was with great pleasure that I accepted to join the Placeco roundtable, on the TV7 set, this time to discuss the open-sourcing of AI models and what we can expect from it. Is having access to the sources of generative AIs enough to trust them? Behind this vast question (spoiler: the answer is no) often lie reassuring messages from the providers of these LLMs.

First of all, it should be noted that releasing a model's architecture and its weights says little about the model itself. What I argued (briefly) in this roundtable is that things are not that simple. Open sources are essential for verification, for example of cryptographic protocols: paradoxically, for code to be trustworthy it must be open, otherwise backdoors will eventually be found (and exploited). Open code also lets us guard against malicious fragments. For years, having the code at hand allowed us to check that it did what it was supposed to do (no strange or obscure parts, …), but this no longer holds for LLM-based AI systems: knowing the values of billions of parameters tells us nothing about the system's behavior. Trust must be sought elsewhere (of course, an open system is better than a closed one, but the question is far from settled).
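To make this contrast concrete, here is a minimal sketch (mine, not from the roundtable) of what "reading" open weights amounts to in practice; the checkpoint path is a hypothetical placeholder, and I assume a PyTorch state_dict. Unlike auditing source code, inspecting a released checkpoint yields only tensor shapes and statistics, nothing about what the model will actually do.

```python
# Minimal sketch: why open weights are not "readable" the way open code is.
# Assumes 'model.pt' (hypothetical path) holds the state_dict of an
# open-weight LLM saved with PyTorch.
import torch

state_dict = torch.load("model.pt", map_location="cpu")

total = 0
for name, tensor in state_dict.items():
    total += tensor.numel()
    # Each entry is a dense block of floats: we can print shapes and
    # statistics, but nothing here says what the model will output.
    print(f"{name}: shape={tuple(tensor.shape)}, "
          f"mean={tensor.float().mean():.4f}, std={tensor.float().std():.4f}")

print(f"{total:,} parameters in total -- none of them individually meaningful.")
```

The only way to learn anything about behavior from such a file is to run the model and probe it empirically, which is exactly why openness alone does not deliver trust.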

Moreover, G. Hinton is strongly opposed to open-sourcing LLMs, because once the weights are public it becomes fairly easy to build attacks that subvert the system's behavior. What we used to attach to the notion of open source is clearly no longer so simple.
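As an illustration of this concern, here is a minimal, hypothetical sketch of the extra power open weights give an attacker: full gradient information with respect to the input, the basic ingredient of white-box adversarial-prompt attacks such as GCG (Zou et al., 2023). The model name and the toy objective are placeholders, not anyone's actual attack.

```python
# Minimal sketch: with the full model in hand, an attacker can differentiate
# any objective with respect to the input -- something an API-only, closed
# model never exposes. Model and objective are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")          # stand-in open model
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Some prompt the attacker wants to optimize"
ids = tok(prompt, return_tensors="pt").input_ids

# Work on embeddings so the input itself becomes differentiable.
embeds = model.get_input_embeddings()(ids).detach().requires_grad_(True)
out = model(inputs_embeds=embeds)

# Toy objective: push the next-token distribution toward a chosen token.
target = tok(" yes", add_special_tokens=False).input_ids[0]
loss = -torch.log_softmax(out.logits[0, -1], dim=-1)[target]
loss.backward()

# embeds.grad now tells the attacker exactly how to perturb the prompt.
print(embeds.grad.shape)
```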

With Jean-Noël Barthas on the Digiteco set

You can find the discussion on the Placeco website (in French).

(Last modified: 08 December 2024)
This page was translated from French using AI tools (DeepL, GPT, …).