Interview on Digiteco 11 - TV

02 April 2024

It was (once again) with great pleasure that I accepted to join the Placeco round table, on the TV7 premises, this time to discuss the open-sourcing of AI models and what we can expect from it. Is having access to the sources of generative AIs enough to trust them? Behind this vast question (spoiler: the answer is no) often lie reassuring messages from the providers of these LLMs.

First of all, it should be noted that releasing a model's architecture and its weights says very little about the model itself. What I argued (briefly) in this round table is that it is not that simple. Open sources are essential for verification: think of cryptographic protocols, for example. Paradoxically, for code to be trustworthy it must be open, otherwise backdoors will eventually be found (and exploited). Open code also lets us guard against malicious fragments. For years, having the code at hand meant we could check that it did what it was supposed to do (no strange or obscure parts, ...), but this no longer holds for LLM-based AIs. Knowing the values of billions of parameters tells us nothing about the system's behavior. Trust must therefore be sought elsewhere (an open system is of course better than a closed one, but let's say the problem is far from solved).
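To make the contrast concrete, here is a minimal Python sketch; everything in it (the toy checksum, the planted backdoor, the layer shape) is an illustrative assumption, not taken from any real system. Reading source code can expose a backdoor by inspection, while reading a weight matrix exposes nothing about behavior.

```python
# Toy contrast between "open code" and "open weights".
import numpy as np

# --- Classical software: behavior is readable in the source. ---
def checksum(data: bytes) -> int:
    """An auditor can read this and spot the backdoor by inspection."""
    if data.startswith(b"MAGIC"):  # planted backdoor, visible to any reviewer
        return 0
    return sum(data) % 256

print(checksum(b"hello"), checksum(b"MAGIC hello"))  # 20 vs. 0

# --- Open-weights model: the "source" is just numbers. ---
rng = np.random.default_rng(0)
layer = rng.standard_normal((1024, 1024))  # one layer among hundreds
print(layer[:2, :4])
# These floats are fully "open", yet they reveal nothing about whether the
# model hides a trigger phrase; behavior can only be probed empirically.
```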

Moreover, G. Hinton is strongly opposed to open-sourcing LLMs, because once the weights are released it becomes quite easy to mount attacks that subvert the system's behavior. We can see that what we used to attach to the notion of open source is no longer so simple.
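As a (very) toy illustration of why released weights enable such attacks, here is a white-box gradient sketch in PyTorch. The tiny random network and the single FGSM-style step are assumptions made for the example, nothing like an actual attack on a deployed LLM; the point is only that gradient access of this kind exists precisely because the weights are open.

```python
# White-box attack sketch: with the weights in hand, an attacker can follow
# gradients to craft an input that pushes the model toward a chosen output.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2)).eval()

x = torch.randn(1, 8, requires_grad=True)  # an innocuous input
target = torch.tensor([1])                 # the class the attacker wants

loss = nn.functional.cross_entropy(model(x), target)
loss.backward()

# One FGSM-style targeted step: move the input against the loss gradient.
# (It may or may not flip this random toy net; what matters is that the
# gradient x.grad is only computable because the weights are available.)
x_adv = (x - 0.5 * x.grad.sign()).detach()

print("before:", model(x).argmax(dim=1).item(),
      "after:", model(x_adv).argmax(dim=1).item())
```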

With Jean-Noël Barthas on the Digiteco set

You can find the discussion on the Placeco website (in French).

(Last Modified date: 08 December 2024)