Once again, it was a great pleasure to join a round table hosted by Placeco, on the premises of TV7, this time about the open-sourcing of AI models and what we can expect from it. Is having access to the sources of generative AIs enough to trust them? Behind this vast question (spoiler: the answer is no), there are often reassuring messages from the providers of these LLMs.
First of all, it should be noted that releasing a model's architecture and its weights says very little about the model itself. What I argued (briefly) in this round table is that it is not that simple. Open source is essential for verification: cryptographic protocols, for example, can only be trusted if their code is open; paradoxically, for code to be reliable it must be open, otherwise backdoors will eventually be found (and exploited). Open code also lets us guard against malicious components. For years, having the code at hand allowed us to check that it did what it was supposed to do (no strange or obscure parts), but the same does not hold for LLM-based AI: having the values of billions of parameters tells us nothing about the system's behavior. Trust must be sought elsewhere (of course, an open system is better than a closed one, but that is far from the end of the story).
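To make the contrast concrete, here is a minimal sketch in Python (the checkpoint file name and the evaluation harness are hypothetical, for illustration only): with open weights you can enumerate and inspect every one of the billions of parameters, yet only running the model against behavioral tests tells you anything about what it actually does.

```python
# Minimal sketch: inspecting open weights vs. evaluating behavior.
# "model.safetensors" and the test harness below are illustrative assumptions,
# not a reference to any specific released model.
from safetensors import safe_open

# 1) "Reading the source" of an LLM: every parameter is available...
total_params = 0
with safe_open("model.safetensors", framework="pt") as f:
    for name in f.keys():
        tensor = f.get_tensor(name)
        total_params += tensor.numel()
        # ...but a tensor full of floats says nothing about behavior.
print(f"Inspected {total_params:,} parameters; learned nothing about behavior.")

# 2) Trust has to come from empirical, behavioral evaluation instead
#    (hypothetical harness: generate() is the model, test_cases pair a
#    prompt with a check on the output).
def evaluate_behavior(generate, test_cases):
    """Run the model on test prompts and score its outputs empirically."""
    return sum(check(generate(prompt)) for prompt, check in test_cases) / len(test_cases)
```

The point of the sketch is simply that the first loop, however exhaustive, is not an audit in the sense we know from ordinary source code; only something like the second function gives evidence about behavior, and even that only for the cases we thought to test.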
Moreover, G. Hinton is strongly opposed to open-sourcing LLMs, because once a model is open it becomes fairly easy to craft attacks that subvert its behavior. We can see that what we used to attach to the notion of open source is no longer so simple.
You can find the discussion on the Placeco website (in French).