14:00 – 18:00

Defence of Korlan Rysbayeva's thesis (I&S department, TAD team)

Multimodal learning is an approach to machine learning that processes data from several sources and in different modalities. When data are organised according to a hierarchical classification, the associated labels are considered to be linked by predefined semantics. The aim of multimodal deep learning is to train models capable of processing different types of data, typically images, videos, sounds and text, and of finding relationships between them. This thesis manuscript details several scientific contributions to multimodal deep learning. We propose new approaches for building multimodal deep learning models, together with efficient classification strategies that address data-related issues such as missing modalities or insufficient data.

Our first contribution is a multimodal learning model, taking images and text as inputs, that classifies data from soil decontamination reports. Its originality lies in artificially creating a hierarchical relationship between the multimodal data. Adding this hierarchical dependency allows our model to achieve better classification performance than multimodal approaches that do not exploit such a relationship.

Our second contribution improves the first model by adding a learning step based on deep metric learning (DML). As in the first contribution, the originality lies in integrating the hierarchical structure into the DML stage. This step not only improves the model's ability to classify data, particularly complex data, but also enables distances between multimodal samples to be computed, which makes it possible to classify multimodal data by similarity.

The third and final contribution of this manuscript concerns the interpretability of the results. The behaviour of the models cannot be explained by metrics alone. We therefore propose different approaches for analysing the classification results of multimodal and hierarchical data. We also provide software that enables soil remediation experts (who are not machine learning experts) to navigate a database of PDF documents whose content, multimodal by its very nature, has been analysed by our classification models.
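For readers unfamiliar with the first contribution's idea of a hierarchical dependency between multimodal labels, the sketch below shows one possible shape such a model could take: two encoders (image and text), a fused representation, and a fine-label head conditioned on the coarse-label prediction. This is an illustrative sketch only; the encoder choices, dimensions and two-level hierarchy are assumptions, not the architecture described in the thesis.

```python
# Illustrative sketch only: a two-branch image/text classifier with two
# hierarchically related label heads. Encoders, dimensions and the
# two-level hierarchy are assumptions, not the thesis architecture.
import torch
import torch.nn as nn

class HierarchicalMultimodalClassifier(nn.Module):
    def __init__(self, img_dim=512, txt_dim=300, hidden=256,
                 n_coarse=5, n_fine=20):
        super().__init__()
        # Project each modality into a shared hidden space.
        self.img_proj = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
        self.txt_proj = nn.Sequential(nn.Linear(txt_dim, hidden), nn.ReLU())
        # Fuse modalities by concatenation.
        self.fusion = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())
        # One head per level of the (artificial) label hierarchy.
        self.coarse_head = nn.Linear(hidden, n_coarse)
        # The fine head also sees the coarse prediction, encoding the
        # parent/child dependency between labels.
        self.fine_head = nn.Linear(hidden + n_coarse, n_fine)

    def forward(self, img_feat, txt_feat):
        z = self.fusion(torch.cat(
            [self.img_proj(img_feat), self.txt_proj(txt_feat)], dim=-1))
        coarse_logits = self.coarse_head(z)
        fine_logits = self.fine_head(
            torch.cat([z, coarse_logits.softmax(dim=-1)], dim=-1))
        return coarse_logits, fine_logits

# Training would typically weight both levels, e.g.
# loss = ce(coarse_logits, y_coarse) + ce(fine_logits, y_fine)
```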
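Similarly, the second contribution's hierarchy-aware deep metric learning step could, in principle, look like the following triplet-style loss, where the margin grows with the hierarchical distance between labels, and the learned distances then support classification by similarity. The margin values, the two-level distance and the nearest-neighbour rule are assumptions made for illustration, not the method of the thesis.

```python
# Illustrative sketch only: a triplet-style metric-learning loss whose
# margin grows with the hierarchical distance between labels, so samples
# sharing a parent class stay closer than samples from unrelated branches.
import torch
import torch.nn.functional as F

def hierarchical_triplet_loss(anchor, positive, negative,
                              neg_level, base_margin=0.2):
    """anchor/positive/negative: embeddings of multimodal samples.
    neg_level: 1 if the negative shares the coarse (parent) class with
    the anchor, 2 if it comes from a different branch of the hierarchy."""
    margin = base_margin * neg_level.float()
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

# Once embeddings are trained, classification by similarity reduces to a
# nearest-neighbour search in the embedding space:
def classify_by_similarity(query, gallery_emb, gallery_labels):
    dists = torch.cdist(query.unsqueeze(0), gallery_emb).squeeze(0)
    return gallery_labels[dists.argmin()]
```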

Amphi LaBRI