Agenda
December
-
09:30–12:00
This thesis deals with the recommendation of Arabic-language books from heterogeneous sources, reconciling personalization, explainability, and temporal adaptation. Using a Goodreads corpus, we set up a traceable pipeline: normalization of reviews (MSA/dialects), deduplication and cleaning, harmonization of metadata (author, title, description), and parsimonious semantic enrichment. We propose D-MARS, which encodes each modality according to its nature (reviews via a language model adapted to Arabic dialects, ratings via an ordinal perceptron, metadata via dedicated representations), then aggregates them through hierarchical attention combining intra-view (metadata) and inter-view (source calibration) weightings before modeling interactions. The D-MARS+ extension introduces session-based incremental learning with controlled replay, limiting forgetting and stabilizing quality over time. The evaluation (strict temporal split, ablations, sensitivity analyses) is complemented by an interpretable reading of the attention weights, showing how information is distributed among reviews, metadata, and ratings according to the richness of the inputs.
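To make the fusion step concrete, here is a minimal sketch of hierarchical attention in PyTorch. It illustrates the general technique only, not the defended D-MARS implementation: the class name, dimensions, and single-linear scorers are all assumptions.

# Minimal sketch of hierarchical attention fusion (hypothetical, PyTorch).
# Intra-view attention pools metadata fields (author/title/description);
# inter-view attention then weights the three modality embeddings.
import torch
import torch.nn as nn

class HierarchicalAttentionFusion(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.field_scorer = nn.Linear(dim, 1)  # intra-view: scores metadata fields
        self.view_scorer = nn.Linear(dim, 1)   # inter-view: scores modalities

    def attend(self, scorer, items):
        # items: (batch, n, dim) -> weighted sum (batch, dim), weights (batch, n)
        weights = torch.softmax(scorer(items).squeeze(-1), dim=-1)
        return (weights.unsqueeze(-1) * items).sum(dim=1), weights

    def forward(self, review_emb, rating_emb, meta_fields):
        # meta_fields: (batch, n_fields, dim), e.g. author, title, description
        meta_emb, field_w = self.attend(self.field_scorer, meta_fields)
        views = torch.stack([review_emb, rating_emb, meta_emb], dim=1)
        fused, view_w = self.attend(self.view_scorer, views)
        return fused, field_w, view_w

fusion = HierarchicalAttentionFusion(dim=128)
fused, field_w, view_w = fusion(
    torch.randn(4, 128),     # review embedding (e.g. from an Arabic LM)
    torch.randn(4, 128),     # rating embedding (e.g. ordinal encoder output)
    torch.randn(4, 3, 128),  # metadata field embeddings
)
print(view_w)  # per-example weights over [reviews, ratings, metadata]

The interpretable reading mentioned in the abstract corresponds to view_w here: per-example weights telling how much each modality contributed. The session-based incremental learning with controlled replay can likewise be sketched in a few lines (again hypothetical: buffer capacity, sampling policy, and names are assumptions, not the defended design):

# Hypothetical sketch of session-based incremental learning with controlled
# replay: each new session is trained together with a bounded random sample
# of past interactions, which limits catastrophic forgetting.
import random

replay_buffer = []       # past (user, book, rating) interactions
BUFFER_CAP = 10_000      # assumed cap on retained history
REPLAY_RATIO = 0.3       # assumed fraction of replayed examples per session

def train_on_session(model_update, session):
    k = int(len(session) * REPLAY_RATIO)
    replayed = random.sample(replay_buffer, min(k, len(replay_buffer)))
    for example in session + replayed:
        model_update(example)        # one optimization step per example
    replay_buffer.extend(session)    # remember this session...
    del replay_buffer[:-BUFFER_CAP]  # ...but keep the buffer bounded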
The contributions focus on (i) a cleaned corpus and a reproducible pipeline adapted to the specificities of Arabic, (ii) an explainable fusion exploiting the complementarity of the views without unnecessary complexity, and (iii) a simple and effective incremental strategy improving the temporal consistency of rankings. Limitations include dialectal noise, uneven metadata completeness, and encoding costs. Future prospects include objectives better aligned with ranking, distillation/quantization for inference, grounding in a semantic graph (authors/themes), and in vivo evaluations integrating diversity, fairness, and robustness.
Elsewhere than at LaBRI
-
10:00–14:00
Over the past few decades, in order to meet growing demands for computing power, high-performance computing systems have adopted increasingly complex architectures. Recent architectures incorporate accelerators such as FPGAs and GPUs, multi-GPU nodes, and complex interconnect topologies. While this enables unprecedented peak performance, fully exploiting these architectures is becoming increasingly difficult. In particular, the growing gap between core performance and memory bus bandwidth, as well as the introduction of NUMA (Non-Uniform Memory Access) effects, limits the performance of memory-intensive applications, making them "memory-bound." In this context, memory-intensive applications, or memory-intensive computational phases within an application, often achieve better performance and energy efficiency on a subset of the available resources, using fewer cores and accelerators, thereby reducing the number of concurrent data accesses and avoiding NUMA effects. However, finding an optimal set of resources on which to run these applications adds yet another optimization challenge for developers, especially when targeting multiple architectures, since this optimal set varies with the topology of the target architecture.

To alleviate this burden, this thesis presents two dynamic resource-adjustment heuristics capable of transparently choosing an efficient set of GPUs on which to run iterative applications. These heuristics leverage online performance metrics, observation of data-access patterns, and information about the target architecture's topology to explore and find the best set of resources without incurring significant overhead, regardless of the architecture. We validate the heuristics on two groups of benchmarks and three architectures. The first group evaluates their accuracy: they find the best or second-best configuration in 98.33% of cases, without ever selecting a configuration more than 9% slower than the optimal one. The second group compares them to naive implementations, which they outperform in most scenarios; we also observe an improvement in energy efficiency proportional to the speedup achieved. Finally, the heuristics achieve at least 92.6% of the maximum performance observed across all benchmarks, regardless of the target architecture, indicating good performance portability.
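As a rough illustration of dynamic resource adjustment for an iterative application, the toy loop below measures per-iteration time on the current GPU subset and greedily shrinks it while performance improves. This is a hypothetical sketch only: the real heuristics also exploit data-access patterns and topology information, which this toy omits, and every name here is invented.

# Schematic sketch (hypothetical) of a dynamic resource-adjustment loop:
# measure per-iteration time on the current GPU subset, then greedily try
# smaller subsets and keep the fastest one found.
import time

def run_iteration(gpu_subset):
    # Placeholder for one iteration of the real application on these GPUs;
    # here we just simulate work, pretending two GPUs is the sweet spot.
    time.sleep(0.01 * (1 + abs(len(gpu_subset) - 2)))

def measure(gpu_subset, iters=3):
    start = time.perf_counter()
    for _ in range(iters):
        run_iteration(gpu_subset)
    return (time.perf_counter() - start) / iters

def adjust_resources(all_gpus):
    best = list(all_gpus)      # start with every GPU
    best_t = measure(best)
    improved = True
    while improved and len(best) > 1:
        candidate = best[:-1]  # try dropping one GPU
        t = measure(candidate)
        improved = t < best_t
        if improved:
            best, best_t = candidate, t
    return best, best_t

subset, t = adjust_resources([0, 1, 2, 3])
print(f"selected GPUs {subset}, {t * 1000:.1f} ms/iteration")

Running it converges on the two-GPU subset that the simulated workload favors, mirroring the abstract's point that fewer resources can be faster for memory-bound phases.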
Amphi LaBRI