Agenda
December
-
11:00–12:00
Mouna Safir (ENS Lyon)
Title: Fault tolerance in distributed systems
Abstract:
One of the key principles of distributed computing is to replicate data and distribute tasks in order to improve availability and fault tolerance. But this distribution raises new challenges, in particular the coordination of processes. To ensure the consistency and proper functioning of the system, distributed entities must exchange information and reach joint decisions, even in the presence of failures or unpredictable behaviour. In this context, fault tolerance becomes a central issue: distributed systems must continue to operate despite partial failures or the abnormal behaviour of some components.
In my research, I explored two ways to strengthen the resilience of distributed systems:
- k-agreement, which ensures coordination despite faults.
- Self-stabilization, which aims to restore the system automatically after transient disturbances.
During the talk, I will present the main results obtained in these two lines of research.
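As background (not part of the talk), here is a minimal Python sketch of a classic self-stabilizing algorithm, Dijkstra's K-state token ring: started from an arbitrary, possibly corrupted, state, the ring provably converges to a configuration with exactly one circulating token.

import random

def privileged(x, i, K):
    # Dijkstra's K-state rule: the root (node 0) holds a token when its
    # value equals that of the last node; any other node holds a token
    # when its value differs from its predecessor's.
    return x[0] == x[-1] if i == 0 else x[i] != x[i - 1]

def fire(x, i, K):
    # Executing a privileged node passes its token along the ring.
    x[i] = (x[0] + 1) % K if i == 0 else x[i - 1]

n, K = 5, 7                                    # K > n guarantees convergence
random.seed(1)
x = [random.randrange(K) for _ in range(n)]    # arbitrary corrupted state
for _ in range(200):
    movers = [i for i in range(n) if privileged(x, i, K)]
    fire(x, random.choice(movers), K)          # adversarial random scheduler
print([i for i in range(n) if privileged(x, i, K)])   # exactly one token left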
English, LaBRI 178
-
14:00–15:00
TBA
English, LaBRI, room 178
-
14:00–17:00
Jury members:
- Robert ROSS, Deputy Division Director/Senior Computer Scientist, Argonne National Laboratory (United States) - Rapporteur
- Jesus CARRETERO, Full Professor, Universidad Carlos III de Madrid (Spain) - Rapporteur
- Gabriel ANTONIU, Director of Research, Inria - Rapporteur
- Thomas HERAULT, Director of Research, Inria - Examiner
- Vania MARANGOZOVA, University Professor, Université Grenoble Alpes - Examiner
- Julien KUNKEL, Professor, University of Göttingen (Germany) - Examiner
- Brice GOGLIN, Director of Research, Inria - Thesis supervisor
Salle Ada Lovelace (Inria)
-
15:30–16:30
Locomotion, step planning, and fall resistance in humanoid robots via reinforcement learning policies
English, Amphi 2, Bât A, IUT Gradignan
-
15:30–18:30
The thesis “Locomotion, step planning, and fall resistance in humanoid robots via reinforcement learning policies” studies how to equip humanoid robots with reliable walking and rapid recovery after a fall, without resorting to heuristics or preprogrammed trajectories. It addresses a central challenge in robotic autonomy: enabling real robots to act robustly in uncertain, contact-rich environments while respecting the constraints of embedded computing. Purely model-based approaches show their limits in terms of adaptation, whereas deep reinforcement learning (DRL) promises generalizable behaviors learned from data. The problem is therefore: how can we design DRL policies that are computationally light, transferable from simulation to real robots, and integrable into standard locomotion stacks?
Methodologically, the thesis establishes the foundations of reinforcement learning applied to robotics, then proposes two main contributions, both trained in simulation with domain randomization and validated on small humanoids. FootstepNet is an efficient actor-critic step planner capable of producing continuous step placements while anticipating the number of steps needed to reach multiple local goals; it eliminates the dependence on discrete step sets and heuristics, runs with embedded inference, and matches or exceeds the quality of ARA* planning at a much lower computational cost, with validation in simulation and on a real robot at RoboCup 2023. FRASA, meanwhile, is a unified fall-recovery and stand-up agent: a single policy maps proprioceptive observations to motor commands that establish stabilizing contacts before standing up. By exploiting the CrossQ algorithm and the robot's symmetry, FRASA reduces training to approximately 30 minutes and transfers zero-shot to the real robot, surpassing a baseline based on preprogrammed trajectories and handling a wide variety of initial postures.
In conclusion, this work shows that lightweight, modular, and safe DRL policies can be made practical for the onboard control of humanoids, significantly reducing downtime after a disturbance and paving the way for more general and robust autonomy in real-world conditions.
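As an illustration of the domain-randomization pattern mentioned above, here is a minimal Python sketch; the toy environment, parameter ranges, and policy are invented for the example and are not taken from the thesis.

import random

class PointMassEnv:
    # Toy stand-in for a robot simulator: a 1-D point mass pushed toward 0.
    def __init__(self):
        self.mass, self.noise, self.pos, self.vel = 1.0, 0.0, 0.0, 0.0
    def reset(self):
        self.pos, self.vel = random.uniform(-1.0, 1.0), 0.0
        return self.observe()
    def observe(self):
        return self.pos + random.gauss(0.0, self.noise)   # noisy sensor
    def step(self, force):
        self.vel += force / self.mass * 0.01
        self.pos += self.vel * 0.01
        return self.observe(), -abs(self.pos)             # observation, reward

class DomainRandomization:
    # Resample physical parameters at every episode so the learned policy
    # cannot overfit a single simulator instance (helps sim-to-real transfer).
    def __init__(self, env):
        self.env = env
    def reset(self):
        self.env.mass = random.uniform(0.5, 2.0)    # randomized dynamics
        self.env.noise = random.uniform(0.0, 0.05)  # randomized sensing
        return self.env.reset()
    def step(self, action):
        return self.env.step(action)

env = DomainRandomization(PointMassEnv())
obs = env.reset()
for _ in range(100):
    obs, reward = env.step(-obs)    # trivial proportional "policy"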
Amphi 2 Bât 2A (IUT de Bx, Gradignan)
-
14:00–17:00
The adoption of electronic health records has considerably enhanced access to large volumes of clinical data. While this accessibility is invaluable for both healthcare delivery and research, it also introduces new challenges arising from the complexity of medical data. These challenges include its implicitness (i.e., the need for domain expertise to interpret data), imperfections such as inconsistency, uncertainty, and incompleteness, and its inherently temporal nature. This thesis investigates how logic-based approaches can address these challenges.
First, I investigated an ontology-driven approach to illustrate how ontologies can be used to evaluate medical data quality, with a focus on lung cancer phenotyping. This involved designing an ontology to capture essential domain knowledge and applying it to query the Clinical Data Warehouse of Bordeaux University Hospital. The work highlighted both the benefits of ontologies in representing domain knowledge and identifying inconsistencies, and their limitations, particularly in handling temporally inconsistent healthcare data.
Building on this experience, I then proposed a novel logic-based framework for inferring high-level events from temporal clinical data, in a way that better aligns with clinical reasoning and decision-making. The framework defines logical rules specifying the existence conditions of an event at a given time-point, along with optional termination conditions that signal its possible end. It also introduces two aggregation methods to construct event intervals from these conditions. Furthermore, the formalism supports the definition of meta-events, obtained by combining or generalizing other events, and integrates confidence levels and a repair mechanism to handle imperfections in event detection. To validate the framework, I implemented its core components using Answer Set Programming, a declarative logic programming paradigm, and evaluated the resulting system, CASPER, on two medical use cases. The evaluation showed both computational feasibility and alignment with expert medical opinions.
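The framework itself is defined by logical rules and implemented in Answer Set Programming; the following Python sketch only mirrors the basic idea of one aggregation method, building maximal event intervals from point-wise existence and termination conditions (the glucose example and its thresholds are invented for illustration).

def intervals(timepoints, exists, terminates):
    # Open an interval at the first time-point where the existence
    # condition holds; close it when a termination condition fires,
    # or at the last observed time-point.
    out, start = [], None
    for t in sorted(timepoints):
        if start is None and exists(t):
            start = t
        elif start is not None and terminates(t):
            out.append((start, t))
            start = None
    if start is not None:
        out.append((start, max(timepoints)))
    return out

readings = {0: 95, 1: 190, 2: 210, 3: 185, 4: 110, 5: 100}  # glucose values
high = lambda t: readings[t] > 180     # existence condition of the event
normal = lambda t: readings[t] < 120   # termination condition
print(intervals(sorted(readings), high, normal))   # -> [(1, 4)]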
Amphi LaBRI
-
10:00–13:00
High-performance computing refers to the use of supercomputers to solve complex problems requiring exceptional computing power, particularly in numerical simulations such as weather forecasting or fluid dynamics. These systems, organized into computing clusters, combine system administration, networking, hardware architecture, and software optimization. Supercomputers are composed of multiple computing nodes, each equipped with multi-core processors or even graphics cards, connected by a network that allows data exchange. Rather than executing a task on a single machine, problems are divided and parallelized, allowing simultaneous execution on multiple resources. There are two forms of parallelism: inter-node, where communication between nodes is critical but resources are vast, and intra-node, where processors share memory, which eases communication but offers more limited resources.
In this context, streaming applications, particularly software-defined radio, take advantage of intra-node parallelism. Stream computing differs from batch processing: data is processed as it arrives, without accumulating input data. Processing filters are organized in a pipeline, each stage being executed by a different computing resource. This mechanism significantly increases throughput, which is essential for applications such as video or radio broadcasting. This thesis aims to optimize the automatic allocation of resources for streaming applications on multicore architectures, first homogeneous and then heterogeneous.
The first part of this work focuses on task-chain scheduling on homogeneous multicore architectures. The problem is modeled as a pipelined workflow scheduling problem, with the objective of maximizing throughput by exploiting pipeline parallelism and task replication. Two algorithms are proposed: a dynamic programming approach that obtains an optimal solution, and OTAC, an optimal greedy algorithm that guarantees high throughput while minimizing resource usage. Experiments show that OTAC quickly produces optimal partitions with reduced resource usage.
The emergence of hybrid processors composed of high-performance cores (P-cores) and energy-efficient cores (E-cores) introduces new challenges: execution times vary depending on the assignment. The objective becomes twofold: maximize throughput while minimizing energy consumption, favoring the use of the efficient cores. The second part therefore focuses on resource allocation for task chains on heterogeneous architectures. Three strategies are developed: two greedy heuristics (FERTAC and 2CATAC) and an optimal solution based on dynamic programming (HeRAD).
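As background on the first part: without replication, maximizing throughput on a homogeneous machine reduces to the classic problem of cutting the task chain into contiguous stages while minimizing the bottleneck stage load. Here is a minimal Python sketch of the standard probe-plus-binary-search approach (this is not OTAC itself, which additionally handles replication; the task costs are invented).

def probe(costs, limit):
    # Greedy feasibility test: cut the chain into contiguous stages whose
    # load never exceeds `limit`; returns the number of stages needed.
    stages, load = 1, 0
    for c in costs:
        if load + c > limit:
            stages, load = stages + 1, c
        else:
            load += c
    return stages

def best_bottleneck(costs, processors):
    # Binary-search the smallest bottleneck load reachable with at most
    # `processors` stages; the throughput is the inverse of this value.
    lo, hi = max(costs), sum(costs)
    while lo < hi:
        mid = (lo + hi) // 2
        if probe(costs, mid) <= processors:
            hi = mid
        else:
            lo = mid + 1
    return lo

tasks = [4, 2, 7, 1, 5, 3]          # per-task processing costs
print(best_bottleneck(tasks, 3))    # -> 8, i.e. a throughput of 1/8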
The results indicate that the heuristics achieve near-optimal performance while consuming very few additional resources.
The last part of this work focuses on the management of multiple simultaneous streaming channels. In certain contexts, such as embedded systems, the IoT, or the cloud, multiple applications coexist on the same resources. The goal is to distribute resources intelligently among multiple pipelines while satisfying throughput constraints without wasting resources; this part explores allocation strategies adapted to the management of multiple task chains or task graphs. Thus, this thesis offers several contributions to the optimization of streaming systems on parallel architectures, covering optimal scheduling, adaptation to heterogeneous architectures, and the coexistence of multiple simultaneous streams, with a constant focus on performance and energy efficiency.
Amphi LaBRI
-
14:00–17:00
In recent years, data production in biology has experienced unprecedented growth, driven by the development of high-throughput sequencing techniques, whose scope of application continues to expand. New sequencing technologies targeting individual cells (“single-cell” sequencing) are one example. In oncology, these new data are crucial for improving our understanding of tumor development and heterogeneity by identifying the different cell types (or states) that make up a tumor. At the patient level, a comprehensible mapping of this heterogeneity paves the way for new personalized medicine therapies. Characterizing this cellular heterogeneity requires automatic or manual methods that annotate an individual cell (or a group of similar cells) based on its gene expression. In this context, the aim of this thesis project is to develop new methods for annotating single-cell data at the intersection of several disciplines, such as bioinformatics and computer science (knowledge representation and visualization, in particular).
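As background, here is a minimal Python sketch of the simplest marker-gene scoring scheme that such annotation methods refine; the marker lists and expression values are illustrative only.

markers = {
    "T cell":     {"CD3D", "CD3E", "IL7R"},
    "B cell":     {"CD79A", "MS4A1"},
    "Macrophage": {"CD68", "LYZ"},
}

def annotate(cell_expression, markers):
    # Score each candidate type by the mean expression of its marker
    # genes in the cell, and return the best-scoring label.
    def score(genes):
        return sum(cell_expression.get(g, 0.0) for g in genes) / len(genes)
    return max(markers, key=lambda t: score(markers[t]))

cell = {"CD3D": 2.1, "CD3E": 1.8, "IL7R": 0.9, "LYZ": 0.2}  # one cell's profile
print(annotate(cell, markers))   # -> "T cell"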
Amphi LaBRI
-
14:30–18:00
Chabname Ghassemi Nedjad will defend her thesis on December 11, 2025, at 2:30 p.m. in the LaBRI lecture hall.
The title of her thesis is: “Modeling and solving combinatorial optimization problems for reverse ecology.”
Amphi LaBRI