Agenda
October
14:00–18:00
Many architecture-related parameters affect application execution: number of threads, thread placement, data placement, prefetchers, core and memory frequency, SIMD, use of different levels of parallelism, precision, etc. They can help improve performance and reduce energy consumption, but poor parameter choices can also have the opposite effect. Listing and evaluating all possible combinations of these parameters quickly becomes unfeasible. For example, recent hardware developments have favored SIMD (Single Instruction Multiple Data) computing units, which can dramatically increase the number of floating-point operations per second (FLOPS). However, the diversity and complexity of microarchitectures and SIMD instruction sets make them difficult to adopt in business simulation applications. The code transformations required are costly, with no guarantee of performance improvement. The same uncertainty affects other parameter choices and combinations. We present CORHPEX (COmpiler, Runtime and Hardware Parameter EXplorer), a framework for performing parameter space exploration (PSE) on applications in order to guide parameter selection by modeling their execution time and energy consumption.
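To give a rough idea of what such an exploration involves, here is a minimal generic sketch, not the CORHPEX interface: the parameter values, the `measure` stub, and the energy-delay selection rule are hypothetical stand-ins for real application runs, hardware counters, and the framework's models.

```python
# Minimal sketch of a parameter-space exploration (PSE) loop.
# NOT the CORHPEX API; `measure` is a stand-in for running the real
# application and reading time/energy counters (e.g., via RAPL).
import itertools
import random

# Hypothetical parameter space: values are illustrative only.
space = {
    "threads":    [1, 2, 4, 8, 16],
    "placement":  ["compact", "scatter"],
    "prefetcher": ["on", "off"],
    "simd":       ["scalar", "sse", "avx2", "avx512"],
    "core_freq":  [1.6, 2.4, 3.2],   # GHz
}

def measure(config):
    """Stand-in for one application run; returns (time_s, energy_J)."""
    time_s = random.uniform(1.0, 10.0)          # replace with a real run
    energy_j = time_s * random.uniform(20, 80)  # replace with RAPL readings
    return time_s, energy_j

# Exhaustive enumeration explodes combinatorially (5*2*2*4*3 = 240 runs here,
# far more with realistic ranges), so only a sample of configurations is run.
all_configs = [dict(zip(space, vals)) for vals in itertools.product(*space.values())]
sampled = random.sample(all_configs, k=32)

results = [(cfg, *measure(cfg)) for cfg in sampled]

# Pick the best observed trade-off, e.g. the minimum energy-delay product.
best = min(results, key=lambda r: r[1] * r[2])
print("best sampled config:", best[0], "time=%.2fs energy=%.0fJ" % (best[1], best[2]))
```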
Amphi LaBRI
November
15:30–17:00
The speaker completed her M2 internship under the supervision of Sophie Duchesne (Centre Émile Durkheim) and the co-supervision of Christèle Etchegaray and Nicolas Papadakis.
The presentation will be followed by refreshments in the lobby outside the conference room, to continue the discussion.
A sociological perspective on the experiences of female students in the mathematics bachelor's degree program
This presentation offers a sociological analysis of gender relations in the mathematics bachelor's degree program at the University of Bordeaux. Based on an ethnographic study conducted as part of a research thesis in sociology, it seeks to understand how gender influences and shapes the daily lives and trajectories of female students in an academic environment that is still largely male-dominated. The results of this research highlight the mechanisms (often subtle or invisible) expressed in practices, spaces, and representations that contribute to reproducing gender inequalities in fundamental science programs. This research thus invites reflection on possible levers for action to promote diversity and equality in these fields.
Salle de conférence IMB
10:30–12:00
Recent advancements have positioned Large Language Models (LLMs) as transformative tools for scientific research, capable of addressing complex tasks that require reasoning, problem-solving, and decision-making. These capabilities suggest their potential as scientific research assistants, but also highlight the need for holistic, rigorous, and domain-specific evaluation to assess their effectiveness in real-world scientific applications.
First, this talk motivates and describes the current effort at Argonne National Laboratory to develop a multifaceted methodology for evaluating AI models as scientific Research Assistants (EAIRA). This methodology incorporates four primary classes of evaluations: 1) Multiple Choice Questions to assess factual recall; 2) Open Response to evaluate advanced reasoning and problem-solving skills; 3) Lab-Style Experiments involving detailed analysis of capabilities as research assistants in controlled environments; and 4) Field-Style Experiments to capture researcher-LLM interactions at scale in a wide range of scientific domains and applications. For each of these four classes of evaluation, we develop testing methods (e.g., benchmarks) and tools for manual and automatic QA generation and validation, as well as for collecting and analyzing researcher-LLM interactions.
We will present a selection of tools and generated benchmarks, as well as an early analysis of the largest Field-Style Experiment to date (the 1,000 Scientists AI JAM). These complementary methods enable a comprehensive analysis of LLM strengths and weaknesses with respect to their scientific knowledge, reasoning abilities, and adaptability. Although developed within a subset of scientific domains, the methodology is designed to generalize to a wide range of others.
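As a rough illustration of the first evaluation class (multiple-choice factual recall), here is a minimal scoring-harness sketch. It is not the EAIRA tooling: `ask_model`, the sample questions, and the answer-extraction rule are hypothetical placeholders for a real LLM call and a real benchmark.

```python
# Minimal sketch of the "Multiple Choice Questions" evaluation class:
# score a model's factual recall over a tiny benchmark. Not the EAIRA
# tooling; `ask_model` is a placeholder for any LLM API call.
import re

benchmark = [
    # (question, options, correct letter) -- illustrative items only
    ("Which quantity is conserved in an elastic collision?",
     {"A": "Only momentum", "B": "Momentum and kinetic energy",
      "C": "Only kinetic energy", "D": "Neither"}, "B"),
    ("What does DNA polymerase synthesize?",
     {"A": "RNA", "B": "Proteins", "C": "DNA", "D": "Lipids"}, "C"),
]

def ask_model(prompt: str) -> str:
    """Placeholder for a real LLM call; returns the model's raw text answer."""
    return "B"  # stub so the sketch runs end to end

def first_choice_letter(text: str) -> str | None:
    """Extract the first standalone A-D letter from the model's reply."""
    m = re.search(r"\b([ABCD])\b", text)
    return m.group(1) if m else None

correct = 0
for question, options, answer in benchmark:
    prompt = (
        question
        + "\n"
        + "\n".join(f"{k}. {v}" for k, v in options.items())
        + "\nAnswer with a single letter."
    )
    if first_choice_letter(ask_model(prompt)) == answer:
        correct += 1

print(f"accuracy: {correct}/{len(benchmark)}")
```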
Franck Cappello received his Ph.D. from the University of Paris XI and joined CNRS (as CR) in 1994. In 2003, he joined INRIA (as DR). He initiated the Grid’5000 project (https://www.grid5000.fr) in 2003 and served as its director from 2003 to 2008. In 2009, he established the JLESC with Marc Snir. In 2016, Cappello became the director of two Exascale Computing Project (ECP: https://www.exascaleproject.org/) software projects, related to resilience (VeloC) and lossy compression of scientific data (SZ), that help applications run efficiently on exascale systems. Cappello is now focusing on establishing a methodology to evaluate the knowledge and skills of LLMs used as research assistants. He is an IEEE Fellow and the recipient of the 2024 IEEE CS Charles Babbage Award, the 2024 Euro-Par Achievement Award, the 2022 HPDC Achievement Award, two R&D100 awards (2019 and 2021), the 2018 IEEE TCPP Outstanding Service Award, and the 2021 IEEE Transactions on Computers Award for Editorial Service and Excellence.
English
Amphi LaBRI