10:00 – 14:00

Over the past few decades, to meet growing demands for computing power, high-performance computing systems have adopted increasingly complex architectures. Recent machines incorporate accelerators such as FPGAs and GPUs, multi-GPU nodes, and complex interconnect topologies. While this enables unprecedented peak performance, fully exploiting these architectures is becoming increasingly difficult. In particular, the growing gap between core performance and memory bandwidth, together with NUMA (Non-Uniform Memory Access) effects, limits the performance of memory-intensive applications, making them "memory-bound." In this context, memory-intensive applications, or memory-intensive computational phases within an application, often achieve better performance and energy efficiency on a subset of the available resources: using fewer cores and accelerators reduces the amount of data movement required and avoids NUMA effects. However, finding an optimal set of resources on which to run these applications adds yet another optimization burden for developers, especially across multiple architectures, since the optimal set varies with the topology of the target machine.

To alleviate this burden, this thesis presents two dynamic resource-adjustment heuristics that transparently choose an efficient set of GPUs on which to run iterative applications. These heuristics leverage online performance metrics, observation of data-access patterns, and information about the target architecture's topology to explore and find the best set of resources without incurring significant overhead, regardless of the architecture.

We validate our heuristics on two benchmark groups and three architectures. The first group evaluates the accuracy of the heuristics: they select the best or second-best configuration in 98.33% of cases and never pick a configuration more than 9% slower than the optimum. The second group compares our heuristics to naive implementations; ours outperform the naive ones in most scenarios, and the improvement in energy efficiency is proportional to the speedup achieved. Finally, our heuristics reach at least 92.6% of the maximum performance observed on all benchmarks, regardless of the target architecture, indicating good performance portability.
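The heuristics themselves are detailed in the thesis; as a purely illustrative sketch of the exploration idea they build on, the following Python snippet (all names and the synthetic cost model are hypothetical, not the method from the thesis) times a few iterations of a regular iterative application for each candidate GPU-subset size and keeps the fastest:

    import time

    def run_iteration(num_gpus: int) -> None:
        # Stand-in for one iteration of an iterative application.
        # Synthetic cost model (pure assumption): compute time shrinks with
        # more GPUs, but inter-GPU data movement grows, so a mid-sized
        # subset can win, mirroring the memory-bound behaviour described above.
        compute = 0.010 / num_gpus          # ideal parallel speedup
        transfer = 0.002 * (num_gpus - 1)   # growing communication cost
        time.sleep(compute + transfer)

    def select_gpu_subset_size(max_gpus: int, trial_iters: int = 3) -> int:
        # Time a few iterations for each candidate subset size and keep
        # the fastest, exploiting the regularity of iterative applications.
        best_size, best_time = 1, float("inf")
        for size in range(1, max_gpus + 1):
            start = time.perf_counter()
            for _ in range(trial_iters):
                run_iteration(size)
            mean_time = (time.perf_counter() - start) / trial_iters
            if mean_time < best_time:
                best_size, best_time = size, mean_time
        return best_size

    if __name__ == "__main__":
        print("chosen subset size:", select_gpu_subset_size(max_gpus=4))

With this toy cost model the sketch settles on two GPUs rather than all four, illustrating why a subset of the available resources can outperform the full machine for memory-bound phases.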

Amphi LaBRI