09:30 - 14:00

Artificial intelligence is a field that has received a lot of attention recently. Its success is largely due to advances in Deep Learning, a sub-field that brings together machine learning methods based on neural networks. These neural networks have proven effective at solving very complex problems in many domains. However, their effectiveness depends on a number of factors: the architecture of the model, its size, and how and where it was trained. Most studies indicate that larger models achieve better accuracy, but they are also more expensive to train. The main challenges come from the limited computational power and memory of the machines: if the model is too large, training may take a long time (days or even months) or, in the worst case, the model may not even fit in memory. During training, it is necessary to store the weights (model parameters), the activations (intermediate computed data), and the optimizer states.
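
As a rough illustration of these three memory components, the sketch below (a hypothetical example with arbitrary sizes, not taken from the manuscript) estimates each of them for a small PyTorch model trained with Adam:

    # Illustrative only: estimating the three sources of memory usage during
    # training (weights, activations, optimizer states) for a toy model.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))

    # Weights (model parameters): 4 bytes per float32 parameter.
    param_bytes = sum(p.numel() * p.element_size() for p in model.parameters())

    # Optimizer states: Adam keeps two extra float32 buffers per parameter
    # (exp_avg and exp_avg_sq), i.e. roughly twice the parameter memory.
    optimizer_bytes = 2 * param_bytes

    # Activations: intermediate outputs kept for the backward pass; they grow
    # with the batch size and the depth of the network.
    batch = 64
    x = torch.randn(batch, 1024)
    activation_bytes = 0
    for layer in model:
        x = layer(x)
        activation_bytes += x.numel() * x.element_size()

    print(f"weights: {param_bytes / 2**20:.1f} MiB, "
          f"optimizer states: ~{optimizer_bytes / 2**20:.1f} MiB, "
          f"activations (batch={batch}): ~{activation_bytes / 2**20:.1f} MiB")
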
There are several ways to deal with these memory problems, depending on their origin. Training can be distributed across multiple resources of the computational platform, and different parallelization techniques distribute the memory in different ways. In addition, data structures that remain inactive for a long period of time can be temporarily offloaded to a larger storage space and retrieved later (offloading strategies). Finally, activations that are computed at each iteration can be deleted and recomputed several times within the iteration (rematerialization strategies). Memory-saving strategies usually induce a time overhead compared to direct execution; therefore, optimization problems must be solved to choose the best approach for each strategy. In this manuscript, we formulate and analyze optimization problems related to various methods of reducing memory consumption during training. In particular, we focus on rematerialization, activation offloading, and pipelined model parallelism strategies; for each of them, we design optimal solutions under a set of assumptions. Finally, we propose a fully functional tool called rotor that combines activation offloading and rematerialization and can be used in PyTorch to train, with minimal overhead, models that would otherwise not fit in memory.
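
The rematerialization and offloading ideas can be sketched with standard PyTorch mechanisms. The snippet below illustrates the general principles on a simple sequential model; it is an assumed example and does not show rotor's own interface:

    # Illustrative only: rematerialization and activation offloading with
    # built-in PyTorch tools, on a toy sequential model.
    import torch
    import torch.nn as nn
    from torch.utils.checkpoint import checkpoint_sequential

    model = nn.Sequential(*[nn.Sequential(nn.Linear(1024, 1024), nn.ReLU())
                            for _ in range(8)])
    x = torch.randn(32, 1024, requires_grad=True)

    # Rematerialization: split the chain into 4 segments; only the activations
    # at segment boundaries are kept, the others are recomputed during the
    # backward pass, trading extra compute time for a lower memory peak.
    out = checkpoint_sequential(model, 4, x)
    out.sum().backward()

    # Activation offloading: store activations on the CPU during the forward
    # pass and bring them back when the backward pass needs them
    # (pin_memory=True can be used to speed up GPU transfers).
    with torch.autograd.graph.save_on_cpu():
        out = model(x)
    out.sum().backward()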


English
Amphi LaBRI