The Run:AI Researcher Library is a Python library you can add to your deep learning code. The library contains an elasticity module that allows training workloads to shrink or expand based on the cluster's availability.
Shrinking a Workload
Shrinking a training job allows your workload to run on fewer GPUs than the researcher's code was originally written for. This is useful for maximizing utilization of the cluster as a whole, as well as allowing a researcher's job to keep running, albeit slower than intended.
Shrinking a training job uses an algorithm called Gradient Accumulation. For more information about the algorithm, see https://towardsdatascience.com/what-is-gradient-accumulation-in-deep-learning-ec034122cfa
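The core idea of gradient accumulation can be illustrated with a minimal, library-agnostic sketch (the function names below are illustrative, not part of the Run:AI API): instead of computing one gradient over the full batch, the batch is split into micro-batches, the per-micro-batch gradients are accumulated, and a single optimizer step is applied. The update is mathematically equivalent to the full-batch step, which is what lets fewer GPUs emulate a larger effective batch.

```python
import numpy as np

def grad(w, x, y):
    # Gradient of the mean squared error 0.5*(w*x - y)^2 w.r.t. w,
    # averaged over the given (micro-)batch.
    return np.mean((w * x - y) * x)

def step_accumulated(w, xs, ys, micro_batches, lr=0.1):
    # Emulate one large-batch step: accumulate gradients over several
    # micro-batches, then apply a single weight update at the end.
    acc = 0.0
    for x, y in zip(np.array_split(xs, micro_batches),
                    np.array_split(ys, micro_batches)):
        acc += grad(w, x, y) * (len(x) / len(xs))  # weight by micro-batch size
    return w - lr * acc

# One full-batch step and one accumulated step produce the same update:
x = np.arange(1.0, 9.0)
y = 3.0 * x
w_full = 1.0 - 0.1 * grad(1.0, x, y)
w_acc = step_accumulated(1.0, x, y, micro_batches=4)
```

Because the weighted sum of micro-batch gradients equals the full-batch gradient, `w_full` and `w_acc` agree up to floating-point error.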
Expanding a Workload
Expanding a training job allows your workload to run on more GPUs than the researcher's code was originally written for. This is useful for maximizing the utilization of the cluster as a whole, as well as allowing a researcher's job to run faster when idle GPUs exist in the cluster. The extra GPUs are automatically reclaimed if they are needed by other, higher-priority jobs.
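A common way to expand a job without changing its math is data parallelism with a fixed global batch size: each GPU processes a smaller slice of the same batch. The helper below is a hypothetical illustration of that split (not a Run:AI function) — note the global batch, and therefore the training dynamics, stay the same whether the job holds 2 or 4 GPUs.

```python
def per_gpu_batch(global_batch_size, num_gpus):
    # Split a fixed global batch evenly across the GPUs currently
    # allocated to the job; any remainder goes to the first workers.
    base, extra = divmod(global_batch_size, num_gpus)
    return [base + (1 if i < extra else 0) for i in range(num_gpus)]

# Expanding from 2 to 4 GPUs halves each worker's share,
# while the global batch size is unchanged:
print(per_gpu_batch(256, 2))  # [128, 128]
print(per_gpu_batch(256, 4))  # [64, 64, 64, 64]
```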
Python Deep-Learning Code
In your command line, run:
pip install runai
In your Python code, import the library:

import runai.elastic.keras

Initialize elasticity by wrapping your Keras model:

model = runai.elastic.keras.models.Model(model)

When fitting the model, pass the step-time reporting callback on the master worker only:

callbacks=[StepTimeReporter()] if runai.elastic.master else []
Running a Training Workload
- When launching the job with the runai submit command, use the --elastic flag
- When launching a job via YAML, use the label "elastic" with the value "true"
- Elasticity currently works with Keras-based deep learning code only
- Any training job with Run:AI is subject to pause/resume episodes. Elasticity may increase the frequency of these episodes, making it even more important to make your code resilient: save checkpoints, and allow your code to resume from the latest checkpoint rather than starting from the beginning
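The checkpointing advice above can be sketched in a framework-agnostic way. The filename and state layout below are hypothetical; with Keras you would typically use a ModelCheckpoint callback instead, but the resume logic is the same: look for the latest checkpoint on startup, and write checkpoints atomically so a pause mid-save cannot corrupt them.

```python
import os
import pickle

CKPT = "checkpoint.pkl"  # hypothetical path; in practice use shared storage

def load_checkpoint():
    # Resume from the latest checkpoint if one exists;
    # otherwise start training from scratch.
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            return pickle.load(f)
    return {"epoch": 0, "weights": None}

def save_checkpoint(state):
    # Write to a temp file, then rename: os.replace is atomic,
    # so a pause/resume during the save leaves the old checkpoint intact.
    tmp = CKPT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CKPT)

state = load_checkpoint()
for epoch in range(state["epoch"], 5):
    # ... one epoch of training would run here ...
    state = {"epoch": epoch + 1, "weights": f"weights@{epoch + 1}"}
    save_checkpoint(state)  # after a pause/resume, training restarts from here
```

If the job is paused and resumed, the loop picks up at the saved epoch instead of epoch 0.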