GPUs in Red Cloud
Red Cloud supports GPU computing featuring Nvidia Tesla T4, Nvidia Tesla V100, and Nvidia A100 GPUs. To use a GPU, launch an instance with one of the following three flavors (instance types):
| Flavor | CPUs | GPUs | RAM |
|---|---|---|---|
| c4.t1.m20 | 4 | 1 Nvidia Tesla T4 | 20 GB |
| c14.g1.m60 | 14 | 1 Nvidia Tesla V100 | 60 GB |
| c16.a1.m55 | 16 | 1 Nvidia A100 | 55 GB |
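If you prefer to script against the OpenStack API rather than use the web dashboard, the sketch below shows one way to request a GPU flavor with the openstacksdk Python library. This is a minimal sketch, not Red Cloud-specific guidance: the cloud, network, and keypair names are placeholders, and it assumes your credentials are already configured in a clouds.yaml file.

```python
import openstack

# Connect using credentials from clouds.yaml (the cloud name is a placeholder).
conn = openstack.connect(cloud="redcloud")

# Look up the GPU flavor and a base image by name. The network and keypair
# names below are assumptions; substitute the names visible in your project.
flavor = conn.compute.find_flavor("c4.t1.m20")
image = conn.compute.find_image("gpu-accelerated-ubuntu-2022-02")
network = conn.network.find_network("public")
keypair = conn.compute.find_keypair("my-keypair")

# Request the instance; this reserves one T4 GPU until the server
# is deleted or shelved.
server = conn.compute.create_server(
    name="gpu-test",
    flavor_id=flavor.id,
    image_id=image.id,
    networks=[{"uuid": network.id}],
    key_name=keypair.name,
)

# Block until the instance is ACTIVE (or raise if it errors out).
server = conn.compute.wait_for_server(server)
print(server.status)
```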
Availability
Red Cloud has 20 T4 GPUs, 4 V100 GPUs, and 2 A100 GPUs. You can check how many are currently available before launching; if no GPU of the requested type is available, you will receive an error when launching a GPU instance.
Red Cloud resources (CPU cores, RAM, GPUs) are not oversubscribed. When you create a GPU instance, you reserve the physical hardware for the life of the instance, and your subscription is charged accordingly, until the instance is deleted or shelved to free those resources.
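If you manage instances from Python, the same openstacksdk connection can shelve or delete a GPU instance when you are done with it, which releases the GPU. A minimal sketch, assuming `conn` and `server` from the launch example above:

```python
# Shelving shuts the instance down and frees its CPU, RAM, and GPU
# while preserving its state; deleting removes the instance entirely.
conn.compute.shelve_server(server)

# ...or, to remove the instance permanently:
# conn.compute.delete_server(server)
```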
If you are new to Red Cloud, please review how to read this documentation before launching an instance, especially the section on accounting.
Launching A GPU Instance
When launching a GPU instance, you can use a base Linux or Windows image and install your own software or libraries that utilize the GPU. To speed up time to science, CAC also provides three Linux GPU images with GPU software preinstalled.
GPU images
- gpu-accelerated-ubuntu-2022-02 (based on Ubuntu 20.04 LTS)
- gpu-accelerated-rocky-8-2022-02 (based on Rocky Linux 8.5)
- gpu-accelerated-centos-7-2022-02 (based on CentOS 7.9)
These images include the following software:
- CUDA 11.6
- Anaconda Python 3 with the following packages:
  - TensorFlow
  - PyTorch
  - Keras
- Docker-containerized Jupyter Notebook servers
- Matlab R2021a
See the Red Cloud GPU Image Usage page for more details and sample code.
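As a quick sanity check after logging in to one of these GPU images, you can confirm that the preinstalled frameworks actually see the GPU. A minimal example using the PyTorch and TensorFlow packages listed above:

```python
import torch
import tensorflow as tf

# PyTorch: should report True and the GPU model (e.g., "Tesla T4").
print("PyTorch CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("PyTorch device:", torch.cuda.get_device_name(0))

# TensorFlow: should list at least one physical GPU device.
print("TensorFlow GPUs:", tf.config.list_physical_devices("GPU"))
```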