Descriptions of Slurm partitions
The following subsections summarize the parameters of the different partitions (queues) available on Cholesky.
Default partition
There is no default partition: you must select the partition to which your jobs are submitted. This is done with the directive #SBATCH -p <partition name> in your batch script (or the equivalent --partition option of sbatch).
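For example, a minimal batch script selecting the cpu_seq partition could look like the sketch below (the job name, walltime, and executable are placeholders):

#!/bin/bash
#SBATCH --job-name=my_job       # placeholder job name
#SBATCH -p cpu_seq              # partition selected here
#SBATCH -n 1                    # a single task (cpu_seq allows 1 CPU per job)
#SBATCH -t 01:00:00             # requested walltime, within the partition limit

./my_program                    # placeholder executable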
CPU partitions
These are the partitions for CPU computing on Cholesky.
Resources
Partition Name | Nodes | Cores per node | Max. memory per node | Default memory per CPU | Max. Walltime |
---|---|---|---|---|---|
cpu_seq | [019-022] | 40 | 384 GB | 4 GB | 100:00:00 |
cpu_shared | [001-020,023-032] | 40 | 188 GB / 384 GB | 4 GB | 100:00:00 |
cpu_dist | [023-068] | 40 | 188 GB | 4 GB | 24:00:00 |
cpu_test* | [001-002,023-068] | 40 | 188 GB | 4 GB | 00:30:00 |
Limits / QOS
Partition Name | Max. CPUs per user | Max. jobs per user | Max. Walltime | Max. CPUs per job |
---|---|---|---|---|
cpu_seq | 40 | 40 | 100:00:00 | 1 |
cpu_shared | 240 | - | 100:00:00 | 40 |
cpu_dist | 400 | - | 24:00:00 | 400 |
cpu_test* | 80 | 1 | 00:30:00 | 80 |
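As an illustration only (the executable name and resource choices below are assumptions), a distributed job using two full cpu_dist nodes, within the 400-CPU and 24-hour limits above, could be requested as follows:

#!/bin/bash
#SBATCH -p cpu_dist             # partition for distributed jobs
#SBATCH --nodes=2               # two nodes...
#SBATCH --ntasks-per-node=40    # ...using all 40 cores of each node
#SBATCH -t 24:00:00             # maximum walltime of the partition

srun ./my_mpi_program           # placeholder MPI executable launched with srun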
Note
cpu_test is a 'floating' partition reserved for debugging jobs, with a QOS limit of 6 nodes maximum.
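For instance, a short interactive debugging session on cpu_test (the task count and walltime below are only an example) can be started with srun:

$ srun -p cpu_test -n 4 -t 00:15:00 --pty bash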
GPU partitions
These are the partitions for GPU computing on Cholesky.
Resources
Partition Name | Nodes | Cores per node | Max. memory per node | Max. Walltime | GPUs per node |
---|---|---|---|---|---|
gpu | [1-4] | 40 | 152 GB / 188 GB | 24:00:00 | 4 |
gpu_v100 | [1-2] | 40 | 152 GB | 24:00:00 | 4 |
gpu_a100 | [3-4] | 56 | 188 GB | 24:00:00 | 4 |
gpu_a100_80g | 5 | 48 | 500 GB | 24:00:00 | 4 |
Limits / QOS
Partition Name | Max. GPUs per user | Max. jobs per user | Max. Walltime | Max. GPUs per job |
---|---|---|---|---|
gpu | 8 | - | 24:00:00 | 8 |
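As a sketch only (the executable name and resource amounts are assumptions), a single-GPU job on the gpu partition could be submitted with a script along these lines:

#!/bin/bash
#SBATCH -p gpu                  # GPU partition
#SBATCH --gres=gpu:1            # request one GPU on the node
#SBATCH --cpus-per-task=10      # example CPU share for the task
#SBATCH -t 12:00:00             # within the 24:00:00 limit

./my_gpu_program                # placeholder GPU executable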
SLURM command
The sinfo SLURM command lists all available partitions:
$ sinfo
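The output can also be restricted to a single partition with the -p option, for example:

$ sinfo -p cpu_shared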