Send your jobs to one of the following partitions:

- lem-cpu and bem2-cpu - for CPU jobs
- lem-gpu - for GPU jobs

On the WCSS supercomputers there are various SLURM partitions, which can also be understood as job queues. Partitions group selected computing resources, and each of them has its own set of restrictions, such as a job size limit, a job time limit, the groups (services) that have access to them, etc. Access to partitions is granted based on membership in the appropriate services (more precisely, in the Linux groups corresponding to these services).
Below are tables containing information about the currently available partitions.
Automatic partition choice based on the job duration
To automatically assign a job to the appropriate partition based on the declared job time:
- lem-cpu-short or lem-cpu-normal - use the option `--partition lem-cpu`
- bem2-cpu-short or bem2-cpu-normal - use the option `--partition bem2-cpu`

This is an additional requeuing mechanism; bem2-cpu and lem-cpu are not formally SLURM partitions.
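For example, a hedged batch-script sketch that relies on this mechanism: the script declares its time limit and requests `lem-cpu`, and the job is then routed to lem-cpu-short or lem-cpu-normal accordingly (the job name and executable are placeholders):

```bash
#!/bin/bash
#SBATCH --job-name=cpu-job          # example job name
#SBATCH --partition=lem-cpu         # routed automatically based on --time
#SBATCH --time=2-00:00:00           # 2 days -> fits the 3-day lem-cpu-short limit
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16        # example CPU request

srun ./my_program                   # placeholder for your executable
```

Submit it with `sbatch`; the routing itself requires no options beyond `--partition`.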
Limitations and requirements:

- available CPU resources in your service (you can check them with the `service-balance --check-cpu` command)

| Partition | Node Count | CPU Model | Number of CPUs per node | GPU Model | Number of GPUs per node | Memory | Max Job Time | Available TMPDIR (quota)ᵇ |
|---|---|---|---|---|---|---|---|---|
| bem2-cpu-short | 487 | Intel(R) Xeon(R) Platinum 8268 | 48 | - | - | 177G/357Gᵃ | 3-00:00:00 | |
| bem2-cpu-normal | 336 | Intel(R) Xeon(R) Platinum 8268 | 48 | - | - | 177G/357Gᵃ | 21-00:00:00 | |
| lem-cpu-short | 184 | AMD EPYC 9554 | 128 | - | - | 1430G | 3-00:00:00 | |
| lem-cpu-normal | 140 | AMD EPYC 9554 | 128 | - | - | 1430G | 21-00:00:00 | |
ᵃ two types of nodes with different amounts of memory available
ᵇ maximum TMPDIR occupancy: Lustre - no limit on TMPDIR occupancy; SHM - maximum TMPDIR capacity equal to the amount of memory on the node; LOCAL - available TMPDIR space per node given in brackets
* default TMPDIR space for single-node tasks
** default TMPDIR space for multi-node tasks
Automatic partition choice based on the job duration
To automatically assign a job to the appropriate partition based on the declared job time:
- lem-gpu-short or lem-gpu-normal - use the option `--partition lem-gpu`

This is an additional requeuing mechanism; lem-gpu is not formally a SLURM partition.
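Similarly, a minimal GPU sketch, assuming a single-node job that needs two of the four H100 GPUs available per node (the job name and executable are placeholders):

```bash
#!/bin/bash
#SBATCH --job-name=gpu-job          # example job name
#SBATCH --partition=lem-gpu         # routed automatically based on --time
#SBATCH --time=1-00:00:00           # 1 day -> fits the 3-day lem-gpu-short limit
#SBATCH --nodes=1
#SBATCH --gres=gpu:2                # request 2 of the 4 GPUs on a lem-gpu node

srun ./my_gpu_program               # placeholder for your executable
```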
Limitations and requirements:

- available CPU and GPU resources in your service (you can check them with the `service-balance --check-cpu --check-gpu` command)

| Partition | Node Count | CPU Model | Number of CPUs per node | GPU Model | Number of GPUs per node | Memory | Max Job Time | Available TMPDIR (quota)ᵇ |
|---|---|---|---|---|---|---|---|---|
| lem-gpu-short | 74 | Intel(R) Xeon(R) Platinum 8462Y+ | 64 | NVIDIA H100-94GB | 4 | 996G | 3-00:00:00 | |
| lem-gpu-normal | 52 | Intel(R) Xeon(R) Platinum 8462Y+ | 64 | NVIDIA H100-94GB | 4 | 996G | 7-00:00:00 | |
| tesla | 2 | Intel(R) Xeon(R) Gold 6126 | 24 | NVIDIA Tesla P100-16GB | 2 | 117G | 7-00:00:00 | |
ᵇ maximum TMPDIR occupancy: Lustre - no limit on TMPDIR occupancy; SHM - maximum TMPDIR capacity equal to the amount of memory on the node; LOCAL - available TMPDIR space per node given in brackets
* default TMPDIR space for single-node tasks
** default TMPDIR space for multi-node tasks
Startup script
To start an interactive session use the commands:
- `sub-interactive` - for the bem2-cpu-interactive partition
- `sub-interactive-lem-cpu` - for the lem-cpu-interactive partition
- `sub-interactive-lem-gpu` - for the lem-gpu-interactive partition
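For example, to get an interactive shell on a GPU node you run the wrapper on a login node; the plain `srun` call below is only a rough equivalent, shown under the assumption that the interactive partitions accept direct allocations with standard SLURM options:

```bash
# Recommended: use the provided wrapper script
sub-interactive-lem-gpu

# Rough plain-SLURM equivalent (assumption - adjust resources to your needs)
srun --partition=lem-gpu-interactive --gres=gpu:1 --time=02:00:00 --pty bash -l
```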
Limitations and requirements:

- only interactive jobs started with `srun` are possible (you cannot submit a job using `sbatch`)

| Partition | Node Count | CPU Model | Number of CPUs per node | GPU Model | Number of GPUs per node | Memory | Max Job Time | Available TMPDIR (quota)ᵇ | Additional limitations |
|---|---|---|---|---|---|---|---|---|---|
| bem2-cpu-interactive | 2 | Intel(R) Xeon(R) Platinum 8268 | 96 | - | - | 177G | 06:00:00 | | |
| lem-cpu-interactive | 1 | AMD EPYC 9554 | 128 | - | - | 1700G | 06:00:00 | | |
| lem-gpu-interactive | 1 | Intel(R) Xeon(R) Platinum 8462Y+ | 64 | NVIDIA H100-94GB | 4 | 996G | 06:00:00 | | |
ᵇ maximum TMPDIR occupancy: Lustre - no limit on TMPDIR occupancy; SHM - maximum TMPDIR capacity equal to the amount of memory on the node; LOCAL - available TMPDIR space per node given in brackets
* default TMPDIR space for single-node tasks
** default TMPDIR space for multi-node tasks
Limitations and requirements:
| Partition | Node Count | CPU Model | Number of CPUs per node | GPU Model | Number of GPUs per node | Memory | Max Job Time | Available TMPDIR (quota)ᵇ |
|---|---|---|---|---|---|---|---|---|
| plgrid-short | 32 | Intel(R) Xeon(R) Platinum 8268 | 48 | - | - | 177G | 1-00:00:00 | |
| plgrid | 32 | Intel(R) Xeon(R) Platinum 8268 | 48 | - | - | 187G | 3-00:00:00 | |
| plgrid-long | 32 | Intel(R) Xeon(R) Platinum 8268 | 48 | - | - | 177G | 1-00:00:00 | |
ᵇ maximum TMPDIR occupancy: Lustre - no limit on TMPDIR occupancy; SHM - maximum TMPDIR capacity equal to the amount of memory on the node; LOCAL - available TMPDIR space per node given in brackets
* default TMPDIR space for single-node tasks
** default TMPDIR space for multi-node tasks
Maximum amounts of resources per job
The values given in the columns "Number of CPUs per node", "Number of GPUs per node", "Memory" and "Available TMPDIR" specify the maximum amount of each of these resources that a single job can use per node.
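As an illustration, a hedged sketch of a two-node job on lem-cpu-normal that stays within the per-node limits listed above (the memory request is an arbitrary example, not a recommendation):

```bash
#!/bin/bash
#SBATCH --partition=lem-cpu-normal
#SBATCH --time=10-00:00:00          # within the 21-day limit
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=128       # at most 128 CPUs per node on these nodes
#SBATCH --mem=180G                  # per-node memory, well below the 1430G limit

srun ./my_parallel_program          # placeholder for your executable
```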
The TMPDIR directories
Depending on the selected partition and the type of job (single- or multi-node), default types of TMPDIR directories are assigned (see the "Available TMPDIR" column). More information can be found on the Temporary disk space for computations (TMPDIR) page.
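A short sketch of the usual pattern for using the assigned temporary space inside a job script; it assumes `$TMPDIR` points at the directory set up for the job, and the file and program names are placeholders:

```bash
# Copy input data to the job's temporary directory, compute there,
# then copy the results back to the submission directory.
cp input.dat "$TMPDIR"/
cd "$TMPDIR"
./my_program input.dat > results.out
cp results.out "$SLURM_SUBMIT_DIR"/
```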
To obtain information about currently available nodes on individual partitions, use the check-partitions command:
```
$ check-partitions
PARTITION             TIMELIMIT    NODES(A/I)
bem2-cpu-short        3-00:00:00   412/53
bem2-cpu-normal       21-00:00:00  308/12
bem2-cpu-interactive  6:00:00      1/1
lem-cpu-short         3-00:00:00   0/171
lem-cpu-normal        21-00:00:00  0/128
lem-cpu-interactive   6:00:00      0/1
lem-gpu-short         3-00:00:00   14/47
lem-gpu-normal        7-00:00:00   13/27
lem-gpu-interactive   6:00:00      0/1
staff-bem2-cpu        infinite     1/0
staff-lem-cpu         infinite     1/0
staff-lem-gpu         infinite     1/0
```
where:

- A - the number of allocated (busy) nodes
- I - the number of idle (free) nodes
More details
Information about a specific partition can be obtained using the command `scontrol show partition <PARTITION_NAME>`.
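For example, to inspect one of the partitions listed above (output omitted here):

```bash
scontrol show partition lem-gpu-normal
```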
ATTENTION!
If no information is shown for a given partition, the partition is not available to you.