Requirements:
- Account on the e-science.pl platform
- Active service "Process on Supercomputer"
Log in to the access server:
ssh user@ui.wcss.pl
Check the service balance:
service-balance --check-gpu
Example output indicating GPU access:
GPU AVAILABLE FOR: hpc-xxxxx-123456789
If you do not meet the above requirements, first create an account and activate the "Process on Supercomputer" service on the e-science.pl platform.
Run an interactive job with a GPU accelerator:
sub-interactive-lem-gpu
List the available GPUs:
nvidia-smi
Output:
Wed Apr  2 12:41:26 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.124.06             Driver Version: 570.124.06     CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA H100                    Off |   00000000:66:00.0 Off |                    0 |
| N/A   33C    P0             69W /  700W |       1MiB /  95830MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
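Besides the human-readable table, nvidia-smi can emit machine-readable output, which is convenient inside job scripts; the flags below are standard nvidia-smi options:

```shell
# CSV listing of the allocated GPUs - easy to parse or log from a job script
nvidia-smi --query-gpu=index,name,memory.total,utilization.gpu --format=csv
```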
Alternatively, log in and request an interactive GPU session with srun directly:
ssh username@ui.wcss.pl
srun -N 1 -c 4 --mem=10gb -t 60 -p H100 --gres=gpu:hopper:1 --pty /bin/bash
nvidia-smi
Run a command inside an Apptainer container with GPU support (--nv):
apptainer exec --nv /lustre/software-data/container-images/pytorch_latest.sif nvidia-smi
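To confirm that PyTorch inside the container sees the GPU (not only nvidia-smi), a short inline check can be run the same way. A sketch, assuming the pytorch_latest.sif image ships python with torch installed:

```shell
# --nv passes the host GPU driver into the container;
# the one-liner prints True when torch can reach a CUDA device.
apptainer exec --nv /lustre/software-data/container-images/pytorch_latest.sif \
    python -c 'import torch; print("CUDA available:", torch.cuda.is_available())'
```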
Log in to the access server:
ssh username@ui.wcss.pl
Create a batch script, e.g. test-job.sh:
#!/bin/bash
#SBATCH -N 1
#SBATCH -c 4
#SBATCH --mem=10gb
#SBATCH --time=0-01:00:00
#SBATCH --job-name=example
#SBATCH -p lem-gpu
#SBATCH --gres=gpu:hopper:1
apptainer exec --nv /lustre/software-data/container-images/pytorch_latest.sif python obliczenia_skrypt.py
Submit the job:
sbatch test-job.sh
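After submission, the job can be monitored with standard Slurm commands; a sketch assuming default Slurm output naming (slurm-<jobid>.out):

```shell
# Submit and capture the job id (--parsable prints only the id)
jobid=$(sbatch --parsable test-job.sh)
# Queue state of this job (PD = pending, R = running)
squeue -j "$jobid"
# Full job details: allocated node, gres, time limit
scontrol show job "$jobid"
# Stdout/stderr land in slurm-<jobid>.out in the submission directory by default
cat "slurm-${jobid}.out"
```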
Subscripts are specially prepared scripts that run specific versions of a given program on the GPU resources.
List of subscripts
Enter the name of a subscript and press Enter - you will get information about its parameters.
Example for MATLAB version R2022b, sub-matlab-R2022b:
login@ui:~$ sub-matlab-R2022b
Usage: /usr/local/bin/bem2/sub-matlab-R2022b [PARAMETERS] INPUT_FILE MOLECULE_FILE
Parameters:
-p | --partition PARTITION Set partition (queue). Default = normal
-n | --nodes NODES Set number of nodes. Default = 1
-c | --cores CORES Up to 48. Default = 1
-m | --memory MEMORY In GB, up to 390 (must be integer value). Default = 1
-t | --time TIME_LIMIT In hours. Default = 1
Run the script selecting the lem-gpu partition - a sample call:
sub-matlab-R2022b -p lem-gpu -n 1 -c 1 -m 3 -t 50
Jupyter Notebook with GPU and PyTorch
Launch the notebook, wait until its status changes to Running (1), and click Connect to Jupyter (2).
The full version of the user documentation is available here.
If you do not find a solution in the above documentation, please contact kdm@wcss.pl or call +48 71 320 47 45.