These are specialized scripts prepared by system administrators to run specific versions of a given program. The scripts correctly configure the working environment (loading required modules, setting environment variables, allocating resources, selecting the required software version, etc.). They are named according to the pattern sub-<program_name>-<version_number> and are accessible exclusively on the access node ui.wcss.pl.
A current list is provided below:
sub-cfx-2024 sub-gaussian sub-orca-6.0.1-avx2
sub-abaqus sub-cfx-2025 sub-gromacs sub-orca-6.1.0-avx2
sub-abaqus-2021 sub-comsol-5.6 sub-gromacs-2021 sub-orca-avx
sub-abaqus-2023 sub-comsol-5.6-llm sub-interactive sub-psi4
sub-abaqus.old sub-comsol-6.0 sub-interactive-bem2-cpu sub-psi4-1.3.2
sub-abinit sub-comsol-6.0-llm sub-interactive-lem-cpu sub-qespresso
sub-abinit-10.2.5 sub-comsol-6.1 sub-interactive-lem-gpu sub-r
sub-abinit-10.2.7 sub-comsol-6.1-llm sub-k8s-buildtainer.sh sub-r-4.1.0
sub-abinit-10.4.7 sub-cp2k sub-kkrgen8.6.0 sub-r-4.2.1
sub-abinit-10.6.5 sub-cpmd-4.3 sub-kkrscf8.6.0 sub-raspa
sub-abinit-9.4.1 sub-cpmd-4.3.new sub-matlab sub-raspa-2.0
sub-alphafold-2.3.2 sub-crystal sub-molpro sub-sprkkr8.6.0-local
sub-alphafold-3.0.1 sub-crystal-17_1.0.2 sub-namd sub-turbomole
sub-ams sub-dalton sub-namd-2.14 sub-turbomole-7.6
sub-ams-2022.103 sub-dalton-2020 sub-namd-3.0 sub-turbomole-7.9
sub-ams-2023.104 sub-dalton-2020.old sub-namd-3.0-cuda sub-vasp-5.4.4
sub-ams-2024.102 sub-dirac sub-nwchem-7.0.2 sub-vasp-5.4.4-vtst
sub-bash sub-dirac-19 sub-openmolcas sub-vasp-6.2.0
sub-bash-mpi sub-dirac-22 sub-orca sub-vasp-6.4.0
sub-castep-24.1-mpi sub-dirac-23.0 sub-orca-4.2.1 sub-vasp-6.4.2
sub-castep-24.1-serial sub-fds-6.7.6 sub-orca-5.0.2 sub-vasp-6.4.3
sub-cfour sub-fluent sub-orca-5.0.3 sub-vasp-6.5.0
sub-cfour-2.1 sub-fluent-2022 sub-orca-5.0.3-xtb-6.5.0 sub-vasp-6.5.1
sub-cfour-2.1-serial sub-fluent-2023 sub-orca-5.0.4 sub-vasp-6.6.0
sub-cfx sub-fluent-2024 sub-orca-5.0.4-nbo7
sub-cfx-2022r1 sub-fluent-2025 sub-orca-6.0.0-avx2
Each job generates a slurm-<jobid>.out file, which provides extensive information for debugging or reporting issues to our helpdesk.
The scripts change the working directory to TMPDIR within the job before calculations begin. The path is visible in the slurm-<jobid>.out file.
The sub-script options correspond to sbatch or srun flags of the queuing system, such as the --cores and --ntasks flags. We select these flags for the user, prioritizing job performance. Note that the sbatch flag -N corresponds to the sub-script flag -n.
To display the help for a given script, simply type its name and press ENTER or use the --help flag. Example for the gaussian software script:
> sub-gaussian --help
sub-gaussian @ WCSS
Usage: sub-gaussian [OPTIONS] INPUT
Option Default Description
----------------------------------------------------------------------------------------
--debug Print sub script and exit. (requires input)
-C | --copy *.chk Copy additional files to TMPDIR.
Example: --copy="file.inp *.chk"
-B | --copy-back Copy specified files back from TMPDIR.
Example: --copy-back="myfile.cube *.wfn"
| --clean-tmp false Remove contents of TMPDIR after the job ends:
- false (default option)
- success (if job didn't fail)
- always (clean even if job failed)
----------------------------------------------------------------------------------------
-p | --partition lem-cpu Set partition (queue).
-n | --nodes 1 Set number of nodes.
-c | --cores 8 Set number of cores
-m | --memory 100 In GB (must be integer value).
-s | --storage Shortcut for --gres=storage:<type>:<value>
--storage=shm:200g
-t | --time 1 In hours.
| --gres E.g. 10GB tmpdir on /dev/shm would be:
--gres=storage:shm:10g
| --mail none Possible options: BEGIN,END,FAIL,ALL.
Example: --mail BEGIN,END
| --reservation Allocate from existing reservation.
-w | --nodelist Request specific list of nodes by their names.
| --name Job name to be appended after wcss prefix.
----------------------------------------------------------------------------------------
--formchk Use 'formchk' command after calculations are finished.
The --copy flag is used to pass additional files, besides the input file, to the job. In the help output above, the default value *.chk is shown. This means that for Gaussian, all checkpoint files located in the directory from which the job was submitted are copied to TMPDIR by default.
The --copy-back flag is used to copy files back to the submission directory after the job has finished. However, you must know the filenames or their naming pattern. If all files in TMPDIR are strictly required, you can use --copy-back \$TMPDIR, which copies the entire directory (named after the job ID) to SLURM_SUBMIT_DIR (the directory from which the job was queued).
The --clean-tmp flag is used to clean up the working directory after the job, to avoid storing data after completion. By default, data is stored for a period of 2 weeks. The available options are:
- false - the default option, which ensures that data is not deleted
- success - removes data from the working directory only if the job completed successfully
- always - removes data from the working directory even if the job ended with an error
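For example, to have the working directory contents removed only when the job succeeds (an illustrative invocation; the input file name is our own, and the `--flag=value` syntax follows the --copy-back examples shown in the help):

```shell
# Illustrative only: clean TMPDIR contents after the job, but only on success
sub-gaussian input.inp --clean-tmp=success
```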
The following flags are used to define resource allocation in the queuing system:
----------------------------------------------------------------------------------------
-p | --partition lem-cpu Set partition (queue).
-n | --nodes 1 Set number of nodes.
-c | --cores 8 Set number of cores
-m | --memory 100 In GB (must be integer value).
-s | --storage Shortcut for --gres=storage:<type>:<value>
--storage=shm:200g
-t | --time 1 In hours.
| --gres E.g. 10GB tmpdir on /dev/shm would be:
--gres=storage:shm:10g
| --mail none Possible options: BEGIN,END,FAIL,ALL.
Example: --mail BEGIN,END
| --reservation Allocate from existing reservation.
-w | --nodelist Request specific list of nodes by their names.
| --name Job name to be appended after wcss prefix.
----------------------------------------------------------------------------------------
The --storage flag is simply a shorter version of the --gres=storage:<type>:<size> flag and takes priority over the --gres flag.
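The equivalence can be seen with the shm example from the help output; both invocations below (illustrative, with an input file name of our choosing) request the same 10 GB tmpdir on /dev/shm:

```shell
# Illustrative only: these two requests are equivalent
sub-gaussian input.inp --storage=shm:10g
sub-gaussian input.inp --gres=storage:shm:10g
```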
Suppose I want to copy back a resulting file, for example a *.chk file from the Gaussian program.
without copying output files:
sub-gaussian input.inp
copying all files with the chk extension:
sub-gaussian input.inp --copy-back="*.chk"
copying a specific file precious.chk:
sub-gaussian input.inp --copy-back="precious.chk"
copying all output files:
sub-gaussian input.inp --copy-back=\$TMPDIR
An example of using the --copy and --copy-back flags for the Quantum ESPRESSO program using sub-qespresso, along with an additional flag for this program, --exec, which allows selecting an executable other than the default one.
> sub-qespresso --help
sub-qespresso @ WCSS
Usage: sub-qespresso [OPTIONS] INPUT
Option Default Description
----------------------------------------------------------------------------------------
--debug Print sub script and exit. (requires input)
-C | --copy *.UPF Copy additional files to TMPDIR.
Example: --copy="file.inp *.chk"
-B | --copy-back Copy specified files back from TMPDIR.
Example: --copy-back="myfile.cube *.wfn"
| --clean-tmp false Remove contents of TMPDIR after the job ends:
- false (default option)
- success (if job didn't fail)
- always (clean even if job failed)
----------------------------------------------------------------------------------------
...slurm flags here...
----------------------------------------------------------------------------------------
--exec pw.x Set ESPRESSO executable.
admin@ui.wcss.pl > sub-qespresso -c 32 -m 100 scf.in
sub-qespresso @ WCSS
job name - qespresso-latest:FeCNT_scf.in
partition - lem-cpu
nodes - 1
cores - 32 per node
memory - 100 GB per node
time limit - 1 hours
mail type - none
inputs - scf.in
sbatch: [AUTOMATIC PARTITION CHOICE] Based on jobtime: 60m, selecting partition 'lem-cpu-short'
Submitted batch job 4390980
The job was submitted with ID 4390980. After the calculations are finished, I have two additional files, scf-4390980.log and slurm-4390980.out.
admin@ui.wcss.pl > ls -1
C.pbe-n-kjpaw_psl.1.0.0.UPF
Fe.pbe-spn-kjpaw_psl.0.2.1.UPF
DOS.in
scf.in
nscf.in
scf-4390980.log
slurm-4390980.out
I read the WORKDIR file location from slurm-4390980.out (working directories of finished jobs are stored for 2 weeks after the calculations are finished):
admin@ui.wcss.pl > tail slurm-4390980.out
WORKDIR_FILES = /lustre/tmp/slurm/finished_jobs/4390980/LOCAL/r11ch01b04
At the same time, the input file specified that the output files should be stored in a directory named out, so the full path to the output files will be as follows:
/lustre/tmp/slurm/finished_jobs/4390980/LOCAL/r11ch01b04/out
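Rather than copying the path by hand, the WORKDIR_FILES line can be extracted with standard shell tools. This is our own suggestion, not part of the sub-scripts; for illustration, the snippet first recreates the relevant line from the output above:

```shell
# Recreate the WORKDIR_FILES line from slurm-4390980.out shown above
# (illustration only; on the cluster the file already exists):
printf 'WORKDIR_FILES = /lustre/tmp/slurm/finished_jobs/4390980/LOCAL/r11ch01b04\n' > slurm-4390980.out

# Extract the path after "= " and append the out subdirectory:
workdir=$(grep 'WORKDIR_FILES' slurm-4390980.out | sed 's/^.*= *//')
echo "$workdir/out"
# prints: /lustre/tmp/slurm/finished_jobs/4390980/LOCAL/r11ch01b04/out
```

The resulting path can then be passed directly to --copy, e.g. --copy="$workdir/out".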
I pass this path to the --copy flag for the next input file:
admin@ui.wcss.pl > sub-qespresso -c 32 -m 100 nscf.in --copy="/lustre/tmp/slurm/finished_jobs/4390980/LOCAL/r11ch01b04/out"
sub-qespresso @ WCSS
job name - qespresso-latest:FeCNT_nscf.in
partition - lem-cpu
nodes - 1
cores - 32 per node
memory - 100 GB per node
time limit - 1 hours
mail type - none
inputs - nscf.in
sbatch: [AUTOMATIC PARTITION CHOICE] Based on jobtime: 60m, selecting partition 'lem-cpu-short'
Submitted batch job 4390981
Two new files appear: slurm-4390981.out and nscf-4390981.log.
admin@ui.wcss.pl > ls -1
C.pbe-n-kjpaw_psl.1.0.0.UPF
Fe.pbe-spn-kjpaw_psl.0.2.1.UPF
DOS.in
nscf-4390981.log
nscf.in
scf-4390980.log
scf.in
slurm-4390980.out
slurm-4390981.out
The final DOS calculation is run with the executable dos.x instead of pw.x. As before, I read the working directory path from the slurm output:
admin@ui.wcss.pl > tail -n 1 slurm-4390981.out
WORKDIR_FILES = /lustre/tmp/slurm/finished_jobs/4390981/LOCAL/r11ch01b04
admin@ui.wcss.pl > sub-qespresso -c 32 -m 100 DOS.in --copy="/lustre/tmp/slurm/finished_jobs/4390981/LOCAL/r11ch01b04/out" --exec="dos.x"
sub-qespresso @ WCSS
job name - qespresso-latest:DOS.in
partition - lem-cpu
nodes - 1
cores - 32 per node
memory - 100 GB per node
time limit - 1 hours
mail type - none
inputs - DOS.in
sbatch: [AUTOMATIC PARTITION CHOICE] Based on jobtime: 60m, selecting partition 'lem-cpu-short'
Submitted batch job 4390983
admin@ui.wcss.pl > ls -1
C.pbe-n-kjpaw_psl.1.0.0.UPF
Fe.pbe-spn-kjpaw_psl.0.2.1.UPF
DOS-4390983.log
DOS.in
nscf-4390981.log
nscf.in
scf-4390980.log
scf.in
slurm-4390980.out
slurm-4390981.out
slurm-4390983.out
For the purpose of running modified or custom scripts, we provide two scripts that allow you to run jobs in accordance with the cluster policy while receiving the appropriate information in the slurm-<jobid>.out file.
These are:
sub-bash
sub-bash-mpi
They allow you to insert your own script in the place where other calculations would normally execute. All flags will function the same as in other sub-scripts. Following convention, the script changes the working directory to TMPDIR within the job; the provided bash script will be copied automatically, while other necessary files can be copied either within your own script or by using the --copy flag.
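A minimal sketch of such a custom script is shown below. The file name myjob.sh and its contents are our own hypothetical example; the sub-bash invocation in the comment follows the flag conventions described above:

```shell
#!/bin/bash
# myjob.sh - a minimal custom script (hypothetical example), submitted as:
#   sub-bash -c 4 -m 10 myjob.sh
# sub-bash copies this script to TMPDIR and changes the working directory
# there before execution, so files created below land in the job's TMPDIR.
echo "Working directory: $(pwd)"
hostname > nodename.txt          # an example result file
# Result files can be retrieved with --copy-back="nodename.txt".
```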