| Command | PBS | SLURM | 
|---|---|---|
| Queues / partitions | qstat -q | sinfo --summarize | 
| Task list | qstat -r, qstat -i | squeue | 
| User task list | qstat -u <username> | squeue -u <username> | 
| Submitting tasks | qsub | sbatch/srun/salloc | 
| Task status | qstat | squeue -j <job_id>, scontrol show job <job_id> | 
| Deleting a task | qdel <job_id> | scancel <job_id> | 
| Deleting all user tasks | qselect -u <username> \| xargs qdel, qdel -u <username> | scancel -u <username> | 
| Interactive session in a pseudo-terminal | qsub -I | srun --pty <program> | 
Directives for the SLURM queueing system can be included in the same file as directives for the PBS queueing system. Each interpreter treats directives that do not belong to it as comments, so they do not disrupt job execution.
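For example, a single job script can carry both sets of directives and be submitted to either system. The job name, queue/partition name, and resource values below are illustrative placeholders, not site-specific defaults:

```shell
#!/bin/bash
# PBS directives (ignored as comments by SLURM)
#PBS -N myjob
#PBS -q batch
#PBS -l select=1:ncpus=4
#PBS -l walltime=01:00:00

# SLURM directives (ignored as comments by PBS)
#SBATCH -J myjob
#SBATCH -p batch
#SBATCH -n 4
#SBATCH -t 01:00:00

# The same file works with either scheduler:
#   qsub job.sh      (PBS)
#   sbatch job.sh    (SLURM)
./my_program
```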
| Argument | #PBS | #SBATCH | 
|---|---|---|
| Task name | -N <name> | -J <name> | 
| Queue / partition | -q <name> | -p <name> | 
| Number of nodes | -l select=<number> | -N <number> | 
| Total number of subtasks / MPI processes | N/A | -n <number> | 
| Number of computing cores per subtask | N/A | -c <number> | 
| Number of computational cores | -l ncpus=<number> | N/A | 
| Number of MPI processes per node | -l mpiprocs=<number> | --ntasks-per-node=<number> | 
| Amount of RAM per node | -l mem=<amount> | --mem=<amount> | 
| Time limit | -l walltime=<hh:mm:ss> | -t <minutes>, -t <days-hh:mm:ss> | 
It is worth noting that in the SLURM queueing system the requested numbers of cores and tasks are, by default, totals for the whole job, not per-node values as in the PBS system.
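To illustrate the difference, here is a sketch of requesting 2 nodes with 8 MPI processes each in both systems (the node and process counts are hypothetical):

```shell
# PBS: resources are requested per chunk (node),
# so each of the 2 chunks gets 8 cores / 8 MPI processes:
#PBS -l select=2:ncpus=8:mpiprocs=8

# SLURM: -n counts tasks across the whole job;
# combine -N with --ntasks-per-node to fix the per-node count:
#SBATCH -N 2
#SBATCH --ntasks-per-node=8    # 2 x 8 = 16 tasks in total
# equivalently: #SBATCH -N 2 -n 16
```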