Torque batch system


This page covers the following topics:

  • Commands for Torque
  • Batch scripts for Torque
  • Interactive Jobs with Torque

This documentation gives a general overview of how to use the Torque batch system and applies to the Woody and Emmy clusters, as well as to parts of TinyGPU. For more cluster-specific information, consult the respective cluster documentation!

Commands for Torque

The command to submit jobs is called qsub. To submit a batch job use

qsub <further options> [<job script>]

The job script may be omitted for interactive jobs (see below). After submission, qsub will output the Job ID of your job. It can later be used for identification purposes and is also available as the environment variable $PBS_JOBID in job scripts (see below). These are the most important options for the qsub command:

Important options for qsub and their meaning:

-N <job name>
Specifies the name shown by qstat. If the option is omitted, the name of the batch script file is used.

-l nodes=<# of nodes>:ppn=<nn>
Specifies the number of nodes requested. All current clusters (except the SandyBridge partition within Woody) require you to always request full nodes. Thus, for Emmy you always need to specify :ppn=40, and for Woody (usually) :ppn=4. For other clusters, see the respective cluster documentation for the correct ppn values.

-l walltime=HH:MM:SS
Specifies the required wall-clock time (runtime). When the job reaches this walltime, it is sent a TERM signal; if it has not ended after a few seconds, it is sent a KILL signal. If you omit the walltime option, a very short default time is used. Please specify a realistic runtime, since the scheduler also bases its decisions on this value (short jobs are preferred).

-M x@y -m abe
You will get e-mail to x@y when the job is aborted (a), begins (b), and ends (e). You can choose any subset of abe for the -m option. If you omit the -M option, the default mail address assigned to your RRZE account is used.

-o <standard output file>
File name for the standard output stream. If this option is omitted, a name is compiled from the job name (see -N) and the job ID.

-e <error output file>
File name for the standard error stream. If this option is omitted, a name is compiled from the job name (see -N) and the job ID.

-I
Interactive job. A job script may still be specified, but it is ignored except for the PBS options it may contain; no code from it is executed. Instead, the user gets an interactive shell on one of the allocated nodes and can execute any command there. In particular, you can start a parallel program with mpirun.

-X
Enable X11 forwarding. If the $DISPLAY environment variable is set when submitting the job, an X program running on the compute node(s) is displayed on the user's screen. This only makes sense for interactive jobs (see the -I option).

-W depend=<dependency list>
Makes the job depend on certain conditions. E.g., with -W depend=afterok:12345 the job will only run after job 12345 has ended successfully, i.e. with an exit code of zero. Please consult the qsub man page for more information.

-q <queue>
Specifies the Torque queue; the default queue is route. Usually this option is not required, as the route queue automatically forwards the job to an appropriate execution queue.
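
As an illustration, a submission using several of these options might look like this; the script names and the Job ID 12345 are hypothetical:

qsub -N testrun -l nodes=2:ppn=40,walltime=04:00:00 -m be job.sh
# suppose qsub reported Job ID 12345; a second job can then be chained
# so that it only starts after testrun has ended successfully:
qsub -N postproc -W depend=afterok:12345 postproc.sh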

There are several Torque commands for job inspection and control. The following table gives a short summary:

Useful Torque user commands:

qstat [<options>] [<JobID>|<queue>]
Displays information on jobs. Only the user's own jobs are displayed. For information on the overall queue status, see the section on job priorities.
Options: -a display "all" jobs in a user-friendly format, -f extended job info, -r display only running jobs

qdel <JobID> ...
Removes jobs from the queue.

qalter <qsub-options> <JobID>
Changes job parameters previously set by qsub. Only certain parameters may be changed after the job has started. See the qsub and qalter manual pages.

qcat [<options>] <JobID>
Displays stdout/stderr from a running job.
Options: -o display stdout (default), -e display stderr, -f output appended data as the job is running (like tail -f)
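
Some typical invocations; the Job ID 12345 is hypothetical:

qstat -a                           # list your jobs in a user-friendly format
qdel 12345                         # remove job 12345 from the queue
qalter -l walltime=02:00:00 12345  # change the walltime of a waiting job
qcat -f 12345                      # follow the stdout of a running job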

The scheduler sets a number of environment variables that tell the job which resources were allocated to it. They can also be used in batch scripts. The most useful ones are listed below:

Useful environment variables for Torque:

$PBS_JOBID        Job ID
$PBS_O_WORKDIR    Directory from which the job was submitted
$PBS_NODEFILE     Name of a file containing the list of nodes on which the job runs (display it with cat $PBS_NODEFILE)
$PBS_NUM_NODES    Number of nodes allocated to the job
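
As a minimal sketch, a job script could use these variables as follows (the echo texts are purely illustrative):

echo "Job $PBS_JOBID was submitted from $PBS_O_WORKDIR"
# the node file typically lists each node once per requested core slot,
# so its line count gives the number of processes to start
NP=$(wc -l < $PBS_NODEFILE)
echo "Running on $PBS_NUM_NODES nodes with $NP core slots"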


Batch scripts for Torque

To submit a batch job you have to write a shell script that contains all the commands to be executed. Job parameters like estimated runtime and required number of nodes/CPUs can also be specified there (instead of on the command line):

Example of a batch script (Emmy cluster), MPI parallel job
#!/bin/bash -l
#
# allocate 4 nodes (80 Cores / 160 SMT threads) for 6 hours
#PBS -l nodes=4:ppn=40,walltime=06:00:00
#
# job name 
#PBS -N Sparsejob_33
#
# first non-empty non-comment line ends PBS options


# load required modules (compiler, MPI, ...)
module load example1
# jobs always start in $HOME, so change to the work directory
cd ${PBS_O_WORKDIR}

# uncomment the following lines to use $FASTTMP
# mkdir ${FASTTMP}/$PBS_JOBID
# cd ${FASTTMP}/$PBS_JOBID
# copy input file from location where job was submitted
# cp ${PBS_O_WORKDIR}/inputfile .

# run, using only physical cores
mpirun -n 80 a.out -i inputfile -o outputfile


Example of a batch script (Woody cluster), shared-memory parallel job (OpenMP)
#!/bin/bash -l
#
# allocate 1 node (4 Cores) for 6 hours
#PBS -l nodes=1:ppn=4,walltime=06:00:00
#
# job name 
#PBS -N Sparsejob_33
#
# first non-empty non-comment line ends PBS options

# load required modules (compiler, ...)
module load intel64
# jobs always start in $HOME, so change to the work directory
cd ${PBS_O_WORKDIR}
export OMP_NUM_THREADS=4

# run with 4 OpenMP threads
./a.out

The comment lines starting with #PBS are ignored by the shell but interpreted by Torque as options for job submission (see the options summary above). All of these options can also be given on the qsub command line. The examples also show the use of the $FASTTMP and $HOME variables. $PBS_O_WORKDIR contains the directory from which the job was submitted. All batch scripts start executing in the user's $HOME, so some sort of directory change is always in order.
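
For example, the first batch script above could be submitted with all options on the command line instead of in #PBS lines (the script name job.sh is hypothetical):

qsub -l nodes=4:ppn=40,walltime=06:00:00 -N Sparsejob_33 job.sh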

Modules can be loaded from inside a batch script. The only requirement is that the script uses either a csh-based shell or bash with the -l switch, as in the examples above.

Interactive Jobs with Torque

For testing purposes, or when running applications that require manual intervention (like GUIs), Torque offers interactive access to the compute nodes assigned to a job. To use it, specify the -I option to qsub and omit the batch script. When the job is scheduled, you get a shell on the master node (the first in the assigned node list), where any command, including mpirun, can be used. If you need X forwarding, use the -X option in addition to -I.

Note that the starting time of an interactive batch job cannot be reliably predicted; you have to wait for it to be scheduled. We therefore recommend always running such jobs with a wall-clock time limit of less than one hour, so that the job is routed to the devel queue, for which a number of nodes is reserved during working hours.
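
For example, the following request (with the walltime deliberately kept below one hour) starts an interactive job with X11 forwarding on one Emmy node:

qsub -I -X -l nodes=1:ppn=40,walltime=00:59:00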

Interactive batch jobs do not produce stdout and stderr files. If you want a protocol of the session, use e.g. the UNIX script command.
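
A minimal sketch of such a protocol, with an arbitrary log file name:

script my-session.log
# ... everything typed and printed here is recorded ...
exit   # ends the recording and writes my-session.log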
