Working with NVIDIA GPUs
NVIDIA compiler and libraries
The CUDA compilers are part of the cuda modules. Loading the appropriate module (e.g. cuda/11.2) not only sets the path to the Nvidia CUDA compilers but also defines environment variables such as CUDA_HOME or CUDA_INSTALL_PATH, which can be used in Makefiles, etc.
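For example, a minimal sketch of compiling a CUDA source file with the module-provided compiler (the module version and file names are placeholders):
module load cuda/11.2
nvcc -O3 -o saxpy saxpy.cu   # nvcc is found via the PATH set by the module
echo $CUDA_HOME              # points to the CUDA installation and can be referenced in Makefiles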
The Nvidia (formerly PGI) compilers are part of the nvhpc
modules.
GPU statistics in job output
Slurm saves the standard output stream by default into a file in the working directory and the filename is automatically compiled from the job name and the job ID. Statistics on GPU utilization are added at the very end of this file. Each CUDA binary call prints a line with information on GPU name, bus ID, process ID, GPU and memory utilization, maximum memory usage and overall execution time.
The output will look like this:
=== GPU utilization ===
gpu_name, gpu_bus_id, pid, gpu_utilization [%], mem_utilization [%], max_memory_usage [MiB], time [ms]
NVIDIA GeForce RTX 3080, 00000000:1A:00.0, 134883, 92 %, 11 %, 395 MiB, 244633 ms
NVIDIA GeForce RTX 3080, 00000000:1A:00.0, 135412, 92 %, 11 %, 395 MiB, 243797 ms
In this example, two CUDA binary calls happened; both were running on the same GPU (00000000:1A:00.0). The average GPU utilization was 92%, 11% of the GPU memory (395 MiB) was used, and each binary ran for about 244 seconds.
NVIDIA System Management Interface
The System Management Interface (nvidia-smi) is a command line utility, based on top of the NVIDIA Management Library (NVML), intended to aid in the management and monitoring of NVIDIA GPU devices. nvidia-smi
provides monitoring and management capabilities for each of NVIDIA’s Tesla, Quadro, GRID and GeForce devices from Fermi and higher architecture families.
Using nvidia-smi on our clusters
- ssh to the node where the job runs; if you have multiple jobs running on the same node, you will be placed in the allocation of the job which has most recently started a new job step (either by starting the job or by calling srun); currently, this cannot be changed
- type nvidia-smi to see GPU utilization
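A minimal sketch of these steps (the node name is a placeholder; take it from the output of squeue):
squeue -u $USER              # find the node(s) your job is running on
ssh nodename                 # hypothetical node name
watch -n 5 nvidia-smi        # refresh the GPU utilization every 5 seconds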
The output of nvidia-smi
will look similar to the picture on the right. The upper part contains information about the GPU and provides the percentage of GPU utilization in the bottom right cell of the table; the lower part lists the processes that are running on the GPU and shows how much GPU memory is used. The device numbers for GPU jobs always start at 0, as can be seen in the bottom left cell of the table, because each job is treated on its own. Thus, in case you contact us for bug reports or need general help, please include the job ID and the GPU bus ID from the middle cell of the table in your message.
nvtop: GPU status viewer
Nvtop stands for Neat Videocard TOP, an (h)top-like task monitor for AMD and NVIDIA GPUs. It can handle multiple GPUs and prints information about them in an htop-familiar way. It provides information on the GPU states (GPU and memory utilization, temperature, etc.) as well as about the processes executing on the GPUs. nvtop
is available as a module on Alex and TinyGPU.
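A minimal usage sketch inside a job allocation, assuming the module is simply called nvtop:
module load nvtop
nvtop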
NVIDIA Multi-Process Service
The Multi-Process Service (MPS) is an alternative, binary-compatible implementation of the CUDA Application Programming Interface (API). The MPS runtime architecture is designed to transparently enable co-operative multi-process CUDA applications, typically MPI jobs. This can benefit performance when the GPU compute capacity is underutilized by a single application process.
Using MPS with single-GPU jobs
# set necessary environment variables and start the MPS daemon
export CUDA_MPS_PIPE_DIRECTORY=$TMPDIR/nvidia-mps.$SLURM_JOB_ID
export CUDA_MPS_LOG_DIRECTORY=$TMPDIR/nvidia-log.$SLURM_JOB_ID
nvidia-cuda-mps-control -d

# do your work (a.out is just a placeholder)
./a.out -param 1 &
./a.out -param 2 &
./a.out -param 3 &
./a.out -param 4 &
wait

# stop the MPS daemon
echo quit | nvidia-cuda-mps-control
GPU-Profiling with NVIDIA tools
NVIDIA offers two prominent profiling tools: Nsight Systems which targets profiling whole applications and Nsight Compute which allows zeroing in on specific performance characteristics of single kernels.
An overview of application behavior can be obtained by running
nsys profile ./a.out
transferring the resulting report file to your local machine and opening it with a local installation of Nsight Systems. More command line options are available, as specified in the documentation. Some of the most relevant ones are
--stats=true --force-overwrite=true -o my-profile
--stats=true summarizes the obtained performance data after the application has finished and prints this summary to the command line. -o specifies the target output file name for the generated report file (my-profile in this example). --force-overwrite=true advises the profiler to overwrite the report file should it already exist.
A full example could be
nsys profile --stats=true --force-overwrite=true -o my-profile ./a.out
Important: The resulting report files can grow quite large, depending on the application examined. Please make sure to use the appropriate file systems.
After getting an execution time overview, more in-depth analysis can be carried out by using Nsight Compute via
ncu ./a.out
which by default profiles all kernels in the application. This can be finetuned by providing options such as
--launch-skip 2 --launch-count 1
to skip the first two kernel launches and limit the number of profiled kernels to 1. Profiling can also be limited to specific kernels using
--kernel-name my_kernel
with an assumed kernel name of my_kernel. In most cases, specifying metrics to be measured is recommended as well, e.g. with
--metrics dram__bytes_read.sum,dram__bytes_write.sum
for the data volumes read and written from and to the GPU’s main memory. Further information on available metrics can be found here and some key metrics are listed here.
Other command line options can be reviewed in the documentation.
A full profiling call could be
ncu --kernel-name my_kernel --launch-skip 2 --launch-count 1 --metrics dram__bytes_read.sum,dram__bytes_write.sum ./a.out
LIKWID
LIKWID is a powerful performance tool and library suite for performance-oriented programmers and administrators using the GNU/Linux operating system. For example, likwid-topology can be used to display the thread and cache topology on multicore/multisocket computers, likwid-perfctr is a tool to measure hardware performance counters on recent Intel and AMD processors, and likwid-pin allows you to pin your threaded application without changing your code.
LIKWID 5.0 also supports NVIDIA GPUs. In order to simplify the transition from CPUs to GPUs for the users, the LIKWID API for GPUs is basically a copy of the LIKWID API for CPUs with a few differences. For the command line applications, new CLI options are introduced. A tutorial on how to use LIKWID with NVIDIA GPUs can be found on the LIKWID GitHub page.
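As a hedged sketch, measuring a GPU performance group with likwid-perfctr might look as follows; the -G/-W options are the GPU counterparts of the CPU options -C/-g, and the group name is an example that has to be checked against the groups shipped with the installed LIKWID version:
module load likwid
likwid-perfctr -G 0 -W FLOPS_DP ./a.out   # measure GPU group FLOPS_DP on GPU 0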
Continuous Integration / Gitlab Cx
HPC4FAU and NHR@FAU are happy to provide continuous integration for HPC-related software projects developed on one of the Gitlab instances at RRZE (gitlab.rrze.fau.de or gitos.rrze.fau.de). Access to the Gitlab Runner is restricted. Moreover, every job on the HPC systems has to be associated with an HPC user account.
The Cx jobs run on the Testcluster provided by HPC4FAU and NHR@FAU.
A Hands-On talk on Cx was given at the HPC-Café (October 19, 2021): Slides & Additional Slides
(Note: With the license downgrade of gitlab.rrze.fau.de on June 22, 2022, the pull mirroring feature is disabled. There is currently no easy way to sync GitHub to Gitlab repositories to run the Cx services.)
Prerequisites:
- Valid HPC account at HPC4FAU and NHR@FAU (Getting started guide)
- SSH key pair for authentication of the Gitlab Runner. Main information about SSH access is provided here. We recommend creating a separate SSH key pair without a passphrase for Gitlab CI only, e.g. by running ssh-keygen -t ed25519 -f id_ssh_ed25519_gitlab, which generates id_ssh_ed25519_gitlab and id_ssh_ed25519_gitlab.pub.
- Request Cx usage by mail to the HPC user support (hpc-support@fau.de) with
  - your HPC account name
  - the URL to the repository
  - the public key (like id_ssh_ed25519_gitlab.pub)
Preparing Gitlab repositories:
- Configure SSH authentication for the HPC Cx service. In the repository, go to Settings -> CI/CD -> Variables and add two variables:
  - AUTH_USER: the name of your HPC account.
  - AUTH_KEY: the content of the private SSH key file (like id_ssh_ed25519_gitlab). The key is not shown in the logs but is visible to all maintainers of the project!
- Enable the HPC runner for the repository at Settings -> CI/CD -> Runner and flip the switch at Enable shared runners for this project. The HPC Runner has the testcluster tag.
Define jobs using the HPC Cx service
Jobs for CI/CD in Gitlab are defined in the file .gitlab-ci.yml
in the top level of the repository. In order to run on the HPC system, the jobs need the tag testcluster
. The tag tells the system on which runner the job can be executed.
job:
  tags:
    - testcluster
  [...]
To define where and how the job is run, the following variables are available:
| Variable | Value | Changeable | Description |
|---|---|---|---|
| SLURM_PARTITION | work | NO | Specifies the set of nodes which should be used for the job. We currently allow Cx jobs only in the work partition. |
| SLURM_NODES | 1 | NO | Only single-node jobs are allowed at the moment. |
| SLURM_TIMELIMIT | 120 | YES (values 1 – 120 allowed) | Specifies the maximal runtime of a job. |
| SLURM_NODELIST | phinally | YES (any hostname in the system, see here) | Specifies the host for the job. |
You only need to specify a host in SLURM_NODELIST
if you want to test different architecture-specific build options or optimizations.
SLURM options can be set globally in the variables
section to apply to all jobs:
variables:
  SLURM_TIMELIMIT: 60
  SLURM_NODELIST: rome1

job1:
  [...]
  tags:
    - testcluster

job2:
  [...]
  tags:
    - testcluster
The options can also be specified for each job individually. This will overwrite the global settings.
job:
  [...]
  variables:
    SLURM_NODELIST: rome1
  tags:
    - testcluster
The Cx system uses the salloc
command to submit the jobs to the batch system. All available environment variables for salloc
can be applied here. An example would be SLURM_MAIL_USER
to get notified by the system.
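For example, a hedged sketch of a job that shortens the time limit and requests mail notification (the mail address is a placeholder):
job:
  variables:
    SLURM_TIMELIMIT: 30
    SLURM_MAIL_USER: your.name@fau.de
  script:
    - ./run_tests
  tags:
    - testcluster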
If you want to run on the frontend node testfront
instead of a compute node, you can specify the variable NO_SLURM_SUBMIT: 1
. This is commonly not what you want!
It may happen that your CI job fails if the node is occupied with other jobs for more than 24 hours. In that case, simply restart the CI job.
Examples:
Build on default node with default time limit (120 min.)
stages:
  - build
  - test

build:
  stage: build
  script:
    - export NUM_CORES=$(nproc --all)
    - mkdir $CI_PROJECT_DIR/build
    - cd $CI_PROJECT_DIR/build
    - cmake ..
    - make -j $NUM_CORES
  tags:
    - testcluster
  artifacts:
    paths:
      - build

test:
  stage: test
  variables:
    SLURM_TIMELIMIT: 30
  script:
    - cd $CI_PROJECT_DIR/build
    - ./test
  tags:
    - testcluster
Build on default node with default time limit, enable LIKWID (hwperf) and run one job on frontend
variables:
  SLURM_CONSTRAINT: "hwperf"

stages:
  - prepare
  - build
  - test

prepare:
  stage: prepare
  script:
    - echo "Preparing on frontend node..."
  variables:
    NO_SLURM_SUBMIT: 1
  tags:
    - testcluster

build:
  stage: build
  script:
    - export NUM_CORES=$(nproc --all)
    - mkdir $CI_PROJECT_DIR/build
    - cd $CI_PROJECT_DIR/build
    - cmake ..
    - make -j $NUM_CORES
  tags:
    - testcluster
  artifacts:
    paths:
      - build

test:
  stage: test
  variables:
    SLURM_TIMELIMIT: 30
  script:
    - cd $CI_PROJECT_DIR/build
    - ./test
  tags:
    - testcluster
Build and test stage on a specific node and use a custom default time limit
variables:
  SLURM_NODELIST: broadep2
  SLURM_TIMELIMIT: 10

stages:
  - build
  - test

build:
  stage: build
  script:
    - export NUM_CORES=$(nproc --all)
    - mkdir $CI_PROJECT_DIR/build
    - cd $CI_PROJECT_DIR/build
    - cmake ..
    - make -j $NUM_CORES
  tags:
    - testcluster
  artifacts:
    paths:
      - build

test:
  stage: test
  variables:
    SLURM_TIMELIMIT: 30
  script:
    - cd $CI_PROJECT_DIR/build
    - ./test
  tags:
    - testcluster
Build and benchmark on multiple nodes
stages:
  - build
  - benchmark

.build:
  stage: build
  script:
    - export NUM_CORES=$(nproc --all)
    - mkdir $CI_PROJECT_DIR/build
    - cd $CI_PROJECT_DIR/build
    - cmake ..
    - make -j $NUM_CORES
  tags:
    - testcluster
  variables:
    SLURM_TIMELIMIT: 10
  artifacts:
    paths:
      - build

.benchmark:
  stage: benchmark
  variables:
    SLURM_TIMELIMIT: 20
  script:
    - cd $CI_PROJECT_DIR/build
    - ./benchmark
  tags:
    - testcluster

# broadep2
build-broadep2:
  extends: .build
  variables:
    SLURM_NODELIST: broadep2

benchmark-broadep2:
  extends: .benchmark
  dependencies:
    - build-broadep2
  variables:
    SLURM_NODELIST: broadep2

# naples1
build-naples1:
  extends: .build
  variables:
    SLURM_NODELIST: naples1

benchmark-naples1:
  extends: .benchmark
  dependencies:
    - build-naples1
  variables:
    SLURM_NODELIST: naples1
Parent-child pipelines for dynamically creating jobs
In order to create a child pipeline, we have to dynamically create a YAML file that is compatible with the Gitlab-CI system. The dynamically created file is only valid for the current Cx execution. The YAML file can, for example, be created by a script that is part of the repository, like the .ci/generate_jobs.sh script in the example below. There are other methods to create the YAML file for the child pipeline (multi-line script entry, templated job with variable overrides, …).
$ cat .ci/generate_jobs.sh
#!/bin/bash -l

# Get list of modules
MODLIST=$(module avail -t intel64 2>&1 | grep -E "^intel64" | awk '{print $1}')
# Alternative: Get list of idle hosts in the testcluster (requires NO_SLURM_SUBMIT=1)
#HOSTLIST=$(sinfo -t idle -h --partition=work -o "%n")

for MOD in ${MODLIST}; do
    # replace '/' in module name with '-' for job name
    MODVER=${MOD/\//-}
    cat << EOF
build-$MODVER:
  stage: build
  variables:
    CUDA_MODULE: $MOD
  script:
    - module load "\$CUDA_MODULE"
    - make
    - ./run_tests
  tags:
    - testcluster
EOF
done
With this script, we can generate and execute the child pipeline in the parent configuration. We use NO_SLURM_SUBMIT=1
to generate the pipeline on the frontend node. In some cases, you have to use a specific system (e.g. CUDA modules only usable on the host medusa
), then just use the SLURM_NODELIST
variable. We store the generated YAML file as artifact in the generator job and include it as trigger in the executor. If you want to use artifacts in the child pipeline that are created in the parent pipeline (like differently configured builds), you have to specify the variable PARENT_PIPELINE_ID=$CI_PIPELINE_ID
and specify the pipeline in the child job (job
-> needs
-> pipeline: $PARENT_PIPELINE_ID
).
generate_child_pipeline:
  stage: build
  tags:
    - testcluster
  variables:
    NO_SLURM_SUBMIT: 1
  script:
    - .ci/generate_jobs.sh > child-pipeline.yml
  artifacts:
    paths:
      - child-pipeline.yml

execute_child_pipeline:
  stage: test
  trigger:
    include:
      - artifact: child-pipeline.yml
        job: generate_child_pipeline
    strategy: depend
  variables:
    PARENT_PIPELINE_ID: $CI_PIPELINE_ID
Disclaimer
Be aware that
- the private SSH key is visible to all maintainers of your project. It is best to have only a single maintainer and make all other members developers.
- the CI jobs can access data ($HOME, $WORK, …) of the CI user.
- BIOS and OS settings of Testcluster nodes can change without notification.
Mentors
- T. Gruber, RRZE/NHR@FAU, hpc-support@fau.de
- L. Werner, Chair of Computer Science 10, Chair of System Simulation
- Prof. Dr. Harald Köstler (NHR@FAU and Chair of System Simulation)
Python and Jupyter
Jupyterhub was the topic of the HPC Cafe in October 2020. https://jupyterhub.rrze.uni-erlangen.de/ is an experimental service.
This page will address some common pitfalls when working with python and related tools on a shared system like a cluster.
The following topics will be discussed in detail on this page:
- Available python versions
- Installing packages
- Conda environment
- Jupyter notebook security
- Installation and usage of mpi4py under Conda
Available python versions
All Unix systems come with a system-wide Python installation; however, for the cluster it is highly recommended to use one of the Anaconda installations provided as modules.
# reminder
module avail python
module load python/XY
These modules come with a wide range of preinstalled packages.
Installing packages
There are different ways of managing Python packages on the cluster. This list is not complete; however, it highlights methods which are known to work well with the local software stack.
As a general note, it is recommended to build packages using an interactive job on the target cluster to make sure all hardware can be used properly.
Make sure to load modules that might be needed by your Python code (e.g. CUDA for GPU support).
Set the following proxy variables if external repositories need to be reached:
export http_proxy=http://proxy:80
export https_proxy=http://proxy:80
Using pip
Pip is a package manager for python. It can be used to easily install packages and manage their versions.
By default pip will try to install packages system wide, which will not be possible due to missing permissions.
The behavior can be changed by adding --user
to the call.
pip install --user package-name
or %pip install --user --proxy http://proxy:80 package-name
from within Jupyter notebooks
By defining the variable PYTHONUSERBASE
(best done in your .bashrc/.bash_profile) we change the installation location from ~/.local to a different path. Doing so prevents your home folder from filling up with data that does not need a backup and from hitting the quota.
export PYTHONUSERBASE=$WOODYHOME/software/privat
If you intend to share the package with your coworkers consider wrapping the python package inside a module.
For information on the module system see the HPC-Cafe from March 2020.
- Set up and define the target folder with PYTHONUSERBASE.
- Install the package as above.
- Your module file needs to add the site-packages folder to PYTHONPATH and, if the package comes with binaries, the bin folder to PATH.
For an example see the module quantumtools on woody; a minimal sketch of such a module file is shown below.
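A minimal sketch of such a Tcl module file; the installation prefix and the Python version in the site-packages path are assumptions and must be adapted:
#%Module1.0
## hypothetical module file for a privately installed Python package
set basedir $env(WOODYHOME)/software/privat
prepend-path PYTHONPATH $basedir/lib/python3.8/site-packages
prepend-path PATH       $basedir/bin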
Conda environment
In order to use Conda environments on the HPC cluster some preparation has to be done.
Remember a python module needs to be loaded all the time – see module avail python
.
- Run conda init bash (if you use a different shell, replace bash by the shell of your choice).
- Run source ~/.bashrc (if you use a different shell, replace .bashrc accordingly).
- The process was successful if your prompt starts with (base).
Create a ~/.profile with the content
if [ -n "$BASH_VERSION" ]; then
# include .bashrc if it exists
if [ -f "$HOME/.bashrc" ]; then
. "$HOME/.bashrc"
fi
fi
For batch jobs it might be needed to use source activate <myenv>
instead of conda activate <myenv>
Some scientific software comes in the form of a Conda environment (e.g. https://docs.gammapy.org/0.17/install/index.html).
By default, such an environment will be installed to ~/.conda. However, the size can be several GB; therefore, you should configure Conda to use a different path. This will prevent your home folder from hitting the quota. It can be done by following these steps:
conda config # create ~/.condarc
Add the following lines to the file (replace the path if you prefer a different location)
pkgs_dirs:
- ${WOODYHOME}/software/privat/conda/pkgs
envs_dirs:
- ${WOODYHOME}/software/privat/conda/envs
You can check that this configuration file is properly read by inspecting the output of conda info
For more options see https://conda.io/projects/conda/en/latest/user-guide/configuration/use-condarc.html
Conda environments can also be used for package management (and more)
You can share conda environments with co-workers by having them add your environment path to their envs_dirs as well.
Create your own environment with
conda create --name myenv (python=3.9)
conda activate myenv
conda/pip install package-name
Packages will end up within the conda environment; therefore, no --user
option is needed.
Conda environments come with the extra benefit of ease of use; with jupyterhub.rrze.uni-erlangen.de they show up as a kernel option when starting a notebook.
Jupyter notebook security
When using Jupyter notebooks with their default configuration, they are protected by a random hashed password, which in some circumstances can cause security issues on a multi-user system like cshpc or the cluster frontends. We can change this with a few configuration steps by adding password protection.
First generate a configuration file by executing
jupyter notebook --generate-config
Open a python terminal and generate a password:
from notebook.auth import passwd; passwd()
Add the password hash to your notebook config file
# The string should be of the form type:salt:hashed-password.
c.NotebookApp.password = u''
c.NotebookApp.password_required = True
From now on, your notebook will be password protected. This also makes it convenient to use bash functions or aliases for starting notebooks (see below).
Quick reminder how to use the remote notebook:
# start notebook on a frontend (e.g. woody)
jupyter notebook --no-browser --port=XXXX
On your client, use:
ssh -f user_name@remote_server -L YYYY:localhost:XXXX -N
Open the notebook in your local browser at https://localhost:YYYY
With XXXX and YYYY being 4 digit numbers.
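A worked example with the hypothetical port numbers 8888 (remote) and 8889 (local):
# on the frontend (e.g. woody)
jupyter notebook --no-browser --port=8888
# on your local machine
ssh -f user_name@remote_server -L 8889:localhost:8888 -N
# then open https://localhost:8889 in your local browser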
Don’t forget to stop the notebook once you are done. Otherwise you will block resources that could be used by others!
Some useful functions/aliases for lazy people 😉
alias remote_notebook_stop='ssh username@remote_server_ip "pkill -u username jupyter"'
Be aware this will kill all jupyter processes that you own!
start_jp_woody(){
    nohup ssh -J username@cshpc.rrze.fau.de -L $1:localhost:$1 username@woody.rrze.fau.de " . /etc/bash.bashrc.local; module load python/3.7-anaconda ; jupyter notebook --port=$1 --no-browser"
    echo ""
    echo " the notebook can be started in your browser at: https://localhost:$1/ "
    echo ""
}
start_jp_emmy(){
    nohup ssh -J username@cshpc.rrze.fau.de -L $1:localhost:$1 username@emmy.rrze.fau.de " . /etc/profile; module load python/3.7-anaconda ; jupyter notebook --port=$1 --no-browser"
    echo ""
    echo " the notebook can be started in your browser at: https://localhost:$1/ "
    echo ""
}
If you are using a cshell remove . /etc/bash.bashrc.local
and . /etc/profile
from the functions.
Installation and usage of mpi4py under Conda
Installing mpi4py
via pip
will install a generic MPI that will not work on our clusters. We recommend separately installing mpi4py
for each cluster through the following steps:
- If conda is not already configured and initialized follow the steps documented under Conda environment.
- For more details regarding the installation refer to the official documentation of
mpi4py
.
Note: Running MPI parallel Python scripts is only supported on the compute nodes and not on frontend nodes.
Installation
Installation must be performed on the cluster frontend node:
- Load Anaconda module.
- Load MPI module.
- Install mpi4py and specify the path to the MPI compiler wrapper:
MPICC=$(which mpicc) pip install --no-cache-dir mpi4py
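Put together, a minimal sketch of the installation; the module names and the environment name are examples and must be adapted to the cluster:
module load python           # Anaconda module
module load openmpi          # MPI module you intend to use in your jobs
conda activate myenv         # optional: install into a specific conda environment
MPICC=$(which mpicc) pip install --no-cache-dir mpi4py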
Testing the installation must be performed inside an interactive job:
- Load the Anaconda and MPI module versions mpi4py was built with.
- Activate the environment.
- Run MPI parallel Python script:
srun python -m mpi4py.bench helloworld
This should print for each process a line in the form of:
Hello, World! I am process <rank> of <size> on <hostname>
The number of processes to start is configured through the respective options of
salloc
.
Usage
MPI parallel python scripts with mpi4py
only work inside a job on a compute node.
In an interactive job or inside a job script run the following steps:
- Load the Anaconda and MPI module versions mpi4py was built with.
- Initialize/activate the environment.
- Run MPI parallel Python script
srun python <script>
The number of processes to start is configured through the respective options in the job script or of
salloc
.
For how to request an interactive job via salloc
and how to write a job script see batch processing.
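A minimal job script sketch; module names, the environment name, and the resource requests are placeholders:
#!/bin/bash -l
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --time=00:10:00
#SBATCH --export=NONE

unset SLURM_EXPORT_ENV
# load the Anaconda and MPI module versions mpi4py was built with
module load python openmpi
conda activate myenv
srun python my_script.py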
SSH – Secure Shell access to HPC systems
To use the HPC systems at NHR@FAU, you have to log into a cluster frontend via an SSH (SecureShell) client. For all HPC accounts created via the new HPC portal, thus, in particular all NHR project accounts, the use of SSH public key authentication is mandatory and the only way to access the HPC systems as the HPC portal does not store password hashes. Only legacy FAU-HPC accounts can still use password authentication for some transition period.
SSH is a common command-line tool for remotely logging into and executing commands on a different computer over the network. The following topics will be discussed in detail on this page:
- Basic usage
- Graphical applications
- SSH public-key authentication
- SSH agent
- Configure host settings in ~/.ssh/config
- Security recommendations
- Advanced usage
Basic usage
Connect to a remote host
Under Linux, Mac and recent Windows 10 versions, a command-line SSH client is pre-installed. If you want to have a graphical user interface, you can use third-party clients like PuTTY (Windows, Linux) or MobaXterm (Windows).
Direct access to the cluster frontends is restricted to networks within the university. So if you are connected via such a network, or if you are using VPN, you can connect using the following command:
ssh USERNAME@CLUSTERNAME.rrze.fau.de
In this case, USERNAME
is your HPC user name and CLUSTERNAME
is the name of the cluster you want to log into, e.g. woody
, emmy
or meggie
. If you want to access TinyFat
, or TinyGPU
, you also have to connect to woody
. You will be prompted for your HPC password or your SSH key passphrase if you are using SSH keys. After successful authentication, you have a login shell on the target system.
Accounts created via the new HPC portal, thus, in particular NHR project accounts can only use SSH keys as the new HPC portal does not store any password hashes for HPC accounts.
If you are outside of the university network and are not using VPN, you have to connect to the dialogserver first:
ssh USERNAME@cshpc.rrze.fau.de
You can then use the above SSH command to connect to the cluster front ends from there.
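Alternatively, recent OpenSSH clients can tunnel through the dialogserver in a single command using the -J (ProxyJump) option:
ssh -J USERNAME@cshpc.rrze.fau.de USERNAME@CLUSTERNAME.rrze.fau.de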
Copy data to a remote host
A secure mechanism for copying data to a remote host is also available in all OpenSSH distributions on Linux, Mac, and current Windows 10 versions. When running Windows, you can also use WinSCP, which has a graphical user interface.
For all command-line based options, the secure copy mechanism is invoked by the following command:
scp <filename> USERNAME@CLUSTERNAME.rrze.fau.de:<remote_directory>
This will copy the local file <filename>
to the directory $HOME/<remote_directory>
on the remote system. This directory must exist prior to the copy attempt. Keep in mind that nearly all available file systems are mounted on all frontends (see File Systems documentation). It is therefore sufficient to copy data to only one frontend, e.g. cshpc.
For WinSCP, it is possible to choose from different file transfer protocols, mainly scp
and sftp
. A comparison can be found on the WinSCP website. Especially for large files, scp
is usually much faster, however, the transfer cannot be resumed.
For more complex file transfers or a larger amount of files, we recommend using rsync
. It provides more extensive functionality than scp
, e.g. resuming file transfers, excluding specific files, or checking if files already exist in the destination. It is, however, only available for Linux and Mac.
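A typical invocation might look like this; the options shown are common choices, not a fixed recipe:
rsync -avz --progress <local_directory>/ USERNAME@CLUSTERNAME.rrze.fau.de:<remote_directory>/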
Graphical applications
We generally do not recommend to run graphical applications on the cluster frontends, since they normally consume much more resources and can, therefore, interfere with the work of other users on these shared systems. However, there are some cases where the use of graphical applications is necessary.
For applications that do not need many resources, it should be sufficient to enable X11 forwarding/X11 tunneling by your SSH client via the -X
option:
ssh -X USERNAME@CLUSTERNAME.rrze.fau.de
However, this requires an X11-Server running on your local machine, which is generally not available by default on Mac and Windows. In this case, you need to activate X11 tunneling in your client configuration, as well as have an X Window server (e.g. Xming or MobaXTerm for Windows, XQuartz for Mac) running locally.
As an alternative, we recommend using remote desktop software to run graphical applications, e.g. NoMachine NX. A description of how to set up and use NoMachine NX on cshpc is available in the dialogserver description.
SSH public-key authentication
The use of SSH public key authentication is mandatory and the only way for all accounts created via the new HPC portal, thus, in particular all NHR project accounts.
As an alternative to logging in with your HPC password when you connect to a server via SSH, you can also use public key authentication. It requires a so-called SSH key pair comprised of two matching parts – a public and a private key. The key pair is generated on your local machine. The public key is uploaded to the remote system, whereas the private key remains on your local machine. We recommend generating a separate SSH key pair for every system (workstation, laptop, …) you use for logging into the HPC clusters.
Generating key pairs is possible when your client has OpenSSH capabilities (Linux, Mac, Windows 10). If you are using PuTTY, you can generate keys with puttygen.exe.
When generating a key pair, you have to choose between different algorithms and key sizes. The recommendations which one to use are changing over time since also the capabilities to break encryptions increase. Currently, it is advised to use either rsa
with a length of 4096 bits, ecdsa
with 521 bits or ed25519
. Use one of the following commands to generate a key pair:
ssh-keygen -t rsa -b 4096
ssh-keygen -t ecdsa -b 521
ssh-keygen -t ed25519
During the generation process, you will be prompted for a passphrase to encrypt your private key. We don’t recommend leaving this empty since in this case, your private key is sitting on your computer as a plain text file. If this unencrypted private key is copied/stolen by someone, they can access the corresponding server directly. In case it is encrypted by a passphrase, the attacker must first find out the passphrase in order to gain access to the server with the key.
By default, the key pair is generated into the folder .ssh
in your home directory, with the files id_<algorithm>
being your private and id_<algorithm>.pub
being your public key. If you want to change the location and name of your key pair, use the following option:
ssh-keygen -f <path_to_keys>/<keyname> -t <algorithm>
The public key must then be copied to the server and added to the authorized_keys file to be used for authentication. This can be conveniently done using the ssh-copy-id
tool:
ssh-copy-id -i ~/.ssh/id_<algorithm>.pub USERNAME@cshpc.rrze.fau.de
If this doesn’t work, you can also manually copy the public key and add it to ~/.ssh/authorized_keys:
cat id_rsa.pub | ssh USERNAME@cshpc.rrze.fau.de 'cat>> ~/.ssh/authorized_keys'
Once the public key has been configured on the server, the server will allow any connecting user that owns the private key to log in. Since your home directory is shared on all HPC systems at RRZE, it is sufficient to copy the key to only one system, e.g. cshpc
. It will be automatically available on all others.
If you have changed the default name of your key pair, you have to explicitly specify that this key should be used for connecting to a specific host. This is possible by using the -i
parameter:
ssh -i ~/<path_to_keys>/<keyname> USERNAME@CLUSTERNAME.rrze.fau.de
For frequent usage, this is quite cumbersome. Therefore, it is possible to specify these parameters (and many more) in the ~/.ssh/config
file. A detailed description of how to do this is given below.
If you have problems using your key, e.g. when you are asked for your password despite the key, or in case authentication is not working for some other reason, try using the option ssh -v
. This will cause SSH to print debugging messages about its progress, which can help locate the issue much easier.
SSH agent
If you have set a passphrase for your private SSH key, you will be prompted to enter the passphrase every time you use the key to connect to a remote host. To avoid this, you can use an SSH agent. After you have entered your passphrase for the first time, this small tool will store your private key for the duration of your session. This will allow you to connect to a remote host without re-entering your passphrase every time.
If you are using a current Linux distribution with a graphical desktop session (Unity, GNOME,…), an SSH agent will be started automatically in the background. Your private keys will be stored automatically and used when connecting to a remote host.
In case you are not using a graphical desktop session or your SSH agent does not start automatically, you will have to start it manually by typing the following into your local terminal session:
eval "$(ssh-agent -s)"
This will start the agent in the background. To add your private key to the agent, type the following:
ssh-add ~/.ssh/<keyname>
After you have successfully entered your passphrase, you will get a confirmation message that your identity file was successfully added to the agent. This will allow you to use your key to sign in without having to enter the passphrase again in the current terminal session.
You can also list the keys which are currently managed by the SSH agent via:
ssh-add -l
For more information about the SSH agent, type man ssh-add
on your terminal.
Configure host settings in ~/.ssh/config
If you are regularly connecting to multiple remote systems over SSH, you’ll find that typing all the remote hostnames, different usernames, identity files, and various more options is quite cumbersome. However, there is a much simpler solution to define shortcuts for different hosts and store SSH settings for each remote machine you connect to.
The client-side configuration file is named config
and is located in the .ssh
folder in your home directory. If it does not exist, you can create it manually.
The configuration file is organized in different sections for each host. You can use wildcards to match more than one host. The SSH client reads the configuration file line by line, so later matches can override earlier ones. Because of this, you should put your most general matches at the top of the file.
One simple example to create a shortcut for connection to cshpc is given below. The following is added to ~/.ssh/config:
Host cshpc
    HostName cshpc.rrze.fau.de
    User USERNAME
    IdentityFile ~/.ssh/private_ssh_key_name
With this configuration, you can now connect via
ssh cshpc
instead of typing
ssh -i ~/.ssh/private_ssh_key_name USERNAME@cshpc.rrze.fau.de
A large number of different SSH options are available. Some options which are used more frequently or are especially useful are listed below. You can find a full list by typing man ssh_config
in your terminal.
Security recommendations
In general, it is recommended not to trust systems that are accessible to multiple users or that someone else has root access to, which is true for all HPC systems. Even with best efforts by the administrators to keep the systems safe, it is always possible that attackers are able to gain root rights on the system, which makes them very powerful. An attacker may for example install keyloggers or hijack your running SSH-agent, just to name a few possibilities.
Thus it is often recommended
- not to log in via interactive passwords on untrusted hosts,
- not to use SSH agents on untrusted hosts,
- and not to use SSH agent forwarding to untrusted hosts.
It is generally more secure to use SSH public-private key pairs for authentication when accessing remote systems, as long as these rules are followed:
- Store no private keys on untrusted hosts. Private keys should only be placed on single-user systems (e.g. your laptop).
- Always use SSH private keys with strong passphrases.
- Use only one SSH key pair per system with shared homes.
- Use a separate key pair for every client (laptop, desktop,..).
To make it easier to jump between different systems at RRZE, we recommend generating a separate key for internal use only. This key may also be used for access to external systems (e.g. LRZ).
SSH agent forwarding
SSH agent forwarding is mostly used as a Single-Sign-On solution to connect from one remote host to another (e.g. from cshpc to other cluster frontend or between different cluster frontends). When you enable SSH agent forwarding, the query of the remote server for the private key is redirected to your local client where the SSH-agent is running. This eliminates the need for using password logins and for having private keys on remote machines. However, it is not recommended to use SSH agent forwarding to an untrusted host. Attackers with the ability to bypass file permissions on the remote machine can gain access to the agent on your local machine through the forwarded connection. An attacker cannot obtain key material from the agent, however, they can use the loaded keys to gain access to remote machines with your identity. An alternative to using SSH-agent forwarding is the ProxyJump functionality provided by SSH, which is described below.
X11 forwarding
Similar to SSH agent forwarding, X11 forwarding can be a security risk. If your SSH client is configured to generally allow applications on a remote server to render GUI windows on your screen, this can be exploited by an attacker. It is therefore recommended to specify ForwardX11 no
for all hosts in ~/.ssh/config and only use -X
on the command line when necessary.
Host keys
SSH host keys are used to verify a server’s identity before you send any sensitive information like passwords to it. Each server has a unique host key, which is the server’s public key. It can be used by the client to decrypt an authentication message sent from the server when connecting. This makes sure that the remote host you connect to is really the one you intended to connect to, and that your connection is not secretly redirected to another server.
SSH clients automatically store host keys for all hosts they have connected to. These keys are normally stored in ~/.ssh/known_hosts. If the host key of a server you are trying to connect to has changed, you will get a warning message.
When you connect to a server for the first time, you cannot know if the key offered by the server is correct. Therefore, we provide the public system keys for the cluster frontends below, which can be directly added into the ~/.ssh/known_hosts file (you may need to generate the .ssh directory and/or the file if it does not exist yet) on your local machine.
cshpc
cshpc.rrze.fau.de ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAs0wFVn1PN3DGcUtd/JHsa6s1DFOAu+Djc1ARQklFSYmxdx5GNQMvS2+SZFFa5Rcw+foAP9Ks46hWLo9mOjTV9AwJdOcSu/YWAhh+TUOLMNowpAEKj1i7L1Iz9M1yrUQsXcqDscwepB9TSSO0pSJAyrbuGMY7cK8m6//2mf7WSxc= cshpc.rrze.fau.de ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPSIFF3lv2wTa2IQqmLZs+5Onz1DEug8krSrWM3aCDRU cshpc.rrze.fau.de ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNVzp97t3CxlHtUiJ5ULqc/KLLH+Zw85RhmyZqCGXwxBroT+iK1Quo1jmG6kCgjeIMit9xQAHWjS/rxrlI10GIw=
emmy
emmy.rrze.fau.de ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA2q7Ung+RdwLkMyQXiod/6BFsUBMcKlEnvG3pFR7cw7/wdLUcjUU4ubQR9ctNlQZok7XU9b2ttMVwUOYI3w2RZnQFwm9jzUbAAl00XRfBThI9cWlgJu0UR/I+W/iRJdBSAmffwsQYTYBzJ4cRTtKSLZ98yEbJVtwfRRG12PVMewNGVDsnmBOBX5zWG92tgaA1bXAiB0GVWBS79lV78+ii/1UR/PldZaA+RQtxDx0ckuc8vq10XK4GvXJijyrEzIsi3SeIFApMhr+W84uIGp5HjhaaYwVWMkBge8PX8bR8oXNaUFLVmaRUrX/WSchCmLp2YBh3npeZ/B9vAtb6LXoS7Q== emmy.rrze.fau.de ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqBH0GRzrNUrTyOE25TkQXqY/30PLVUqUam93XArPMb emmy.rrze.fau.de ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEsYP0xDLyI/VHC68o4BqZ1RR5Ff7qMscZjKiKD1kEP2ckea0dMdH4oB4ahScShcEG5iZmQ2FlN41FbGX4zp6Go=
woody
woody.rrze.fau.de ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAEAQDeiwzMm9JQ3fnc7AqLkXrlmUOHh5CXh11XF0gQe0wqZx2sER+oumfi+T0M2lWMTBoeMOL74fMal/Bgpq66ETqnodDyVasyD6LwaJwEVIxlss9gcrN1SPa1XaXxgAhpEaR7mTwrNLjM6W5d3+6CMiLvp32lsL4RrQHShjhkXAYrhr3RbApMLwFdb6QZzHa7teN47aMy9s6oubRr4haoeTbfFGRaQjIyguG18nrOcnTlhyPafiHyivL5AE0wLiLCZyqux5Q0GZhIr1uK8smyT1QIIf55A8bRHVvE/QGkcT8lz2w9OnKJKNHUS/UA4MerJIM5V2/IOOdSeDgLnMuJ0usEwawgqNXoqx76X1wuhXA8IqaP4J0vo2OkK7QdyZP7qKP2YMZoDwFmyl73C8xaCw28ovIYGzPmCVLpwtIAQf0uX5xe2yWo9hLPhfP3rTKEKksOVnLKcLNpMLMqxxJHsHbLnmbFn7RdDWQ92JH0nZDAjmUZ2NaHzoPbcz5y1/CCvdURUrNLSosaMqcclq5yZif8uWtUQ8wvIacrMQUFetPTz86dg9ryIrZhxOaYgWzNQVV2ZeED4k5P+QQLkjbvw26htYWxHP7BpTOxIYryQJO/gRMTOnDPP/js49nECn1bW07HDYznhhztGVcjZAgNND8hELHxAmG3WYAsR0/sOMM1ddddM3GbYaCzX++3EE26dvEWpy3J6rHRq9mvGhRG7p8Y2LozlyDXo8wodNNci2/kXTArgeZnU5W15awjl9G5haPcoeNxg467T7bIKGq9JdLkHhqBrqGesrM0ADcDLufgrcT0SIukrc9rSOgVWtYfnXeRWfj7FrjaT15FpWeFSxBXqkQeOrScrPpmbkE7fR5xJYPFDugXQs2FvIjfvW8TsSWaxyt7eLbiFdfa1czGO1S5SOIASIn4/6CuECasvMalSX0JKKLV3Yhs7zXMk3t3wiAHXJ8m+PZB7sY0jhU1UDJIymbvwSzEtrRpbXLkQDhf9XHuG0yNS8dd9u25X6jqoWogPGKoEpQX/2xicebMfJRA3TLWuOM4RtqwAYNVrjsmAfXVmAewvlAtPNFrD0JeKJANVGfa6JFvLfhGHI2UVmdt5vzQzneI/31/+2jNbglcheAfsUO5gPbq3BdToM1bDxJ8hWw3sS2gZ2DZybVz/95rdh9zcj+ciCDMjYypVzgmDROskoAcoVRdKyOE1ZJ3jCOPvJphEPwDSNUBGiYu6LCZdTcwMsepGOvNYbk/c9LIIyczNFh/H46cgekYgVx0i8LwmhJkxCnaK7N15NkMHsK7yInjLqfzKvZQ0z7mfmeXKVgDQVEDxjsdUYq3UUCcbA8muWyuSUtTu3+wSG/v2xhl woody.rrze.fau.de ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPC9x4/BNwKQro3+95Gwh4DZpHBT2tVHPjKouwIBOk6g woody.rrze.fau.de ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL7lrwFFhlZZ7mGBJ3f5gSxDEKcxvebrXLXd/bz0fH6A9Qk2GrJN2tL+sleVPRJHTboOFbdeaJy0igSwivqI2vc=
meggie
meggie.rrze.fau.de ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCwi2jQPuIe88/SJRmaKmA1VOOse4UxyjWlqp6VHM+8gggkpajGz3l6xZD1BihqOpY10oIA6rRHBQipZmFGgDkgTT40jdMvP8sLzqtJqKoQILXJqQbGWrGgjEDwXdZHIWaiV5Q8XDAgqj9+4W9ZHfeGtgS2OqhzAlTdgHzx94h8m6J8JUc+QtPGlWGBr/Z2Ee+KFEG1siT09k7E72sOnL9VDqMHFlWtHUsGfcR+8f6hnKnSHBB2TpxGac2Yv0KpqtHFdGMLY22RzDgCoEeY42fLvOqF9xIU8NgWoqII4W1AcvvpPDe8EthnKkaMsQjqj6N1uJ1qpsOZry7TiwQQF2/D meggie.rrze.fau.de ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNmjmhh6fMxEkmNybzP3Maau/KRbOTZECKF8FxZVH3a3rMirSyjRG8LLNswctajPJxeQCAb5OIh1A63PbsIA2g8= meggie.rrze.fau.de ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILOVhUpUyaugYDdwpHCuKfcgS0PQjZN+7KlbJ5ByZvhi
Please keep in mind that these keys are changed from time to time. So if you get a warning while connecting, first check against the keys above whether they have simply changed!
Advanced Usage
Proxy Jump
If you want to connect from one host to another without the risks involved by using SSH agent forwarding and without having to type your password each time, the ProxyJump functionality of SSH can be a good alternative. When using ProxyJump, the connection is forwarded through one or more jump hosts to the target destination host via SSH. This is the most secure method because encryption is end-to-end. You can use ProxyJump for example to connect to the emmy cluster frontend by using cshpc as the jump host. This can be achieved via the following additions to ~/.ssh/config:
Host emmy
    HostName emmy.rrze.fau.de
    ProxyJump cshpc.rrze.fau.de
SSH config
There are some options to use in ~/.ssh/config that can simplify your workflow. To see all available options, type man ssh_config
in your terminal.
- Instead of defining the same identity file explicitly for every host, you can also define to always use the same key for a specific user:
Match User JohnDoe
    IdentityFile ~/.ssh/private_ssh_key_name
- Specify that only the SSH keys listed in ~/.ssh/config should be tried for authentication. This can avoid a “Too many authentication failures” error, if the SSH agent offers many different keys.
IdentitiesOnly yes
- It is possible to use wildcards (*,?,..) in hostnames to reduce the number of explicit entries. For example, it is possible to deny SSH agent and X11 forwarding for all hosts via:
Host *
    ForwardAgent no
    ForwardX11 no
SSHFS
In order to access your data on a remote system, you can mount the remote directory to your local machine and use all your local tools to work on the data. If not installed, you have to install sshfs
locally. It uses the FUSE subsystem to mount file systems with user privileges.
A basic mount looks like this:
$ sshfs <user>@<remote_host>:<remote_directory> <local_directory>
In order to unmount it, you call:
$ fusermount -u <local_directory>
It is recommended to use some mount options that help with shaky connections and adaption to the local system:
Linux
$ sshfs -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3,idmap=user,cache=yes <user>@<remote_host>:<remote_directory> <local_directory>
macOS
$ sshfs -o noappledouble,noapplexattr,reconnect,ServerAliveInterval=15,ServerAliveCountMax=3,kernel_cache,cache=yes,idmap=user <user>@<remote_host>:<remote_directory> <local_directory>
You can add an alias to your ~/.bashrc
for convenience:
alias sshfs="sshfs -o <opts>"
OpenFOAM
OpenFOAM (for “Open-source Field Operation And Manipulation”) is a C++ toolbox for the development of customized numerical solvers, and pre-/post-processing utilities for the solution of continuum mechanics problems, most prominently including computational fluid dynamics (CFD). It contains solvers for a wide range of problems, from simple laminar regimes to DNS or LES including reactive turbulent flows. It provides a framework for manipulating fields and solving general partial differential equations on unstructured grids based on finite volume methods. Therefore, it is suitable for complex geometries and a wide range of configurations and applications.
There are three main variants of OpenFOAM that are released as free and open-source software under a GPLv3 license: ESI OpenFOAM, The OpenFOAM Foundation, Foam Extend.
Availability / Target HPC systems
We provide modules for some major OpenFOAM versions, which were mostly requested by specific groups or users. If you have a request for a new version, please contact hpc-support@fau.de. Please note that we will only provide modules for fully released versions, which will be used by more than one user. If you need some specific custom configuration or version, please consider building it yourself. Installation guides are available from the respective OpenFOAM distributors.
The installed versions of OpenFOAM may differ between the different HPC clusters. You can check the available versions via module avail openfoam
.
Production jobs should be run on the parallel HPC systems in batch mode. It is NOT permitted to run computationally intensive OpenFOAM simulation runs or serial/parallel post-processing sessions with large memory consumption on login nodes.
Notes
- OpenFOAM by default produces lots of small files – for each processor, every step, and for each field. The parallel file system ($FASTTMP) is not made for such a finely grained file/folder structure. For more recent versions of OpenFOAM, you can use collated I/O, which produces somewhat less problematic output.
- Paraview is used for post-processing and is also available via the modules system on the HPC cluster. However, keep an eye on the main memory requirements for this visualization, especially on the frontends!
Sample job scripts
All job scripts have to contain the following information:
- Resource definition for the queuing system (more details here)
- Load OpenFOAM environment module
- Start command for parallel execution of solver of choice
For the meggie/Slurm batch system: mpirun takes the parameters (nodes, tasks-per-node) that you specified in the header of your batch file. You don’t have to specify this again in your mpirun
call (see also MPI on meggie). For this to work correctly, the total number of MPI tasks (nodes times tasks-per-node) must be equal to numberOfSubdomains inside system/decomposeParDict (see the sketch below)!
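For illustration, the relevant entries of system/decomposeParDict for the 4-node example below might look like this sketch (80 = 4 nodes times 20 tasks per node; the decomposition method is only an example):
numberOfSubdomains  80;      // must equal nodes times tasks-per-node
method              scotch;  // example decomposition method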
parallel OpenFOAM on Emmy
#!/bin/bash -l
#PBS -lnodes=4:ppn=40,walltime=24:00:00
#PBS -N my-job-name
#PBS -j eo

# number of cores to use per node
PPN=20

# load environment module
module load openfoam/XXXX

# change to working directory
cd ${PBS_O_WORKDIR}

# count the number of nodes
NODES=`uniq ${PBS_NODEFILE} | wc -l`
# calculate the number of cores actually used
CORES=$(( ${NODES} * ${PPN} ))

# Please insert here your preferred solver executable!
mpirun -np ${CORES} -npernode ${PPN} icoFoam -parallel -fileHandler collated > logfile
parallel OpenFOAM on Meggie (requires special account activation)
#!/bin/bash -l
#SBATCH --job-name=my-job-name
#SBATCH --nodes=4
#SBATCH --tasks-per-node=20    # for 20 physical cores on meggie
#SBATCH --time=24:00:00
#SBATCH --export=NONE

# load environment module
module load openfoam/XXXX
unset SLURM_EXPORT_ENV

# Please insert here your preferred solver executable!
mpirun icoFoam -parallel -fileHandler collated > logfile
Further information
- Tutorials and guides can be found in the official OpenFOAM.com documentation or the OpenFOAM.org User Guide.
Mentors
- please volunteer!
ANSYS CFX
ANSYS CFX is a general purpose Computational Fluid Dynamics (CFD) code. It provides a wide variety of physical models for turbulent flows, acoustics, Eulerian and Lagrangian multiphase flow modeling, radiation, combustion and chemical reactions, heat and mass transfer including CHT (conjugate heat transfer in solid domains). It is mostly used for simulating turbomachinery, such as pumps, fans, compressors and gas and hydraulic turbines.
Please note that the clusters do not come with any license. If you want to use ANSYS products on the HPC clusters, you have to have access to suitable licenses. These can be purchased directly from RRZE. To efficiently use the HPC resources, ANSYS HPC licenses are necessary.
Availability / Target HPC systems
Different versions of all ANSYS products are available via the modules system, which can be listed by module avail ansys
. A special version can be loaded, e.g. by module load ansys/2020R1
.
We mostly install the current versions automatically, but if something is missing, please contact hpc-support@fau.de.
Production jobs should be run on the parallel HPC systems in batch mode.
ANSYS CFX can also be used in interactive GUI mode for serial pre- and/or post-processing on the login nodes (Linux: SSH Option “-X”; Windows: using PuTTY and XMing for X11-forwarding). This should only be used to make quick simulation setup changes. It is NOT permitted to run computationally intensive ANSYS CFX simulation runs or serial/parallel post-processing sessions with large memory consumption on login nodes.
Alternatively, ANSYS CFX can be run interactively with GUI on TinyFat (for large main memory requirements) or on a compute node.
Getting started
The (graphical) CFX launcher is started by typing
cfx5launch
on the command line. If you want to use the separate pre- or postprocessing capabilities, you can also launch cfx5pre
or cfx5post
, respectively.
For running simulations in batch mode on the HPC systems, use the
cfx5solve
command. You can find out the available parameters via cfx5solve -help
. One example call to use in your batch script would be
cfx5solve -batch -par-dist $NODELIST -double -def <solver input file>
The number of processes and the hostnames of the compute nodes to be used are defined in $NODELIST. For how to compile this list, refer to the example script below. Using SMT threads is not recommended.
Notes
- We recommend writing automatic backup files (every 6 to 12 hours) for longer runs to be able to restart the simulation in case of a job or machine failure. This can be specified in Output Control → User Interface → Backup Tab
- Furthermore, it is recommended to use the “Elapsed Wall Clock Time Control” in the job definition in ANSYS CFX Pre (Solver Control → Elapsed Wall Clock Time Control → Maximum Run Time → <24h). Also plan enough buffer time for writing the final output, depending on your application, this can take quite a long time!
Sample job scripts
All job scripts have to contain the following information:
- Resource definition for the queuing system (more details here)
- Load ANSYS environment module
- Generate a file with names of hosts of the current simulation run to tell CFX on which nodes it should run (see example below)
- Execute
cfx5solve
with appropriate command line parameters (available options viacfx5solve -help
)
distributed parallel job on Emmy
#!/bin/bash -l
#PBS -lnodes=4:ppn=40,walltime=24:00:00
#PBS -N cfx
#PBS -j eo

# specify the name of your input-def file
DEFFILE="example.def"

# number of cores to use per node
PPN=20

# load environment module
module load ansys/XXXX

# generate node list (comma-separated, one entry per node: hostname*PPN)
NODELIST=$(uniq $PBS_NODEFILE | sed -e 's/$/*'$PPN'/' | paste -d ',' -s)

# execute cfx with command line parameters (see cfx5solve -help for all available parameters)
cfx5solve -batch -double -par-dist $NODELIST -def $DEFFILE
Further information
- Documentation is available within the application help manual. Further information is provided through the ANSYS Customer Portal for registered users.
- More in-depth documentation is available at LRZ. Please note: not everything is directly applicable to HPC systems at RRZE!
Mentors
- Dr.-Ing. Katrin Nusser, RRZE, hpc-support@fau.de
- please volunteer!
Tensorflow and PyTorch
TensorFlow is an Open Source Machine Learning Framework.
Security issue of TensorBoard on multi-user systems
For security reasons, it is not recommended to run TensorBoard on a multi-user system. TensorBoard does not come with any means of access control and anyone with access to the multi-user system can attach to your TensorBoard port and act as you! (It might only take some effort to find the port if you do not use the default port.) There is nothing NHR@FAU can do to mitigate these security issues. Even the hint about --host localhost
in https://github.com/tensorflow/tensorboard/issues/260#issuecomment-471737166 does not help on a multi-user system. The suggestion from https://github.com/tensorflow/tensorboard/issues/267#issuecomment-671820015 does not help either on a multi-user system.
We patched the preinstalled TensorBoard version on Alex according to https://github.com/tensorflow/tensorboard/pull/5570 so that the use of a hash is enforced.
However, we recommend using TensorBoard on your local machine with the HPC-filesystem mounted (e.g. sshfs).
Availability / Target HPC systems
TensorFlow and PyTorch currently are not installed on any of RRZE’s HPC systems as new versions are very frequently released and all groups have their own special needs.
The following HPC systems are best suited:
- TinyGPU, Alex, or GPU nodes in Emmy
- Woody, with many smaller nodes, for CPU-only runs
Notes
Different routes can be taken to get your private installation of TensorFlow or PyTorch. Don’t waste valuable storage in $HOME
and use $WORK
instead for storing your installation.
# reminder: make sure your dependencies are loaded and you are running the installation in an interactive job
module avail python
module load python/XY
module load cuda
module load cudnn
Using pre-built Docker images from DockerHub
Official Docker images are regularly published on https://hub.docker.com/r/tensorflow/tensorflow and https://hub.docker.com/r/pytorch/pytorch/. These images can be used with Singularity on our HPC systems. Run the following steps on the woody frontend to pull your image:
cd $WORK
export SINGULARITY_CACHEDIR=$(mktemp -d)
singularity pull tensorflow-2.1.0-gpu-py3.sif docker://tensorflow/tensorflow:2.1.0-gpu-py3
rm -rf $SINGULARITY_CACHEDIR
Within your job script, you use the container as follows. /home/*
and /apps/
are automatically bind-mounted into the container. On TinyGPU (but currently not on Emmy), GPU device libraries are also automatically bind-mounted into the container.
./tensorflow-2.1.0-gpu-py3.sif ./script.py
On the GPU nodes of Emmy, you have to use singularity run --nv tensorflow-2.1.0-gpu-py3.sif ./script.py
.
Using pre-built Docker images from Nvidia
cd $WORK
export SINGULARITY_CACHEDIR=$(mktemp -d)
singularity pull tensorflow-ngc-20.03-tf2-py3.sif docker://nvcr.io/nvidia/tensorflow:20.03-tf2-py3
rm -rf $SINGULARITY_CACHEDIR
Within your job script, you use the container as follows. /home/*
and /apps/
are automatically bind-mounted into the container. On TinyGPU (but currently not on Emmy), GPU device libraries are also automatically bind-mounted into the container.
./tensorflow-ngc-20.03-tf2-py3.sif script.py
On the GPU nodes of Emmy, you have to use singularity run --nv tensorflow-ngc-20.03-tf2-py3.sif ./script.py
.
pip / virtual env
When manually installing TensorFlow or PyTorch (into a Python VirtualEnv) using pip
, remember to load a python module first! The system python will not be sufficient.
A simple pip install tensorflow
will not work! You need to make CUDA and cuDNN available first (e.g. by loading the cuda and cudnn modules or installing the cudatoolkit and cudnn packages) to get GPU support.
PyTorch provides some help for the pip install command; see https://pytorch.org/get-started/locally/. Select stable / linux / pip / python / cuda-$version, where $version is the CUDA module version you previously loaded from modules.
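A minimal sketch of such a pip installation into a virtual environment kept in $WORK (module names, versions, and the CUDA wheel tag are examples; adjust them to the modules you actually loaded):
module load python/XY
module load cuda
module load cudnn
# create and activate the virtual environment in $WORK, not $HOME
python -m venv $WORK/venvs/torch
source $WORK/venvs/torch/bin/activate
# install a CUDA-enabled PyTorch wheel matching the loaded CUDA module (cu113 is an example)
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113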
conda
Anaconda also comes with TensorFlow packages in conda-forge. Either load one of the python modules and install the additional packages into one of your directories or start with your private (mini)conda installation from scratch! The system python will not be sufficient.
PyTorch provides some help for the conda install command; see https://download.pytorch.org/whl/torch_stable.html. Select stable / linux / conda / python / cuda-$version, where $version is the CUDA module version you previously loaded from modules.
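A minimal sketch, assuming a private (mini)conda installation and an environment stored in $WORK (channels and versions are examples):
# create the environment under $WORK and activate it
conda create --prefix $WORK/conda-envs/ml python=3.9
conda activate $WORK/conda-envs/ml
# TensorFlow from conda-forge
conda install -c conda-forge tensorflow-gpu
# or PyTorch; match cudatoolkit to the CUDA module you loaded (11.3 is an example)
conda install -c pytorch pytorch torchvision cudatoolkit=11.3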
To check that your TensorFlow is functional and detects the hardware, you can use the following simple python sequence:
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
To check that your PyTorch is functional and detects the hardware, you can use the following simple line in your bash shell:
python -c 'import torch; print(torch.rand(2,3).cuda())'
Further information
- https://www.tensorflow.org/
- https://github.com/tensorflow/tensorflow
- https://ngc.nvidia.com/catalog/containers/nvidia:tensorflow
Mentors
- please volunteer!
- Prof. Harald Köstler (NHR/LSS)
Test cluster
The RRZE test and benchmark cluster is an environment for porting software to new CPU architectures and running benchmark tests. It comprises a variety of nodes with different processors, clock speeds, memory speeds, memory capacity, number of CPU sockets, etc. There is no high-speed network, and MPI parallelization is restricted to one node. The usual NFS file systems are available.
This is a testing ground. Any job may be canceled without prior notice. For further information about proper usage, please contact HPC@RRZE.
This is a quick overview of the systems including their host names (frequencies are nominal values) – NDA systems are not listed:
- aurora1: Single Intel Xeon “Skylake” Gold 6126 CPU (12 cores + SMT) @ 2.60GHz.
Accelerators: 2x NEC Aurora “TSUBASA” 10B (48 GiB RAM)
- broadep2: Dual Intel Xeon “Broadwell” CPU E5-2697 v4 (2x 18 cores + SMT) @ 2.30GHz, 128 GiB RAM
- casclakesp2: Dual Intel Xeon “Cascade Lake” Gold 6248 CPU (2x 20 cores + SMT) @ 2.50GHz, 384 GiB RAM
- hasep1: Dual Intel Xeon “Haswell” E5-2695 v3 CPU (2x 14 cores + SMT) @ 2.30GHz, 64 GiB RAM
- icx32: Dual Intel Xeon “Icelake” Platinum 8358 CPU (2x 32 cores + SMT) @ 2.60GHz, 256 GiB RAM
- icx36: Dual Intel Xeon “Icelake” Platinum 8360Y CPU (2x 36 cores + SMT) @ 2.40GHz, 256 GiB RAM
- interlagos1: Dual AMD Opteron 6276 “Interlagos” CPU (2x 16 cores) @ 2.3 GHz, 64 GiB RAM.
Accelerator: AMD Radeon VII GPU (16 GiB HBM2)
- ivyep1: Dual Intel Xeon “Ivy Bridge” E5-2690 v2 CPU (2x 10 cores + SMT) @ 3.00GHz, 64 GiB RAM
- medusa: Dual Intel Xeon “Cascade Lake” Gold 6246 CPU (2x 12 cores + SMT) @ 3.30GHz, 192 GiB RAM.
Accelerators:
– NVIDIA GeForce RTX 2070 SUPER (8 GiB GDDR6)
– NVIDIA GeForce RTX 2080 SUPER (8 GiB GDDR6)
– NVIDIA Quadro RTX 5000 (16 GiB GDDR6)
– NVIDIA Quadro RTX 6000 (24 GiB GDDR6)
- milan1: Dual AMD EPYC 7543 “Milan” CPU (32 cores + SMT) @ 2.8 GHz, 256 GiB RAM
Accelerators: NVIDIA A40 (48 GiB GDDR6)
- naples1: Dual AMD EPYC 7451 “Naples” CPU (2x 24 cores + SMT) @ 2.3 GHz, 128 GiB RAM
- phinally: Dual Intel Xeon “Sandy Bridge” CPU E5-2680 (8 cores + SMT) @ 2.70GHz, 64 GiB RAM
- rome1: Single AMD EPYC 7452 “Rome” CPU (32 cores + SMT) @ 2.35 GHz, 128 GiB RAM
- rome2: Dual AMD EPYC 7352 “Rome” CPU (24 cores + SMT) @ 2.3 GHz, 256 GiB RAM
Accelerators: AMD MI100 (32 GiB HBM2)
- skylakesp2: Intel Xeon “Skylake” Gold 6148 CPU (2x 20 cores + SMT) @ 2.40GHz, 96 GiB RAM
- summitridge1: AMD Ryzen 7 1700X CPU (8 cores + SMT), 32 GiB RAM
- teramem: Dual Intel Xeon Platinum “Icelake” 8360Y CPU (2x 36 cores + SMT) @ 2.40GHz, 2.048 GiB RAM, 30 TB of local NVMe storage – will later be moved to the new Cluster2021 cluster or TinyFat
- warmup: Dual Cavium/Marvell “ThunderX2” (ARMv8) CN9980 (2x 32 cores + 4-way SMT) @ 2.20 GHz, 128 GiB RAM
Technical specifications of all more or less recent GPUs available at RRZE (either in the Testcluster or in TinyGPU):
| GPU | RAM | BW [GB/s] | Ref Clock [GHz] | Cores (Shader/TMUs/ROPs) | TDP [W] | SP [TFlop/s] | DP [TFlop/s] | Host | Host CPU (base clock frequency) |
|---|---|---|---|---|---|---|---|---|---|
| Nvidia Geforce GTX980 | 4 GB GDDR5 | 224 | 1.126 | 2048/128/64 | 180 | 4.98 | 0.156 | | |
| Nvidia Geforce GTX1080 | 8 GB GDDR5 | 320 | 1.607 | 2560/160/64 | 180 | 8.87 | 0.277 | tg03x | Intel Xeon Broadwell E5-2620 v4 (8 C, 2.10GHz) |
| Nvidia Geforce GTX1080Ti | 11 GB GDDR5 | 484 | 1.480 | 3584/224/88 | 250 | 11.34 | 0.354 | tg04x | Intel Xeon Broadwell E5-2620 v4 (2x 8 C, 2.10GHz) |
| Nvidia Geforce RTX2070Super | 8 GB GDDR6 | 448 | 1.605 | 2560/160/64 | 215 | 9.06 | 0.283 | medusa | Intel Xeon Cascadelake Gold 6246 (2x 12 C, 3.30GHz) |
| Nvidia Quadro RTX5000, active | 16 GB GDDR6 | 448 | 1.620 | 3072/192/64 | 230 | 11.15 | 0.348 | medusa | Intel Xeon Cascadelake Gold 6246 (2x 12 C, 3.30GHz) |
| Nvidia Geforce RTX2080Super | 8 GB GDDR6 | 496 | 1.650 | 3072/192/64 | 250 | 11.15 | 0.348 | medusa | Intel Xeon Cascadelake Gold 6246 (2x 12 C, 3.30GHz) |
| Nvidia Geforce RTX2080Ti | 11 GB GDDR6 | 616 | 1.350 | 4352/272/88 | 250 | 13.45 | 0.420 | tg06x | Intel Xeon Skylake Gold 6134 (2x 8 Cores + SMT, 3.20GHz) |
| Nvidia Quadro RTX6000, active | 24 GB GDDR6 | 672 | 1.440 | 4608/288/96 | 260 | 16.31 | 0.510 | medusa | Intel Xeon Cascadelake Gold 6246 (2x 12 C, 3.30GHz) |
| Nvidia Geforce RTX3080 | 10 GB GDDR6X | 760 | 1.440 | 8704 Shader | 320 | 29.77 | 0.465 | tg08x | Intel Xeon IceLake Gold 6226R (2x 32 cores + SMT, 2.90GHz) |
| Nvidia Tesla V100 (PCIe, passive) | 32 GB HBM2 | 900 | 1.245 | 5120 Shader | 250 | 14.13 | 7.066 | tg07x | Intel Xeon Skylake Gold 6134 (2x 8 Cores + SMT, 3.20GHz) |
| Nvidia A40 (passive) | 48 GB GDDR6 | 696 | 1.305 | 10752 Shader | 300 | 37.42 | 1.169 | milan1 | AMD Milan 7543 (2x 32 cores + SMT, 2.8 GHz) |
| Nvidia A100 (SXM4/NVLink, passive) | 40 GB HBM2 | 1555 | 1.410 | 6912 Shader | 400 | 19.5 | 9.7 | tg09x | AMD Rome 7662 (2x 64 Cores, 2.0GHz) |
| AMD Instinct MI100 (PCIe Gen4, passive) | 32 GB HBM2 | 1229 | 1.502 | 120 Compute Units / 7680 Cores | 300 | 21.1 | 11.5 | rome2 | AMD Rome 7352 (2x 24 cores + SMT, 2.3 GHz) |
| AMD Radeon VII | 16 GB HBM2 | 1024 | 1.400 | 3840/240/64 | 300 | 13.44 | 3.360 | interlagos1 | AMD Interlagos Opteron 6276 |
Access, User Environment, and File Systems
Access to the machine
Note that access to the test cluster is restricted: if you want access, you will need to contact hpc@rrze. In order to get access to the NDA machines, you have to provide a short (!) description of what you want to do there.
From within the FAU network, users can connect via SSH to the frontend testfront.rrze.fau.de. If you need access from outside of FAU, you usually have to connect to the dialog server cshpc.rrze.fau.de first and then ssh to testfront from there.
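For example, this two-hop connection can be made with a single command (the user name is a placeholder):
# jump via cshpc to the test cluster frontend
ssh -J yourusername@cshpc.rrze.fau.de yourusername@testfront.rrze.fau.de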
While it is possible to ssh directly to a compute node, a user is only allowed to do this while they have a batch job running there. When all batch jobs of a user on a node have ended, all of their processes, including any open shells, will be killed automatically.
The login nodes and most of the compute nodes run Ubuntu 18.04. As on most other RRZE HPC systems, a modules environment is provided to facilitate access to software packages. Type “module avail” to get a list of available packages. Note that, depending on the node, the modules may be different due to the wide variety of architectures. Expect inconsistencies. In case of questions, contact hpc@rrze.
File Systems
The nodes have local hard disks of very different capacities and speeds. These are not production systems, so do not expect a production environment.
When connecting to the frontend node, you’ll find yourself in your regular RRZE $HOME directory (/home/hpc/...). There are relatively tight quotas there, so it will most probably be too small for the inputs/outputs of your jobs. It does, however, offer a lot of nice features, like fine-grained snapshots, so use it for “important” stuff, e.g. your job scripts or the source code of the program you’re working on. See the HPC file system page for a more detailed description of the features and of the other available file systems, e.g. $WORK.
Batch processing
As with all production clusters at RRZE, resources are controlled through a batch system, SLURM in this case. Due to the broad spectrum of architectures in the test cluster, it is usually advisable to compile on the target node using an interactive SLURM job (see below).
There is a “work” queue and an “nda” queue, both with up to 24 hours of runtime. Access to the “nda” queue is restricted because the machines tied to this queue are pre-production hardware or otherwise special so that benchmark results must not be published without further consideration.
Batch jobs can be submitted on the frontend. The default job runtime is 10 minutes.
The currently available nodes can be listed using:
sinfo -o "%.14N %.9P %.11T %.4c %.8z %.6m %.35f"
To select a node, you can either use the host name or a feature name from sinfo:
sbatch --nodes=1 --constraint=featurename --time=hh:mm:ss --export=NONE jobscript
sbatch --nodes=1 --nodelist=hostname --time=hh:mm:ss --export=NONE jobscript
Submitting an interactive job:
srun --nodes=1 --nodelist=hostname --time=hh:mm:ss --export=NONE --pty /bin/bash -l
For getting access to performance counter registers and other restricted parts of the hardware (so that likwid-perfctr works as intended), use the option -C hwperf.
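For example, an interactive session with performance counters enabled could look like this (the hostname is a placeholder; the likwid module name and the performance group are examples and depend on the node's architecture):
srun --nodes=1 --nodelist=hostname -C hwperf --time=01:00:00 --export=NONE --pty /bin/bash -l
module load likwid
# measure, e.g., memory bandwidth and double-precision FLOPs on cores 0-3
likwid-perfctr -C 0-3 -g MEM_DP ./a.out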
By default, SLURM exports the environment of the shell where the job was submitted, which can cause problems on nodes that do not run Ubuntu. To avoid this, submit with --export=NONE and unset SLURM_EXPORT_ENV inside the job script.
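In a job script, this typically amounts to the following sketch:
#!/bin/bash -l
#SBATCH --time=01:00:00
#SBATCH --export=NONE
# make sure that srun/mpirun inside the job exports the job's own environment again
unset SLURM_EXPORT_ENV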
Please see the batch system description for further details.
VASP
Vienna Ab initio Simulation Package (VASP) is a computer program for atomic scale materials modeling, e.g. electronic structure calculations and quantum-mechanical molecular dynamics, from first principles.
Availability / Target HPC systems
VASP requires an individual license.
Notes
- Parallelization and optimal performance:
- (try to) always use full nodes (ppn=20 for Emmy/Meggie)
- NCORE=5 or NCORE=10 together with PPN=20 gives optimal performance in almost all cases; in general, NCORE should be a divisor of PPN (see the INCAR sketch after this list)
- OpenMP parallelization is supposed to supersede NCORE
- use KPAR if possible
- do not use hyperthreads on Emmy!
- Compilation:
- use -Davoidalloc
- use Intel toolchain and MKL
- in case of very large jobs with high memory requirements add ‘ -heap-arrays 64’ to Fortran flags before compilation (only possible for Intel ifort)
- Filesystems:
- Occasionally, VASP users have reported failing I/O on Meggie’s $FASTTMP (/lxfs); this might be a problem with Lustre and Fortran I/O. Please try to use the fix described here: https://github.com/RRZE-HPC/getcwd-autoretry-preload
- Since VASP does not do parallel MPI I/O, $WORK is more appropriate than $FASTTMP
- For medium sized jobs, even node local /dev/shm/ might be an option
- Walltime limit:
- VASP can only be gracefully stopped by creating the file “STOPCAR” (see https://www.vasp.at/wiki/index.php/STOPCAR); its automatic creation is shown in the example scripts below
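As a sketch of the NCORE/KPAR settings mentioned in the list above (the values are examples and must be adapted to your node type and k-point mesh), the relevant INCAR lines could be added like this:
# append example parallelization settings to the INCAR
cat >> INCAR << EOF
NCORE = 10
KPAR = 2
EOF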
Sample job scripts
parallel Intel MPI job on Emmy
#!/bin/bash -l
#PBS -lnodes=1:ppn=40,walltime=10:00:00
#PBS -N my-VASP
#PBS -j eo

# enter submit directory
cd $PBS_O_WORKDIR

# define executable:
VASP=/path-to-your-vasp-installation/vasp

# load modules
module load intel64

# set PPN and pinning
export PPN=20
export I_MPI_PIN=enable

# set stacksize to unlimited
ulimit -s unlimited

# create STOPCAR with LSTOP 1800s before reaching the walltime limit
lstop=1800
# create STOPCAR with LABORT 600s before reaching the walltime limit
labort=600

# timer for LSTOP = .TRUE.
let SLEEPTIME1=$PBS_WALLTIME-$lstop
# timer for LABORT = .TRUE.
let SLEEPTIME2=$PBS_WALLTIME-$labort
echo "lstop in $SLEEPTIME1 seconds"
echo "labort in $SLEEPTIME2 seconds"
(sleep ${SLEEPTIME1} ; echo "LSTOP = .TRUE." > STOPCAR) &
lstoppid=$!
(sleep ${SLEEPTIME2} ; echo "LABORT = .TRUE." > STOPCAR) &
labortpid=$!

mpirun -ppn $PPN $VASP

# stop the background timers once VASP has finished
pkill -P $lstoppid
pkill -P $labortpid
parallel Intel MPI job on Meggie
#!/bin/bash -l
#
#SBATCH --nodes=4
#SBATCH --tasks-per-node=20
#SBATCH --time=24:00:00
#SBATCH --job-name=my-vasp
#SBATCH --mail-user=my.mail
#SBATCH --mail-type=ALL
#SBATCH --export=NONE

unset SLURM_EXPORT_ENV

# enter submit directory
cd $SLURM_SUBMIT_DIR

# load modules
module load intel64

# set PPN and pinning
export PPN=20
export I_MPI_PIN=enable

# define executable:
VASP=/path-to-your-vasp-installation/vasp

# create STOPCAR with LSTOP 1800s before reaching the walltime limit
lstop=1800
# create STOPCAR with LABORT 600s before reaching the walltime limit
labort=600

# automatically detect how much time this batch job requested and adjust the sleep accordingly
TIMELEFT=$(squeue -j $SLURM_JOBID -o %L -h)
HHMMSS=${TIMELEFT#*-}
[ $HHMMSS != $TIMELEFT ] && DAYS=${TIMELEFT%-*}
IFS=: read -r HH MM SS <<< $HHMMSS
[ -z $SS ] && { SS=$MM; MM=$HH; HH=0 ; }
[ -z $SS ] && { SS=$MM; MM=0; }

# timer for LSTOP = .TRUE.
SLEEPTIME1=$(( ( ( ${DAYS:-0} * 24 + 10#${HH} ) * 60 + 10#${MM} ) * 60 + 10#$SS - $lstop ))
echo "Available runtime: ${DAYS:-0}-${HH:-0}:${MM:-0}:${SS}, sleeping for up to $SLEEPTIME1, thus reserving $lstop for clean stopping/saving results"
# timer for LABORT = .TRUE.
SLEEPTIME2=$(( ( ( ${DAYS:-0} * 24 + 10#${HH} ) * 60 + 10#${MM} ) * 60 + 10#$SS - $labort ))
echo "Available runtime: ${DAYS:-0}-${HH:-0}:${MM:-0}:${SS}, sleeping for up to $SLEEPTIME2, thus reserving $labort for clean stopping/saving results"

(sleep ${SLEEPTIME1} ; echo "LSTOP = .TRUE." > STOPCAR) &
lstoppid=$!
(sleep ${SLEEPTIME2} ; echo "LABORT = .TRUE." > STOPCAR) &
labortpid=$!

mpirun -ppn $PPN $VASP

# stop the background timers once VASP has finished
pkill -P $lstoppid
pkill -P $labortpid
Further information
Mentors
- T. Klöffel, RRZE, hpc-support@fau.de
- AG A. Görling (Chair of Theoretical Chemistry)
ANSYS Mechanical
ANSYS Mechanical is a computational structural mechanics software which makes it possible to solve structural engineering problems. It is available in two different software environments – ANSYS Workbench (the newer GUI-oriented environment) and ANSYS Mechanical APDL (sometimes called ANSYS Classic, the older MAPDL scripted environment).
Please note that the clusters do not come with any license. If you want to use ANSYS products on the HPC clusters, you have to have access to suitable licenses. These can be purchased directly from RRZE. To efficiently use the HPC resources, ANSYS HPC licenses are necessary.
Availability / Target HPC systems
Production jobs should be run on the parallel HPC systems in batch mode. For simulations with high memory requirements, a single-node job on TinyFAT can be used.
ANSYS Mechanical can also be used in interactive GUI mode via Workbench for serial pre- and/or post-processing on the login nodes. This should only be used to make quick simulation setup changes. It is NOT permitted to run computationally/memory intensive ANSYS Mechanical simulations on login nodes.
Different versions of all ANSYS products are available via the modules system; they can be listed with module avail ansys. A specific version can be loaded, e.g. by module load ansys/2019R1.
We mostly install the current versions automatically, but if something is missing, please contact hpc-support@fau.de.
Notes
- Two different parallelization methods are available: shared-memory and distributed-memory parallelization.
- Shared-memory parallelization: uses multiple cores on a single node; specify via ansys191 -smp -np N, default: N=2
- Distributed-memory parallelization: uses multiple nodes; specify via ansys191 -dis -b -machines machine1:np:machine2:np:...
Sample job scripts
All job scripts have to contain the following information:
- Resource definition for the queuing system (more details here)
- Load ANSYS environment module
- Generate a variable with names of hosts of the current simulation run and specify the number of processes per host
- Execute Mechanical with appropriate command line parameters (distributed memory run in batch mode)
- Specify input and output file
distributed parallel job on Emmy
#!/bin/bash -l
#PBS -lnodes=2:ppn=40,walltime=24:00:00
#PBS -N mech
#PBS -j eo

# load environment module
module load ansys/XXXX

# generate machine list, uses 20 processes per node
machines=$(cat $PBS_NODEFILE | uniq | echo $(awk '{print $0":20"}') | sed 's/ /:/g')

# execute mechanical with command line parameters
# Please insert here the correct version and your own input and output file with its correct name!
ansys191 -dis -b -machines $machines < input.dat > output.out
Further information
- Documentation is available within the application help manual. Further information is provided through the ANSYS Customer Portal for registered users.
- More in-depth documentation is available at LRZ. Please note: not everything is directly applicable to HPC systems at RRZE!
Mentors
- Dr.-Ing. Katrin Nusser, RRZE, hpc-support@fau.de
- please volunteer!