Test cluster
The RRZE test and benchmark cluster is an environment for porting software to new CPU architectures and running benchmark tests. It comprises a variety of nodes with different processors, clock speeds, memory speeds, memory capacity, number of CPU sockets, etc. There is no high-speed network, and MPI parallelization is restricted to one node. The usual NFS file systems are available.
This is a testing ground. Any job may be canceled without prior notice. For further information about proper usage, please contact HPC@RRZE.
This is a quick overview of the systems including their host names (frequencies are nominal values) – NDA systems are not listed:
- aurora1: Single Intel Xeon “Skylake” Gold 6126 CPU (12 cores + SMT) @ 2.60 GHz.
  Accelerators: 2x NEC Aurora “TSUBASA” 10B (48 GiB RAM)
- broadep2: Dual Intel Xeon “Broadwell” E5-2697 v4 CPU (2x 18 cores + SMT) @ 2.30 GHz, 128 GiB RAM
- casclakesp2: Dual Intel Xeon “Cascade Lake” Gold 6248 CPU (2x 20 cores + SMT) @ 2.50 GHz, 384 GiB RAM
- euryale: Dual Intel Xeon “Broadwell” E5-2620 v4 CPU (2x 8 cores) @ 2.10 GHz, 64 GiB RAM.
  Accelerator: AMD Radeon RX 6900 XT (16 GiB)
- genoa1: Dual AMD EPYC 9654 “Genoa” CPU (2x 96 cores + SMT) @ 2.40 GHz, 768 GiB RAM
- genoa2: Dual AMD EPYC 9354 “Genoa” CPU (2x 32 cores + SMT) @ 3.25 GHz, 768 GiB RAM.
  Accelerators:
  – NVIDIA A40 (48 GiB GDDR6)
  – NVIDIA L40 (48 GiB GDDR6)
- hasep1: Dual Intel Xeon “Haswell” E5-2695 v3 CPU (2x 14 cores + SMT) @ 2.30 GHz, 64 GiB RAM
- icx32: Dual Intel Xeon “Ice Lake” Platinum 8358 CPU (2x 32 cores + SMT) @ 2.60 GHz, 256 GiB RAM
- icx36: Dual Intel Xeon “Ice Lake” Platinum 8360Y CPU (2x 36 cores + SMT) @ 2.40 GHz, 256 GiB RAM
- interlagos1: Dual AMD Opteron 6276 “Interlagos” CPU (2x 16 cores) @ 2.3 GHz, 64 GiB RAM.
  Accelerator: AMD Radeon VII GPU (16 GiB HBM2)
- ivyep1: Dual Intel Xeon “Ivy Bridge” E5-2690 v2 CPU (2x 10 cores + SMT) @ 3.00 GHz, 64 GiB RAM
- medusa: Dual Intel Xeon “Cascade Lake” Gold 6246 CPU (2x 12 cores + SMT) @ 3.30 GHz, 192 GiB RAM.
  Accelerators:
  – NVIDIA GeForce RTX 2070 SUPER (8 GiB GDDR6)
  – NVIDIA GeForce RTX 2080 SUPER (8 GiB GDDR6)
  – NVIDIA Quadro RTX 5000 (16 GiB GDDR6)
  – NVIDIA Quadro RTX 6000 (24 GiB GDDR6)
- optane1: Dual Intel Xeon “Ice Lake” Platinum 8362 CPU (2x 32 cores + SMT) @ 2.80 GHz, 256 GiB RAM, 1024 GiB Optane Memory
- milan1: Dual AMD EPYC 7543 “Milan” CPU (2x 32 cores + SMT) @ 2.8 GHz, 256 GiB RAM.
  Accelerator: AMD Instinct MI210 (64 GiB HBM2e)
- naples1: Dual AMD EPYC 7451 “Naples” CPU (2x 24 cores + SMT) @ 2.3 GHz, 128 GiB RAM
- phinally: Dual Intel Xeon “Sandy Bridge” E5-2680 CPU (2x 8 cores + SMT) @ 2.70 GHz, 64 GiB RAM
- rome1: Single AMD EPYC 7452 “Rome” CPU (32 cores + SMT) @ 2.35 GHz, 128 GiB RAM
- rome2: Dual AMD EPYC 7352 “Rome” CPU (2x 24 cores + SMT) @ 2.3 GHz, 256 GiB RAM.
  Accelerators:
  – AMD Instinct MI100 (32 GiB HBM2)
  – AMD Instinct MI210 (64 GiB HBM2e)
- skylakesp2: Dual Intel Xeon “Skylake” Gold 6148 CPU (2x 20 cores + SMT) @ 2.40 GHz, 96 GiB RAM
- summitridge1: AMD Ryzen 7 1700X CPU (8 cores + SMT), 32 GiB RAM
- warmup: Dual Cavium/Marvell “ThunderX2” (ARMv8) CN9980 CPU (2x 32 cores + 4-way SMT) @ 2.20 GHz, 128 GiB RAM
Technical specifications of all more or less recent GPUs available at RRZE (either in the Testcluster or in TinyGPU):
| GPU | Memory | RAM BW [GB/s] | Ref. clock [GHz] | Cores (shaders/TMUs/ROPs) | TDP [W] | SP [TFlop/s] | DP [TFlop/s] | Host | Host CPU (base clock frequency) |
|---|---|---|---|---|---|---|---|---|---|
| Nvidia GeForce GTX 980 | 4 GB GDDR5 | 224 | 1.126 | 2048/128/64 | 180 | 4.98 | 0.156 | tg00x | Intel Xeon Nehalem X5550 (4 cores, 2.67 GHz) |
| Nvidia GeForce GTX 1080 | 8 GB GDDR5 | 320 | 1.607 | 2560/160/64 | 180 | 8.87 | 0.277 | tg03x | Intel Xeon Broadwell E5-2620 v4 (8 cores, 2.10 GHz) |
| Nvidia GeForce GTX 1080 Ti | 11 GB GDDR5 | 484 | 1.480 | 3584/224/88 | 250 | 11.34 | 0.354 | tg04x | Intel Xeon Broadwell E5-2620 v4 (2x 8 cores, 2.10 GHz) |
| Nvidia GeForce RTX 2070 Super | 8 GB GDDR6 | 448 | 1.605 | 2560/160/64 | 215 | 9.06 | 0.283 | medusa | Intel Xeon Cascade Lake Gold 6246 (2x 12 cores, 3.30 GHz) |
| Nvidia Quadro RTX 5000 (active) | 16 GB GDDR6 | 448 | 1.620 | 3072/192/64 | 230 | 11.15 | 0.348 | medusa | Intel Xeon Cascade Lake Gold 6246 (2x 12 cores, 3.30 GHz) |
| Nvidia GeForce RTX 2080 Super | 8 GB GDDR6 | 496 | 1.650 | 3072/192/64 | 250 | 11.15 | 0.348 | medusa | Intel Xeon Cascade Lake Gold 6246 (2x 12 cores, 3.30 GHz) |
| Nvidia GeForce RTX 2080 Ti | 11 GB GDDR6 | 616 | 1.350 | 4352/272/88 | 250 | 13.45 | 0.420 | tg06x | Intel Xeon Skylake Gold 6134 (2x 8 cores + SMT, 3.20 GHz) |
| Nvidia Quadro RTX 6000 (active) | 24 GB GDDR6 | 672 | 1.440 | 4608/288/96 | 260 | 16.31 | 0.510 | medusa | Intel Xeon Cascade Lake Gold 6246 (2x 12 cores, 3.30 GHz) |
| Nvidia GeForce RTX 3080 | 10 GB GDDR6X | 760 | 1.440 | 8704 shaders | 320 | 29.77 | 0.465 | tg08x | Intel Xeon Ice Lake Gold 6226R (2x 32 cores + SMT, 2.90 GHz) |
| Nvidia Tesla V100 (PCIe, passive) | 32 GB HBM2 | 900 | 1.245 | 5120 shaders | 250 | 14.13 | 7.066 | tg07x | Intel Xeon Skylake Gold 6134 (2x 8 cores + SMT, 3.20 GHz) |
| Nvidia A40 (passive) | 48 GB GDDR6 | 696 | 1.305 | 10752 shaders | 300 | 37.42 | 1.169 | genoa2 | AMD Genoa 9354 (2x 32 cores + SMT, 3.25 GHz) |
| Nvidia A100 (SXM4/NVLink, passive) | 40 GB HBM2 | 1555 | 1.410 | 6912 shaders | 400 | 19.5 | 9.7 | tg09x | AMD Rome 7662 (2x 64 cores, 2.0 GHz) |
| Nvidia L40 (passive) | 48 GB GDDR6 | 864 | 0.735 | 18176 shaders | 300 | 90.52 | 1.414 | genoa2 | AMD Genoa 9354 (2x 32 cores + SMT, 3.25 GHz) |
| AMD Instinct MI100 (PCIe Gen4, passive) | 32 GB HBM2 | 1229 | 1.502 | 120 CUs / 7680 cores | 300 | 21.1 | 11.5 | rome2 | AMD Rome 7352 (2x 24 cores + SMT, 2.3 GHz) |
| AMD Radeon VII | 16 GB HBM2 | 1024 | 1.400 | 3840/240/64 | 300 | 13.44 | 3.360 | interlagos1 | AMD Interlagos Opteron 6276 |
| AMD Instinct MI210 (PCIe Gen4, passive) | 64 GB HBM2e | 1638 | 1.000 | 104 CUs / 6656 cores | 300 | 22.6 | 22.6 | milan1, rome2 | AMD Milan 7543 (2x 32 cores + SMT, 2.8 GHz); AMD Rome 7352 (2x 24 cores + SMT, 2.3 GHz) |

This website shows information regarding the following topics:
Access, User Environment, and File Systems
Access to the machine
Note that access to the test cluster is restricted: if you want access to it, you will need to contact hpc@rrze. In order to get access to the NDA machines, you have to provide a short (!) description of what you want to do there.
From within the FAU network, users can connect via SSH to the frontend testfront.rrze.fau.de. If you need access from outside of FAU, you usually have to connect first to the dialog server cshpc.rrze.fau.de, for example, and then ssh to testfront from there. While it is possible to ssh directly to a compute node, a user is only allowed to do this while they have a batch job running there. When all batch jobs of a user on a node have ended, all of their processes, including any open shells, will be killed automatically.
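For example, a minimal connection sketch (the user name is a placeholder; hopping over the dialog server with OpenSSH's ProxyJump option is just one possible way):

```bash
# From within the FAU network ("username" is a placeholder):
ssh username@testfront.rrze.fau.de

# From outside of FAU, hop over the dialog server first,
# e.g. with OpenSSH's ProxyJump option:
ssh -J username@cshpc.rrze.fau.de username@testfront.rrze.fau.de
```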
The login nodes and most of the compute nodes run Ubuntu 18.04. As on most other RRZE HPC systems, a modules environment is provided to facilitate access to software packages. Type “module avail” to get a list of available packages. Note that, depending on the node, the available modules may differ due to the wide variety of architectures; expect inconsistencies. In case of questions, contact hpc@rrze.
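Usage is the same as on the other clusters; the module name below is a placeholder and will differ between nodes, so check the list on the target node first:

```bash
module avail          # list the packages available on this node
module load gcc       # hypothetical module name
module list           # show the currently loaded modules
```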
File Systems
The nodes have local hard disks of very different capacities and speeds. These are not production systems, so do not expect a production environment.
When connecting to the frontend node, you’ll find yourself in your regular RRZE $HOME directory (/home/hpc/...). There are relatively tight quotas there, so it will most probably be too small for the inputs/outputs of your jobs. It does, however, offer a lot of nice features, like fine-grained snapshots, so use it for “important” stuff, e.g. your job scripts or the source code of the program you’re working on. See the HPC file system page for a more detailed description of the features and the other available file systems including, e.g., $WORK.
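A small usage sketch, assuming that larger job data is staged in $WORK while scripts and sources stay in $HOME (the project and file names are placeholders):

```bash
# Keep scripts and sources in $HOME (snapshots, but tight quota);
# stage larger inputs/outputs in $WORK.
mkdir -p "$WORK"/myproject/run01
cp "$HOME"/myproject/jobscript.sh "$WORK"/myproject/run01/
cd "$WORK"/myproject/run01
```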
Batch processing
As with all production clusters at RRZE, resources are controlled through a batch system, SLURM in this case. Due to the broad spectrum of architectures in the test cluster, it is usually advisable to compile on the target node using an interactive SLURM job (see below).
There is a “work” queue and an “nda” queue, both with up to 24 hours of runtime. Access to the “nda” queue is restricted because the machines tied to this queue are pre-production hardware or otherwise special, so benchmark results must not be published without further consideration.
Batch jobs can be submitted on the frontend. The default job runtime is 10 minutes.
The currently available nodes can be listed using:
sinfo -o "%.14N %.9P %.11T %.4c %.8z %.6m %.35f"
To select a node, you can either use the host name or a feature name from sinfo:
sbatch --nodes=1 --constraint=featurename --time=hh:mm:ss --export=NONE jobscript
sbatch --nodes=1 --nodelist=hostname --time=hh:mm:ss --export=NONE jobscript
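A minimal single-node job script sketch (node name, runtime, module, and binary are placeholders):

```bash
#!/bin/bash -l
#SBATCH --nodes=1
#SBATCH --nodelist=broadep2     # pick a host name (or use --constraint with a feature from sinfo)
#SBATCH --time=01:00:00
#SBATCH --export=NONE

unset SLURM_EXPORT_ENV          # see the note on --export=NONE below

module load gcc                 # hypothetical module name
./my_benchmark                  # placeholder binary
```

Submit it from the frontend with sbatch jobscript.sh.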
By default, SLURM exports the environment of the shell where the job was submitted. If this is not desired, use --export=NONE and unset SLURM_EXPORT_ENV in the job script. Otherwise, problems may arise on nodes that do not run Ubuntu.
Submitting an interactive job:
salloc --nodes=1 --nodelist=hostname --time=hh:mm:ss
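For example, to compile on a target node interactively (host name, runtime, module, and file names are placeholders; depending on the SLURM configuration, salloc drops you into a shell on the allocated node):

```bash
salloc --nodes=1 --nodelist=broadep2 --time=01:00:00
module load gcc                                        # hypothetical module name
gcc -O3 -march=native -o my_benchmark my_benchmark.c   # build for the node's architecture
```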
For getting access to performance counter registers and other restricted parts of the hardware (so that likwid-perfctr and other LIKWID tools work as intended), use the constraint -C hwperf. The Linux kernel’s NUMA balancing feature can be turned off with -C numa_off. To make the system use transparent huge pages for applications by default, use -C thp_always to switch to “always” mode. Multiple constraints can be combined with & and proper quoting, e.g. -C "hwperf&thp_always".
Please see the batch system description for further details.
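For instance, an interactive job that enables performance counters and transparent huge pages could look like this (runtime, event group, core list, and binary are placeholders):

```bash
# Combine the constraints with '&' and quote the string; hwperf makes the
# performance counters accessible to likwid-perfctr on the allocated node.
salloc --nodes=1 --time=00:30:00 -C "hwperf&thp_always"
likwid-perfctr -g MEM -C 0-7 ./my_benchmark   # group names depend on the architecture
```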