Systems, Documentation & Instructions

The HPC group of RRZE operates a number of clusters and related systems mainly for scientists at FAU.

Beginning in January 2021, RRZE is also opening its systems to NHR users nationwide. Until the expected installation of a new system in Q4/2021, only limited compute resources can be offered to NHR users. The workflow for applying for an NHR account will be published in the coming months; early adopters can contact HPC support directly.

Support offerings, especially in the area of performance engineering, will start in early 2021. See the Teaching & Training section.

Getting started

If you are new to HPC and want to know about the different clusters, how to log in, transfer files and run jobs, please refer to our Getting Started Guide.
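
As a quick first test after logging in, it can help to submit a trivial script as a batch job and check its output. The following is an unofficial Python sketch, not taken from the Getting Started Guide; the file name is arbitrary and the Slurm environment variable is only an assumption about the batch system, so please follow the guide for the actual submission commands on each cluster.

```python
#!/usr/bin/env python3
# first_job.py - trivial test script (hypothetical name) that one might submit
# as a first batch job to verify that the environment works as expected.
import os
import socket

def main():
    print(f"Running on host: {socket.gethostname()}")
    # Number of CPU cores the batch system actually assigned to this job.
    print(f"CPU cores visible to this job: {len(os.sched_getaffinity(0))}")
    # Batch systems export job metadata as environment variables; SLURM_JOB_ID
    # is shown as an example and will be unset if a cluster uses another scheduler.
    print(f"Job ID (if running under Slurm): {os.environ.get('SLURM_JOB_ID', 'n/a')}")

if __name__ == "__main__":
    main()
```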

HPC Clusters

The following table lists the available clusters and their key properties. To get further information about each cluster, click on the cluster name in the first column.

| Cluster name | # nodes | Target applications | Parallel filesystem | Local storage | Description |
|---|---|---|---|---|---|
| NHR&Tier3 parallel cluster (“Fritz”) | 756 (NHR) + 188 (Tier3) | massively parallel | Yes | very limited | Dual-socket nodes with Intel Ice Lake processors (72 cores and 256 GB per node) and HDR100 interconnect. Expected to be operational in early 2022. Access to this cluster will be restricted. |
| NHR&Tier3 GPGPU cluster (“Alex”) | TBA | GPGPU applications | limited | Yes | 128 (NHR) + 32 (Tier3) Nvidia A100 GPGPUs plus 244 (NHR) + 60 (Tier3) Nvidia A40 GPGPUs. Expected to be operational in late 2021. Access to this cluster will be restricted. |
| Meggie | 728 | massively parallel | Yes | No | Still RRZE’s newest cluster, intended for highly parallel jobs. |
| Emmy | 560 | massively parallel | Yes | No | The current main cluster for parallel jobs. |
| LiMa | – | – | – | – | The system was retired in December 2018. |
| Woody | 248 | single-node throughput | No | Yes | Cluster with fast single-socket CPUs for serial throughput workloads. |
| TinyEth | 20 | throughput | No | Yes | Cluster for throughput workloads. |
| TinyGPU | 48 | GPGPU | No | Yes (SSDs) | The nodes are equipped with different types and generations of NVIDIA GPUs (see the sketch after the table). Access restrictions / throttling policies may apply. |
| TinyFat | 47 | large memory requirements | No | Yes (SSDs) | For applications that require large amounts of memory; each node has 256 or 512 GB of main memory. Access restrictions may apply. |
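
Because the GPU systems mix different models (A100 and A40 on “Alex”, several generations on TinyGPU), it can be useful to verify at job start which device a job was assigned. The sketch below is an unofficial example; it assumes only that nvidia-smi is available on the GPU nodes, and the script name is made up.

```python
#!/usr/bin/env python3
# gpu_check.py - hypothetical helper that reports the GPU model(s) assigned to a job.
# Assumes the NVIDIA driver tool nvidia-smi is installed on the GPU nodes.
import subprocess

def gpu_names():
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return [line.strip() for line in result.stdout.splitlines() if line.strip()]

if __name__ == "__main__":
    for index, name in enumerate(gpu_names()):
        print(f"GPU {index}: {name}")
```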

If you’re unsure about which systems to use, feel free to contact the HPC group.

Other HPC systems at RRZE

Dialog server

This machine can be used as an access portal to reach the rest of the HPC systems from outside the university network. This is necessary because most of our HPC systems are in private IP address ranges that can only be reached directly from within the FAU network.
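
For illustration, the sketch below shows how such a gateway can be used programmatically from Python with paramiko, tunnelling a second SSH connection through the dialog server. All host names and the user name are placeholders, not official values, and key-based authentication via an SSH agent is assumed; in everyday use a plain SSH jump-host configuration serves the same purpose.

```python
#!/usr/bin/env python3
# jump_login.py - reach an internal cluster front end through the dialog server.
# Minimal sketch with paramiko; host names, user name, and authentication setup
# are placeholder assumptions, not official RRZE values.
import paramiko

GATEWAY = "dialogserver.example.fau.de"   # placeholder for the dialog server
TARGET = "cluster-frontend.internal"      # placeholder for an internal front end
USER = "hpc_user"                         # placeholder HPC account name

def connect_via_gateway():
    # First hop: the publicly reachable dialog server.
    gateway = paramiko.SSHClient()
    gateway.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    gateway.connect(GATEWAY, username=USER)  # uses SSH agent / default keys

    # Open a TCP channel from the gateway to the internal target (port 22)
    # and use it as the transport for the second SSH connection.
    channel = gateway.get_transport().open_channel(
        "direct-tcpip", (TARGET, 22), ("127.0.0.1", 0)
    )
    target = paramiko.SSHClient()
    target.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    target.connect(TARGET, username=USER, sock=channel)
    return gateway, target

if __name__ == "__main__":
    gateway, target = connect_via_gateway()
    _, stdout, _ = target.exec_command("hostname")
    print(stdout.read().decode().strip())
    target.close()
    gateway.close()
```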

HPC Testcluster

The HPC Testcluster comprises a wide variety of systems (numerous x86 variants, ARM, NEC Aurora/Tsubasa, …) intended for benchmarking and software testing. Contact the HPC group for details. Access to certain machines may be restricted owing to NDA agreements.

HPC software environment

The software environment on RRZE’s HPC systems is kept as uniform as possible, so that users can switch between clusters without having to start from scratch. Documentation for this environment can be found under HPC environment.

Access to external (national) HPC systems

LRZ systems

Users who require very large amounts of computing power can also apply for access to the HPC systems of the “Leibniz-Rechenzentrum der Bayerischen Akademie der Wissenschaften” (LRZ) in Garching near Munich.

National systems at HLRS and JSC

Access to the national supercomputers at HLRS (High Performance Computing Center Stuttgart) or JSC (Jülich Supercomputing Centre) requires a scientific proposal, similar to SuperMUC at LRZ. Depending on the size of the project, proposals have to be submitted either directly at HLRS / JSC or through the large-scale calls of the GCS (Gauss Centre for Supercomputing).