Systems, Documentation & Instructions
The HPC group of RRZE – or now NHR@FAU – operates a number of clusters and related systems mainly for scientists at FAU.
In January 2021, the HPC group of RRZE became NHR@FAU. With this rebranding, we have also opened our (new) systems nationwide to NHR users, i.e. scientists from any German university. See the NHR@FAU application rules for details on national access. FAU researchers with demands beyond the free basic Tier3 resources also have to apply for NHR resources, just like people from outside FAU.
Support offerings, especially in the areas of performance engineering and atomistic simulations, also started in early 2021 for national customers. See also the Teaching & Training section.
Getting started
If you are new to HPC and want to know about the different clusters, how to log in, transfer files and run jobs, please refer to our Getting Started Guide.
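For a first impression of what running a job typically involves, here is a small, hedged sketch in Python: it copies an input file to a cluster front end and submits a prepared batch script. The host name, paths, account name, and the use of Slurm's sbatch command are assumptions for illustration only; the Getting Started Guide describes the actual procedure.

```python
# Sketch only: copy an input file to a cluster front end and submit a batch job.
# Host name, paths, and the Slurm sbatch command are illustrative assumptions;
# follow the Getting Started Guide for the actual procedure on our clusters.
import subprocess

FRONTEND = "cluster-frontend.example.fau.de"  # placeholder front-end name
USER = "my_hpc_account"                       # placeholder HPC account

# Copy the input data to the cluster (runs on your local machine).
subprocess.run(["scp", "input.dat", f"{USER}@{FRONTEND}:project/"], check=True)

# Submit a previously prepared batch script on the front end via ssh.
subprocess.run(["ssh", f"{USER}@{FRONTEND}", "sbatch project/job.sh"], check=True)
```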
HPC Clusters
The following table lists the available clusters and their key properties. To get further information about each cluster, click on the cluster name in the first column.
| Cluster name | #nodes | Target applications | Parallel filesystem | Local storage | Description |
| --- | --- | --- | --- | --- | --- |
| NHR&Tier3 parallel cluster (“Fritz”) | 992 | massively parallel | yes | very limited | Dual-socket nodes with Intel Ice Lake processors (72 cores and 256 GB of main memory per node) and HDR100 interconnect, plus some additional nodes with 1 or 2 TB of main memory. Access to this cluster is restricted. |
| NHR&Tier3 GPGPU cluster (“Alex”) | 82 | GPGPU applications | limited | yes | 304 Nvidia A100 and 352 Nvidia A40 GPGPUs. Access to this cluster is restricted. |
| Meggie (Tier3) | 728 | parallel | no longer available | no | This is the current main cluster for parallel jobs. |
| (retired cluster) | | | | | The system has been retired in September 2022. |
| (retired cluster) | | | | | The system has been retired in December 2018. |
| Woody (Tier3) | 248 | single-node throughput | no | yes | Cluster with fast (single- and dual-socket) CPUs for serial throughput workloads. |
| (retired cluster) | | | | | The system has been retired end of November 2021. |
| TinyGPU (Tier3) | 48 | GPGPU | no | yes (SSDs) | The nodes in this cluster are equipped with different types and generations of NVIDIA GPUs. Access restrictions / throttling policies may apply. |
| TinyFat (Tier3) | 47 | large memory requirements | no | yes (SSDs) | This cluster is for applications that require large amounts of memory. Each node has 256 or 512 GB of main memory. |
If you’re unsure about which systems to use, feel free to contact the HPC group.
Other HPC systems at RRZE
Dialog server
This machine can be used as an access portal to reach the rest of the HPC systems from outside the university network. This is necessary because most of our HPC systems are in private IP address ranges that can only be reached directly from inside the FAU network.
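As an illustration, the following Python sketch (using the third-party paramiko library) shows how a connection through the dialog server as a jump host could look. All host names and the account name are placeholders, not values taken from this page.

```python
# Sketch only: reach a cluster front end through the dialog server acting as a
# jump host. Host names and the account name below are placeholders.
import paramiko

JUMP_HOST = "dialog.example.fau.de"              # dialog server (placeholder)
TARGET_HOST = "cluster-frontend.example.fau.de"  # cluster front end (placeholder)
USER = "my_hpc_account"                          # placeholder HPC account

# 1) Connect to the dialog server (authentication via SSH keys or agent).
jump = paramiko.SSHClient()
jump.set_missing_host_key_policy(paramiko.AutoAddPolicy())
jump.connect(JUMP_HOST, username=USER)

# 2) Open a tunnel from the dialog server to the target front end.
channel = jump.get_transport().open_channel(
    "direct-tcpip", dest_addr=(TARGET_HOST, 22), src_addr=("127.0.0.1", 0)
)

# 3) Connect to the target through that tunnel and run a test command.
target = paramiko.SSHClient()
target.set_missing_host_key_policy(paramiko.AutoAddPolicy())
target.connect(TARGET_HOST, username=USER, sock=channel)

_, stdout, _ = target.exec_command("hostname")
print(stdout.read().decode().strip())

target.close()
jump.close()
```

In everyday use, plain OpenSSH (a ProxyJump entry in ~/.ssh/config or ssh -J) achieves the same thing more conveniently; see the Getting Started Guide for the recommended way to log in.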
HPC Testcluster
The HPC Testcluster consists of a wide variety of systems (numerous x86 variants, ARM, NEC Aurora/Tsubasa, …) intended for benchmarking and software testing. Contact the HPC group for details. Access to certain machines may be restricted owing to NDA agreements.
HPC software environment
The software environment available on RRZE’s HPC systems has been kept as uniform as possible, to allow users to switch between clusters without having to start from scratch. Documentation for this environment can be found under HPC environment.
Access to external (national) HPC systems
LRZ systems
Users who require large amounts of computing power can also apply to use the HPC systems of the “Leibniz-Rechenzentrum der Bayerischen Akademie der Wissenschaften” (LRZ) in Garching near Munich.
- The LRZ Linux Cluster is open to all of Bavaria, and access requires only very little paperwork.
- Access to SuperMUC-NG requires a scientific project proposal, as for all national HPC systems.
Federal systems at HLRS and JSC
Access to the national supercomputers at HLRS (High Performance Computing Center Stuttgart) and JSC (Jülich Supercomputing Centre) requires a scientific proposal similar to SuperMUC-NG at LRZ. Depending on the size of the project, proposals are submitted either locally at HLRS / JSC or through the large-scale calls of GCS (Gauss Centre for Supercomputing).
Other NHR centers
NHR@FAU is not the only NHR center; there are 8 more. See https://www.nhr-verein.de/rechnernutzung. Each center has its own scientific and application focus – but all serve researchers from universities all over Germany.