CRIS Reports

Research infrastructure data available in CRIS

Name Fritz

Linux-based parallel computer / High-performance computer Fritz

FAU’s Fritz cluster (system integrator: Megware) is a high-performance compute resource with a high-speed interconnect, i.e., a parallel computer. It is intended for multi-node parallel workloads. Fritz provides both FAU’s basic Tier 3 resources and NHR project resources.

  • 4 front end nodes with the same CPUs as the compute nodes but 512 GB of RAM, and 100 GbE connection to RRZE’s network backbone.
  • 1 visualization node with the same CPUs as the compute nodes but 1024 GB of RAM, one Nvidia A16 GPU, 30 TB of local NVMe SSD storage, and 100 GbE connection to RRZE’s network backbone.
  • 992 compute nodes with direct liquid cooling (DLC), each with two Intel Xeon Platinum 8360Y “Ice Lake” processors (36 cores per chip) running at a base frequency of 2.4 GHz with 54 MB Shared L3 cache per chip, and 256 GB of DDR4-RAM.
  • Lustre-based parallel filesystem with a capacity of about 3.5 PB and an aggregated parallel I/O bandwidth of > 20 GB/s.
  • Blocking HDR100 InfiniBand with up to 100 GBit/s bandwidth per link and direction. The fabric is organized in islands of 64 nodes (i.e., 4,608 cores); the blocking factor between islands is 1:4 (the aggregate figures are worked out in the short sketch after this list).
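
For orientation, the aggregate size of the machine and of one InfiniBand island can be derived from the figures listed above. The short Python sketch below only reproduces this arithmetic; all input numbers are taken from the list, nothing else is assumed.

    # Aggregate core count, memory, and island size of Fritz,
    # derived from the node counts and per-node figures listed above.
    NODES = 992                # DLC compute nodes
    SOCKETS_PER_NODE = 2       # Xeon Platinum 8360Y per node
    CORES_PER_SOCKET = 36      # "Ice Lake", 36 cores per chip
    RAM_PER_NODE_GB = 256      # DDR4 per node

    cores_per_node = SOCKETS_PER_NODE * CORES_PER_SOCKET    # 72
    total_cores = NODES * cores_per_node                    # 71,424
    total_ram_tb = NODES * RAM_PER_NODE_GB / 1024            # 248 TB

    # One HDR100 island comprises 64 nodes; traffic between islands
    # is blocked 1:4 relative to the intra-island bandwidth.
    island_cores = 64 * cores_per_node                       # 4,608

    print(f"cores per node  : {cores_per_node}")
    print(f"total cores     : {total_cores:,}")
    print(f"total RAM       : {total_ram_tb:.0f} TB")
    print(f"cores per island: {island_cores:,}")
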
Manufacturer: Megware
Model: Parallel computer 2022
Construction Year: 2022
Location: Erlangen
URL: https://hpc.fau.de/systems-services/systems-documentation-instructions/clusters/fritz-cluster/
Funding source: Deutsche Forschungsgemeinschaft (DFG), Bundesministerium für Bildung und Forschung (BMBF), Bayerisches Staatsministerium für Bildung und Kultus, Wissenschaft und Kunst (from 10/2013), DFG - Infrastrukturförderung (INFRA)

Related Research Areas:

Related Research Projects:

Related Publications:

Name Alex

Linux-based parallel computer / HPC cluster Alex

FAU’s Alex cluster (system integrator: Megware) is a high-performance compute resource with Nvidia GPGPU accelerators and a partially high-speed interconnect. It is intended for single- and multi-GPGPU workloads, e.g. from molecular dynamics or machine learning. Alex provides both FAU’s basic Tier 3 resources and NHR project resources.

  • 2 front end nodes, each with two AMD EPYC 7713 “Milan” processors (64 cores per chip) running at 2.0 GHz with 256 MB Shared L3 cache per chip, 512 GB of RAM, and 100 GbE connection to RRZE’s network backbone but no GPGPUs.
  • 8 GPGPU nodes, each with two AMD EPYC 7662 “Rome” processors (64 cores per chip) running at 2.0 GHz with 256 MB Shared L3 cache per chip, 512 GB of DDR4-RAM, four Nvidia A100 (each 40 GB HBM2 @ 1,555 GB/s; DGX board with NVLink; 9.7 TFlop/s in FP64 or 19.5 TFlop/s in FP32), one HDR200 InfiniBand HCA, 25 GbE, and 6 TB on local NVMe SSDs. (During 2021 and early 2022, these nodes were part of TinyGPU.)
  • 20 GPGPU nodes, each with two AMD EPYC 7713 “Milan” processors (64 cores per chip) running at 2.0 GHz with 256 MB Shared L3 cache per chip, 1,024 GB of DDR4-RAM, eight Nvidia A100 (each 40 GB HBM2 @ 1,555 GB/s; HGX board with NVLink; 9.7 TFlop/s in FP64 or 19.5 TFlop/s in FP32), two HDR200 InfiniBand HCAs, 25 GbE, and 14 TB on local NVMe SSDs.
  • 12 GPGPU nodes, each with two AMD EPYC 7713 “Milan” processors (64 cores per chip) running at 2.0 GHz with 256 MB Shared L3 cache per chip, 2,048 GB of DDR4-RAM, eight Nvidia A100 (each 80 GB HBM2e @ 2,039 GB/s; HGX board with NVLink; 9.7 TFlop/s in FP64 or 19.5 TFlop/s in FP32), two HDR200 InfiniBand HCAs, 25 GbE, and 14 TB on local NVMe SSDs.
  • 38 GPGPU nodes, each with two AMD EPYC 7713 “Milan” processors (64 cores per chip) running at 2.0 GHz with 256 MB Shared L3 cache per chip, 512 GB of DDR4-RAM, eight Nvidia A40 (each with 48 GB GDDR6 @ 696 GB/s; 37.42 TFlop/s in FP32), 25 GbE, and 7 TB on local NVMe SSDs. (The overall GPU tally is sketched after this list.)
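
To give a feel for the size of the overall accelerator pool, the short sketch below simply tallies the GPU node types listed above. All per-node figures (GPU counts, memory per GPU, FP64 peak per A100) are taken from the list; the aggregate numbers are derived from them and are not quoted from the source.

    # Tally of Alex's GPU partitions, derived from the node list above.
    # (node_count, gpus_per_node, mem_per_gpu_GB, model)
    partitions = [
        (8,  4, 40, "A100 40GB"),   # former TinyGPU nodes, DGX boards
        (20, 8, 40, "A100 40GB"),   # HGX boards
        (12, 8, 80, "A100 80GB"),   # HGX boards
        (38, 8, 48, "A40"),         # 37.42 TFlop/s FP32 each, per the list
    ]

    total_gpus = sum(n * g for n, g, _, _ in partitions)                # 592
    a100_gpus = sum(n * g for n, g, _, m in partitions if "A100" in m)  # 288
    gpu_mem_tb = sum(n * g * mem for n, g, mem, _ in partitions) / 1024 # ~29 TB

    # FP64 peak of one A100 as listed above: 9.7 TFlop/s.
    a100_fp64_pflops = a100_gpus * 9.7 / 1000                           # ~2.8 PFlop/s

    print(f"GPUs in total       : {total_gpus}")
    print(f"A100 GPUs           : {a100_gpus}")
    print(f"aggregate GPU memory: {gpu_mem_tb:.1f} TB")
    print(f"A100 FP64 peak      : {a100_fp64_pflops:.2f} PFlop/s")
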
Manufacturer: Megware
Model: GPU-Cluster 2022
Construction Year: 2022
Location: Erlangen
URL: https://hpc.fau.de/systems-services/systems-documentation-instructions/clusters/alex-cluster/
Funding source: Deutsche Forschungsgemeinschaft (DFG), Bundesministerium für Bildung und Forschung (BMBF), Bayerisches Staatsministerium für Bildung und Kultus, Wissenschaft und Kunst (from 10/2013), DFG - Infrastrukturförderung (INFRA)

Related Research Areas:

Related Research Projects:

Related Publications:

Name TinyFat

Linux-based HPC compute nodes with large main memory / High-performance computer TinyFat

TinyFat addresses applications that demand large main memory and has nodes of three different types:

  • 8 nodes with 2x Intel Xeon E5-2643 v4 (“Broadwell”) @3.4 GHz = 12 cores/24 threads and 256 GB main memory
  • 3 nodes with 2x Intel Xeon E5-2680 v4 (“Broadwell”) @2.4 GHz = 28 cores/56 threads and 512 GB main memory
  • 36 nodes with 2x AMD EPYC 7502 (“Rome”) @2.5 GHz = 64 cores/128 threads and 512 GB main memory
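
Since TinyFat is about memory per job rather than raw core count, a derived comparison of main memory per core across the three node types may be useful; the sketch below only reuses the figures from the list above.

    # Main memory per core for the three TinyFat node types,
    # derived from the list above.
    node_types = [
        # (node_count, cores_per_node, memory_GB, description)
        (8,  12, 256, "2x Xeon E5-2643 v4 (Broadwell)"),
        (3,  28, 512, "2x Xeon E5-2680 v4 (Broadwell)"),
        (36, 64, 512, "2x AMD EPYC 7502 (Rome)"),
    ]

    for count, cores, mem_gb, desc in node_types:
        print(f"{desc:32s}: {mem_gb / cores:5.1f} GB per core ({count} nodes)")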

All nodes have been purchased/financed by specific groups or special projects. These users have priority access and nodes may be reserved exclusively for them.

Manufacturer: various
Model: TinyFat-Cluster
Location: Erlangen
URL: https://hpc.fau.de/systems-services/systems-documentation-instructions/clusters/tinyfat-cluster/

Related Research Projects:

Related Publications:

Name Meggie

Linux-based parallel computer / High-performance computer Meggie

RRZE’s Meggie cluster (manufacturer: Megware) is a high-performance compute resource with a high-speed interconnect. It is intended for distributed-memory (MPI) or hybrid parallel programs with medium to high communication requirements.
  • 728 compute nodes, each with two Intel Xeon E5-2630 v4 “Broadwell” chips (10 cores per chip) running at 2.2 GHz with 25 MB Shared Cache per chip and 64 GB of RAM.
  • 2 front end nodes with the same CPUs as the compute nodes but 128 GB of RAM.
  • Lustre-based parallel filesystem with a capacity of almost 1 PB and an aggregated parallel I/O bandwidth of > 9000 MB/s.
  • Intel Omni-Path interconnect with up to 100 GBit/s bandwidth per link and direction.
  • Measured LINPACK performance of ~481 TFlop/s.

Meggie is designed for running parallel programs that use significantly more than one node. As a rough plausibility check, the measured LINPACK figure above can be compared against the theoretical peak derived from the node count and clock rate; see the sketch below.
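
The following short Python sketch performs that comparison. The assumption of 16 double-precision flops per cycle and core (AVX2 FMA on “Broadwell”) is a microarchitectural detail not stated in this report, so the resulting peak is an estimate.

    # Rough double-precision peak estimate for Meggie from the figures above.
    nodes           = 728
    cores_per_node  = 2 * 10      # two 10-core "Broadwell" chips per node
    base_clock_ghz  = 2.2
    flops_per_cycle = 16          # assumption: AVX2 FMA throughput on Broadwell

    peak_tflops = nodes * cores_per_node * base_clock_ghz * flops_per_cycle / 1000
    linpack_tflops = 481          # measured value quoted above

    print(f"theoretical DP peak: {peak_tflops:.0f} TFlop/s")   # ~513 TFlop/s
    print(f"measured LINPACK   : {linpack_tflops} TFlop/s")
    print(f"efficiency         : {linpack_tflops / peak_tflops:.0%}")  # ~94 %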

Manufacturer: Megware
Model: HPC-Cluster 2016
Construction Year: 2016
Location: Erlangen
URL: https://www.hpc.fau.de/systems-services/systems-documentation-instructions/clusters/meggie-cluster/
Funding source: Deutsche Forschungsgemeinschaft (DFG), DFG - Infrastrukturförderung (INFRA)

Related Research Areas:

Related Research Projects:

Related Publications:

Name Woody

Linux-based throughput computer / High-performance computer Woody

RRZE’s "Woody" is the preferred cluster for serial/single-node throughput jobs.

The cluster has changed significantly over time; details on its history can be found in the history section of the cluster documentation. The current hardware configuration is as follows:

  • 70 compute nodes (w11xx nodes) with Xeon E3-1240 v3 CPUs (“Haswell”, 4 cores, HT disabled, 3.4 GHz base frequency), 8 GB RAM, 1 TB HDD – from 09/2013
  • 64 compute nodes (w12xx/w13xx nodes) with Xeon E3-1240 v5 CPUs (“Skylake”, 4 cores, HT disabled, 3.5 GHz base frequency), 32 GB RAM, 1 TB HDD – from 04/2016 and 01/2017
  • 112 compute nodes (w14xx/w15xx nodes) with Xeon E3-1240 v6 CPUs (“Kaby Lake”, 4 cores, HT disabled, 3.7 GHz base frequency), 32 GB RAM, 960 GB SSD – from Q3/2019
Manufacturer: various
Model: Throughput HPC cluster
Location: Erlangen
URL: https://www.hpc.fau.de/systems-services/systems-documentation-instructions/clusters/woody-cluster/

Related Research Areas:

Related Research Projects:

Related Publications:

Name TinyGPU

Linux-based GPGPU computer / High-performance computer TinyGPU

TinyGPU addresses the increasing demand for GPGPU-accelerated HPC systems and has nodes with six different types of GPUs (mostly consumer-grade models):
  • 7 nodes with 2x Intel Xeon E5-2620 v4 (“Broadwell”, 8 cores @ 2.1 GHz); 64 GB main memory; 4x NVIDIA GeForce GTX 1080 (8 GB memory); 1.8 TB SSD
  • 10 nodes with 2x Intel Xeon E5-2620 v4 (“Broadwell”, 8 cores @ 2.1 GHz); 64 GB main memory; 4x NVIDIA GeForce GTX 1080 Ti (11 GB memory); 1.8 TB SSD
  • 12 nodes with 2x Intel Xeon Gold 6134 (“Skylake”, 8 cores @ 3.2 GHz); 96 GB main memory; 4x NVIDIA GeForce RTX 2080 Ti (11 GB memory); 1.8 TB SSD
  • 4 nodes with 2x Intel Xeon Gold 6134 (“Skylake”, 8 cores @ 3.2 GHz); 96 GB main memory; 4x NVIDIA Tesla V100 (32 GB memory); 2.9 TB SSD
  • 7 nodes with 2x Intel Xeon Gold 6226R (“Cascade Lake”, 16 cores @ 2.9 GHz); 384 GB main memory; 8x NVIDIA GeForce RTX 3080 (10 GB memory); 3.8 TB SSD
  • 8 nodes with 2x AMD EPYC 7662 (“Rome”, 64 cores @ 2.0 GHz); 512 GB main memory; 4x NVIDIA A100 (SXM4/NVLink); 6.4 TB SSD

45 out of the 48 nodes have been purchased/financed by specific groups or special projects. These users have priority access and nodes may be reserved exclusively for them.
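
As a quick cross-check, the six node types listed above do add up to the 48 nodes mentioned here; the sketch below verifies this and also tallies the GPUs per type (all inputs are taken from the list, the totals are derived).

    # Cross-check of the TinyGPU node and GPU counts from the list above.
    # (node_count, gpus_per_node, gpu_model)
    node_types = [
        (7,  4, "GeForce GTX 1080"),
        (10, 4, "GeForce GTX 1080 Ti"),
        (12, 4, "GeForce RTX 2080 Ti"),
        (4,  4, "Tesla V100 32GB"),
        (7,  8, "GeForce RTX 3080"),
        (8,  4, "A100 SXM4"),
    ]

    total_nodes = sum(n for n, _, _ in node_types)      # 48
    total_gpus  = sum(n * g for n, g, _ in node_types)  # 220

    print(f"nodes in total: {total_nodes}")
    print(f"GPUs in total : {total_gpus}")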

Manufacturer: various
Model: GPGPU-Cluster
Location: Erlangen
URL: https://hpc.fau.de/systems-services/systems-documentation-instructions/clusters/tinygpu-cluster/

Related Research Areas:

Related Research Projects:

Related Publications:

Name Emmy

Linux-based parallel computer / High-performance computer Emmy

RRZE’s Emmy cluster (NEC) is a high-performance compute resource with a high-speed interconnect. It is intended for distributed-memory (MPI) or hybrid parallel programs with medium to high communication requirements.
  • 560 compute nodes, each with two Xeon E5-2660 v2 “Ivy Bridge” chips (10 cores per chip + SMT) running at 2.2 GHz with 25 MB Shared Cache per chip and 64 GB of RAM.
  • 2 front end nodes with the same CPUs as the compute nodes.
  • 16 Nvidia K20 GPGPUs spread over 10 compute nodes plus 4 nodes each with a Nvidia V100/16GB GPGPU.
  • parallel filesystem (LXFS) with a capacity of 400 TB and an aggregated parallel I/O bandwidth of > 7000 MB/s
  • fat-tree InfiniBand interconnect fabric with 40 GBit/s bandwidth per link and direction
  • overall peak performance of ca. 234 TFlop/s (191 TFlop/s LINPACK, using only the CPUs).

The Emmy cluster is named after the famous mathematician Emmy Noether, who was born here in Erlangen.

Manufacturer: NEC
Model: HPC-Cluster 2013
Construction Year: 2013
Location: Erlangen
URL: https://www.hpc.fau.de/systems-services/systems-documentation-instructions/clusters/emmy-cluster/
Funding source: Deutsche Forschungsgemeinschaft (DFG)

Related Research Areas:

Related Research Projects:

Related Publications: