HPC clusters & systems

RRZE operates several HPC clusters, each targeting a different application area:

Overview

Cluster name | #nodes | Target applications | Parallel filesystem | Local hard disks | Description
Fritz (NHR+Tier3) | 944 | high-end massively parallel | yes | no | Not opened yet.
Alex (NHR+Tier3) | 66 | high-end GPGPU | not yet | yes (SSDs) | Not opened yet; 192 Nvidia A100 and 304 Nvidia A40 GPGPUs in 66 nodes.
Meggie (Tier3) | 728 | massively parallel | yes | no | RRZE's main workhorse, intended for highly parallel jobs.
Emmy (Tier3) | 560 | parallel | yes | no | Still the main cluster for single-node and multi-node parallel jobs.
Woody (Tier3) | 248 | serial throughput | no | yes | Cluster with fast single-socket CPUs for serial throughput workloads.
TinyGPU (Tier3) | 48 | GPU | no | yes (SSDs) | Nodes equipped with NVIDIA GPUs (mostly 4 GPUs per node).
TinyFat (Tier3) | 47 | large memory requirements | no | yes (SSDs) | For applications that require large amounts of memory; each node has 256 or 512 GB of main memory.

Alex (installed 2021)

Fritz (installed 2021/2022)

Meggie (installed 2017)

Nodes: 728 compute nodes, each with two Intel Xeon E5-2630v4 "Broadwell" chips (10 cores per chip) running at 2.2 GHz, with 25 MB shared cache per chip and 64 GB of RAM.
Parallel file system: Lustre-based parallel file system with a capacity of almost 1 PB and an aggregated parallel I/O bandwidth of more than 9000 MB/s.
Network: Intel Omni-Path interconnect with up to 100 GBit/s bandwidth per link and direction.
Linpack performance: 481 TFlop/s.
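
As a rough plausibility check (not part of the official specification), the Linpack figure can be compared against the theoretical double-precision peak derived from the node count and clock above. The sketch below assumes 16 DP FLOPs per cycle per Broadwell core (AVX2 with FMA) and the nominal 2.2 GHz clock, ignoring AVX clock reduction and turbo:

```python
# Rough peak-vs-Linpack estimate for Meggie (assumptions: 16 DP FLOPs/cycle
# per Broadwell core via AVX2+FMA, nominal 2.2 GHz clock, no turbo/AVX clocks).
nodes = 728
cores_per_node = 2 * 10          # two 10-core chips per node
clock_hz = 2.2e9                 # nominal clock from the spec above
flops_per_cycle = 16             # 2 FMA units * 4 DP lanes * 2 (fused multiply-add)

peak_tflops = nodes * cores_per_node * clock_hz * flops_per_cycle / 1e12
linpack_tflops = 481.0           # measured Linpack from the spec above

print(f"theoretical peak:    {peak_tflops:.0f} TFlop/s")          # ~513 TFlop/s
print(f"Linpack efficiency:  {linpack_tflops / peak_tflops:.0%}")  # ~94 %
```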

Emmy (installed 2013)

Nodes: 560 compute nodes, each with two Intel Xeon E5-2660v2 "Ivy Bridge" chips (10 cores per chip + SMT) running at 2.2 GHz, with 25 MB shared cache per chip and 64 GB of RAM.
Parallel file system: LXFS with a capacity of 400 TB and an aggregated parallel I/O bandwidth of more than 7000 MB/s.
Network: Fat-tree InfiniBand interconnect fabric with 40 GBit/s bandwidth per link and direction.
Linpack performance: 191 TFlop/s.
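
The same back-of-the-envelope check works for Emmy, this time assuming 8 DP FLOPs per cycle per Ivy Bridge core (AVX without FMA) at the nominal 2.2 GHz clock:

```python
# Rough peak-vs-Linpack estimate for Emmy (assumptions: 8 DP FLOPs/cycle
# per Ivy Bridge core via AVX, nominal 2.2 GHz clock, turbo ignored).
nodes = 560
cores_per_node = 2 * 10          # two 10-core chips per node (SMT not counted)
clock_hz = 2.2e9
flops_per_cycle = 8              # one 4-wide DP add + one 4-wide DP multiply per cycle

peak_tflops = nodes * cores_per_node * clock_hz * flops_per_cycle / 1e12
linpack_tflops = 191.0           # measured Linpack from the spec above

print(f"theoretical peak:    {peak_tflops:.0f} TFlop/s")          # ~197 TFlop/s
print(f"Linpack efficiency:  {linpack_tflops / peak_tflops:.0%}")  # ~97 %
```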

Testcluster

For microarchitecture evaluation and research purposes we also maintain a cluster of test machines. We aim to always have at least one machine of every architecture relevant to HPC; currently, all recent Intel processor generations are available. We also frequently receive early-access prototypes for benchmarking.