HPC clusters & systems

RRZE operates several HPC clusters, each targeting a different application area:

Overview

Cluster name | #nodes | Target applications | Parallel filesystem | Local hard disks | Description
Meggie | 728 | massively parallel | yes | no | RRZE's newest cluster, intended for highly parallel jobs. Access to this cluster is restricted.
Emmy | 560 | massively parallel | yes | no | The current main cluster for parallel jobs.
Woody | 248 | serial throughput | no | yes | Cluster with fast single-socket CPUs for serial throughput workloads.
TinyGPU | 45 | GPU | no | yes (SSDs) | The nodes in this cluster are equipped with NVIDIA GPUs (mostly 4 GPUs per node).
TinyFat | 47 | large memory | no | yes (SSDs) | For applications that require large amounts of memory; each node has 256 or 512 gigabytes of main memory.
TinyEth | 20 | throughput | no | yes | Cluster for throughput workloads.

Meggie (installed 2017)

Nodes 728 compute nodes, each with two Intel Xeon E5-2630v4 "Broadwell" chips (10 cores per chip + SMT) running at 2.2 GHz with 25 MB shared cache per chip and 64 GB of RAM.
Parallel file system Lustre-based parallel filesystem with a capacity of almost 1 PB and an aggregated parallel I/O bandwidth of > 9000 MB/s.
Network Intel OmniPath interconnect with up to 100 GBit/s bandwidth per link and direction.
Linpack Performance 481 TFlop/s

Emmy (installed 2013)

Nodes 560 compute nodes, each with two Intel Xeon E5-2660v2 "Ivy Bridge" chips (10 cores per chip + SMT) running at 2.2 GHz with 25 MB shared cache per chip and 64 GB of RAM
Parallel file system LXFS with a capacity of 400 TB and an aggregated parallel I/O bandwidth of > 7000 MB/s
Network Fat-tree Infiniband interconnect fabric with 40 GBit/s bandwidth per link and direction
Linpack Performance 191 TFlop/s
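The reported Linpack numbers can be put in context by comparing them with each machine's theoretical double-precision peak. The sketch below computes that peak from the node counts and clock rates listed above; the per-core FLOPs/cycle figures (16 for Broadwell with two AVX2 FMA units, 8 for Ivy Bridge with separate AVX add and multiply units) are assumptions about the microarchitectures, not values taken from the cluster specifications, and turbo clocks are ignored.

```python
def peak_tflops(nodes, chips_per_node, cores_per_chip, ghz, flops_per_cycle):
    """Theoretical double-precision peak performance in TFlop/s."""
    return nodes * chips_per_node * cores_per_chip * ghz * flops_per_cycle / 1000.0

# Meggie: 728 nodes x 2 Broadwell chips x 10 cores at 2.2 GHz,
# assuming 16 DP FLOPs/cycle/core (2x AVX2 FMA)
meggie_peak = peak_tflops(728, 2, 10, 2.2, 16)   # ~512.5 TFlop/s

# Emmy: 560 nodes x 2 Ivy Bridge chips x 10 cores at 2.2 GHz,
# assuming 8 DP FLOPs/cycle/core (AVX add + mul, no FMA)
emmy_peak = peak_tflops(560, 2, 10, 2.2, 8)      # ~197.1 TFlop/s

print(f"Meggie: {meggie_peak:.1f} TFlop/s peak, 481 TFlop/s Linpack "
      f"({481 / meggie_peak:.0%})")
print(f"Emmy:   {emmy_peak:.1f} TFlop/s peak, 191 TFlop/s Linpack "
      f"({191 / emmy_peak:.0%})")
```

Under these assumptions both machines reach a Linpack efficiency in the 90%+ range, which is the expected order of magnitude for a well-tuned HPL run.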

Testcluster

For the evaluation of microarchitectures and for research purposes, we also maintain a cluster of test machines. We aim to always have at least one machine of every architecture relevant to HPC; currently, all recent Intel processor generations are available. We also frequently receive early-access prototypes for benchmarking.