HPC Systems

The full technical documentation of all systems can be found on the RRZE documentation pages. To get access to any of our clusters you need an HPC account, which you can apply for using this form. As a prerequisite you need an IdM account, which you already have if you are a student or an employee of FAU. If you are not affiliated with FAU, access to our systems can only be granted under special circumstances; please contact HPC support in that case.

Overview

Meggie: 728 nodes, massively parallel workloads; parallel file system: yes; local hard disks: no. This is RRZE's newest cluster, intended for highly parallel jobs. Access to this cluster is restricted.
Emmy: 560 nodes, massively parallel workloads; parallel file system: yes; local hard disks: no. This is the current main cluster for parallel jobs.
LiMa: ~300 nodes, serial to moderately parallel workloads; parallel file system: yes; local hard disks: no.
Woody: 176 nodes, serial throughput workloads; parallel file system: no; local hard disks: yes. Cluster with fast single-socket CPUs for serial throughput workloads.
TinyGPU: 14 nodes, GPU workloads; parallel file system: no; local hard disks: yes, some with SSDs. The nodes in this cluster are equipped with NVIDIA GPUs.
TinyFat: 25 nodes, large memory requirements; parallel file system: no; local hard disks: yes, some with SSDs. This cluster is for applications that require large amounts of memory; each node has between 128 and 512 GB of RAM.
TinyEth: 20 nodes, throughput workloads; parallel file system: no; local hard disks: yes. Cluster for throughput workloads.

Meggie (installed 2017)

Nodes: 728 compute nodes, each with two Intel Xeon E5-2630 v4 "Broadwell" chips (10 cores per chip + SMT) running at 2.2 GHz, with 25 MB shared cache per chip and 64 GB of RAM.
Parallel file system: Lustre-based parallel file system with a capacity of almost 1 PB and an aggregated parallel I/O bandwidth of more than 9000 MB/s.
Network: Intel Omni-Path interconnect with up to 100 GBit/s bandwidth per link and direction.
Linpack Performance: 481 TFlop/s (a rough cross-check against the per-node figures follows below).
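
As a cross-check, the aggregate figures for Meggie follow directly from the per-node specification above. The short Python sketch below works them out; the value of 16 double-precision FLOPs per cycle per Broadwell core (two 256-bit FMA units) and the use of the 2.2 GHz base clock are assumptions made for illustration, not taken from the documentation.

# Rough aggregate figures for Meggie, derived from the per-node specification above.
# Assumption (not from the documentation): one Broadwell core retires 16 double-precision
# FLOPs per cycle (two 256-bit FMA units), and the 2.2 GHz base clock is used.

nodes = 728
chips_per_node = 2
cores_per_chip = 10
clock_ghz = 2.2
ram_per_node_gb = 64
flops_per_cycle = 16  # assumed, see comment above

total_cores = nodes * chips_per_node * cores_per_chip
total_ram_tb = nodes * ram_per_node_gb / 1024
peak_tflops = total_cores * clock_ghz * flops_per_cycle / 1000

print(f"cores:            {total_cores}")              # 14560
print(f"aggregate RAM:    {total_ram_tb:.1f} TB")      # ~45.5 TB
print(f"theoretical peak: {peak_tflops:.0f} TFlop/s")  # ~512 TFlop/s at base clock
print(f"Linpack / peak:   {481 / peak_tflops:.0%}")    # ~94 %

Under these assumptions the measured Linpack result of 481 TFlop/s corresponds to roughly 94 % of the theoretical double-precision peak at base clock.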

Emmy (installed 2013)

Nodes: 560 compute nodes, each with two Intel Xeon E5-2660 v2 "Ivy Bridge" chips (10 cores per chip + SMT) running at 2.2 GHz, with 25 MB shared cache per chip and 64 GB of RAM.
Parallel file system: LXFS with a capacity of 400 TB and an aggregated parallel I/O bandwidth of more than 7000 MB/s.
Network: Fat-tree InfiniBand interconnect fabric with 40 GBit/s bandwidth per link and direction.
Linpack Performance: 191 TFlop/s.

LiMa (installed 2010)

Nodes: 500 compute nodes, each with two Intel Xeon X5650 "Westmere" chips (6 cores per chip + SMT) running at 2.66 GHz, with 12 MB shared cache per chip and 24 GB of RAM (DDR3-1333).
Parallel file system: LXFS with a capacity of 100 TB and an aggregated parallel I/O bandwidth of more than 3000 MB/s.
Network: Fat-tree InfiniBand interconnect fabric with 40 GBit/s bandwidth per link and direction.
Linpack Performance: 56.7 TFlop/s.

Testcluster

For the evaluation of microarchitectures and for research purposes we also maintain a cluster of test machines. We try to always have at least one machine of every architecture that is relevant in HPC. Currently all recent Intel processor generations are available, including the many-core Intel Xeon Phi "Knights Landing" chips. We also frequently get early-access prototypes for benchmarking.
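
To see what hardware a particular test machine actually provides, the CPU model can be read directly from the operating system. The following is a minimal Python sketch that assumes a Linux node exposing /proc/cpuinfo; it does not rely on any cluster-specific tooling.

# Minimal sketch: report the CPU model and the number of logical CPUs on a node.
# Assumes a Linux system with /proc/cpuinfo; no cluster-specific tools are implied.

from collections import Counter

def cpu_summary(path="/proc/cpuinfo"):
    """Count logical CPUs per reported 'model name' entry."""
    models = Counter()
    with open(path) as f:
        for line in f:
            if line.startswith("model name"):
                models[line.split(":", 1)[1].strip()] += 1
    return models

if __name__ == "__main__":
    for model, logical_cpus in cpu_summary().items():
        print(f"{logical_cpus} logical CPUs: {model}")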