CRIS-Export

Forschungsinfrastruktur des RRZE

  • Linux-based HPC compute nodes with large main memory / High-performance computer TinyFat
    Various manufacturers: TinyFat cluster
    Location: Erlangen
  • Linux-based throughput computer / High-performance computer Woody
    Various manufacturers: throughput HPC cluster
    Location: Erlangen
  • Linux-based GPGPU computer / High-performance computer TinyGPU
    Various manufacturers: GPGPU cluster
    Location: Erlangen
  • Linux-based parallel computer / High-performance computer Meggie
    Megware: HPC cluster 2016 (construction year 2016)
    Location: Erlangen

Linux-based HPC compute nodes with large main memory / High-performance computer TinyFat

TinyFat addresses applications that demand large main memory and has three different types of nodes:
  • 8 nodes with 2x Intel Xeon E5-2643 v4 (“Broadwell”) @3.4 GHz = 12 cores/24 threads and 256 GB main memory
  • 3 nodes with 2x Intel Xeon E5-2680 v4 (“Broadwell”) @2.4 GHz = 28 cores/56 threads and 512 GB main memory
  • 36 nodes with 2x AMD Epyc 7502 (“Rome”) @2.5 GHz = 64 cores/128 threads and 512 GB main memory
All nodes have been purchased/financed by specific groups or special projects. These users have priority access, and nodes may be reserved exclusively for them.

https://hpc.fau.de/systems-services/systems-documentation-instructions/clusters/tinyfat-cluster/

Linux-based throughput computer / High-performance computer Woody

The RRZE’s "Woody" is the preferred cluster for serial/single-node throughput jobs.

The cluster has changed significantly over time. The current hardware configuration is as follows:
  • 70 compute nodes (w11xx nodes) with Xeon E3-1240 v3 CPUs (“Haswell”, 4 cores, HT disabled, 3.4 GHz base frequency), 8 GB RAM, 1 TB HDD – from 09/2013
  • 64 compute nodes (w12xx/w13xx nodes) with Xeon E3-1240 v5 CPUs (“Skylake”, 4 cores, HT disabled, 3.5 GHz base frequency), 32 GB RAM, 1 TB HDD – from 04/2016 and 01/2017
  • 112 compute nodes (w14xx/w15xx nodes) with Xeon E3-1240 v6 CPUs (“Kaby Lake”, 4 cores, HT disabled, 3.7 GHz base frequency), 32 GB RAM, 960 GB SSD – from Q3/2019
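
A throughput cluster like this is typically used by submitting many independent serial or single-node jobs to a batch system. A minimal sketch, assuming a Slurm-style scheduler (the directives, values, and executable name are illustrative, not Woody's actual configuration):

```bash
#!/bin/bash
# Hypothetical serial throughput job script (Slurm-style directives assumed).
#SBATCH --job-name=serial-job   # illustrative job name
#SBATCH --ntasks=1              # serial/single-node: one task, no MPI
#SBATCH --time=01:00:00         # requested walltime

./my_serial_program             # placeholder for the user's serial executable
```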

https://www.hpc.fau.de/systems-services/systems-documentation-instructions/clusters/woody-cluster/

Linux-based GPGPU computer / High-performance computer TinyGPU

TinyGPU addresses the increasing demand for GPGPU-accelerated HPC systems and has nodes with six different types of GPUs (mostly consumer-grade):
  • 7 nodes with 2x Intel Xeon E5-2620 v4 (“Broadwell”, 8 cores @2.1 GHz); 64 GB main memory; 4x NVIDIA Geforce GTX 1080 (8 GB memory); 1.8 TB SSD
  • 10 nodes with 2x Intel Xeon E5-2620 v4 (“Broadwell”, 8 cores @2.1 GHz); 64 GB main memory; 4x NVIDIA Geforce GTX 1080 Ti (11 GB memory); 1.8 TB SSD
  • 12 nodes with 2x Intel Xeon Gold 6134 (“Skylake”, 8 cores @3.2 GHz); 96 GB main memory; 4x NVIDIA Geforce RTX 2080 Ti (11 GB memory); 1.8 TB SSD
  • 4 nodes with 2x Intel Xeon Gold 6134 (“Skylake”, 8 cores @3.2 GHz); 96 GB main memory; 4x NVIDIA Tesla V100 (32 GB memory); 2.9 TB SSD
  • 7 nodes with 2x Intel Xeon Gold 6226R (“Cascade Lake”, 16 cores @2.9 GHz); 384 GB main memory; 8x NVIDIA Geforce RTX 3080 (10 GB memory); 3.8 TB SSD
  • 8 nodes with 2x AMD Epyc 7662 (“Rome”, 64 cores @2.0 GHz); 512 GB main memory; 4x NVIDIA A100 (SXM4/NVLink); 6.4 TB SSD
45 out of the 48 nodes have been purchased/financed by specific groups or special projects. These users have priority access and nodes may be reserved exclusively for them.
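
On a GPGPU cluster of this kind, GPUs are normally allocated through the batch system rather than used interactively. A minimal sketch, assuming a Slurm-style scheduler (the `gres` resource name and executable are assumptions, not TinyGPU's actual configuration):

```bash
#!/bin/bash
# Hypothetical GPU job script (Slurm-style directives assumed).
#SBATCH --ntasks=1
#SBATCH --gres=gpu:1       # request one of the node's GPUs (resource name is an assumption)
#SBATCH --time=01:00:00

./my_gpu_program           # placeholder for a GPU-accelerated application
```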

https://hpc.fau.de/systems-services/systems-documentation-instructions/clusters/tinygpu-cluster/

Linux-based parallel computer / High-performance computer Meggie

The system was decommissioned in February 2026 after almost 10 years of successful operation.

The RRZE’s Meggie cluster (manufacturer: Megware) is a high-performance compute resource with a high-speed interconnect. It is intended for distributed-memory (MPI) or hybrid parallel programs with medium to high communication requirements.
  • 728 compute nodes, each with two Intel Xeon E5-2630 v4 “Broadwell” chips (10 cores per chip) running at 2.2 GHz with 25 MB shared cache per chip and 64 GB of RAM.
  • 2 front end nodes with the same CPUs as the compute nodes but 128 GB of RAM.
  • Lustre-based parallel filesystem with a capacity of almost 1 PB and an aggregated parallel I/O bandwidth of > 9000 MB/s.
  • Intel OmniPath interconnect with up to 100 GBit/s bandwidth per link and direction.
  • Measured LINPACK performance of ~481 TFlop/s.
Meggie is designed for parallel programs that use significantly more than one node.
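
A multi-node MPI job on such a system is submitted through the batch scheduler with an explicit node and task layout. A minimal sketch, assuming a Slurm-style scheduler (all directives, values, and the executable name are illustrative; the per-node task count merely mirrors the 2 x 10 cores per node described above):

```bash
#!/bin/bash
# Hypothetical multi-node MPI job script (Slurm-style directives assumed).
#SBATCH --nodes=4               # distributed-memory job spanning several nodes
#SBATCH --ntasks-per-node=20    # 2 sockets x 10 cores per compute node
#SBATCH --time=02:00:00

srun ./my_mpi_program           # srun launches the MPI ranks; the program is a placeholder
```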

https://www.hpc.fau.de/systems-services/systems-documentation-instructions/clusters/meggie-cluster/
