
CRIS-Export

Research infrastructure of the RRZE

  • Linux based throughput computer / High performance computer Woody
    Various vendors: throughput HPC cluster
    Location: Erlangen
  • Linux based GPGPU computer / High performance computer TinyGPU
    Various vendors: GPGPU cluster
    Location: Erlangen
  • Linux based parallel computer / High performance computer Emmy
    NEC: HPC cluster 2013 (constr. year 2013)
    Location: Erlangen
  • Linux based parallel computer / High performance computer Meggie
    Megware: HPC cluster 2016 (constr. year 2016)
    Location: Erlangen
Linux based throughput computer / High performance computer Woody

The RRZE’s "Woody" is the preferred cluster for serial/single-node throughput jobs.

The cluster has changed significantly over time; a detailed history can be found in the documentation linked below. The current hardware configuration is as follows:

  • 40 compute nodes (w10xx nodes) with Xeon E3-1280 CPUs ("Sandy Bridge", 4 cores, HT disabled, 3.5 GHz base frequency), 8 GB RAM, 500 GB HDD – installed 12/2011; these nodes were shut down in October 2020
  • 70 compute nodes (w11xx nodes) with Xeon E3-1240 v3 CPUs ("Haswell", 4 cores, HT disabled, 3.4 GHz base frequency), 8 GB RAM, 1 TB HDD – installed 09/2013
  • 64 compute nodes (w12xx/w13xx nodes) with Xeon E3-1240 v5 CPUs ("Skylake", 4 cores, HT disabled, 3.5 GHz base frequency), 32 GB RAM, 1 TB HDD – installed 04/2016 and 01/2017
  • 112 compute nodes (w14xx/w15xx nodes) with Xeon E3-1240 v6 CPUs ("Kaby Lake", 4 cores, HT disabled, 3.7 GHz base frequency), 32 GB RAM, 960 GB SSD – installed Q3/2019

https://www.anleitungen.rrze.fau.de/hpc/woody-cluster/

Linux based GPGPU computer / High performance computer TinyGPU

TinyGPU addresses the increasing demand for GPGPU-accelerated HPC systems and has nodes with seven different types of GPUs (mostly consumer-grade):
  • 7 nodes with 2x Intel Xeon 5550 ("Nehalem"), 8 cores @ 2.66 GHz; 24 GB main memory; 2x NVIDIA GeForce GTX 980 (4 GB memory)
  • 7 nodes with 2x Intel Xeon E5-2620 v4 ("Broadwell"), 16 cores @ 2.1 GHz; 64 GB main memory; 4x NVIDIA GeForce GTX 1080 (8 GB memory); 1.8 TB SSD
  • 10 nodes with 2x Intel Xeon E5-2620 v4 ("Broadwell"), 16 cores @ 2.1 GHz; 64 GB main memory; 4x NVIDIA GeForce GTX 1080 Ti (11 GB memory); 1.8 TB SSD
  • 12 nodes with 2x Intel Xeon Gold 6134 ("Skylake"), 16 cores @ 3.2 GHz; 96 GB main memory; 4x NVIDIA GeForce RTX 2080 Ti (11 GB memory); 1.8 TB SSD
  • 4 nodes with 2x Intel Xeon Gold 6134 ("Skylake"), 16 cores @ 3.2 GHz; 96 GB main memory; 4x NVIDIA Tesla V100 (32 GB memory); 2.9 TB SSD
  • 7 nodes with 2x Intel Xeon Gold 6226R ("Cascade Lake"), 16 cores @ 2.9 GHz; 192 GB main memory; 8x NVIDIA GeForce RTX 3080 (10 GB memory); 3.8 TB SSD
  • 3 nodes with 2x AMD Epyc 7662 ("Rome"), 64 cores @ 2.0 GHz; 256 GB main memory; 4x NVIDIA A100 (SXM4/NVLink); 6.4 TB SSD

44 out of the 50 nodes have been purchased/financed by specific groups or special projects. These users have priority access and nodes may be reserved exclusively for them.
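
As an illustration of what "GPGPU-accelerated" means in practice, the sketch below enumerates the GPUs visible to a job on one of these nodes using the CUDA runtime API. It is not RRZE-provided code; the file name and the build line (including $CUDA_HOME) are assumptions about the local toolchain.

    /* list_gpus.c - hypothetical example: enumerate the GPUs visible on a node.
     * Possible build (paths are assumptions):
     *   gcc list_gpus.c -o list_gpus -I$CUDA_HOME/include -L$CUDA_HOME/lib64 -lcudart */
    #include <stdio.h>
    #include <cuda_runtime_api.h>

    int main(void)
    {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            fprintf(stderr, "no CUDA-capable GPU visible\n");
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            /* Prints, e.g., the 4x GTX 1080 Ti or 8x RTX 3080 of a node. */
            printf("GPU %d: %s, %.1f GB, compute capability %d.%d\n",
                   i, prop.name, prop.totalGlobalMem / 1e9, prop.major, prop.minor);
        }
        return 0;
    }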

https://www.anleitungen.rrze.fau.de/hpc/tinyx-clusters/#tinygpu

Linux based parallel computer / High performance computer Emmy

The RRZE's Emmy cluster (manufacturer: NEC) is a high-performance compute resource with a high-speed interconnect. It is intended for distributed-memory (MPI) or hybrid parallel programs with medium to high communication requirements.
  • 560 compute nodes, each with two Xeon E5-2660 v2 "Ivy Bridge" chips (10 cores per chip + SMT) running at 2.2 GHz with 25 MB shared cache per chip and 64 GB of RAM.
  • 2 front-end nodes with the same CPUs as the compute nodes.
  • 16 NVIDIA K20 GPGPUs spread over 10 compute nodes, plus 4 nodes each with an NVIDIA V100 (16 GB) GPGPU.
  • Parallel filesystem (LXFS) with a capacity of 400 TB and an aggregated parallel I/O bandwidth of > 7000 MB/s.
  • Fat-tree InfiniBand interconnect fabric with 40 GBit/s bandwidth per link and direction.
  • Overall peak performance of approx. 234 TFlop/s (191 TFlop/s LINPACK, using only the CPUs).

The Emmy cluster is named after the famous mathematician Emmy Noether, who was born in Erlangen.
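
To make the intended usage model concrete, here is a minimal sketch of a distributed-memory (MPI) program that passes a token around a ring of processes, the kind of point-to-point traffic the InfiniBand fabric is built for. It is illustrative only, not RRZE code; the mpicc/mpirun invocations in the comments are assumptions about the local MPI installation.

    /* ring.c - hypothetical example: token ring over MPI.
     * Possible build/run: mpicc -O2 ring.c -o ring && mpirun -n 40 ./ring */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Pass a token once around a ring of processes; each rank talks
         * only to its left and right neighbours over the interconnect.  */
        int token = (rank == 0) ? 42 : -1;
        int left  = (rank - 1 + size) % size;
        int right = (rank + 1) % size;

        if (rank == 0) {
            MPI_Send(&token, 1, MPI_INT, right, 0, MPI_COMM_WORLD);
            MPI_Recv(&token, 1, MPI_INT, left,  0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else {
            MPI_Recv(&token, 1, MPI_INT, left,  0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&token, 1, MPI_INT, right, 0, MPI_COMM_WORLD);
        }

        printf("rank %d of %d saw token %d\n", rank, size, token);
        MPI_Finalize();
        return 0;
    }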

https://www.anleitungen.rrze.fau.de/hpc/emmy-cluster/

Linux based parallel computer / High performance computer Meggie

The RRZE's Meggie cluster (manufacturer: Megware) is a high-performance compute resource with a high-speed interconnect. It is intended for distributed-memory (MPI) or hybrid parallel programs with medium to high communication requirements.
  • 728 compute nodes, each with two Intel Xeon E5-2630 v4 "Broadwell" chips (10 cores per chip + SMT) running at 2.2 GHz with 25 MB shared cache per chip and 64 GB of RAM.
  • 2 front-end nodes with the same CPUs as the compute nodes but 128 GB of RAM.
  • Lustre-based parallel filesystem with a capacity of almost 1 PB and an aggregated parallel I/O bandwidth of > 9000 MB/s.
  • Intel Omni-Path interconnect with up to 100 GBit/s bandwidth per link and direction.
  • Measured LINPACK performance of ~481 TFlop/s.

Meggie is designed for parallel programs that use significantly more than one node.
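
The hybrid model mentioned above combines MPI across nodes with OpenMP threads within a node. Below is a minimal sketch of such a program; it is illustrative only, and the build line assumes an MPI compiler wrapper with OpenMP support.

    /* hybrid.c - hypothetical example: MPI ranks across nodes, OpenMP threads within.
     * Possible build: mpicc -fopenmp -O2 hybrid.c -o hybrid */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        /* Request thread support: typically one MPI rank per node or socket,
         * with OpenMP threads filling the cores inside it.                  */
        int provided, rank, size;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each OpenMP thread contributes a partial sum (placeholder work). */
        double local = 0.0;
        #pragma omp parallel reduction(+:local)
        local += 1.0;

        /* Combine the node-local results across all MPI ranks. */
        double global = 0.0;
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("%d ranks x %d threads -> total %.0f\n",
                   size, omp_get_max_threads(), global);

        MPI_Finalize();
        return 0;
    }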

https://www.anleitungen.rrze.fau.de/hpc/meggie-cluster/