CRIS-Export

Research infrastructure of the RRZE

  • Linux based HPC compute nodes with large main memory / High performance computer TinyFat
    Various manufacturers: TinyFat cluster
    Location: Erlangen
  • Linux based throughput computer / High performance computer Woody
    Various manufacturers: throughput HPC cluster
    Location: Erlangen
  • Linux based GPGPU computer / High performance computer TinyGPU
    Various manufacturers: GPGPU cluster
    Location: Erlangen
  • Linux based parallel computer / High performance computer Emmy
    NEC: HPC cluster 2013 (constr. year 2013)
    Location: Erlangen
  • Linux based parallel computer / High performance computer Meggie
    Megware: HPC cluster 2016 (constr. year 2016)
    Location: Erlangen

Linux based HPC compute nodes with large main memory / High performance computer TinyFat

TinyFat addresses applications that demand large main memory and comprises three node types:
  • 8 nodes with 2x Intel Xeon E5-2643 v4 (“Broadwell”) @3.4 GHz = 12 cores/24 threads and 256 GB main memory
  • 3 nodes with 2x Intel Xeon E5-2680 v4 (“Broadwell”) @2.4 GHz = 28 cores/56 threads and 512 GB main memory
  • 36 nodes with 2x AMD EPYC 7502 (“Rome”) @2.5 GHz = 64 cores/128 threads and 512 GB main memory
All nodes have been purchased/financed by specific groups or special projects. These users have priority access and nodes may be reserved exclusively for them.
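
To put the “large main memory” label in perspective, a rough per-core figure derived from the specs above (simple division, not taken from the cluster documentation):

    \frac{256\,\text{GB}}{12\ \text{cores}} \approx 21\ \text{GB/core}, \quad
    \frac{512\,\text{GB}}{28\ \text{cores}} \approx 18\ \text{GB/core}, \quad
    \frac{512\,\text{GB}}{64\ \text{cores}} = 8\ \text{GB/core}

i.e. well above the roughly 3 GB per core of the Emmy and Meggie compute nodes listed below.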

https://hpc.fau.de/systems-services/systems-documentation-instructions/clusters/tinyfat-cluster/

Linux based throughput computer / High performance computer Woody

The RRZE’s "Woody" is the preferred cluster for serial/single-node throughput jobs.

The cluster has changed significantly over time; its history is covered in the cluster documentation linked below. The current hardware configuration is as follows:
  • 70 compute nodes (w11xx nodes) with Xeon E3-1240 v3 CPUs („Haswell“, 4 cores, HT disabled, 3.4 GHz base frequency), 8 GB RAM, 1 TB HDD – from 09/2013
  • 64 compute nodes (w12xx/w13xx nodes) with Xeon E3-1240 v5 CPUs („Skylake“, 4 cores, HT disabled, 3.5 GHz base frequency), 32 GB RAM, 1 TB HDD – from 04/2016 and 01/2017
  • 112 compute nodes (w14xx/w15xx nodes) with Xeon E3-1240 v6 CPUs („Kaby Lake“, 4 cores, HT disabled, 3.7 GHz base frequency), 32 GB RAM, 960 GB SSD – from Q3/2019

https://www.hpc.fau.de/systems-services/systems-documentation-instructions/clusters/woody-cluster/

Linux based GPGPU computer / High performance computer TinyGPU

TinyGPU addresses the increasing demand for GPGPU-accelerated HPC systems and comprises six node types with different GPUs (mostly consumer-grade models):
  • 7 nodes with 2x Intel Xeon E5-2620 v4 („Broadwell“, 8 cores @2.1 GHz); 64 GB main memory; 4x NVIDIA GeForce GTX 1080 (8 GB memory); 1.8 TB SSD
  • 10 nodes with 2x Intel Xeon E5-2620 v4 („Broadwell“, 8 cores @2.1 GHz); 64 GB main memory; 4x NVIDIA GeForce GTX 1080 Ti (11 GB memory); 1.8 TB SSD
  • 12 nodes with 2x Intel Xeon Gold 6134 („Skylake“, 8 cores @3.2 GHz); 96 GB main memory; 4x NVIDIA GeForce RTX 2080 Ti (11 GB memory); 1.8 TB SSD
  • 4 nodes with 2x Intel Xeon Gold 6134 („Skylake“, 8 cores @3.2 GHz); 96 GB main memory; 4x NVIDIA Tesla V100 (32 GB memory); 2.9 TB SSD
  • 7 nodes with 2x Intel Xeon Gold 6226R („Cascade Lake“, 16 cores @2.9 GHz); 394 GB main memory; 8x NVIDIA GeForce RTX 3080 (10 GB memory); 3.8 TB SSD
  • 8 nodes with 2x AMD EPYC 7662 („Rome“, 64 cores @2.0 GHz); 512 GB main memory; 4x NVIDIA A100 (SXM4/NVLink); 6.4 TB SSD
45 out of the 48 nodes have been purchased/financed by specific groups or special projects. These users have priority access and nodes may be reserved exclusively for them.

https://hpc.fau.de/systems-services/systems-documentation-instructions/clusters/tinygpu-cluster/

Linux based parallel computer / High performance computer Emmy

The RRZE’s Emmy cluster (manufacturer: NEC) is a high-performance compute resource with a high-speed interconnect. It is intended for distributed-memory (MPI) or hybrid parallel programs with medium to high communication requirements.
  • 560 compute nodes, each with two Intel Xeon E5-2660 v2 „Ivy Bridge“ chips (10 cores per chip + SMT) running at 2.2 GHz with 25 MB shared cache per chip and 64 GB of RAM.
  • 2 front end nodes with the same CPUs as the compute nodes.
  • 16 NVIDIA K20 GPGPUs spread over 10 compute nodes, plus 4 nodes with one NVIDIA V100 (16 GB) GPGPU each.
  • parallel filesystem (LXFS) with a capacity of 400 TB and an aggregated parallel I/O bandwidth of > 7000 MB/s
  • fat-tree InfiniBand interconnect fabric with 40 GBit/s bandwidth per link and direction
  • overall peak performance of ca. 234 TFlop/s (191 TFlop/s LINPACK, using only the CPUs).
The Emmy cluster is named after the famous mathematician Emmy Noether, who was born in Erlangen.
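
As an illustration of the programming model Emmy targets (a minimal sketch, not taken from the cluster documentation), the following hybrid MPI+OpenMP program in C reports how ranks and threads are distributed across nodes; the build command and any launch details are assumptions and will differ on the actual system:

    /* Minimal hybrid MPI+OpenMP sketch (illustrative only).
     * Possible build: mpicc -fopenmp hybrid_hello.c -o hybrid_hello   (assumption)
     * Each MPI rank prints the node it runs on and its OpenMP thread count. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, size, namelen;
        char host[MPI_MAX_PROCESSOR_NAME];

        /* MPI_THREAD_FUNNELED: only the main thread issues MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(host, &namelen);

        #pragma omp parallel
        {
            #pragma omp single
            printf("rank %d of %d on %s uses %d OpenMP threads\n",
                   rank, size, host, omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }

On two-socket nodes like these, one rank per socket with ten OpenMP threads each is a common starting point, but the exact pinning and launch configuration are site-specific.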

https://www.hpc.fau.de/systems-services/systems-documentation-instructions/clusters/emmy-cluster/

Linux based parallel computer / High performance computer Meggie

The RRZE’s Meggie cluster (manufacturer: Megware) is a high-performance compute resource with a high-speed interconnect. It is intended for distributed-memory (MPI) or hybrid parallel programs with medium to high communication requirements.
  • 728 compute nodes, each with two Intel Xeon E5-2630v4 „Broadwell“ chips (10 cores per chip) running at 2.2 GHz with 25 MB Shared Cache per chip and 64 GB of RAM.
  • 2 front end nodes with the same CPUs as the compute nodes but 128 GB of RAM.
  • Lustre-based parallel filesystem with a capacity of almost 1 PB and an aggregated parallel I/O bandwidth of > 9000 MB/s.
  • Intel OmniPath interconnect with up to 100 GBit/s bandwidth per link and direction.
  • Measured LINPACK performance of ~481 TFlop/s.
Meggie is designed for parallel programs that use significantly more than one node.
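
As a rough consistency check (an estimate, not taken from the cluster documentation): assuming 16 double-precision FLOPs per cycle and core on Broadwell (two 256-bit FMA units) and the 2.2 GHz base clock, the theoretical CPU peak is

    R_\text{peak} \approx 728\ \text{nodes} \times 20\ \frac{\text{cores}}{\text{node}} \times 2.2\ \text{GHz} \times 16\ \frac{\text{FLOPs}}{\text{cycle}} \approx 512\ \text{TFlop/s},

so the measured ~481 TFlop/s LINPACK result corresponds to roughly 94 % of this estimate.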

https://www.hpc.fau.de/systems-services/systems-documentation-instructions/clusters/meggie-cluster/

 

Publications using Meggie



Research projects using Meggie



Research fields using Meggie


Publications using Emmy



Research projects using Emmy



Research fields using Emmy


Publications using Woody



Research projects using Woody

Research fields using Woody


TinyGPU



Publications using TinyGPU



Research projects using TinyGPU

Research fields using TinyGPU


TinyFAT



Publications using TinyFAT



Research projects using TinyFAT

Research fields using TinyFAT


-/-

Research fields using any HPC cluster (items occur multiple times if a field is mentioned for multiple clusters)

Publications using any HPC cluster (items occur multiple times if a publication is linked to multiple clusters)










Publications using Fritz




Publications using Alex