HPC clusters & systems

NHR@FAU operates several HPC systems that target different application areas. Some systems provide basic Tier3 service for FAU only, while others are jointly operated for NHR and Tier3 FAU access. Tier3 systems/parts are financed by FAU or as DFG Forschungsgroßgerät, while NHR systems/parts are funded by federal and state authorities (BMBF and the Bavarian State Ministry of Science and the Arts, respectively).

Overview

  • Fritz (NHR+Tier3): 944 nodes; target: high-end massively parallel; parallel filesystem: yes; local hard disks: no. Open for NHR and Tier3 after application.
  • Alex (NHR+Tier3): 256 Nvidia A100 and 304 Nvidia A40 GPGPUs in 66 nodes; target: high-end GPGPU; parallel filesystem: yes (but only via Ethernet); local hard disks: yes (NVMe SSDs). Open for NHR and Tier3 after application.
  • Meggie (Tier3): 728 nodes; target: parallel; parallel filesystem: yes; local hard disks: no. RRZE's main workhorse, intended for parallel jobs.
  • Emmy (Tier3): 560 nodes; target: parallel; parallel filesystem: yes; local hard disks: no. EOL; this has been the main cluster for single-node and multi-node parallel jobs.
  • Woody (Tier3): 248 nodes; target: serial throughput; parallel filesystem: no; local hard disks: yes. Cluster with fast (single- and dual-socket) CPUs for serial throughput workloads.
  • TinyGPU (Tier3): 48 nodes; target: GPU; parallel filesystem: no; local hard disks: yes (SSDs). The nodes are equipped with NVIDIA GPUs (mostly 4 GPUs per node).
  • TinyFat (Tier3): 47 nodes; target: large memory requirements; parallel filesystem: no; local hard disks: yes (SSDs). For applications that require large amounts of memory; each node has 256 or 512 GB of main memory.

Alex (installed 2021/2022)

Nodes: 20 GPGPU nodes, each with two AMD EPYC 7713 “Milan” processors (64 cores per chip) running at 2.0 GHz with 256 MB shared L3 cache per chip and 1,024 GB of DDR4-RAM, eight Nvidia A100 GPGPUs (each 40 GB HBM2 @ 1,555 GB/s; HGX board with NVLink; 9.7 TFlop/s in FP64 or 19.5 TFlop/s in FP32), two HDR200 InfiniBand HCAs, 25 GbE, and 14 TB on local NVMe SSDs.

38 GPGPU nodes, each with two AMD EPYC 7713 “Milan” processors (64 cores per chip) running at 2.0 GHz with 256 MB shared L3 cache per chip, 512 GB of DDR4-RAM, eight Nvidia A40 GPGPUs (each with 48 GB GDDR6 @ 696 GB/s; 37.42 TFlop/s in FP32), 25 GbE, and 7 TB on local NVMe SSDs.

Linpack Performance: 1.7 PFlop/s
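
As a quick back-of-the-envelope check, the per-node aggregates of the A100 partition follow directly from the per-GPU figures quoted above. The short sketch below only multiplies the nominal peak numbers; it is not a measured result:

    # Per-node aggregates for one Alex A100 node, derived from the per-GPU
    # figures quoted above (nominal peak values, not measurements).
    gpus_per_node = 8
    fp64_per_gpu_tflops = 9.7      # Nvidia A100, FP64
    fp32_per_gpu_tflops = 19.5     # Nvidia A100, FP32
    hbm2_bw_per_gpu_gbs = 1555     # Nvidia A100, 40 GB HBM2

    print(f"FP64 peak per node: {gpus_per_node * fp64_per_gpu_tflops:.1f} TFlop/s")    # 77.6
    print(f"FP32 peak per node: {gpus_per_node * fp32_per_gpu_tflops:.1f} TFlop/s")    # 156.0
    print(f"GPU memory bandwidth per node: {gpus_per_node * hbm2_bw_per_gpu_gbs / 1000:.2f} TB/s")  # 12.44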

Fritz (installed 2021/2022)

Nodes: 944 compute nodes, each with two Intel Xeon Platinum 8360Y “Ice Lake” chips (36 cores per chip) running at 2.4 GHz with 54 MB shared L3 cache per chip and 256 GB of DDR4-RAM.
Parallel file system: Lustre-based parallel filesystem with a capacity of about 3.5 PB and an aggregated parallel I/O bandwidth of > 20 GB/s.
Network: Blocking HDR100 InfiniBand with up to 100 GBit/s bandwidth per link and direction.
Linpack Performance: ## PFlop/s
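
For orientation, the aggregate core count and main-memory capacity of Fritz can be derived from the node specification above; the sketch below is a simple tally of the nominal figures and ignores login and service nodes:

    # Aggregate core and memory counts for Fritz, derived from the node
    # specification above (nominal figures; login/service nodes not counted).
    nodes = 944
    sockets_per_node = 2
    cores_per_socket = 36          # Intel Xeon Platinum 8360Y "Ice Lake"
    mem_per_node_gb = 256

    total_cores = nodes * sockets_per_node * cores_per_socket
    total_mem_tb = nodes * mem_per_node_gb / 1024

    print(f"Total cores:       {total_cores}")          # 67968
    print(f"Total main memory: {total_mem_tb:.0f} TB")  # 236 TB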

Meggie (installed 2017)

Nodes: 728 compute nodes, each with two Intel Xeon E5-2630v4 “Broadwell” chips (10 cores per chip) running at 2.2 GHz with 25 MB shared cache per chip and 64 GB of RAM.
Parallel file system: Lustre-based parallel filesystem with a capacity of almost 1 PB and an aggregated parallel I/O bandwidth of > 9000 MB/s.
Network: Intel OmniPath interconnect with up to 100 GBit/s bandwidth per link and direction.
Linpack Performance: 481 TFlop/s

Emmy (EOL; 2013-2022)

Nodes: 560 compute nodes, each with two Xeon E5-2660 v2 “Ivy Bridge” chips (10 cores per chip + SMT) running at 2.2 GHz with 25 MB shared cache per chip and 64 GB of RAM
Parallel file system: LXFS with a capacity of 400 TB and an aggregated parallel I/O bandwidth of > 7000 MB/s
Network: Fat-tree InfiniBand interconnect fabric with 40 GBit/s bandwidth per link and direction
Linpack Performance: 191 TFlop/s

Test cluster

For the evaluation of microarchitectures and for research purposes we also maintain a cluster of test machines. We try to always have at least one machine of every architecture that is relevant in HPC. Currently, all recent Intel processor generations are available. We also frequently get early-access prototypes for benchmarking.
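
A minimal, generic way to check which microarchitecture a given test node provides is to read the CPU model string from /proc/cpuinfo (this assumes a Linux node; it is a convenience sketch, not an NHR@FAU-specific tool):

    # Report the CPU model of the current Linux node, e.g. to verify which
    # microarchitecture of the test cluster a job has landed on.
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("model name"):
                print(line.split(":", 1)[1].strip())
                break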
