
NHR application rules – NHR@FAU

Since January 2021, NHR@FAU has been one of the (now nine) centers of the NHR Alliance.

The final rules and workflows for applying for an NHR project or account have not yet been fully settled within the alliance. A joint portal of all NHR centers for compute time applications is planned. In the meantime, you will find our preliminary local conditions below.

Contacting NHR@FAU

You can reach NHR@FAU by email at hpc-support@fau.de. Availability by phone cannot be guaranteed, as many of us are still partially working from home.

Individual consulting using Zoom, MS Teams, BigBlueButton, or DFN-Conf can be arranged on short notice. Just send an email to hpc-support@fau.de.

Offerings of NHR@FAU

Scientific offerings

NHR@FAU offers support especially in the areas of performance engineering and single-node performance analysis. The application focus is on atomistic simulations, i.e., molecular dynamics, chemistry, and certain areas of materials science. Particular expertise is available for GROMACS and Amber.

Compute resources

NHR@FAU offers two types of compute resources:

  • GPGPU cluster “Alex” with Nvidia A40 and Nvidia A100 GPGPUs
  • Parallel computer “Fritz” with Intel Xeon Platinum 8360Y “Ice Lake” processors and an HDR100 interconnect within islands of 64 nodes (i.e., 4,608 cores; see the short check below this list)
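
As a quick plausibility check of the island core count quoted above, the short Python sketch below simply multiplies it out. The per-node core count (two 36-core Xeon Platinum 8360Y processors, i.e. 72 cores per node) is an assumption derived from the processor model named above and is not stated explicitly on this page.

    # Back-of-the-envelope check of the Fritz island size quoted above.
    # Assumption: each Fritz node has two Intel Xeon Platinum 8360Y
    # processors with 36 cores each (72 cores per node).
    sockets_per_node = 2
    cores_per_socket = 36
    nodes_per_island = 64

    cores_per_node = sockets_per_node * cores_per_socket    # 72
    cores_per_island = nodes_per_island * cores_per_node    # 4608

    print(f"cores per node:   {cores_per_node}")
    print(f"cores per island: {cores_per_island}")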

Types of projects at NHR@FAU

Regular projects by default have a duration of 12 months. The review process typically takes less than three months. Test/porting projects are decided within 3-4 weeks.

Details of the resource limits listed in the table below are still subject to change; a small worked example of the budget arithmetic follows the table.

Project types and (annual) resource limits
Test / porting project (typical project runtime not more than 3-4 months)
  • Possible resources on “Alex”: up to 3,000 GPU hours (A40 or A100)
  • Possible resources on “Fritz”: up to 500,000 core hours
  • Type of review: technical review by NHR@FAU
  • Remarks: rolling call; a scientific advisor from NHR@FAU (liaison scientist) is recommended.

Normal compute time projects for granted DFG or BMBF projects
  • Possible resources on “Alex”: 6,000 – 60,000 GPU hours (A40) or 4,000 – 40,000 GPU hours (A100)
  • Possible resources on “Fritz”: 1 – 10 million core hours
  • Type of review: technical review by NHR@FAU plus one simplified scientific review
  • Remarks: rolling call; a scientific advisor from NHR@FAU (liaison scientist) can be requested.

Normal compute time projects without a granted DFG or BMBF project
  • Possible resources on “Alex”: 6,000 – 60,000 GPU hours (A40) or 4,000 – 40,000 GPU hours (A100)
  • Possible resources on “Fritz”: 1 – 10 million core hours
  • Type of review: technical review by NHR@FAU plus two external scientific reviews
  • Remarks: rolling call; a scientific advisor from NHR@FAU (liaison scientist) can be requested.

Large compute time projects
  • Possible resources on “Alex”: 60,000 – 180,000 GPU hours (A40) or 40,000 – 120,000 GPU hours (A100)
  • Possible resources on “Fritz”: 10 – 30 million core hours
  • Type of review: technical review by NHR@FAU plus two external scientific reviews
  • Remarks: next cut-off deadline for large-scale applications: July 1st; a scientific advisor from NHR@FAU (liaison scientist) is mandatory.

Follow-up projects
  • Possible resources on “Alex”: as above
  • Possible resources on “Fritz”: as above
  • Type of review: technical review by NHR@FAU plus external scientific reviews of the proposal, including an intermediate report
  • Remarks: a scientific advisor from NHR@FAU (liaison scientist) can be requested.
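
To get a feeling for what these limits mean in practice, the short Python sketch below converts a planned series of jobs into GPU hours (for Alex) and core hours (for Fritz). The job sizes in the example are made-up illustrations, not recommendations.

    # Rough budget arithmetic for an NHR@FAU compute time application.
    # All job sizes below are illustrative assumptions only.

    def gpu_hours(num_gpus: int, wall_hours: float, num_runs: int) -> float:
        """GPU hours consumed by a series of GPU jobs on Alex."""
        return num_gpus * wall_hours * num_runs

    def core_hours(num_nodes: int, cores_per_node: int,
                   wall_hours: float, num_runs: int) -> float:
        """Core hours consumed by a series of parallel jobs on Fritz."""
        return num_nodes * cores_per_node * wall_hours * num_runs

    # 50 MD runs on 4 A100 GPUs, 12 hours each -> 2,400 GPU hours,
    # which still fits a test/porting project (up to 3,000 GPU hours).
    print(gpu_hours(num_gpus=4, wall_hours=12, num_runs=50))

    # 20 runs on 8 Fritz nodes (72 cores each), 24 hours each
    # -> 276,480 core hours, within the 500,000 core-hour test/porting limit.
    print(core_hours(num_nodes=8, cores_per_node=72, wall_hours=24, num_runs=20))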

Projects falling into the “large compute time projects” category should also consider applying for compute time at one of the three federal Tier1 HPC centers of the Gauss Centre for Supercomputing (GCS).

Template for NHR@FAU compute time application proposals

In the course of 2022, a central online application portal will hopefully become operational. Until then, please use our template for NHR@FAU compute time proposals and send it by email from your university account to hpc-support@fau.de.

The document contains detailed information on what to fill in. Depending on the project type, some sections are optional and the expected length of some sections may also vary; see the notes in the document for details and contact us via hpc-support@fau.de for further assistance.

template for NHR@FAU compute time proposals

Acknowledgement of NHR@FAU

Please use the following formulation for acknowledging the resources and the support provided by NHR@FAU:

The authors gratefully acknowledge the scientific support and HPC resources provided by the Erlangen National High Performance Computing Center (NHR@FAU) of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) under the NHR project <ID of your NHR@FAU project>. NHR funding is provided by federal and Bavarian state authorities. NHR@FAU hardware is partially funded by the German Research Foundation (DFG) – 440719683.

Please also send electronic copies of resulting publications by email to nhr-redaktion@lists.fau.de. Proper acknowledgement of our services is important for our center’s evaluation and its future funding.

Erlangen National High Performance Computing Center (NHR@FAU)
Martensstraße 1
91058 Erlangen
Germany