Emmy parallel cluster (Tier3)


[Photo: aisle in the server room with racks full of servers on both sides. Caption: The Emmy cluster at RRZE]

This cluster was shut down in September 2022.

RRZE’s Emmy cluster (NEC) is a high-performance compute resource with a high-speed interconnect. It is intended for distributed-memory (MPI) or hybrid parallel programs with medium to high communication requirements.

  • 560 compute nodes, each with two Intel Xeon E5-2660 v2 “Ivy Bridge” chips (10 cores per chip + SMT) running at 2.2 GHz with 25 MB shared cache per chip and 64 GB of RAM
  • 2 front end nodes with the same CPUs as the compute nodes.
  • 16 Nvidia Tesla K20 GPGPUs spread over 10 compute nodes.
  • 4 Nvidia Tesla V100 GPGPUs (16 GB, PCIe) spread over 4 compute nodes.
  • parallel filesystem (LXFS) with a capacity of 400 TB and an aggregated parallel I/O bandwidth of > 7000 MB/s
  • fat-tree InfiniBand interconnect fabric with 40 GBit/s bandwidth per link and direction
  • overall peak performance of ca. 234 TFlop/s (191 TFlop/s LINPACK, using only the CPUs).

The Emmy cluster is named after the famous mathematician Emmy Noether, who was born here in Erlangen.

[Photo: side view of server racks in the server room, with posters about Emmy Noether. Caption: The Emmy cluster at RRZE]

Emmy is designed for running parallel programs that use significantly more than one node. Jobs requiring less than a full node are not supported by RRZE.

This page covers the following topics:

  • Access, User Environment, and File Systems
    • Access to the machine
    • File systems
    • Batch processing
    • MPI
  • Further Information
    • Intel Xeon E5-2660 v2 “Ivy Bridge” Processor
    • InfiniBand Interconnect Fabric

Access, User Environment, and File Systems

Access to the machine

This cluster was shut down in September 2022.

Users can connect to emmy.rrze.fau.de by SSH and will be randomly routed to one of the two front ends. All systems in the cluster, including the front ends, have private IP addresses in the 10.28.8.0/22 range. Thus they can only be accessed directly from within the FAU networks. If you need access from outside of FAU, you have to connect for example to the dialog server cshpc.rrze.fau.de first and then ssh to emmy from there. While it is possible to ssh directly to a compute node, a user is only allowed to do this while they have a batch job running there. When all batch jobs of a user on a node have ended, all of their processes, including any open shells, will be killed automatically.
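
For illustration, logging in might look like the following sketch (username is a placeholder for your HPC account; the ProxyJump variant is just one convenient OpenSSH option):

    # From within the FAU network, connect directly:
    ssh username@emmy.rrze.fau.de

    # From outside FAU, hop over the dialog server first:
    ssh username@cshpc.rrze.fau.de
    ssh emmy.rrze.fau.de

    # Or do both hops in one step with OpenSSH's ProxyJump:
    ssh -J username@cshpc.rrze.fau.de username@emmy.rrze.fau.de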

The login and compute nodes run CentOS (which is basically Red Hat Enterprise Linux without the support). As on most other RRZE HPC systems, a modules environment is provided to facilitate access to software packages.
Type “module avail” to get a list of available packages.
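
A typical interaction with the modules environment could look like this sketch (the module name below is only an example; use whatever “module avail” lists on the system):

    module avail            # list all available software packages
    module load intel64     # load a package into the environment (example name)
    module list             # show the currently loaded modules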

The shell for all users on Emmy is always bash. This is different from our other clusters and the rest of RRZE, where the shell used to be tcsh unless you had requested it to be changed.

File Systems

The following table summarizes the available file systems and their features. It is only an excerpt from the description of the HPC file systems.

File system overview for the Emmy cluster:
  • /home/hpc, accessible via $HOME: storage of source code, input data, and important results; NFS on central servers, small; backup: yes, plus snapshots; data lifetime: account lifetime; quota: yes (restrictive)
  • /home/vault, accessible via $HPCVAULT: medium- to long-term, high-quality storage; central servers with HSM; backup: yes, plus snapshots; data lifetime: account lifetime; quota: yes
  • /home/{woody, saturn, titan, janus, atuin}, accessible via $WORK: short- to medium-term storage or small files; central NFS server; backup: no; data lifetime: account lifetime; quota: yes
  • /elxfs, accessible via $FASTTMP: high-performance parallel I/O and short-term storage; only available for existing legacy users due to stability issues (8/2021); LXFS (Lustre) parallel file system via InfiniBand, 400 TB; backup: no; data lifetime: high watermark deletion (the system has been out of warranty for many years); quota: no

Please note the following differences to our older clusters:

  • There is no cluster-local NFS server as on previous clusters (e.g. /home/woody).
  • Unlike on previous clusters, the nodes do not have any local hard disk drives (exception: the GPU nodes).
  • /tmp lies in RAM, so it is absolutely NOT possible to store more than a few MB of data there.

NFS file system $HOME

When connecting to one of the front end nodes, you’ll find yourself in your regular RRZE $HOME directory (/home/hpc/...). There are relatively tight quotas there, so it will most probably be too small for the inputs and outputs of your jobs. It does, however, offer a lot of nice features, like fine-grained snapshots, so use it for “important” stuff, e.g. your job scripts or the source code of the program you’re working on. See the HPC file systems page for a more detailed description of the features.

Parallel file system $FASTTMP

The cluster’s parallel file system is mounted on all nodes under /elxfs/$GROUP/$USER/ and available via the $FASTTMP environment variable for existing legacy users only (i.e. people who already had data on $FASTTMP before 8/2021). It supports parallel I/O using the MPI-I/O functions and can be accessed with an aggregate bandwidth of > 7000 MB/s (and even much more if caching effects can be exploited).

The parallel file system is strictly intended to be a high-performance short-term storage, so a high watermark deletion algorithm is employed: When the filling of the file system exceeds a certain limit (e.g. 80%), files will be deleted starting with the oldest and largest files until a filling of less than 60% is reached. Be aware that the normal tar -x command preserves the modification time of the original file instead of the time when the archive is unpacked. So unpacked files may become one of the first candidates for deletion. Use tar -mx or touch in combination with find to work around this. Be aware that the exact time of deletion is unpredictable.
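
A minimal sketch of both workarounds (archive and directory names are made up):

    # Unpack with -m so files get the current time stamp instead of the
    # (old) modification time stored in the archive:
    tar -mxf results.tar.gz

    # Or fix up the timestamps of files that are already unpacked:
    find /elxfs/$GROUP/$USER/results -type f -exec touch {} +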

Note that parallel file systems are generally not made for handling large numbers of small files. This is by design: parallel file systems achieve their amazing speed by writing to multiple different servers at the same time. However, they do that in blocks, in our case 1 MB. That means that for a file smaller than 1 MB, only one server will ever be used, so the parallel file system can never be faster than a traditional NFS server; on the contrary, due to the larger overhead it will generally be slower. Parallel file systems can only show their strengths with files that are at least a few megabytes in size, and they excel when very large files are written by many nodes simultaneously (e.g. checkpointing). For that reason, we have set a limit on the number of files you can store there.

Batch processing

This cluster was shut down in September 2022.

As with all production clusters at RRZE, resources are controlled through a batch system. The front ends can be used for compiling and very short serial test runs, but everything else has to go through the batch system to the cluster.

Please see the batch system description for further details.

The following queues are available on this cluster:

Queues on the Emmy cluster:
  • route: walltime N/A, nodes N/A, all users. Default router queue; sorts jobs into the execution queues.
  • devel: walltime 0 – 01:00:00, 1 – 8 nodes, all users. Some nodes are reserved for this queue during working hours.
  • work: walltime 01:00:01 – 24:00:00, 1 – 64 nodes, all users. The “workhorse” queue.
  • big: walltime 01:00:01 – 24:00:00, 1 – 560 nodes, special users only. Not active all the time, as it causes quite some waste; users can get access for benchmarking or after proving that they can really make use of more than 64 nodes with their codes.
  • special: walltime 0 – infinity, 1 – all nodes, special users only. Direct job submission with -q special.

As full nodes have to be requested, you always need to specify -l nodes=<nnn>:ppn=40 on qsub.
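
As a sketch only, a job script for the Torque-style batch system could look like this (job name, node count, walltime, and program name are made up; see the batch system description for the authoritative syntax):

    #!/bin/bash
    #PBS -N my_emmy_job
    #PBS -l nodes=4:ppn=40
    #PBS -l walltime=06:00:00

    # The batch system starts the job in $HOME; change to the submit directory.
    cd $PBS_O_WORKDIR

    ./my_program

    # Submit with:  qsub job_script.sh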

All nodes have properties that you can use to request nodes of a certain type. This is mostly needed to request one of the GPU nodes. You request nodes with a certain property by appending :property to your request, e.g. -l nodes=<nnn>:ppn=40:v100.
The following properties are available:

Node properties on Emmy:
  • :k20m: nodes with one or two NVidia Kepler cards (10 nodes qualify)
  • :k20m1x: nodes with one NVidia Kepler card (4 nodes qualify)
  • :k20m2x: nodes with two NVidia Kepler cards (6 nodes qualify)
  • :v100: nodes with one NVidia Tesla V100 (16 GB) card (4 nodes qualify)
  • :anygpu: nodes with any NVidia GPU (14 nodes qualify)

Properties can also be used to request a certain CPU clock frequency. This is not something you will usually want to do, but it can be used for certain kinds of benchmarking. Note that you cannot make the CPUs go any faster, only slower, as the default already is the turbo mode, which makes the CPU clock as fast as it can (up to 2.6 GHz) without exceeding its thermal or power budget. So please do not use any of the following options unless you know what you’re doing. The available options are: :noturbo to disable Turbo Mode, :f2.2 to request 2.2 GHz (this is equivalent to :noturbo), :f2.1 to request 2.1 GHz, and so on in 0.1 GHz steps down to :f1.2 to request 1.2 GHz.

To request access to the hardware performance counters (i.e. to use likwid-perfctr), you have to add the property :likwid. Otherwise you will get the error message Access to performance monitoring registers locked from likwid-perfctr. The property is not required (and should also not be used) for other parts of the LIKWID suite, e.g. it is not required for likwid-pin.
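
A hedged example of how this could be used (the performance group and core selection are only illustrations; see “likwid-perfctr -a” for the groups available on the system):

    # Request a node with unlocked hardware performance counters:
    qsub -l nodes=1:ppn=40:likwid job_script.sh

    # Inside the job, measure e.g. double-precision FLOP rates on cores 0-9:
    likwid-perfctr -C 0-9 -g FLOPS_DP ./my_program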

MPI

Intel MPI is recommended, but OpenMPI is available, too. For more details on running MPI parallel applications, please refer to the documentation on parallel computing.
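
A minimal sketch of an MPI launch inside a batch job, assuming Intel MPI (the module name and process count are placeholders; check “module avail” and the parallel computing documentation):

    module load intelmpi           # example module name
    # 4 nodes x 20 physical cores = 80 MPI processes (illustrative):
    mpirun -np 80 ./my_mpi_program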

Further Information

Intel Xeon E5-2660 v2 “Ivy Bridge” Processor

Intel’s ark lists some technical details about the Xeon E5-2660 v2 processor.

InfiniBand Interconnect Fabric

The InfiniBand network on Emmy is a quad data rate (QDR) network, i.e. the links run at 40 GBit/s in each direction. This is identical to the network on LiMa. The network is fully non-blocking, i.e. the backbone is capable of handling the maximum amount of traffic coming in through the client ports without any congestion. However, because InfiniBand uses static routing, i.e. once a route is established between two nodes it does not change even if the load on the backbone links changes, it is possible to generate traffic patterns that cause congestion on individual links. This is, however, not likely to happen with normal user jobs.
