
Dialog server


Because practically all HPC systems at RRZE use private IP addresses that can only be reached from within the FAU network, the dialog servers are the entry point for users who want to access the clusters from outside. An alternative is VPN, but that is usually more hassle.

VPN is not available for NHR users.

Login to the dialog servers is via SSH.
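
For example, to log in from outside the university network, connect to the dialog server listed below (youraccount is a placeholder for your HPC account name):

  ssh youraccount@cshpc.rrze.fau.de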

Available servers

  • cshpc.rrze.fau.de – cshpc is a Linux system that permits login for all HPC accounts. A more detailed description of this system can be found below.
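
Since the clusters themselves sit on private IP addresses, cshpc can also serve as an SSH jump host. The following ~/.ssh/config sketch uses OpenSSH's ProxyJump option; youraccount and the cluster host name are placeholders, so check the documentation of the respective cluster for the real login node name:

  Host cshpc
      HostName cshpc.rrze.fau.de
      User youraccount

  Host mycluster
      # Placeholder: substitute the login node of the cluster you use.
      HostName mycluster.example
      User youraccount
      ProxyJump cshpc

With this in place, ssh mycluster transparently tunnels through the dialog server.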

cshpc

Various software packages, e.g. web browsers, mail clients, PDF readers, and gnuplot, are available on cshpc.

The standard file systems (/home/hpc, /home/vault, $WORK – essentially everything that starts with /home/...) are directly reachable from this system as well, so you can easily copy data around using scp.
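
For example, to copy a local archive to your cluster home directory via cshpc (the account name and the group/user path components are placeholders):

  scp results.tar.gz youraccount@cshpc.rrze.fau.de:/home/hpc/yourgroup/youraccount/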

System specs
  CPU: 16 cores @ 2.60 GHz (2 x Xeon E5-2650 v2)
  Memory: 128 GB
  Operating system: Ubuntu LTS (20.04 as of November 2022)
  Network connectivity: 2 x 10 GBit/s

NoMachine NX on cshpc

cshpc can also be used as a server for NoMachine NX. NX enables the use of a graphical desktop environment and applications (e.g. Firefox) even over relatively slow connections (e.g. hotel Wi-Fi abroad). In addition, it is possible to “park” sessions and resume them from elsewhere later, so in a way it is a kind of screen for X: your detached session keeps running on the server, and when you reattach later, all the applications you had open are still open.
To use this, you will need the NoMachine Enterprise Client, which is available for Windows, Linux, and macOS and can be downloaded for free from the NoMachine website.

The most important settings when you create a new connection in the client are:

  • Protocol: SSH
  • Host: cshpc.rrze.fau.de
  • Port: 22
  • Use the system login
  • Authentication by password

Alternatively, you can open the provided configuration file in your NoMachine client.

While it is in principle possible to use Gnome or KDE4/Plasma desktops on cshpc, we do not recommend it. These desktops nowadays require hardware 3D acceleration to be bearable, which the remote session cannot offer, so using them will feel like wading through molasses. In addition, in our experience Plasma crashes randomly at every second click when no 3D acceleration is available. We therefore recommend more lightweight desktop environments like XFCE or Trinity (KDE3). For the latter, you’ll need to click “create a new custom session” in the client and use the following settings:

  • Run the following command: starttde
  • Run the command in a virtual desktop

Because this system is shared by many users, it should go without saying that you need to be considerate of others. Do not run anything that consumes gigabytes of memory or long-running calculations there.
