ANSYS CFX

ANSYS CFX is a general purpose Computational Fluid Dynamics (CFD) code. It provides a wide variety of physical models for turbulent flows, acoustics, Eulerian and Lagrangian multiphase flow modeling, radiation, combustion and chemical reactions, heat and mass transfer including CHT (conjugate heat transfer in solid domains). It is mostly used for simulating turbomachinery, such as pumps, fans, compressors and gas and hydraulic turbines.

Please note that the clusters do not come with any license. If you want to use ANSYS products on the HPC clusters, you must have access to suitable licenses; these can be purchased directly from RRZE. ANSYS HPC licenses are required to use the HPC resources efficiently.

Availability / Target HPC systems

Different versions of all ANSYS products are available via the modules system and can be listed with module avail ansys. A specific version can be loaded, e.g. with module load ansys/2020R1.

We mostly install the current versions automatically, but if something is missing, please contact hpc-support@fau.de.
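For example, a typical sequence on a login node looks like this (the version number is only an example; module avail ansys shows what is actually installed):

module avail ansys
module load ansys/2020R1
cfx5solve -help   # verify that the CFX commands are now in your PATH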

Production jobs should be run on the parallel HPC systems in batch mode.

ANSYS CFX can also be used in interactive GUI mode for serial pre- and/or post-processing on the login nodes (Linux: SSH option “-X”; Windows: using PuTTY and Xming for X11 forwarding). This should only be used for quick changes to the simulation setup. It is NOT permitted to run computationally intensive ANSYS CFX simulations or serial/parallel post-processing sessions with large memory consumption on the login nodes.
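A minimal sketch of such an interactive session from a Linux machine (the hostname is a placeholder; use the dialog server you normally connect to):

ssh -X <username>@<dialog-server>
module load ansys/2020R1
cfx5pre   # the GUI is displayed on your local machine via X11 forwarding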

Alternatively, ANSYS CFX can be run interactively with a GUI on TinyFat (for large main memory requirements) or on a compute node.
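One way to do this is to request an interactive Slurm job and start the launcher inside the allocation. The following is only a sketch; the partition name, node count, and time limit are placeholders and depend on the cluster and your project:

salloc --nodes=1 --time=02:00:00 --partition=<partition>
# depending on the cluster configuration, salloc drops you into a shell on the
# allocated compute node; otherwise use srun --pty bash to get there first
module load ansys/2020R1
cfx5launch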

Getting started

The (graphical) CFX launcher is started by typing

cfx5launch

on the command line. If you want to use the separate pre- or post-processing capabilities, you can also launch cfx5pre or cfx5post, respectively.

For running simulations in batch mode on the HPC systems, use the

cfx5solve

command. You can find out the available parameters via cfx5solve -help. One example call to use in your batch script would be

cfx5solve -batch -par-dist $NODELIST -double -def <solver input file>

The number of processes and the hostnames of the compute nodes to be used are defined in $NODELIST; how to construct this list is shown in the example script below. Using SMT threads is not recommended.
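For illustration, a list for two nodes with 20 processes each would look like the following (the hostnames are made up; the real names are taken from Slurm at runtime, as shown in the example script):

node0101*20,node0102*20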

Notes

  • We recommend writing automatic backup files (every 6 to 12 hours) for longer runs so that the simulation can be restarted in case of a job or machine failure. This can be specified in Output Control → User Interface → Backup Tab.
  • Furthermore, it is recommended to use the “Elapsed Wall Clock Time Control” in the job definition in ANSYS CFX-Pre (Solver Control → Elapsed Wall Clock Time Control → Maximum Run Time → <24h). Also plan enough buffer time for writing the final output; depending on your application, this can take quite a long time.
  • Please note that the default (Intel) MPI startup mechanism does not work on meggie; the solver will hang without producing any output. Use the option -start-method "Open MPI Distributed Parallel" to prevent this.

Sample job scripts

All job scripts have to contain the following information:

  • Resource definitions for the queuing system (see the batch processing documentation for details)
  • Load the ANSYS environment module
  • Generate a list with the names of the hosts of the current simulation run to tell CFX on which nodes it should run (see the example below)
  • Execute cfx5solve with appropriate command line parameters (available options via cfx5solve -help)

Distributed parallel job on meggie

#!/bin/bash -l
#SBATCH --job-name=mycfx
#SBATCH --nodes=4
#SBATCH --time=24:00:00
#SBATCH --export=NONE

unset SLURM_EXPORT_ENV
# specify the name of your input-def file
DEFFILE="example.def"
# number of cores to use per node
PPN=20
# load environment module
module load ansys/XXXX
# generate node list in the form host1*PPN,host2*PPN,...
NODELIST=$(for node in $(scontrol show hostnames "$SLURM_JOB_NODELIST" | uniq); do echo -n "${node}*${PPN},"; done | sed 's/,$//')
# execute cfx with command line parameters (see cfx5solve -help for all available parameters)
cfx5solve -batch -double -par-dist "$NODELIST" -def "$DEFFILE" -start-method "Open MPI Distributed Parallel"
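Assuming the script above is saved as, for example, cfx_meggie.sh (the file name is arbitrary), it is submitted to the queuing system with:

sbatch cfx_meggie.sh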

Further information

  • Documentation is available within the application help manual. Further information is provided through the ANSYS Customer Portal for registered users.
  • More in-depth documentation is available at LRZ. Please note: not everything is directly applicable to HPC systems at RRZE!

Mentors

  • Dr.-Ing. Katrin Nusser, RRZE, hpc-support@fau.de
  • please volunteer!

 
