ANSYS Fluent

Fluent is a general-purpose Computational Fluid Dynamics (CFD) code developed by ANSYS. It is used for a wide range of engineering applications, as it provides a variety of physical models for turbulent flows, acoustics, Eulerian and Lagrangian multiphase flow modeling, radiation, combustion and chemical reactions, and heat and mass transfer.

Please note that the clusters do not come with any license. To use ANSYS products on the HPC clusters, you must have access to suitable licenses; these can be purchased directly from RRZE. ANSYS HPC licenses are required to use the HPC resources efficiently.

Availability / Target HPC systems

Different versions of all ANSYS products are available via the modules system and can be listed with module avail ansys. A specific version can be loaded, e.g. with module load ansys/2020R1.

We mostly install the current versions automatically, but if something is missing, please contact hpc-support@fau.de.
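
For example, to list the installed versions and load one of them (the version shown is only illustrative; check the module output for what is actually installed):

module avail ansys
module load ansys/2020R1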

Production jobs should be run on the parallel HPC systems in batch mode.

ANSYS Fluent can also be used in interactive GUI mode for serial pre- and/or post-processing on the login nodes (Linux: SSH option “-X”; Windows: using PuTTY and Xming for X11 forwarding). This should only be used for quick changes to the simulation setup. However, most of these changes can also be made in batch mode; please refer to the documentation of the Fluent-specific TUI (text user interface). Please be aware that ANSYS Fluent loads the full mesh into the login node’s memory when you open a simulation file, so do this only with comparably small cases. It is NOT permitted to run computationally intensive ANSYS Fluent simulations or serial/parallel post-processing sessions with large memory consumption on the login nodes.

Alternatively, Fluent can be run interactively with a GUI on TinyFat (for large main-memory requirements) or on a compute node.
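
As a rough sketch, an interactive session on a compute node can be requested via Slurm; the node count and time limit below are placeholders and depend on the target cluster, and X11 forwarding may additionally require the --x11 flag, depending on the Slurm configuration:

# request one node interactively for one hour (placeholder values)
salloc --nodes=1 --time=01:00:00
# once the allocation is granted, load the module and start the GUI
module load ansys/XXXX
fluent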

Getting started

The (graphical) Fluent launcher is started by typing

fluent

on the command line. Here, you have to specify the properties of the simulation run: 3D or 2D, single or double precision, meshing or solver mode, and serial or parallel mode. When using Fluent in a batch job, all these properties have to be specified on the command line, e.g.

fluent 3ddp -g -t 20 -cnf="$NODELIST"

This launches a 3D, double-precision simulation. For a 2D, single-precision simulation, 2dsp has to be specified. The -g option suppresses the GUI and all graphics. If your simulation should produce graphical output, e.g. a plot of the convergence history in PNG or JPG format, use -gu -driver null instead.

The number of processes is set with the -t option. It corresponds to the number of physical CPU cores to be used; using SMT threads in addition is not recommended. The hostnames of the compute nodes and the number of processes to be launched on each node have to be specified in a host list via the -cnf option. Please refer to the sample scripts below for more information.
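
The host list passed via -cnf is a comma-separated list of hostname:processes entries, as generated by the sample scripts below. The hostnames in the following example are purely illustrative:

m0101:20,m0102:20,m0103:20,m0104:20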

For more information about the available parameters, use fluent -help.

Journal files

In contrast to ANSYS CFX and other simulation tools, submitting the .cas file alone is not sufficient to run a simulation on a parallel cluster. For a proper simulation run in a batch job, a simple journal file (.jou) is required that specifies the individual solution steps.

Such a basic journal file contains a number of so-called TUI commands for ANSYS Fluent (TUI = Text User Interface). Details on these commands can be found in the ANSYS Fluent documentation, Part II: Solution Mode; Chapter 2: Text User Interface (TUI).

Every setting that can be made in the GUI also has a corresponding TUI command. You can therefore change the configuration of the simulation during the run, for example by adjusting the solution time step after a specified number of iterations. A simple example journal file for a steady-state simulation is given below. Please note that running a transient simulation requires different commands for time integration (a sketch is given below the steady-state example). The same applies when restarting the simulation from a previous run or initialization.

The journal file has to be specified at application launch with -i <journal-file>.

Notes

  • ANSYS Fluent does not consist of separate pre-processing, solver, and post-processing applications as, e.g., ANSYS CFX does. Everything is included in one single-window GUI.
  • The built-in Fluent post-processing can also be run in parallel mode. Normally, far fewer processes are needed than for simulation runs. However, do not use this on the login nodes!
  • We recommend writing automatic backup files (every 6 to 12 hours) for longer runs to be able to restart the simulation in case of a job or machine failure. This can be specified in ANSYS Fluent under Solution → Calculation Activities → Autosave Every (Iterations); a journal-file equivalent is sketched after this list.
  • Fluent cannot stop a simulation based on elapsed time. You therefore have to estimate the number of iterations that will fit into your desired runtime. The above auto-save can also be useful as a precaution. Also plan enough buffer time for writing the final output; depending on your application, this can take quite a long time!
  • Please note that for some versions, the default (Intel) MPI startup mechanism does not work on meggie. This leads to the solver hanging without producing any output. Use the option -mpi=openmpi to prevent this.
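
As a sketch, auto-saving can also be enabled from a journal file via the TUI; the frequency below is a placeholder, and the exact command path may differ between Fluent versions:

;write a data file every 1000 iterations (choose a value matching 6 to 12 hours of runtime)
/file/auto-save/data-frequency 1000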

Sample job scripts

All job scripts have to contain the following information:

  • Resource definition for the queuing system (see the Batch Processing documentation)
  • Load ANSYS environment module
  • Generate a list of the hosts of the current simulation run to tell Fluent on which nodes it should run (see the examples below)
  • Execute fluent with appropriate command line parameters (available options via fluent -help)
  • Specify the ANSYS Fluent journal file (*.jou) as input; this is used to control the execution of the simulation, since *.cas files do not contain any solver control information

parallel job on meggie

#!/bin/bash -l
#SBATCH --job-name=myfluent
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=20
#SBATCH --time=24:00:00
#SBATCH --export=NONE

unset SLURM_EXPORT_ENV

# load environment module 
module load ansys/XXXX 

# generate node list 
NODELIST=$(for node in $( scontrol show hostnames $SLURM_JOB_NODELIST | uniq ); do echo -n "${node}:${SLURM_NTASKS_PER_NODE},"; done | sed 's/,$//')
# calculate the number of cores actually used 
CORES=$(( ${SLURM_JOB_NUM_NODES} * ${SLURM_NTASKS_PER_NODE} )) 

# execute fluent with command line parameters (in this case: 3D, double precision) 
# Please insert here your own .jou and .out file with their correct names! 
fluent 3ddp -g -t ${CORES} -mpi=openmpi -cnf="$NODELIST" -i fluent_batch.jou > outfile.out

parallel job on fritz

#!/bin/bash -l
#SBATCH --job-name=myfluent
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=72
#SBATCH --time=24:00:00
#SBATCH --export=NONE

unset SLURM_EXPORT_ENV

# load environment module 
module load ansys/XXXX 

# generate node list 
NODELIST=$(for node in $( scontrol show hostnames $SLURM_JOB_NODELIST | uniq ); do echo -n "${node}:${SLURM_NTASKS_PER_NODE},"; done | sed 's/,$//')
# calculate the number of cores actually used 
CORES=$(( ${SLURM_JOB_NUM_NODES} * ${SLURM_NTASKS_PER_NODE} )) 

# execute fluent with command line parameters (in this case: 3D, double precision) 
# Please insert here your own .jou and .out file with their correct names! 
fluent 3ddp -g -t ${CORES} -mpi=openmpi -cnf="$NODELIST" -i fluent_batch.jou > outfile.out
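
Assuming the job script is saved as fluent_job.slurm (the file name is arbitrary), it is submitted as usual with:

sbatch fluent_job.slurm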

example journal file for steady-state simulation

;feel free to modify all subsequent lines to adapt them to your application case
;read case file
/file/read-case "./example-case.cas"

;initialization and start of steady state simulation

/solve/initialize/hyb-initialization
(format-time #f #f)
/solve/iterate 100
(format-time #f #f)

;write final output and exit
/file/write-case-data "./example-case-final.cas"

exit y

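example journal file for transient simulation

The steady-state example above uses /solve/iterate; a transient run needs time-integration commands instead. The following journal file is only a sketch: the time-step size and the numbers of time steps and inner iterations are placeholders, and command paths can vary slightly between Fluent versions.

;feel free to modify all subsequent lines to adapt them to your application case
;read case and data files
/file/read-case-data "./example-case.cas"

;set the physical time-step size (placeholder value)
/solve/set/time-step 0.001

;advance 100 time steps with up to 20 iterations per time step
/solve/dual-time-iterate 100 20

;write final output and exit
/file/write-case-data "./example-case-final.cas"

exit y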

Further information

  • Documentation is available within the application help manual. Further information is provided through the ANSYS Customer Portal for registered users.
  • More in-depth documentation is available at LRZ. Please note: not everything is directly applicable to HPC systems at RRZE!

Mentors

  • Dr.-Ing. Katrin Nusser, RRZE, hpc-support@fau.de
  • please volunteer!

 
