OpenFOAM
OpenFOAM (for “Open-source Field Operation And Manipulation”) is a C++ toolbox for the development of customized numerical solvers and pre-/post-processing utilities for the solution of continuum mechanics problems, most prominently computational fluid dynamics (CFD). It contains solvers for a wide range of problems, from simple laminar flows to DNS and LES, including reactive turbulent flows. It provides a framework for manipulating fields and solving general partial differential equations on unstructured grids using finite-volume methods, which makes it suitable for complex geometries and a wide range of configurations and applications.
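To give a flavor of this framework, consider a condensed sketch based on the standard laplacianFoam solver (the scalar field T and diffusivity DT are set up elsewhere in the solver): a transient diffusion equation is written almost verbatim with OpenFOAM's finite-volume operators.

    // solve dT/dt - laplacian(DT, T) = 0 for the scalar field T
    solve
    (
        fvm::ddt(T) - fvm::laplacian(DT, T)
    );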
There are three main variants of OpenFOAM that are released as free and open-source software under the GPLv3 license: ESI OpenFOAM (openfoam.com), The OpenFOAM Foundation's OpenFOAM (openfoam.org), and foam-extend.
Availability / Target HPC systems
We provide modules for some major OpenFOAM versions, which were mostly requested by specific groups or users. If you have a request for a new version, please contact hpc-support@fau.de. Please note that we only provide modules for fully released versions that will be used by more than one user. If you need a specific custom configuration or version, please consider building it yourself; installation guides are available from the respective OpenFOAM distributors.
The installed versions of OpenFOAM may differ between the different HPC clusters. You can check the available versions via module avail openfoam.
Production jobs should be run on the parallel HPC systems in batch mode. It is NOT permitted to run computationally intensive OpenFOAM simulation runs or serial/parallel post-processing sessions with large memory consumption on login nodes.
Notes
- OpenFOAM by default produces a large number of small files: one per processor, per write step, and per field. The parallel file system ($FASTTMP) is not made for such a finely grained file/folder structure. More recent versions of OpenFOAM offer collated I/O, which produces far fewer files and therefore less problematic output; see the sketch after this list.
- ParaView is used for post-processing and is also available via the modules system on the HPC clusters. However, keep an eye on the main memory requirements of the visualization, especially on the frontends!
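How to enable collated I/O depends on the OpenFOAM version (roughly ESI v1712 and Foundation version 5 onwards; check the documentation of your module). As a sketch, the file handler can be selected per run, globally via an environment variable, or per case:

    # per run, as a command-line option of the solver or utility
    mpirun icoFoam -parallel -fileHandler collated > logfile

    # globally, for all OpenFOAM tools started in this environment
    export FOAM_FILEHANDLER=collated

    // per case, in system/controlDict
    OptimisationSwitches
    {
        fileHandler collated;
    }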
Sample job scripts
All job scripts have to contain the following information:
- Resource definition for the queuing system (more details here)
- Load OpenFOAM environment module
- Start command for parallel execution of solver of choice
For the meggie/Slurm batch system: mpirun picks up the resource parameters (nodes, tasks-per-node) that you specified in the header of your batch file, so you do not have to specify them again in your mpirun call (see also MPI on meggie). For this to work correctly, the total number of MPI tasks (nodes times tasks-per-node) must be equal to numberOfSubdomains inside system/decomposeParDict; a minimal example follows below.
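Such a minimal system/decomposeParDict, matching the sample job below (4 nodes times 20 tasks = 80 subdomains; scotch is one common decomposition method that needs no further coefficients), might look like this:

    FoamFile
    {
        version     2.0;
        format      ascii;
        class       dictionary;
        object      decomposeParDict;
    }

    numberOfSubdomains  80;

    method              scotch;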
Parallel OpenFOAM on meggie
#!/bin/bash -l
#SBATCH --job-name=my-job-name
#SBATCH --nodes=4
# 20 tasks per node for the 20 physical cores on meggie
#SBATCH --tasks-per-node=20
#SBATCH --time=24:00:00
#SBATCH --export=NONE

# load environment module
module load openfoam/XXXX
unset SLURM_EXPORT_ENV

# Please insert here your preferred solver executable!
mpirun icoFoam -parallel -fileHandler collated > logfile
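Note that the solver expects an already decomposed case. A typical workflow (a sketch using the standard OpenFOAM utilities; job_script.sh is a placeholder for your batch file) decomposes the case once before submission and reconstructs the results afterwards:

    module load openfoam/XXXX
    decomposePar -fileHandler collated    # split the case as defined in system/decomposeParDict
    sbatch job_script.sh                  # submit the parallel solver run
    # ...after the job has finished:
    reconstructPar -fileHandler collated  # merge the per-processor results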
Further information
- Tutorials and guides can be found in the official OpenFOAM.com documentation or the OpenFOAM.org User Guide.
Mentors
- Please volunteer!