STAR-CCM+

Simcenter STAR-CCM+ is a commercial software package for CFD and, more generally, computational continuum mechanics (originally developed by CD-adapco, now Siemens PLM). As a general-purpose CFD code, Simcenter STAR-CCM+ provides a wide variety of physical models for turbulent flows, acoustics, Eulerian and Lagrangian multiphase flow modeling, radiation, combustion and chemical reactions, and heat and mass transfer including CHT (conjugate heat transfer in solid domains).

Please note that the clusters do not come with any license. If you want to use Simcenter STAR-CCM+ on the HPC clusters, you must have access to suitable licenses. Several groups hold a joint license pool for non-commercial academic use, which is coordinated through the software group of RRZE.

Availability / Target HPC systems

Different versions of Simcenter STAR-CCM+ are available via the modules system and can be listed with module avail star-ccm+. A specific version can be loaded, e.g. by module load star-ccm+/2020.1.
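A typical sequence on a cluster frontend might look as follows; the version number is only an example (it is the one used in the sample job script further below), check module avail for what is currently installed:

module avail star-ccm+          # list all installed STAR-CCM+ versions
module load star-ccm+/2022.1    # load one specific version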

We mostly install the current versions automatically, but if something is missing, please contact hpc-support@fau.de.

Production jobs should be run on the parallel HPC systems in batch mode.

Simcenter STAR-CCM+ can also be used in interactive GUI mode for serial pre- and/or post-processing on the login nodes (Linux: SSH option "-X"; Windows: using PuTTY and Xming for X11 forwarding). This should only be used for quick changes to the simulation setup. Please be aware that Simcenter STAR-CCM+ loads the full mesh into the login node's memory when you open a simulation file, so only do this with comparably small cases. It is NOT permitted to run computationally intensive Simcenter STAR-CCM+ simulation runs or serial/parallel post-processing sessions with large memory consumption on the login nodes.
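A minimal sketch of such an interactive session is shown below; the user name, login node, and module version are placeholders/examples, use the frontend you normally work on:

# on your local Linux machine: connect with X11 forwarding enabled
ssh -X yourusername@cshpc.rrze.fau.de
# on the login node: load the module and start the GUI (small cases only!)
module load star-ccm+/2022.1
starccm+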

Notes

  • Once you load the star-ccm+ module, the environment variable $PODKEY will hold your specific POD key. Please use only the environment variable (instead of hard-coding the key), as its value will be updated centrally whenever needed. The POD key from the HPC system will not work at your chair and vice versa.
  • Do not use SMT/Hyperthreading, since this will degrade performance and slow down your simulation! Refer to the sample job scripts below for how to set this up correctly.
  • We recommend writing automatic backup files (every 6 to 12 hours) for longer runs to be able to restart the simulation in case of a job or machine failure.
  • Besides the default mixed-precision solver, Siemens PLM also provides installation packages for higher-accuracy double-precision simulations. The latter come at the price of approx. 20% higher execution times and roughly twice as large simulation result files. These modules are only available on demand and are named star-ccm+/XXX-r8.
  • Siemens PLM recently changed (with release 2020.x?) the default MPI used by STAR-CCM+ from HP/Platform MPI to Open MPI. Old job scripts may require changes to avoid errors due to incompatible MPI options. The sample scripts below have been updated accordingly.

Sample job scripts

All job scripts have to contain the following information:

  • Resource definition for the queuing system (see the batch processing documentation for more details)
  • Load the Simcenter STAR-CCM+ environment module
  • Tell STAR-CCM+ on which nodes it should run, either by generating a file with the host names of the current simulation run or by letting STAR-CCM+ query the batch system directly (see the sketch after this list and the full example below)
  • Start command for parallel execution of starccm+ with all appropriate command-line parameters, including a controlling STAR-CCM+ Java macro; available parameters can be listed via starccm+ -help
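
If you do not let STAR-CCM+ obtain the host list from the batch system itself (as the Meggie example below does via --batchsystem slurm), a host file can be generated from the Slurm allocation. The following is only a sketch; the file name is arbitrary and the -machinefile option should be verified against starccm+ -help for your version:

# expand the compact Slurm node list into one host name per line
scontrol show hostnames "$SLURM_JOB_NODELIST" > hosts.$SLURM_JOB_ID
# hand the host file to STAR-CCM+ instead of using --batchsystem slurm
starccm+ -batch -np $NUMCORES -machinefile hosts.$SLURM_JOB_ID -power -podkey $PODKEY $CCMARGS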

Parallel STAR-CCM+ on Meggie

#!/bin/bash -l
#SBATCH --job-name=my-ccm
#SBATCH --nodes=2
#SBATCH --time=01:00:00
#SBATCH --export=NONE

# star-ccm+ arguments
CCMARGS="-load simxyz.sim"

# time (in seconds) to reserve at the end of the job for saving results, etc.
# (remove or comment out this line if you do not want this feature)
TIME4SAVE=1200

# number of cores to use per node (must be an even number!)
PPN=20

# STAR-CCM+ version to use
module add star-ccm+/2022.1

#####################################################
### normally, no changes should be required below ###
#####################################################

unset SLURM_EXPORT_ENV

echo
echo "Job starts at $(date) - $(date +%s)"
echo

# count the number of nodes
NUMNODES=$SLURM_NNODES
# calculate the number of cores actually used
NUMCORES=$(( $NUMNODES * ${PPN} ))

# change to working directory (should not be necessary for SLURM)
cd $SLURM_SUBMIT_DIR

if [ ! -z $TIME4SAVE ]; then
# automatically detect how much time this batch job requested and adjust the
# sleep accordingly
# squeue -o %L reports the remaining walltime as [days-]hours:minutes:seconds
TIMELEFT=$(squeue -j $SLURM_JOBID -o %L -h)
HHMMSS=${TIMELEFT#*-}
[ $HHMMSS != $TIMELEFT ] && DAYS=${TIMELEFT%-*}
# parse hours/minutes/seconds from the part without the day count
IFS=: read -r HH MM SS <<< $HHMMSS
[ -z $SS ] && { SS=$MM; MM=$HH; HH=0 ; }
[ -z $SS ] && { SS=$MM; MM=0; }
SLEEP=$(( ( ( ${DAYS:-0} * 24 + 10#${HH} ) * 60 + 10#${MM} ) * 60 + 10#$SS - $TIME4SAVE ))
echo "Available runtime: ${DAYS:-0}-${HH:-0}:${MM:-0}:${SS}, sleeping for up to $SLEEP seconds, thus reserving $TIME4SAVE seconds for clean stopping/saving of results"
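# the background job below creates a file named ABORT shortly before the walltime
# limit; STAR-CCM+ periodically checks for this file in the working directory and
# should then stop the simulation cleanly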
( sleep $SLEEP && touch ABORT ) >& /dev/null &
SLEEP_ID=$!
fi

echo
echo "============================================================"
echo "Running STAR-CCM+ with $NUMCORES MPI processes in total"
echo " with $PPN cores per node"
echo " on $SLURM_NNODES different hosts"
echo "============================================================"

echo

# start STAR-CCM+
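# Meaning of the starccm+ options used below (see starccm+ -help for the full list):
#   -batch               run without GUI; can be followed by a controlling Java macro
#   -cpubind v           bind processes to cores, with verbose reporting
#   -np                  total number of parallel (MPI) processes
#   --batchsystem slurm  take the host list from the Slurm allocation
#   -power               use Power Session licensing
#   -podkey              Power-on-Demand key; use the $PODKEY variable set by the module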

starccm+ -batch -cpubind v -np ${NUMCORES} --batchsystem slurm -power -podkey $PODKEY ${CCMARGS}
# final clean up
if [ ! -z $TIME4SAVE ]; then
pkill -P ${SLEEP_ID}
fi

echo "Job finished at $(date) - $(date +%s)"
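
The job script above can be submitted to Slurm in the usual way. Assuming it was saved as my-ccm.sh (the file name is just an example):

sbatch my-ccm.sh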

Mentors

  • please volunteer!
  • for issues with the license server or POD key, contact hpc-support@fau.de (T. Zeiser)
  • for contract questions regarding the joint license pool, contact ZISC (H. Lanig)