ORCA

ORCA is an ab initio quantum chemistry program package that contains modern electronic structure methods, including density functional theory, many-body perturbation theory, coupled cluster, multireference methods, and semi-empirical quantum chemistry methods. Its main fields of application are larger molecules, transition metal complexes, and their spectroscopic properties.

ORCA requires a license per individual or research group (cf. https://cec.mpg.de/orcadownload/ or the ORCA forum https://orcaforum.kofo.mpg.de/). Once you can prove that you are eligible, contact hpc-support@fau.de for activation of the ORCA module.

Availability / Target HPC systems

  • throughput cluster Woody and the TinyFat cluster
  • owing to its limited scalability, ORCA is not suited for the parallel clusters

New versions of ORCA are installed by RRZE upon request, with low priority, provided that the users supply the installation files.

Notes

  • orca has to be called with its full path; otherwise parallel runs may fail.
  • The orca module also takes care of loading an appropriate openmpi module.
  • ORCA often produces massive I/O ("communication through files"); therefore, put temporary files into /dev/shm (RAM disk) or a node-local scratch directory, as outlined in the sketch below.
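
A minimal sketch of how temporary files can be staged in /dev/shm from within a job script, assuming the orca module is loaded so that ${ORCABASE} is set (cf. the sample job script below); all file names are placeholders, adapt them to your input:

WORKDIR=/dev/shm/orca.$SLURM_JOB_ID            # job-private RAM-disk directory
mkdir -p "$WORKDIR"
cp orca.inp molecule.xyz "$WORKDIR"            # stage input files (example names)
cd "$WORKDIR"
${ORCABASE}/orca orca.inp > "$SLURM_SUBMIT_DIR/orca.out"
cp *.gbw *.xyz "$SLURM_SUBMIT_DIR"             # copy back the results you want to keep
cd "$SLURM_SUBMIT_DIR" && rm -rf "$WORKDIR"    # free the RAM disk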

Sample job scripts

Parallel ORCA on a Woody node

#!/bin/bash -l
#SBATCH --nodes=1 
#SBATCH --ntasks-per-node=4
#SBATCH --time=01:00:00
#SBATCH --export=NONE

unset SLURM_EXPORT_ENV

cd $SLURM_SUBMIT_DIR

module add orca/5.0.3

### No mpirun is required as ORCA starts the parallel processes internally as needed.
### The number of processes is specified in the input file using '%pal nprocs # end' (see the example input below).

${ORCABASE}/orca orca.inp "optional openmpi arguments"
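
For reference, a minimal example of a hypothetical ORCA input file that requests four parallel processes, matching --ntasks-per-node=4 in the job script above; method, basis set, and coordinate file are placeholders only:

! B3LYP def2-SVP Opt
%pal
  nprocs 4
end
* xyzfile 0 1 molecule.xyz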

Further information

  • https://orcaforum.kofo.mpg.de/
  • Note in the ORCA forum on improving MKL performance on AMD Epyc processors: https://orcaforum.kofo.mpg.de/viewtopic.php?f=8&t=3340&hilit=mkl&start=20
    We recommend not only setting MKL_DEBUG_CPU_TYPE=5 but also MKL_CBWR=AUTO as environment variables, as sketched below (this applies only as long as ORCA still uses Intel MKL versions before 2020.1; these environment variables no longer work with newer MKL versions).
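
A sketch of how these variables can be set in the job script before calling ORCA (only relevant while the ORCA binaries still link against an MKL version older than 2020.1):

export MKL_DEBUG_CPU_TYPE=5   # use the AVX2 code path on AMD Epyc
export MKL_CBWR=AUTO          # let MKL choose the code branch (conditional numerical reproducibility)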

Mentors

  • please volunteer!

 
