Amber/AmberTools

Amber and AmberTools are a suite of biomolecular simulation programs. Here, the term “Amber” does not refer to the set of molecular mechanical force fields for the simulation of biomolecules but to the package of molecular simulation programs consisting of AmberTools (sander and many more) and Amber (pmemd).

AmberTools is open source, while Amber (pmemd) requires a license. NHR@FAU holds a “compute center license” for Amber; thus, Amber is generally available for non-profit use, i.e. academic research.

Availability / Target HPC systems

  • TinyGPU and Alex: typically use pmemd.cuda, which runs on a single GPU.
    Thermodynamic integration (TI) may require special tuning; contact us!
  • Throughput cluster Woody and the parallel clusters: use sander.MPI only if the input is not supported by pmemd.MPI.
    cpptraj is also available in parallel versions (cpptraj.OMP and cpptraj.MPI); a sketch of a cpptraj job script is given at the end of the sample job scripts below.

New versions of Amber/AmberTools are installed by RRZE upon request.

Notes

The CPU-only module is called amber, while the GPU version (which only contains pmemd.cuda) is called amber-gpu. The numbers in the module name specify the Amber version, the Amber patch level, the AmberTools version, and the AmberTools patch level. These numbers are complemented by the compilers/tools used, e.g. amber/18p14-at19p03-intel17.0-intelmpi2017 or amber-gpu/18p14-at19p03-gnu-cuda10.0.
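
As a minimal illustration of this naming scheme (the version strings are just the examples from above; check which builds are actually installed on the cluster you are using):

module avail amber   # lists both the amber and the amber-gpu builds
# CPU build: Amber 18 patch 14, AmberTools 19 patch 3, Intel 17.0 compilers + Intel MPI 2017
module load amber/18p14-at19p03-intel17.0-intelmpi2017
# or, on the GPU clusters, the corresponding GPU build (pmemd.cuda only)
module load amber-gpu/18p14-at19p03-gnu-cuda10.0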

pmemd and sander have no internal mechanism to limit the run time. You therefore have to estimate beforehand how many time steps can finish within the requested wall time and use that number in your mdin file.
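
A back-of-the-envelope sketch for this estimate (the throughput value is hypothetical and must come from a short benchmark of your own system):

# hypothetical numbers; replace with your own measured throughput
NS_PER_DAY=100      # throughput from a short test run, in ns/day
WALLTIME_H=6        # requested wall time in hours
DT_FS=2             # MD time step in fs
# steps that fit into the wall time (1 ns = 1,000,000 fs), with a ~10% safety margin
NSTLIM=$(( NS_PER_DAY * WALLTIME_H * 1000000 / 24 / DT_FS * 9 / 10 ))
echo "set nstlim = $NSTLIM in the mdin file"   # prints 11250000 for the numbers above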

Recent versions of AmberTools install their own version of Python, which is independent of the Python of the Linux distribution and of the usual Python modules provided by RRZE.
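
To check which interpreter a loaded module puts first in the PATH and whether the AmberTools Python packages are available (a minimal sketch; the module name is the example from the Alex script below, and the details depend on how the module was built):

module load amber/20p12-at21p11-gnu-cuda11.5
which python                    # shows whether the Amber-internal interpreter is picked up
# pytraj and ParmEd are Python packages shipped with AmberTools
python -c "import pytraj, parmed; print(pytraj.__version__)"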

Sample job scripts

pmemd on TinyGPU

#!/bin/bash -l 
#SBATCH --time=06:00:00  
#SBATCH --job-name=Testjob 
#SBATCH --gres=gpu:1 
#SBATCH --export=NONE 
unset SLURM_EXPORT_ENV

module add amber-gpu/20p08-at20p12-gnu-cuda11.2 

### there is no need to fiddle around with CUDA_VISIBLE_DEVICES! 

pmemd.cuda -O -i mdin ...
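
Assuming the script is saved as, e.g., pmemd-gpu.job (the file name is arbitrary), it is submitted in the usual way; depending on the frontend, a cluster-specific wrapper such as sbatch.tinygpu may be required instead of plain sbatch (see the TinyGPU documentation):

sbatch pmemd-gpu.job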

pmemd on Alex

#!/bin/bash -l
#
#SBATCH --job-name=my-pmemd
#SBATCH --ntasks=16
#SBATCH --time=06:00:00
# use gpu:a100:1 and partition=a100 for A100
#SBATCH --gres=gpu:a40:1
#SBATCH --partition=a40
#SBATCH --export=NONE

unset SLURM_EXPORT_ENV

module load amber/20p12-at21p11-gnu-cuda11.5

# pmemd.cuda is a single-process, single-GPU binary; do not launch it with multiple tasks
pmemd.cuda -O -i mdin -c inpcrd -p prmtop -o output

parallel pmemd on Meggie

#!/bin/bash -l
#
# allocate 4 nodes with 20 cores per node = 4*20 = 80 MPI tasks
#SBATCH --nodes=4
#SBATCH --tasks-per-node=20
#
# allocate nodes for 6 hours
#SBATCH --time=06:00:00
# job name 
#SBATCH --job-name=my-pmemd
# do not export environment variables
#SBATCH --export=NONE
#
# first non-empty non-comment line ends SBATCH options

# do not export environment variables
unset SLURM_EXPORT_ENV
# jobs always start in submit directory

module load amber/20p03-at20p07-intel17.0-intelmpi2017

# run 
srun pmemd.MPI -O -i mdin ...
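
cpptraj on Woody (sketch)

The following is a minimal sketch for a trajectory analysis with the OpenMP build of cpptraj, as mentioned under availability above. The module version is just the CPU example used elsewhere on this page, and the core count and file names (prmtop, analysis.in) are placeholders to be adapted:

#!/bin/bash -l
#SBATCH --job-name=my-cpptraj
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --time=02:00:00
#SBATCH --export=NONE

unset SLURM_EXPORT_ENV

# example module version; check "module avail amber" for the installed builds
module load amber/20p03-at20p07-intel17.0-intelmpi2017

# use as many OpenMP threads as allocated cores
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
cpptraj.OMP -p prmtop -i analysis.in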

Further information

  • http://ambermd.org
  • http://ambermd.org/GPULogistics.php
  • https://www.exxactcorp.com/blog/Molecular-Dynamics/rtx3090-benchmarks-for-hpc-amber-a100-vs-rtx3080-vs-2080ti-vs-rtx6000

Mentors

  • Dr. A. Kahler, RRZE, hpc-support@fau.de
  • AG Sticht (Professorship of Bioinformatics, Medical Faculty)

 
