LAMMPS

LAMMPS is a classical molecular dynamics code with a focus on materials modeling. LAMMPS has potentials for solid-state materials (metals, semiconductors) and soft matter (biomolecules, polymers) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale.

Availability / Target HPC systems

  • Woody, Meggie, Fritz
  • TinyGPU, Alex

Most of these installations were built with Spack – check https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lammps/package.py for possible versions and build options if you would like to request a different compilation.

Allocate an interactive job and run mpirun -np 1 lmp -help to see which LAMMPS packages have been included in a specific build.
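
As a minimal sketch, such an interactive check on Fritz could look like the following; the partition, time limit, and the unversioned module name are only placeholders and should be adapted to the cluster you are working on.

# request a short interactive single-node allocation (placeholder values)
salloc --nodes=1 --partition=singlenode --time=01:00:00

# inside the allocation: load a LAMMPS module and list the packages
# that were compiled into this particular build
module load lammps
mpirun -np 1 lmp -help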

On Fritz, in addition to the Spack installations (normally built with the GNU compilers), there is a LAMMPS installation built with the Intel compilers. The following packages are included in this build: AMOEBA ASPHERE ATC AWPMD BOCS BODY BPM BROWNIAN CG-DNA CG-SPICA CLASS2 COLLOID COLVARS COMPRESS CORESHELL DIELECTRIC DIFFRACTION DIPOLE DPD-BASIC DPD-MESO DPD-REACT DPD-SMOOTH DRUDE EFF ELECTRODE EXTRA-COMPUTE EXTRA-DUMP EXTRA-FIX EXTRA-MOLECULE EXTRA-PAIR FEP GRANULAR INTEL INTERLAYER KIM KSPACE LATBOLTZ MACHDYN MANIFOLD MANYBODY MC MDI MEAM MESONT MGPT MISC ML-HDNNP ML-IAP ML-PACE ML-POD ML-RANN ML-SNAP MOFFF MOLECULE MOLFILE MPIIO OPENMP OPT ORIENT PERI PHONON PLUGIN POEMS PTM QEQ QMMM QTB REACTION REAXFF REPLICA RIGID SHOCK SMTBQ SPH SPIN SRD TALLY UEF VORONOI YAFF
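
To find out which LAMMPS builds are installed on a particular cluster (including the Intel build mentioned above), the module system can be queried; the version string below is taken from the job scripts further down and may differ on your cluster.

# list all LAMMPS modules available on the current cluster
module avail lammps

# load one specific build, e.g. the Intel/Intel MPI/MKL build used below
module load lammps/20221222-intel-impi-mkl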

Notes

We regularly observe severe load-balancing issues in LAMMPS jobs; these can be caused by an inhomogeneous distribution of particles in the system or can occur in systems that contain a lot of empty space. Such problems can be mitigated with the LAMMPS commands processors, balance, or fix balance; please consult the LAMMPS documentation for details.
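
As a rough, hedged illustration of these commands (not taken from any of our installations or inputs), a LAMMPS input file could enable static and dynamic rebalancing like this; the thresholds and the rebalancing interval are placeholder values that must be tuned for the actual system:

# one-time static rebalancing by shifting domain boundaries in x, y, z
balance 1.1 shift xyz 10 1.1

# dynamic rebalancing every 1000 timesteps during the subsequent run
fix lb all balance 1000 1.05 shift xyz 10 1.05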

Sample job scripts

Single GPU job on Alex

#!/bin/bash -l
#SBATCH --time=10:00:00
#SBATCH --partition=a40
#SBATCH --gres=gpu:a40:1
#SBATCH --job-name=my-lammps
#SBATCH --export=NONE

unset SLURM_EXPORT_ENV

# load required modules
module load lammps/20201029-gcc10.3.0-openmpi-mkl-cuda

# change to the directory the job was submitted from
cd "$SLURM_SUBMIT_DIR"

# run lammps with 16 MPI ranks (the CPU share belonging to one A40 GPU)
srun --ntasks=16 --cpu-bind=core --mpi=pmi2 lmp -in input.in
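
Whether GPU acceleration has to be requested explicitly depends on the packages compiled into this module. As a hedged example only: if the build contains the standard GPU package, the run line can be extended with the generic LAMMPS suffix and package flags, e.g.

# assumption: the GPU package is part of the build; -sf/-pk are generic
# LAMMPS command-line options, not specific to this installation
srun --ntasks=16 --cpu-bind=core --mpi=pmi2 lmp -sf gpu -pk gpu 1 -in input.in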

MPI parallel job (single-node) on Fritz

#!/bin/bash -l

#SBATCH --partition=singlenode
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=72
#SBATCH --time=00:05:00
#SBATCH --export=NONE

unset SLURM_EXPORT_ENV

# load required modules
module load lammps/20221222-intel-impi-mkl

# run lammps
srun lmp -in input.lmp

MPI parallel job (multi-node) on Fritz

#!/bin/bash -l

#SBATCH --partition=multinode
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=72
#SBATCH --time=00:05:00
#SBATCH --export=NONE

unset SLURM_EXPORT_ENV

# load required modules
module load lammps/20221222-intel-impi-mkl

# run lammps
srun lmp -in input.lmp

Hybrid OpenMP/MPI job (single-node) on Fritz

#!/bin/bash -l

#SBATCH --partition=singlenode
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=18
#SBATCH --time=00:05:00
#SBATCH --export=NONE

unset SLURM_EXPORT_ENV

# load required modules
module load lammps/20221222-intel-impi-mkl

# specify the number of OpenMP threads
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# run lammps
srun lmp -sf omp -in input.lmp

Hybrid OpenMP/MPI job (multi-node) on Fritz

#!/bin/bash -l

#SBATCH --partition=multinode
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=18
#SBATCH --time=00:05:00
#SBATCH --export=NONE

unset SLURM_EXPORT_ENV

# load required modules
module load lammps/20221222-intel-impi-mkl

# specify the number of OpenMP threads
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# run lammps
srun lmp -sf omp -in input.lmp
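
Since the OPENMP package is part of the Intel build listed above, the thread count can also be passed to LAMMPS explicitly and the threads pinned via standard OpenMP environment variables; this is an optional sketch rather than a required setting:

# pin OpenMP threads to cores (standard OpenMP environment variables)
export OMP_PLACES=cores
export OMP_PROC_BIND=close

# hand the thread count to the OPENMP package explicitly (generic LAMMPS flag)
srun lmp -sf omp -pk omp $SLURM_CPUS_PER_TASK -in input.lmp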

Further information

  • https://www.lammps.org/
  • Accelerator packages
  • Basics of running LAMMPS

Mentors

  • Dr. A. Ghasemi, NHR@FAU, hpc-support@fau.de
  • AG Zahn (Computer Chemistry Center)