LAMMPS

LAMMPS is a classical molecular dynamics code with a focus on materials modeling. LAMMPS has potentials for solid-state materials (metals, semiconductors) and soft matter (biomolecules, polymers) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale.

Availability / Target HPC systems

  • Emmy parallel computer (untested)
  • Woody throughput cluster (untested)
  • TinyGPU (untested)

All these installations were built using Spack. Check https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lammps/package.py for possible versions and build options if you would like to request a different compilation.

Notes

We regularly observe that LAMMPS jobs have severe load-balancing issues; this can be caused by an inhomogeneous distribution of particles in the system, or can happen in systems that contain lots of empty space. These problems can be handled with LAMMPS commands like processors, balance, or fix balance. Please refer to the LAMMPS documentation for details on these commands.
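As an illustration, a minimal LAMMPS input fragment that enables static and dynamic load balancing might look like the following. The command names are taken from the LAMMPS documentation; the fix ID, thresholds, and intervals are placeholder values for a hypothetical system, not a recommendation:

# map MPI ranks onto an explicit 2x2x1 processor grid (example values)
processors 2 2 1

# rebalance once at setup: shift domain boundaries until the
# max/avg imbalance ratio drops below 1.1 (at most 10 iterations)
balance 1.1 shift xyz 10 1.1

# rebalance dynamically every 1000 timesteps during the run
fix lb all balance 1000 1.1 shift xyz 10 1.1

Which settings pay off depends strongly on the geometry of your system; check the imbalance factor reported in the LAMMPS log to verify the effect.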

Sample job scripts

This script runs LAMMPS on CPU nodes and makes use of the Kokkos package (here with 10 threads per MPI rank via -k on t 10).

#!/bin/bash -l
#
#PBS -l nodes=2:ppn=40,walltime=02:00:00
#
#PBS -N name
#
# first non-empty non-comment line ends PBS options

module load lammps/20201029-intel19.1.3.304-impi-mkl-axtktbu

cd $PBS_O_WORKDIR

mpirun -np 4 lmp -k on t 10 -sf kk -in input

This script runs LAMMPS on a GPU node and also makes use of the Kokkos package.

#!/bin/bash -l

#PBS -l nodes=1:ppn=4:cuda11:gtx1080,walltime=01:00:00
#PBS -N name

module load lammps/20201029-gcc9.2.0-openmpi-mkl

cd $PBS_O_WORKDIR

# use "neigh full" on newer GPU hardware
lmp -k on g 1 -sf kk -pk kokkos cuda/aware on neigh half comm host -in input

Further information

Mentors