LAMMPS
LAMMPS is a classical molecular dynamics code with a focus on materials modeling. It has potentials for solid-state materials (metals, semiconductors), soft matter (biomolecules, polymers), and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, mesoscopic, or continuum scale.
Availability / Target HPC systems
- Woody, Meggie, Fritz
- TinyGPU, Alex
Most of these installations were built with Spack; check https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lammps/package.py for possible versions and build options if you would like to request a different compilation.
Allocate an interactive job and run mpirun -np 1 lmp -help to see which LAMMPS packages have been included in a specific build.
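For example, an interactive allocation on Fritz could look like the following sketch (partition, walltime, and the chosen module are placeholders; adjust them to your needs):

# request one node interactively (placeholder partition and walltime)
salloc --partition=singlenode --nodes=1 --time=00:30:00

# inside the interactive shell: load a LAMMPS module and list its installed packages
module load lammps/20221222-intel-impi-mkl
mpirun -np 1 lmp -help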
On Fritz, in addition to the installations from Spack (normally based on the GNU compilers), there is a LAMMPS installation built with the Intel compilers. This installation includes the following packages: AMOEBA ASPHERE ATC AWPMD BOCS BODY BPM BROWNIAN CG-DNA CG-SPICA CLASS2 COLLOID COLVARS COMPRESS CORESHELL DIELECTRIC DIFFRACTION DIPOLE DPD-BASIC DPD-MESO DPD-REACT DPD-SMOOTH DRUDE EFF ELECTRODE EXTRA-COMPUTE EXTRA-DUMP EXTRA-FIX EXTRA-MOLECULE EXTRA-PAIR FEP GRANULAR INTEL INTERLAYER KIM KSPACE LATBOLTZ MACHDYN MANIFOLD MANYBODY MC MDI MEAM MESONT MGPT MISC ML-HDNNP ML-IAP ML-PACE ML-POD ML-RANN ML-SNAP MOFFF MOLECULE MOLFILE MPIIO OPENMP OPT ORIENT PERI PHONON PLUGIN POEMS PTM QEQ QMMM QTB REACTION REAXFF REPLICA RIGID SHOCK SMTBQ SPH SPIN SRD TALLY UEF VORONOI YAFF
Notes
We regularly observe that LAMMPS jobs suffer from severe load-balancing issues; this can be caused by an inhomogeneous distribution of particles in the system, or can happen in systems that contain lots of empty space. These problems can be handled with LAMMPS commands such as processors, balance, or fix balance; please follow the links to the LAMMPS documentation. A minimal sketch is shown below.
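Both commands go into the LAMMPS input script. The command names are taken from the LAMMPS documentation, but the thresholds and intervals below are only illustrative values that need tuning for your system:

# one-time rebalancing before the run: shift processor boundaries in x, y, and z
# until the imbalance factor (max/average particles per rank) drops below 1.1
balance 1.1 shift xyz 10 1.1

# dynamic rebalancing: re-run the balancer every 1000 timesteps during the run
fix lb all balance 1000 1.05 shift xyz 10 1.05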
Sample job scripts
Single GPU job on Alex
#!/bin/bash -l
#SBATCH --time=10:00:00
#SBATCH --partition=a40
#SBATCH --gres=gpu:a40:1
#SBATCH --job-name=my-lammps
#SBATCH --export=NONE
unset SLURM_EXPORT_ENV

module load lammps/20201029-gcc10.3.0-openmpi-mkl-cuda

cd $SLURM_SUBMIT_DIR
srun --ntasks=16 --cpu-bind=core --mpi=pmi2 lmp -in input.in
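Whether the run actually offloads work to the GPU depends on which accelerator package (e.g. GPU or KOKKOS) was compiled into the loaded module; assuming a build that contains the GPU package, the run line could be extended as in this sketch:

# assumption: the loaded LAMMPS build includes the GPU package;
# -sf gpu applies the gpu suffix to supported styles, -pk gpu 1 uses one GPU
srun --ntasks=16 --cpu-bind=core --mpi=pmi2 lmp -sf gpu -pk gpu 1 -in input.in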
MPI parallel job (single-node) on Fritz
#!/bin/bash -l
#SBATCH --partition=singlenode
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=72
#SBATCH --time=00:05:00
#SBATCH --export=NONE
unset SLURM_EXPORT_ENV

# load required modules
module load lammps/20221222-intel-impi-mkl

# run lammps
srun lmp -in input.lmp
MPI parallel job (multi-node) on Fritz
#!/bin/bash -l
#SBATCH --partition=multinode
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=72
#SBATCH --time=00:05:00
#SBATCH --export=NONE
unset SLURM_EXPORT_ENV

# load required modules
module load lammps/20221222-intel-impi-mkl

# run lammps
srun lmp -in input.lmp
Hybrid OpenMP/MPI job (single-node) on Fritz
#!/bin/bash -l
#SBATCH --partition=singlenode
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=18
#SBATCH --time=00:05:00
#SBATCH --export=NONE
unset SLURM_EXPORT_ENV

# load required modules
module load lammps/20221222-intel-impi-mkl

# specify the number of OpenMP threads
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export SRUN_CPUS_PER_TASK=$SLURM_CPUS_PER_TASK

# run lammps
srun lmp -sf omp -in input.lmp
Hybrid OpenMP/MPI job (multi-node) on Fritz
#!/bin/bash -l
#SBATCH --partition=multinode
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=18
#SBATCH --time=00:05:00
#SBATCH --export=NONE
unset SLURM_EXPORT_ENV

# load required modules
module load lammps/20221222-intel-impi-mkl

# specify the number of OpenMP threads
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export SRUN_CPUS_PER_TASK=$SLURM_CPUS_PER_TASK

# run lammps
srun lmp -sf omp -in input.lmp
Further information
- LAMMPS documentation: https://docs.lammps.org
Mentors
- Dr. A. Ghasemi, NHR@FAU, hpc-support@fau.de
- AG Zahn (Computer Chemistry Center)