Strong scaling of general-purpose molecular dynamics simulations on GPUs

Jens Glaser, Trung Dac Nguyen, Joshua A. Anderson, Pak Lui, Filippo Spiga, Jaime A. Millan, David C. Morse, Sharon C. Glotzer

Research output: Contribution to journal › Article › peer-review

507 Scopus citations

Abstract

We describe a highly optimized implementation of MPI domain decomposition in a GPU-enabled, general-purpose molecular dynamics code, HOOMD-blue (Anderson and Glotzer, 2013). Our approach is inspired by a traditional CPU-based code, LAMMPS (Plimpton, 1995), but is implemented within a code that was designed for execution on GPUs from the start (Anderson et al., 2008). The software supports short-ranged pair and bond force fields and achieves optimal GPU performance using an autotuning algorithm. We demonstrate equivalent or superior scaling on up to 3375 GPUs in Lennard-Jones and dissipative particle dynamics (DPD) simulations of up to 108 million particles. GPUDirect RDMA capabilities in recent GPU generations provide better performance for full double precision calculations. For a representative polymer physics application, HOOMD-blue 1.0 provides an effective GPU vs. CPU node speed-up of 12.5×.
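For readers who want a concrete picture of the workloads benchmarked above, the following is a minimal sketch of a Lennard-Jones run using the HOOMD-blue 1.x Python script API; the particle count, packing fraction, and thermostat settings are illustrative assumptions, not the paper's benchmark parameters.

    # Minimal Lennard-Jones fluid with the HOOMD-blue 1.x script API.
    # Parameter values are illustrative, not the paper's benchmark settings.
    from hoomd_script import *

    init.create_random(N=64000, phi_p=0.2)   # random initial configuration

    lj = pair.lj(r_cut=3.0)                  # short-ranged pair potential
    lj.pair_coeff.set('A', 'A', epsilon=1.0, sigma=1.0)

    integrate.mode_standard(dt=0.005)
    integrate.nvt(group=group.all(), T=1.2, tau=0.5)

    run(10000)

Launched under MPI (e.g. mpirun -n 8 python lj.py), HOOMD-blue 1.0 spatially decomposes the simulation box into one domain per MPI rank/GPU, which is the domain decomposition scheme whose strong scaling the paper measures.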

Original language: English (US)
Pages (from-to): 97-107
Number of pages: 11
Journal: Computer Physics Communications
Volume: 192
DOIs
State: Published - Jul 1 2015
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2015 Elsevier B.V. All rights reserved.

Keywords

  • Domain decomposition
  • LAMMPS
  • MPI/CUDA
  • Molecular dynamics
  • Multi-GPU
  • Strong scaling
  • Weak scaling
