## 1 Introduction

Molecular Dynamics (MD) codes are well-established computational tools for predicting the behaviour of complex physical, chemical and biological systems. They can be used to replace expensive laboratory experiments or to perform simulations in experimentally inaccessible areas of parameter space. Classical MD codes model a material by following a large number $N$ of interacting particles, which obey Newton's laws of motion. Since statistical fluctuations are only suppressed with inverse powers of $N$, to overcome finite-size effects and to study increasingly complex systems, calculations have grown in size and require substantial computational power.

To make efficient use of large HPC installations, developers of MD codes face several challenges when exploiting the hierarchical parallelism of modern manycore chip architectures. Unfortunately it is rare for a computational physicist or chemist to be an expert both in their domain and in the low-level optimisation of parallel codes. This impedes the development of fast yet complex MD codes and limits scientific productivity. One solution to this problem, which has been successfully applied in other fields such as grid-based partial differential equation solvers [1, 2], is the introduction of a "Separation of Concerns". This allows the domain specialist to describe the problem at a high abstraction level independent of the hardware, while a computational scientist provides a mechanism for automatically executing this high-level representation efficiently on various parallel architectures.

In a previous paper [3] we described a domain specific language (DSL) for the implementation of performance-portable MD codes. The key observation is that interactions between pairs of particles can be expressed as small kernels. While the domain specialist writes this kernel as a short piece of C code, the hardware-dependent execution over all particle pairs is realised via a code generation system, which automatically produces efficient looping code. As demonstrated in [3], the resulting performance both for a multi-CPU and multi-GPU implementation is comparable with that of well-established monolithic MD codes such as LAMMPS [4] and DL-POLY [5] for a short-range Lennard-Jones benchmark.

More generally, the potential between two particles can be split into a short- and long-range part. In many applications the computational bottleneck is the calculation of electrostatic forces between charged particles. Support for long range interactions has been missing so far in the framework described in [3]. In this paper we report on the implementation of a classical Ewald summation technique for the inclusion of electrostatic interactions within our framework and present parallel performance results for a system of interacting charged particles.

Parallel scalability can be improved by using shared memory parallelism within a node since this reduces communication costs and load imbalance. Here we also report on a new hybrid MPI+OpenMP backend for the framework in [3].

##### Structure.

This paper is organised as follows: In Section 2 we briefly review some key concepts of MD simulations with particular focus on long range force calculations; here we also summarise the design principles of the abstraction in [3]. The Ewald summation algorithm and its implementation in our code generation framework is described in Section 3. We present numerical results in Section 4 and conclude in Section 5.

## 2 Molecular Dynamics in a performance portable framework

### 2.1 Long range forces in Molecular Dynamics

In an MD simulation the force on each particle has to be calculated at every timestep; this is often the bottleneck of the model run. For conservative forces, the force on particle $i$ can be written as the gradient of a phenomenological potential $U$, which depends on the positions $\mathbf{r}_1, \dots, \mathbf{r}_N$ of all particles. We only consider two-particle potentials and split $U$ into a short-range (sr) part and a long-range (lr) contribution to obtain the total force on particle $i$ as

$$\mathbf{F}_i = -\nabla_{\mathbf{r}_i} U = -\nabla_{\mathbf{r}_i}\left(U^{(\mathrm{sr})} + U^{(\mathrm{lr})}\right) = \mathbf{F}_i^{(\mathrm{sr})} + \mathbf{F}_i^{(\mathrm{lr})}. \qquad (1)$$

The short-range potential $U^{(\mathrm{sr})}$ (such as Lennard-Jones interactions) can be safely truncated beyond a distance $r_c$ and only a fixed number of neighbours needs to be considered for a given particle $i$. As a result the total cost of the force calculation is $\mathcal{O}(N)$ (see e.g. [6]). This, however, is not true for electrostatic interactions, for which the potential decays only with the inverse separation $1/r$ of the particles.

To see this, consider the contributions arising from a particle interacting with a uniform charge density at distances greater than the cut-off $r_c$. If this contribution is neglected, the missing energy diverges as the domain size grows. As a consequence, computing the long-range potential and the resulting force is naively an $\mathcal{O}(N^2)$ operation, since for each particle all other particles have to be considered. In a parallel implementation this requires global communication of all particle positions and charges. Even worse, if periodic boundary conditions are used, another sum over all periodic images of the simulation domain is necessary and the resulting sum will not necessarily converge. However, for charge-neutral systems algorithms exist for computing the electrostatic interaction by a suitable re-ordering of the sums over particle pairs. The computational complexity is reduced to $\mathcal{O}(N^{3/2})$ with a classic Ewald method [7], as described in Section 3. This method is suitable for small to medium-size systems and in this work we explain how it can be implemented in the performance-portable framework described in [3]. Smooth Particle-Mesh Ewald (SPME) methods use the Fast Fourier Transform to reduce this complexity further to $\mathcal{O}(N \log N)$ [8, 9], and the Fast Multipole Method [10] can achieve optimal $\mathcal{O}(N)$ complexity; since their implementation is more challenging, those approaches will be considered in a subsequent study.

### 2.2 Abstraction and Python code generation framework

Calculating quantities such as the force in Eq. (1) requires looping over all particle pairs in an MD simulation, and we now describe how this is implemented in our code generation framework. The abstraction in [3] assumes that (i) the physical system can be described by assigning a set of properties (such as mass, charge, position, velocity) to each model particle and that (ii) all computationally expensive operations in the MD code can be realised by looping over all particles, or all pairs of particles. Those operations are encoded in a local pairwise kernel which only operates on the properties of the two participating particles. For example, the kernel might increment the total force on a particle by adding the force exerted by another particle. Loops over individual particles are implemented in a similar way. The DSL is realised as a Python code generation framework, which is available at https://bitbucket.org/wrs20/ppmd. Individual particle properties such as position, velocity and charge are stored as Python ParticleDat objects; global properties shared by all particles, such as the total potential energy, are realised as GlobalArray objects. To describe a pairwise kernel, the user writes a short, hardware-independent piece of C code for the manipulation of the local properties, see Listing 1 for an example. This is launched via a Python PairLoop call which specifies the accessed ParticleDats and GlobalArrays together with access descriptors (see Listing 2). The access descriptors are used to trigger suitable halo exchanges and reductions in GlobalArray objects.

We stress, however, that the user never has to write any explicit calls to MPI routines or add OpenMP directives; the parallelisation is implicit. When the user runs the Python code, a hardware-dependent C wrapper library which executes the kernel over all particle pairs is generated, compiled and run. This executes the loop at the speed of a compiled language while still allowing the user to express the overarching algorithms (such as the MD time stepping method) in a high-level Python framework. To increase efficiency the PairLoop call can be specialised for local interactions to obtain a local PairLoop. For this, only pairs of particles which are separated by less than a given cutoff distance are considered, and efficient pair-looping wrapper code is generated.
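The semantics of such a local pair loop can be illustrated with a small pure-Python stand-in. This is only a toy sketch of the looping behaviour the generated C wrapper implements, not the framework's API; all names below (`pair_loop`, `toy_kernel`, the `props` dictionary) are hypothetical.

```python
def pair_loop(kernel, positions, cutoff, props):
    """Toy stand-in for generated looping code: apply a pairwise
    kernel to every ordered pair (i, j) closer than the cutoff."""
    n = len(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [a - b for a, b in zip(positions[i], positions[j])]
            r2 = sum(d * d for d in dx)
            if r2 < cutoff * cutoff:
                kernel(i, j, r2, props)

def toy_kernel(i, j, r2, props):
    # accumulate an inverse-square pair "energy" on particle i only;
    # this mirrors the rule that a kernel writes to the first particle
    props['u'][i] += 1.0 / r2

positions = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (5.0, 0.0, 0.0)]
props = {'u': [0.0, 0.0, 0.0]}
pair_loop(toy_kernel, positions, cutoff=2.0, props=props)
# particles 0 and 1 interact; particle 2 lies beyond the cutoff
```

In the real framework the double loop is replaced by generated cell-list code in compiled C, and the per-particle data live in ParticleDat objects rather than Python lists.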

## 3 Methods

### 3.1 Ewald Summation

Consider a system of $N$ point-particles which interact via electrostatic forces. Each particle $i$ is specified by its mass $m_i$, position $\mathbf{r}_i$ and charge $q_i$. The particles are contained in a cubic box of length $L$ with periodic boundary conditions. The Coulomb potential $\phi(\mathbf{r})$ at position $\mathbf{r}$ can be obtained by solving the Poisson equation

$$-\epsilon_0 \nabla^2 \phi(\mathbf{r}) = \rho(\mathbf{r}) = \sum_{\mathbf{n}\in\mathbb{Z}^3} \sum_{j=1}^{N} q_j\, \delta\!\left(\mathbf{r} - \mathbf{r}_j - L\mathbf{n}\right), \qquad (2)$$

where $\rho(\mathbf{r})$ is the charge distribution of the particles. The first sum over $\mathbf{n}$ extends over all periodic copies of the box. The total long-range potential energy of particle $i$ is

$$U_i^{(\mathrm{lr})} = q_i\, \phi^{(\mathrm{lr})}(\mathbf{r}_i),$$

with the long-range part $\phi^{(\mathrm{lr})}$ of the potential defined by the split introduced below.

To calculate the potential of a single point-charge, we rewrite the $\delta$-function as

$$\delta(\mathbf{r}) = \underbrace{\big(\delta(\mathbf{r}) - g(\mathbf{r})\big)}_{\text{short-range}} + \underbrace{g(\mathbf{r})}_{\text{long-range}},$$

where $g(\mathbf{r})$ is a function which only depends on the distance $|\mathbf{r}|$ from the origin, integrates to 1 and decays exponentially as $|\mathbf{r}| \to \infty$. The length scale $\alpha^{-1/2}$ characterises the speed of this decay. This split of the $\delta$-function induces a separation of the potential into a short- and a long-range part, $\phi = \phi^{(\mathrm{sr})} + \phi^{(\mathrm{lr})}$, with

$$-\epsilon_0 \nabla^2 \phi^{(\mathrm{sr})} = \rho^{(\mathrm{sr})}, \qquad -\epsilon_0 \nabla^2 \phi^{(\mathrm{lr})} = \rho^{(\mathrm{lr})},$$

where $\rho^{(\mathrm{sr})}$ and $\rho^{(\mathrm{lr})}$ are obtained by replacing each $\delta$-function in Eq. (2) by its short- and long-range part, respectively.

Due to the construction of $g$, the resulting short-range potential decays exponentially. This part can be safely truncated (see Section 3.1.3 for error estimates) and calculated with a local PairLoop with a cutoff $r_c$. The long-range potential, on the other hand, is calculated in Fourier space.

#### 3.1.1 Short range potential

To calculate the short-range potential, let $g$ be a Gaussian with width $\propto \alpha^{-1/2}$,

$$g(\mathbf{r}) = \left(\frac{\alpha}{\pi}\right)^{3/2} e^{-\alpha |\mathbf{r}|^2}.$$

Then the short-range potential is readily evaluated as

$$\phi^{(\mathrm{sr})}(\mathbf{r}) = \frac{1}{4\pi\epsilon_0} \sum_{j} q_j\, \frac{\operatorname{erfc}\!\left(\sqrt{\alpha}\,|\mathbf{r}-\mathbf{r}_j|\right)}{|\mathbf{r}-\mathbf{r}_j|}.$$
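The identity $\operatorname{erf}(x) + \operatorname{erfc}(x) = 1$ guarantees that the short- and long-range parts of the potential of a single charge sum exactly to the bare Coulomb potential, and that the short-range part is negligible beyond a few decay lengths. A minimal numerical check of this split (function names are ours, not the framework's):

```python
import math

def phi_sr(q, r, alpha):
    # screened short-range part; decays like exp(-alpha * r**2)
    return q * math.erfc(math.sqrt(alpha) * r) / r

def phi_lr(q, r, alpha):
    # smooth long-range remainder (potential of the Gaussian cloud g)
    return q * math.erf(math.sqrt(alpha) * r) / r

alpha = 0.5
for r in (0.1, 1.0, 5.0, 12.0):
    # the two parts reassemble the bare Coulomb potential q/r exactly
    assert abs(phi_sr(1.0, r, alpha) + phi_lr(1.0, r, alpha) - 1.0 / r) < 1e-12
# beyond a few decay lengths the short-range part is negligible
assert phi_sr(1.0, 12.0, alpha) < 1e-12
```

(The prefactor $1/(4\pi\epsilon_0)$ is omitted here since it cancels in the comparison.)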

#### 3.1.2 Long range potential

The long-range potential is evaluated in Fourier space. The Fourier-space representation of the charge distribution is given by

$$\hat{\rho}(\mathbf{k}) = \sum_{j=1}^{N} q_j\, e^{-i\mathbf{k}\cdot\mathbf{r}_j}.$$

The periodic boundary conditions restrict the possible values of the reciprocal vector $\mathbf{k}$ to

$$\mathbf{k} = \frac{2\pi}{L}\,\mathbf{m},$$

where $\mathbf{m} \in \mathbb{Z}^3$. Since the Fourier-space representation of the Poisson equation in Eq. (2) is diagonal and given by $\epsilon_0 |\mathbf{k}|^2\, \hat{\phi}(\mathbf{k}) = \hat{\rho}(\mathbf{k})$, the long-range potential in real space can be calculated as^{1}

$$\phi^{(\mathrm{lr})}(\mathbf{r}) = \frac{1}{\epsilon_0 V} \sum_{\mathbf{k}\neq 0} \frac{e^{-|\mathbf{k}|^2/(4\alpha)}}{|\mathbf{k}|^2}\, \hat{\rho}(\mathbf{k})\, e^{i\mathbf{k}\cdot\mathbf{r}}, \qquad V = L^3.$$

^{1} Since we consider only neutral systems the $\mathbf{k}=0$ term can be dropped.

The factor $e^{-|\mathbf{k}|^2/(4\alpha)}/|\mathbf{k}|^2$ decays exponentially and the sum over $\mathbf{k}$ can be truncated by dropping all modes with $|\mathbf{k}| > k_c$ for a long-range cutoff $k_c$.

#### 3.1.3 Error estimate and computational complexity

The short-range cutoff $r_c$ and long-range cutoff $k_c$ have to be carefully balanced to minimise the total error. As expected from dimensional analysis and discussed in detail in [11] (see also [12, Chapter 12]), the error in both contributions to the potential is comparable if $r_c \propto \alpha^{-1/2}$ and $k_c \propto \alpha^{1/2}$. For fixed density the number of particle-pairs considered in the short-range kernel is $\mathcal{O}(N\alpha^{-3/2})$, and so is the total cost for this part of the calculation. Similarly the number of Fourier-modes in the long-range calculation is $N_k = \mathcal{O}(\alpha^{3/2} V) = \mathcal{O}(\alpha^{3/2} N)$, and hence the total time in the long-range part is $\mathcal{O}(N \cdot N_k) = \mathcal{O}(\alpha^{3/2} N^2)$. The optimal $\alpha$ which minimises the total calculation time is given by $\alpha \propto N^{-1/3}$, which results in a computational complexity of $\mathcal{O}(N^{3/2})$. This also implies that the number of Fourier-modes in the long-range calculation is $N_k = \mathcal{O}(N^{1/2})$.
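The minimisation behind these scalings can be made explicit. A sketch, with $a$ and $b$ machine-dependent constants absorbing the fixed density:

```latex
T(\alpha) \;=\; \underbrace{a\,N\,\alpha^{-3/2}}_{\text{short range}}
          \;+\; \underbrace{b\,N^2\,\alpha^{3/2}}_{\text{long range}},
\qquad
\frac{dT}{d\alpha} = 0
\;\;\Rightarrow\;\;
\alpha_{\mathrm{opt}} \propto N^{-1/3},
\qquad
T(\alpha_{\mathrm{opt}}) \propto N^{3/2}.
```

Substituting $\alpha_{\mathrm{opt}}$ into $N_k \propto \alpha^{3/2} N$ gives the quoted $N_k \propto N^{1/2}$.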

The evaluation of the long-range potential at position $\mathbf{r}$ can be written as

$$\phi^{(\mathrm{lr})}(\mathbf{r}) = \sum_{0 < |\mathbf{k}| \le k_c} A_{\mathbf{k}}\, \hat{\rho}(\mathbf{k})\, e^{i\mathbf{k}\cdot\mathbf{r}}, \qquad \hat{\rho}(\mathbf{k}) = \sum_{j=1}^{N} q_j\, e^{-i\mathbf{k}\cdot\mathbf{r}_j}, \qquad (3)$$

with $A_{\mathbf{k}} = e^{-|\mathbf{k}|^2/(4\alpha)} / (\epsilon_0 V |\mathbf{k}|^2)$ and $N_k$ the number of retained Fourier modes. The expression in Eq. (3) is essentially the product of an $N_k \times N$ matrix with a vector of length $N$, followed by a multiplication by an $N \times N_k$ matrix. Since the particles are distributed between the $p$ processors, but all Fourier modes are computed on each processor, the computational cost per processor is $\mathcal{O}(N N_k / p)$. Every processor only calculates the contribution of all locally stored particles to every Fourier mode. Combining the contributions of all particles to each of the $N_k$ Fourier modes therefore requires a global reduction of $N_k$ numbers, resulting in a total computational cost of $\mathcal{O}(N N_k / p + \kappa N_k)$, where the ratio $\kappa$ depends on the relative cost of computation and communication on a particular machine. We expect the code to scale well as long as $p \ll N/\kappa$.
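The two-stage matrix-vector structure can be sketched with a small self-contained NumPy example. This is a toy illustration, not the framework's implementation: the mode weight here is $e^{-|\mathbf{k}|^2/(4\alpha)}/|\mathbf{k}|^2$ with the physical prefactor $1/(\epsilon_0 V)$ dropped, and the sizes are deliberately tiny.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, alpha, kc = 8, 10.0, 0.3, 3.0   # toy sizes, illustrative only
r = rng.uniform(0, L, size=(N, 3))    # particle positions
q = rng.uniform(-1, 1, size=N)        # particle charges

# reciprocal vectors k = (2*pi/L) * m with 0 < |k| <= kc
mmax = int(kc * L / (2 * np.pi)) + 1
m = np.array([(i, j, k) for i in range(-mmax, mmax + 1)
                        for j in range(-mmax, mmax + 1)
                        for k in range(-mmax, mmax + 1)])
kvec = 2 * np.pi / L * m
k2 = np.einsum('ij,ij->i', kvec, kvec)
sel = (k2 > 0) & (k2 <= kc ** 2)
kvec, k2 = kvec[sel], k2[sel]

A = np.exp(-k2 / (4 * alpha)) / k2    # mode weights (prefactor omitted)

# stage 1: structure factor rho_hat(k) = sum_j q_j exp(-i k.r_j)
# (an N_k x N matrix applied to the charge vector)
phase = np.exp(-1j * kvec @ r.T)      # shape (N_k, N)
rho_hat = phase @ q

# stage 2: phi(r_i) = sum_k A_k rho_hat(k) exp(i k.r_i)
# (an N x N_k matrix applied to the weighted structure factor)
phi = (phase.conj().T @ (A * rho_hat)).real

# cross-check against the naive double sum over particles and modes
phi_direct = np.array([sum(A[ik] * q[j] * np.cos(kvec[ik] @ (r[i] - r[j]))
                           for ik in range(len(A)) for j in range(N))
                       for i in range(N)])
assert np.allclose(phi, phi_direct)
```

In a distributed setting, stage 1 is computed per rank over local particles and `rho_hat` is then obtained by a global reduction of $N_k$ numbers, exactly as described above.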

### 3.2 Implementation

#### 3.2.1 Short range potential

By construction the short-range potential rapidly converges to zero as the inter-particle distance increases. We truncate the short-range contribution to the electrostatic potential and force with a cutoff (see Section 3.1.3),

$$U_i^{(\mathrm{sr})} = \frac{q_i}{4\pi\epsilon_0} \sum_{0 < r_{ij} < r_c} q_j\, \frac{\operatorname{erfc}\!\left(\sqrt{\alpha}\, r_{ij}\right)}{r_{ij}}, \qquad\qquad (4)$$
$$\mathbf{F}_i^{(\mathrm{sr})} = \frac{q_i}{4\pi\epsilon_0} \sum_{0 < r_{ij} < r_c} q_j \left( \frac{\operatorname{erfc}\!\left(\sqrt{\alpha}\, r_{ij}\right)}{r_{ij}^2} + \frac{2\sqrt{\alpha}}{\sqrt{\pi}}\, \frac{e^{-\alpha r_{ij}^2}}{r_{ij}} \right) \frac{\mathbf{r}_{ij}}{r_{ij}},$$

where $\mathbf{r}_{ij} = \mathbf{r}_i - \mathbf{r}_j$ and $r_{ij} = |\mathbf{r}_{ij}|$.

The computational kernel for the local PairLoop is given in Listing 1. The position and charge data are stored per particle in ParticleDat data objects. Similarly, the resulting forces and total potential energy are stored as a ParticleDat and a GlobalArray object. Listing 2 shows the corresponding Python code for launching the pair loop. In the C kernel, capitalised variables such as `REAL_CUTOFF_SQ` are constants which are replaced by their numerical values at compile time using the `kernel_consts` dictionary.

```python
# Define kernel
kernel = Kernel('ewald_sr', kernel_code, kernel_consts)
# Define and execute pair loop
pair_loop = PairLoop(kernel=kernel, shell_cutoff=rc,
                     dat_dict={'r': Positions(access.READ),
                               'q': Charges(access.READ),
                               'F': Forces(access.INC),
                               'u': u_sr(access.INC)})
pair_loop.execute()
```

#### 3.2.2 Long range potential

The computation of the long-range potential is split into two ParticleLoops which correspond to the $N_k \times N$ and $N \times N_k$ matrix-vector products described in Section 3.1.3. The first iterates over all particles and for each particle computes its contribution to $\hat{\rho}(\mathbf{k})$ defined in Eq. (3) for all $|\mathbf{k}| \le k_c$. An outline of the computational kernel is shown in Algorithm 1 (for brevity we do not show the corresponding C and Python code, but outline the access descriptors). We order the entries in the GlobalArray such that loops over reciprocal vectors are vectorised by the compiler (as confirmed by the generated assembly code).

Note that the calculation of $\hat{\rho}$ requires global reductions since each $\mathbf{k}$-component receives contributions from all particles in the system. This, however, is automatically handled by the code generation system and requires no explicit coding for the user, who only writes the local kernel in line 2 of Algorithm 1. In our implementation we store copies of the entire vector $\hat{\rho}$ on each MPI task and do not attempt a parallel domain decomposition in reciprocal space. Since the number of reciprocal vectors grows only as $\mathcal{O}(N^{1/2})$, this does not lead to memory issues for the moderately sized systems for which the Particle-Ewald method is competitive.

Given the vector $\hat{\rho}$, the electrostatic energies and forces are calculated with a second ParticleLoop using Eq. (3) for each particle, as outlined in Algorithm 2.
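In the notation of Eq. (3), writing $A_{\mathbf{k}}$ for the real, $\mathbf{k}$-dependent mode weight, the force this second loop evaluates can be sketched as:

```latex
\mathbf{F}_i^{(\mathrm{lr})}
= -q_i\, \nabla_{\mathbf{r}_i}\, \phi^{(\mathrm{lr})}(\mathbf{r}_i)
= q_i \sum_{0 < |\mathbf{k}| \le k_c} A_{\mathbf{k}}\,\mathbf{k}\,
  \operatorname{Im}\!\left(\hat{\rho}(\mathbf{k})\,
  e^{i\mathbf{k}\cdot\mathbf{r}_i}\right),
```

where the imaginary part arises from $\nabla_{\mathbf{r}_i} e^{i\mathbf{k}\cdot\mathbf{r}_i} = i\mathbf{k}\, e^{i\mathbf{k}\cdot\mathbf{r}_i}$ and the fact that the physical potential is real.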

The self-energy (not shown here) is calculated once at the beginning of the simulation and the cost of this operation is amortised over the total runtime.

#### 3.2.3 Hybrid parallelisation with OpenMP

In MPI-only mode, the simulation domain is split into local sub-domains which are distributed across CPU cores as described in [3]. To extend and improve scalability, we also implemented an MPI+OpenMP hybrid mode. In this case each node (MPI-rank) handles one sub-domain and the particles in this local domain are distributed across OpenMP threads. As discussed in [3] we require that ParticleLoops only write to one particle and therefore no special approaches to handle write conflicts such as colouring are required in this case. To deal with potential write conflicts in GlobalArray operations, thread safe reduction code is generated outside the C-kernel written by the user.

## 4 Results

### 4.1 Computational Complexity

With the correct choice of $\alpha$ the Ewald method exhibits $\mathcal{O}(N^{3/2})$ computational cost. Figure 1 confirms this by plotting the time per iteration for a NaCl salt simulation against the particle count $N$ at a fixed density of 1 atom per $(2.5\,\text{Å})^3$. We include repulsive Lennard-Jones interactions to prevent the particle distribution from collapsing. However, for sizable particle counts the dominant computational cost is the calculation of electrostatic forces: for the largest particle counts, 87% of the time is spent computing Coulombic interactions. For all tests in this paper we fix the error tolerance and vary the parameters $\alpha$ and $r_c$ (which balance the work between the real- and Fourier-space parts) to minimise the runtime. For our framework the optimal pair $(\alpha, r_c)$ varies with the problem size $N$. For DL_POLY_4 [5] we choose a fixed cutoff value. All runs are carried out on the "Balena" cluster; one node consists of two Intel E5-2650v2 8-core CPUs.

Both implementations show better than expected scaling with $N$. For small particle numbers the SPME method used by DL_POLY_4 is in the same ballpark as our implementation. As expected from its superior asymptotic complexity, the SPME method outperforms our method for larger particle counts, where it is an order of magnitude faster.

### 4.2 Strong Scaling

To study the parallel scalability we fix the number of particles and the box size (at the same density as in Section 4.1) and increase the core count. The spatial domain cannot be decomposed into regions of side length less than the cutoff $r_c$, which prevents repeating the runs in Section 4.1 on more than one node. To address this, we fixed $r_c$ (for both our code and DL_POLY_4) at the price of using a non-optimal value of this parameter. This allows us to extend the scalability of the MPI-only implementation and DL_POLY_4 to 64 cores, and we find that it has no negative impact on the runtime on one CPU. To scale beyond this limit we use the hybrid MPI+OpenMP scheme with one MPI process per CPU socket to run on up to 256 cores. To quantify any potential performance loss due to the non-optimal value of $r_c$, we also include the relevant data point with the optimal parameter choice from Fig. 1.

Both the MPI and MPI+OpenMP implementations exhibit decent scaling up to 16 nodes (256 cores). DL_POLY_4 is faster overall on smaller core counts but does not scale to larger core counts. The MPI+OpenMP execution of Algorithm 2 on one node achieved an average of 34% of peak floating-point vector performance. The computationally most expensive component is the loop over all Fourier modes $\mathbf{k}$. This has been vectorised over the four quadrants of reciprocal space (the four sign combinations of two components of $\mathbf{k}$), and we confirmed that the Intel compiler indeed generates packed vector instructions.

## 5 Conclusion

We demonstrated how the abstraction and Python code-generation system in [3] can be used to implement long-range electrostatic interactions in Molecular Dynamics simulations. Our Particle-Ewald implementation achieves good absolute performance and parallel scalability. Going beyond [3], we now also support a hybrid MPI+OpenMP backend which is used to extend scalability in the strong scaling limit. To include long-range forces for significantly larger systems we will investigate the implementation of SPME [8] algorithms or the Fast Multipole Method (FMM) [10]. This will require adding new data structures such as hierarchical meshes for FMM, or linking to existing Fast Fourier Transform libraries.

### Acknowledgements

The PhD project of William Saunders is funded by an EPSRC studentship. This research made use of the Balena HPC service at the University of Bath.

## References

- [1] C. Bertolli et al. in Euro-Par 2011, pages 191–200. Springer, Berlin, Heidelberg, 2012.
- [2] F. Rathgeber et al. In HPC, Networking Storage and Analysis, SC Companion:, pages 1116–1123, Los Alamitos, CA, USA, 2012. IEEE Computer Society.
- [3] W. R. Saunders, J. Grant, and E. H. Mueller. A Domain Specific Language for Performance Portable Molecular Dynamics Algorithms. submitted to Computer Physics Communications, preprint: arxiv:1704.03329, 2017.
- [4] S. Plimpton. Journal of Computational Physics, 117(1):1 – 19, 1995.
- [5] I. T. Todorov et al. J. Mater. Chem., 16:1911–1918, 2006.
- [6] D. C. Rapaport. The Art of Molecular Dynamics Simulation. Cambridge University Press, New York, 2nd edition, 2004.
- [7] P. P. Ewald. Annalen der Physik, 369(3):253–287, 1921.
- [8] T. Darden, D. York, and L. Pedersen. J. of Chem. Phys., 98(12):10089–10092, 1993.
- [9] U. Essmann et al. J. of Chem. Phys., 103(19):8577–8593, 1995.
- [10] L. Greengard and V. Rokhlin. Journal of Comp. Phys., 73(2):325–348, 1987.
- [11] J. Kolafa and J. W. Perram. Molecular Simulation, 9(5):351–368, 1992.
- [12] D. Frenkel and B. Smit. Understanding molecular simulation, volume 1. Academic Press, 2001.
