MPI support

PySM 3 can be executed in parallel over multiple nodes of a supercomputer with MPI.

The requirements for running with MPI are mpi4py and, only for distributed smoothing, libsharp.

The input maps are read by the first process only, to avoid overloading the filesystem, and are then distributed across all processes without overlap.
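
All the snippets below assume the following setup (a minimal sketch: in recent releases the package is imported as pysm3, here aliased to pysm to match the code below, and nside is an arbitrary example resolution):

import numpy as np
import astropy.units as u
import pysm3 as pysm
from mpi4py import MPI

nside = 128  # example HEALPix resolution, the same on every process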

Distribution of maps across MPI processes

The pysm.MapDistribution object handles the metadata describing how the data are distributed. There are three ways to distribute the pixels:

  • the most common, which is required to support smoothing over MPI with libsharp, is distribution by ring: HEALPix rings (stripes of pixels at constant latitude) are assigned symmetrically, i.e. the process that takes the first ring close to the North Pole also takes the corresponding ring close to the South Pole. This is the default distribution when pixel_indices is None, an MPI communicator is provided, and the libsharp package is importable:

map_dist = pysm.MapDistribution(
    pixel_indices=None, nside=nside, mpi_comm=MPI.COMM_WORLD
)
  • if libsharp is not available, smoothing the maps is not possible, but generating models and integrating over bandpasses still work. The configuration of MapDistribution is the same as above; when libsharp is not importable, the pixels are instead distributed uniformly across the processes.

  • the final case is a custom distribution, where each MPI process specifies an array of non-overlapping local pixel indices (see the sketch after this list). This is also useful when running serially, if you only want to simulate a small patch of sky:

map_dist = pysm.MapDistribution(
    pixel_indices=local_pixels_indices, nside=nside, mpi_comm=MPI.COMM_WORLD
)
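
As an illustration of the custom case, here is one possible way to build non-overlapping local pixel indices: select a disc around a sky position with healpy and split it evenly across processes (a sketch; the patch center, radius, and variable names are arbitrary choices, not PySM API):

import healpy as hp

comm = MPI.COMM_WORLD
# all pixels within 10 degrees of the Galactic center (theta=90 deg, phi=0)
patch = hp.query_disc(nside, hp.ang2vec(np.radians(90), 0), np.radians(10))
# give each process a distinct, non-overlapping chunk of the patch
local_pixels_indices = np.array_split(patch, comm.size)[comm.rank]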

Creation of the Sky object

Next, pass the MapDistribution object to Sky so that all the component models are distributed accordingly:

sky = pysm.Sky(nside=nside, preset_strings=["d1", "s1", "f1", "a1"], map_dist=map_dist)

When the emission is requested, each process independently computes the emission only on its own portion of the sky.

m = sky.get_emission(
    freq=np.arange(50, 55) * u.GHz, weights=np.array([0.1, 0.3, 0.5, 0.3, 0.1])
)[0]
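
Each process can verify that it only holds its local portion of the sky, for example (a sketch):

print(f"rank {MPI.COMM_WORLD.rank}: local emission shape {m.shape}, unit {m.unit}")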

Distributed smoothing

When smoothing is applied, libsharp is used to efficiently perform a distributed spherical harmonic transform.

m_smoothed = pysm.apply_smoothing_and_coord_transform(
    m, fwhm=1 * u.deg, map_dist=map_dist
)

After smoothing, each process holds the smoothed version of its local pixels; this can then be used, for example, to generate timelines, or collected to a single process for writing (not currently implemented in PySM, please open an issue if you need it).
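
Until then, a user-side gather is straightforward with mpi4py. The sketch below assumes each process knows the indices of its local pixels, here called local_pixels_indices as in the custom-distribution example above (for the ring distribution these would need to be recovered from the distribution metadata); it is not PySM functionality:

import healpy as hp

comm = MPI.COMM_WORLD
all_pixels = comm.gather(local_pixels_indices, root=0)
all_values = comm.gather(m_smoothed, root=0)
if comm.rank == 0:
    # assemble the full map on rank 0, marking unobserved pixels as UNSEEN
    full_map = np.full(hp.nside2npix(nside), hp.UNSEEN)
    for pix, vals in zip(all_pixels, all_values):
        full_map[pix] = vals.value  # strip the astropy unit for healpy
    hp.write_map("smoothed_I_map.fits", full_map)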

Example

See `pysm3_mpi.py <https://github.com/galsci/pysm/blob/master/mpi_examples/pysm3_mpi.py>`__ in the repository for a complete example.

Execute it with (remove the leading # to actually run it):

#!mpirun -np 3 python pysm3_mpi.py