pvbatch in Slurm

Hi,

I am trying to run pvbatch in parallel on a Slurm supercomputer. My submission script is:

#!/bin/bash

#SBATCH --account gen6879@skylake
#SBATCH --partition skylake
#SBATCH -n 2
#SBATCH --mail-type=BEGIN,FAIL,END
#SBATCH -J heat_map-post
#SBATCH -t 0-00:10:00

echo `which pvbatch`

srun -o %j-%t.cout -e %j-%t.cerr pvbatch toy.py

The output of which pvbatch is as expected (it points to my installation of ParaView 5.11.1, the default binaries downloaded from ParaView’s website). toy.py is a very simple MPI Python script:

from mpi4py import MPI

# Report this process's rank and the total number of ranks in MPI_COMM_WORLD
MPI_rank = MPI.COMM_WORLD.Get_rank()
MPI_size = MPI.COMM_WORLD.Get_size()

print("MPI_rank =", MPI_rank)
print("MPI_size =", MPI_size)

At the moment, both ranks are printing exactly the same thing:

MPI_rank = 0
MPI_size = 1

In other words, pvbatch is not recognizing that it is running in a parallel environment with two ranks. Could you help me figure out what I am doing wrong?

Thanks a lot,

Hi @Rigel

You need to run pvbatch itself with MPI.

Thanks, Mathieu, for your reply. How can we do that? Adding --mpi after pvbatch makes no difference.

mpirun -np 2 pvbatch
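
For example, with the toy.py above, something like the following should work (assuming mpirun is the MPI launcher available in your environment):

mpirun -np 2 pvbatch toy.py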

Thanks, but that’s the issue: I need to use srun to run the script properly on the supercomputer. How can we make pvbatch work in parallel with srun? For comparison, the corresponding command srun -o %j-%t.cout -e %j-%t.cerr python3 toy.py works as expected.

Did you build ParaView against the MPI implementation provided by srun?

No, I am just using the pre-compiled binaries downloaded from ParaView’s website, stored in my home directory on the cluster.

Then you either need to use MPICH compatibility if available on your system or build ParaView yourself.

pvbatch --system-mpi
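
For example, with an MPICH-ABI-compatible MPI available on the cluster, the srun line would become something like:

srun -o %j-%t.cout -e %j-%t.cerr pvbatch --system-mpi toy.py

With two ranks, toy.py should then report MPI_size = 2 and distinct MPI_rank values (0 and 1).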

Ok. Thanks. It is sad that we cannot use pvbatch in parallel with the pre-compiled binaries on clusters…

I don’t think that’s the right takeaway here. Just to clarify:

First, determine which MPI implementations are provided by your HPC system.

If any MPI implementation listed here is available on your system, i.e., one compatible with the MPICH ABI, then you’re golden. Use module load ... (or whatever is recommended on your system) to load the appropriate MPI implementation, and then make sure you pass the --system-mpi command-line argument to pvbatch.

If not, you’ll need to build ParaView from source using the MPI implementation available.
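
Putting it together, here is a minimal sketch of the revised submission script, assuming your system provides an MPICH-ABI-compatible MPI module (the module name below is just a placeholder; use whatever your cluster documents):

#!/bin/bash

#SBATCH --account gen6879@skylake
#SBATCH --partition skylake
#SBATCH -n 2
#SBATCH --mail-type=BEGIN,FAIL,END
#SBATCH -J heat_map-post
#SBATCH -t 0-00:10:00

# Placeholder: load an MPICH-ABI-compatible MPI implementation
module load mpich

# --system-mpi tells the pre-compiled pvbatch to use the loaded system MPI
# instead of the MPI bundled with the ParaView binaries
srun -o %j-%t.cout -e %j-%t.cerr pvbatch --system-mpi toy.py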
