I spun up a small two-node MPI cluster on AWS to get familiar with ParaView in distributed mode.
Each node has 2 cores. From the head node, I ran the following command:
/shared/ParaView-5.10.1-osmesa-MPI-Linux-Python3.9-x86_64/bin/mpiexec -hosts queue0-st-queue0-t2medium-2,queue0-st-queue0-t2medium-1 -np 2 /shared/ParaView-5.10.1-osmesa-MPI-Linux-Python3.9-x86_64/bin/pvserver
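For context, my understanding (an assumption on my part, based on MPICH-style Hydra defaults) is that mpiexec assigns ranks to the listed hosts round-robin, so "-np 2" across two hosts would land exactly one pvserver rank on each node. A small sketch of that placement logic, with the hostnames taken from my command above:

```python
# Sketch (assumption): MPICH-style mpiexec places ranks on the -hosts list
# round-robin, so np ranks are dealt out one per host in order.
def rank_placement(hosts, np):
    """Return the host each MPI rank would land on under round-robin placement."""
    return {rank: hosts[rank % len(hosts)] for rank in range(np)}

hosts = ["queue0-st-queue0-t2medium-2", "queue0-st-queue0-t2medium-1"]

# With -np 2: one rank per node, i.e. one core busy per node.
print(rank_placement(hosts, 2))

# With -np 4: two ranks per node, which would exercise both cores.
print(rank_placement(hosts, 4))
```

If that placement assumption is right, my command only ever starts one pvserver process per node.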
On the head node, I started ParaView, connected to the pvserver, and loaded the head.vti dataset to exercise the MPI communication.
I enabled the HUD to confirm that it was indeed performing remote/distributed rendering.
Running top on each of the compute nodes, I noticed that only one CPU per node is being utilized.
Is this to be expected because of the way I started pvserver, or is it due to my using VTI files for testing?
Kind regards