Run pvbatch in parallel using MPI, help!

Hello to all,

I am currently doing external aerodynamics analysis for my thesis with OpenFOAM. For post-processing I use ParaView 5.4.1.

I figured out how to do all the work in the GUI with parallel computing, e.g. with `mpiexec -np 40 pvserver --mpi --use-offscreen-rendering`. I connected ParaView to the server, and the time to load data and to switch between different field values is now reduced by at least a factor of 50! It really is ultra fast now. This works pretty well so far.

I want to use a Python script with pvbatch that creates slices in the x, y, and z directions and saves views of different fields, with and without the mesh visible, as .png screenshots in my OpenFOAM case folder.
Unfortunately this process is super slow, because it uses only one of my 44 available cores to create the slices and screenshots.
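
Roughly, the script does something like this (a simplified sketch; the file name, field name, and slice positions are placeholders, not my real case):

```python
# Simplified sketch of the script (path, field, and positions are placeholders).
from paraview.simple import *

# Load the decomposed OpenFOAM case.
foam = OpenFOAMReader(FileName='case.foam')
foam.CaseType = 'Decomposed Case'

view = GetActiveViewOrCreate('RenderView')

# One slice plane, moved along x; one screenshot per position.
sl = Slice(Input=foam)
sl.SliceType = 'Plane'
sl.SliceType.Normal = [1, 0, 0]

display = Show(sl, view)
ColorBy(display, ('POINTS', 'U'))

for i in range(200):
    sl.SliceType.Origin = [i * 0.02, 0, 0]
    Render(view)
    SaveScreenshot('slice_x_%03d.png' % i, view)
```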

So far I tried to run `mpiexec -np 40 pvbatch --mpi --use-offscreen-rendering`. The script starts and creates all the slices just like the previous single-core run, but this time all 40 cores are at 100% load. The terminal output shows me something like:

```
############ ABORT #############
############ ABORT #############
(39 times)
screenshot of slice #152 done
… repeating
```

So the command changed something, but there is no performance improvement; the process still takes a lot of time.
I don't want the same screenshot 40 times, I want 40 cores working together on one screenshot and then the next.

What could cause this problem? Did I misunderstand how pvbatch is used? Do I need to connect to a pvserver first and then run pvbatch?
Can you help me improve the performance here? It would help me a lot with my thesis.

Thanks for your help,

Kind Regards,

Eric

Welcome to ParaView!

Is your dataset reconstructed or decomposed?

Hi Mathieu!
Thank you for answering.

My dataset is decomposed, as the calculation is also done in parallel on 40 cores.

Can you check that the dataset is distributed when you run pvserver in parallel and connect to it with ParaView, by showing the vtkProcessId array?

How exactly do I do this? My experience with ParaView is limited, I'm sorry.

So I connected to the pvserver with -np 40, loaded my case, applied the Process Id Scalars filter, clicked Apply, then did an Extract Block on the internal mesh and used the vtkProcessId array for visualization. It showed me the domain colored into 40 areas, as you can see in the attached picture (DrivAer).
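
For reference, the scripted equivalent of this check should look roughly like this (the file name is a placeholder, and the array may be named ProcessId rather than vtkProcessId depending on the version):

```python
# Scripted version of the process-distribution check (path is a placeholder).
from paraview.simple import *

foam = OpenFOAMReader(FileName='case.foam')
foam.CaseType = 'Decomposed Case'

# Process Id Scalars tags every point with the MPI rank that owns it.
pid = ProcessIdScalars(Input=foam)

view = GetActiveViewOrCreate('RenderView')
display = Show(pid, view)
ColorBy(display, ('POINTS', 'ProcessId'))
display.RescaleTransferFunctionToDataRange(True)
Render(view)
SaveScreenshot('process_ids.png', view)
```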

I hope that this is the information you are looking for.
thx,
Eric

Your dataset is distributed, and all cores should work when you apply a filter, either in pvbatch or in ParaView with a parallel pvserver.

Can you try to apply the slice in ParaView connected to the parallel pvserver? Is the timing similar to what you saw with pvbatch?

So now I tried to create slices of U magnitude with the mesh visible (Surface With Edges). The new slice shows up instantly when I hit the Apply button. But I noticed something very strange: this instant slice update only works for slices normal to the y and z axes. When I try to create a slice normal to the x axis, or update its position, the slice updates slowly and I get the following error message in the Output Messages window:

```
ERROR: In /home/buildslave/dashboards/buildbot/paraview-pvbinsdash-linux-shared-release_superbuild/build/superbuild/paraview/src/VTK/Common/DataModel/vtkDataObjectTree.cxx, line 377
vtkMultiBlockDataSet (0x11aee2a0): Structure does not match. You must use CopyStructure before calling this method.
```

The update is not as slow as with pvbatch, only slightly slower than the y and z slices.
Why is that?

thx,
Eric

This looks like an unrelated issue. You will need to share your dataset if you want us to take a look at it.

Regarding the pvbatch slowness, it looks like your script must have some issues; you will need to share it if you want more help with that.

So I tried to recreate this error with other cases, but I was not able to. My first thought was that a case computed with 39 procs causes this issue when using pvserver with 40 procs, but the error is not showing up again either way, so maybe it was just a one-time thing.

Regarding pvbatch, I can send you my script directly or via mail if you want.

Please send it to mathieu.westphal@kitware.com if you can't share it publicly.

Your script is much more complex than I expected. I would try to reproduce the issue with a simple pvbatch script first.
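
Something minimal along these lines would be a good starting point (a sketch; adjust the file name and slice settings to your case):

```python
# Minimal parallel pvbatch test: one slice, one screenshot.
# Run with: mpiexec -np 40 pvbatch --mpi --use-offscreen-rendering test_slice.py
from paraview.simple import *

foam = OpenFOAMReader(FileName='case.foam')  # placeholder path
foam.CaseType = 'Decomposed Case'

sl = Slice(Input=foam)
sl.SliceType = 'Plane'
sl.SliceType.Normal = [0, 1, 0]

view = GetActiveViewOrCreate('RenderView')
Show(sl, view)
Render(view)
SaveScreenshot('test_slice.png', view)
```

If this minimal version scales properly, the problem is somewhere in your full script rather than in pvbatch itself.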