For one of our users I’m trying to get a Python script working that reads an OpenFOAM dataset (using the empty .foam file trick), adds a few Calculator filters and then saves the output to a parallel Xdmf dataset for later parallel visualization.
The script was generated from a trace in the GUI; the load/save statements look like this:
testfoam = OpenFOAMReader(registrationName='test.foam', FileName='/home/user/Test_paraview/Fresh-Salt/test.foam')
testfoam.MeshRegions = ['internalMesh']
testfoam.CellArrays = ['U', 'epsilon', 'k', 'nut', 'p']
<set up pipeline>
SaveData('/home/paulm/doh.xmf', proxy=calculator3,
         PointDataArrays=['U', 'coords_X', 'coords_Y', 'coords_Z', 'epsilon', 'k', 'nut', 'p'],
         FieldDataArrays=['CasePath'])
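(For what it’s worth, my understanding is that SaveData picks the writer from the file extension, so I assume I could test whether the problem is specific to the Xdmf writer just by swapping the extension, e.g. to the VTK XML multiblock format; the .vtm call below is only a sketch of that idea:)

# Same call, different extension, to check whether the issue is Xdmf-specific
# (assumption: SaveData selects the writer from the extension; .vtm = VTK XML multiblock)
SaveData('/home/paulm/doh.vtm', proxy=calculator3,
         PointDataArrays=['U', 'coords_X', 'coords_Y', 'coords_Z', 'epsilon', 'k', 'nut', 'p'],
         FieldDataArrays=['CasePath'])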
When I load this into the ParaView GUI as a state file, or run it directly under pvbatch, it executes fine and saves the Xdmf file.
So the question is: can such a script be used unmodified to write a parallel Xdmf file (or any other file format), simply by allocating an MPI job and running it under srun pvbatch? When I do that, the job gets stuck and never finishes, even though the same pipeline takes only a few seconds in the GUI with the test datasets I’m using. Is there any documentation specifically on writing in parallel from Python? And is Xdmf special in this respect?
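To rule out MPI itself, I was going to add a sanity check like this at the top of the script to confirm that pvbatch actually sees all the ranks (a sketch based on my reading of the vtkProcessModule API):

# Report the MPI rank/size that ParaView sees (sketch, vtkProcessModule API)
from paraview.servermanager import vtkProcessModule
pm = vtkProcessModule.GetProcessModule()
print('rank %d of %d' % (pm.GetPartitionId(), pm.GetNumberOfLocalPartitions()))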
Also, I’m using a self-compiled ParaView 5.11.2 here, so the problem might also be with my MPI setup, build options, etc.
Thanks for any guidance!