Certain readers, such as the OBJ reader, are not parallel-aware. In such cases, all of the data is generally read on the root rank. You can then apply a filter such as Redistribute DataSet (ParaView 5.8 and newer) or D3 (for earlier versions) to redistribute the dataset across all ranks.
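The workflow above can be sketched in a pvpython script. This is a minimal, hedged example: the file name `model.obj` is a placeholder, `pick_redistribution_filter` is a hypothetical helper (not part of the ParaView API), and the exact ParaView version in which the `RedistributeDataSet` proxy became available may differ from what is assumed here.

```python
def pick_redistribution_filter(version):
    """Return the name of the redistribution filter to use.

    Assumes Redistribute DataSet is available from ParaView 5.8 on,
    with D3 as the fallback for earlier versions (per the advice above).
    """
    if version >= (5, 8):
        return "RedistributeDataSet"
    return "D3"

try:
    from paraview import simple

    # The reader is not parallel-aware, so the whole file lands on rank 0.
    reader = simple.OpenDataFile("model.obj")

    # Pick the filter by version and instantiate it by proxy name.
    name = pick_redistribution_filter((5, 8))
    redistributed = getattr(simple, name)(Input=reader)
    simple.Show(redistributed)
except ImportError:
    # Not running inside pvpython/pvbatch; the helper above still works.
    pass
```

Run under `mpiexec pvbatch` (or in the Python shell of a client connected to a parallel `pvserver`) so there are multiple ranks to redistribute across.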
Thank you! My problem is partly solved. I can apply D3 to redistribute the dataset like this,
but when I use Redistribute DataSet (my ParaView version is 5.8.0), I get a "connection refused" error like this,
and I have no idea what is causing it.
I am not sure what that is. It looks like it's coming from OpenMPI code. Is it causing any issues? Maybe try updating to a newer version of OpenMPI, if possible. You can simply continue to use D3 in 5.8.
I had a state file (*.py) prepared using non-MPI ParaView; when I tried to run it in an MPI ParaView build, it crashed. Is this the reason?
In the attachment there are two state files. MPI_Test_01.py is my older state file, created in a non-MPI version of ParaView. To use it in the MPI version, I modified the code from MPI_Test_01.py into MPI_Test_02.py. You will see that on line 69 I have added the redistributeDataSet filter.
When I use MPI_Test_02.py, the ParaView GUI crashes without giving any error message.