Hello,
I am trying to run my pvpython script in parallel because the input file is huge (~3 GB). For this I followed Pat's advice, where he runs the script with pvbatch:
"mpirun -np 1 /path/to/pvbatch /path/to/script.py"
I am running it on a macOS laptop with "mpirun -np 4 /path/to/pvbatch /path/to/script.py"
and I am getting:
( 9.626s) [pvbatch.1 ] vtkDelaunay3D.cxx:452 ERR| vtkDelaunay3D (0x2f21d60): <<Cannot triangulate; no input points
( 9.627s) [pvbatch.1 ] vtkPVContourFilter.cxx:183 INFO| Contour array is null.
( 9.622s) [pvbatch.2 ] vtkDelaunay3D.cxx:452 ERR| vtkDelaunay3D (0x2560e30): <<Cannot triangulate; no input points
( 9.623s) [pvbatch.3 ] vtkDelaunay3D.cxx:452 ERR| vtkDelaunay3D (0x1b5bce0): <<Cannot triangulate; no input points
( 9.622s) [pvbatch.2 ] vtkPVContourFilter.cxx:183 INFO| Contour array is null.
( 9.623s) [pvbatch.3 ] vtkPVContourFilter.cxx:183 INFO| Contour array is null.
It seems the computation is not actually parallelized: the master rank keeps all the points and never distributes them, so the other ranks receive no input.
ParaView version: 5.8.0
I am using the *.pvd format, which points to *.pvtu files, which in turn are headers for *.vtu pieces. Is it possible to run the script in parallel if it reads one huge single file, or should the input be split into several files to make parallel reading possible?
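For reference, the layout I mean looks roughly like this (file names, timestep values, and piece count are placeholders, not my actual data):

```xml
<!-- data.pvd: time-series collection pointing at one .pvtu per timestep -->
<VTKFile type="Collection" version="0.1">
  <Collection>
    <DataSet timestep="0" file="step0000.pvtu"/>
    <DataSet timestep="1" file="step0001.pvtu"/>
  </Collection>
</VTKFile>

<!-- step0000.pvtu: parallel header listing the serial .vtu pieces -->
<VTKFile type="PUnstructuredGrid" version="0.1">
  <PUnstructuredGrid GhostLevel="0">
    <PPoints>
      <PDataArray type="Float64" NumberOfComponents="3"/>
    </PPoints>
    <Piece Source="step0000_0.vtu"/>
    <Piece Source="step0000_1.vtu"/>
  </PUnstructuredGrid>
</VTKFile>
```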
Thank you in advance)
Best regards,
Evgenii