I am coming back to an older issue regarding the MPI error:
"vtkOutputWindow.cxx:86 WARN| Generic Warning: In /home/paraview_5.8.1/ParaView-v5.8.1/VTK/Parallel/MPI/vtkMPICommunicator.cxx, line 220
This operation not yet supported for more than 2147483647 objects "
We are using a multiblock structured mesh with about 1.3e9 cells. We have implemented a parallel HDF5 reader for ParaView based on vtkMultiBlockDataSet, and everything runs smoothly for smaller meshes in client-server sessions. However, similar to the issue reported in 2017, with this larger mesh the file can be opened and the data even loads, so that the outlines are displayed. But when switching to the "surface" representation, I get the error above. This is independent of the number of cores I use on the server. We use ParaView 5.8.1.
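For scale, here is a rough back-of-the-envelope count of why a mesh of this size can exceed the 2147483647-element limit of a single VTK MPI transfer. This is only a sketch: the hexahedral cell type is an assumption, not something stated above.

```python
# Rough scale check (hexahedral cells are an assumption, not stated in the post).
# The warning comes from vtkMPICommunicator, which describes message lengths
# with a 32-bit int, so no single MPI transfer may exceed 2**31 - 1 elements.
INT_MAX = 2**31 - 1        # 2147483647, the number quoted in the warning

n_cells = 1.3e9            # reported mesh size
ids_per_cell = 8           # hexahedron connectivity (assumption)

connectivity_entries = n_cells * ids_per_cell
print(f"connectivity ids: {connectivity_entries:.2e}")   # ~1.0e10
print(f"limit           : {INT_MAX:.2e}")                # ~2.1e9
# Any filter or delivery step that tries to ship such an array in one MPI
# call will hit the "more than 2147483647 objects" code path.
```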
Has anyone experienced the same and might have a remedy?
I am getting a similar issue when trying to read a VTM file written by pvbatch (on 1008 cores) into ParaView 5.9.1. It reads fine into a serial instance of ParaView, but when AutoMPI is turned on it hangs (on Windows the pvservers crash, on Linux it just keeps outputting error messages). The dataset is 2.36 GB of data across 3,018 VTU objects.
( 74.052s) [paraview ] vtkOutputWindow.cxx:86 WARN| Generic Warning: In /builds/gitlab-kitware-sciviz-ci/build/superbuild/paraview/src/VTK/Parallel/MPI/vtkMPICommunicator.cxx, line 206
This operation not yet supported for more than 2147483647 objects
( 74.066s) [paraview ] vtkOutputWindow.cxx:76 ERR| ERROR: In /builds/gitlab-kitware-sciviz-ci/build/superbuild/paraview/src/VTK/IO/Legacy/vtkDataReader.cxx, line 544
vtkGenericDataObjectReader (0x172c32a0): Unrecognized file type: for file: (Null FileName)
The latest release doesn't even have AutoMPI enabled (and it may be deprecated?). Also, in 5.9.1 and 5.10, when you try to set up a localhost client-server connection, you can't enter a command into the server setup dialog without ParaView instantly crashing… on both Windows and Linux. I had to write an external batch/bash script to start a server and automatically connect to it; the files then read in successfully.
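For anyone who needs the same workaround, here is a minimal sketch of connecting to a manually started pvserver from pvpython or the ParaView Python shell. The hostname, port, and file name are placeholders, not taken from this post.

```python
# Minimal sketch of the manual client-server workaround described above.
# Start the server separately first, e.g.:
#   mpiexec -n 8 pvserver --server-port=11111
# Hostname, port, and the file name below are placeholders.
from paraview.simple import Connect, OpenDataFile

Connect("localhost", 11111)            # attach the client to the running pvserver
reader = OpenDataFile("output.vtm")    # hypothetical multiblock dataset
```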
Also, in 5.9.1 and 5.10, when you try to set up a localhost client-server connection, you can't enter a command into the server setup dialog without ParaView instantly crashing… on both Windows and Linux.
This issue has already been fixed in master and will not be present in ParaView 5.10.
Good news about the server setup dialog being fixed, thank you. As to MPI not being required, I think that is just not true: creating contours, slices, and stream tracers all speed up with MPI. None of those calculations are threaded, as far as I can tell. The only obviously threaded process is rendering, using OSPRay.
Angus,
MPI is not required, in the sense that the filters you mentioned still work without it. If you want the performance, you can always build and connect to a remote server and enjoy the improvements that brings. With regard to the builtin server (i.e., it just works out of the box), the solution is threading. This is on the plate and will be done as time and resources are available.
Alan
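As a hedged illustration of the threading Alan refers to: recent VTK builds expose their shared-memory parallelism through vtkSMPTools, which can be inspected from the ParaView Python shell. Which backends are available depends on how the binaries were built, so treat this as a sketch rather than a guaranteed API of every release.

```python
# Sketch only: inspect VTK's shared-memory parallelism (SMP) settings in a
# builtin session. Backend names/availability depend on the build (assumption).
from vtkmodules.vtkCommonCore import vtkSMPTools

print(vtkSMPTools.GetBackend())                    # e.g. "Sequential", "STDThread", "TBB"
print(vtkSMPTools.GetEstimatedNumberOfThreads())   # threads SMP-aware filters may use
vtkSMPTools.Initialize(4)                          # optionally cap the thread count
```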
I want to echo my colleague @vimalraj's questions, as we are facing the same error. We would greatly appreciate your thoughts on the matter, and I want to add some more details that may help.
Many thanks in advance!
ParaView installation
I installed ParaView using Spack; the installed version is 5.11.0: spack install paraview +qt +python use_vtkm=on +mpi ^openmpi@4.1.5
ParaView execution
Our HPC cluster has the usual network of login nodes and compute nodes.
The model loads fine when ParaView is run on a login node without MPI.
We want to run ParaView in a client-server mode, so we did the following:
Started pvserver on a remote compute node: mpirun pvserver --mpi --server-port=11111
Connected to the server from a client by loading a server from a .pvsc file.
Error log excerpt
I cannot share the exact model that was used, but as you may have already noticed in @vimalraj's error message above, it is an "a.foam" ParaFoam file.
Loading paraview-5.11.1-gcc-11.3.0-rztrj5t
Loading requirement: <loads dependencies>
Waiting for client...
Connection URL: cs://lvpn1071:11111
Accepting connection(s): lvpn1071:11111
Client connected.
( 301.212s) [pvserver.0 ] vtkMPICommunicator.cxx:204 WARN| This operation not yet supported for more than 2147483647 objects
( 303.685s) [pvserver.0 ] vtkDataReader.cxx:566 ERR| vtkGenericDataObjectReader (0x110f346b0): Unrecognized file type: for file: (Null FileName)
( 306.166s) [pvserver.0 ] vtkDataReader.cxx:566 ERR| vtkGenericDataObjectReader (0x110f346b0): Unrecognized file type: for file: (Null FileName)
( 308.637s) [pvserver.0 ] vtkDataReader.cxx:566 ERR| vtkGenericDataObjectReader (0x110f346b0): Unrecognized file type: for file: (Null FileName)
( 308.637s) [pvserver.0 ]vtkGenericDataObjectRea:356 ERR| vtkGenericDataObjectReader (0x110f346b0): Could not read file
( 308.638s) [pvserver.0 ] vtkExecutive.cxx:740 ERR| vtkCompositeDataPipeline (0x10f6e8c50): Algorithm vtkGenericDataObjectReader (0x110f346b0) returned failure for request: vtkInformation (0x110f286b0)
Debug: Off
Modified Time: 14876267
Reference Count: 1
Registered Events: (none)
Request: REQUEST_DATA
FROM_OUTPUT_PORT: 0
ALGORITHM_AFTER_FORWARD: 1
FORWARD_DIRECTION: 0
.
.
.
( 308.644s) [pvserver.0 ] vtkExecutive.cxx:740 ERR| vtkCompositeDataPipeline (0x10f71c930): Algorithm vtkAppendCompositeDataLeaves (0x110f28520) returned failure for request: vtkInformation (0x10f71c660)
Debug: Off
Modified Time: 14876512
Reference Count: 1
Registered Events: (none)
Request: REQUEST_DATA_OBJECT
FROM_OUTPUT_PORT: 0
ALGORITHM_AFTER_FORWARD: 1
FORWARD_DIRECTION: 0
.
.
.
Questions
We see that we are hitting the vtkMPICommunicator limit on the 32-bit integer message size; for previous users this was circumvented by upgrading the Spack version.
What alternative approaches could fix this issue?
If this is a limit imposed by the MPI communicator, can you suggest a workaround?
Attachments
The Spack dependency list of the ParaView installation: paraview_spack_info (5.5 KB)
The full error log (I ran the workload on two processors to simplify the trace): error log.txt (9.5 KB)
Could you try with ParaView 5.12.0-RC3? We added support for the longer messages available with MPI 4.0, and the binaries from www.paraview.org/download are built with MPICH 4.1.2.
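As a quick sanity check before relying on this, here is a hedged sketch for verifying which MPI a given ParaView build uses. It assumes mpi4py is available in that build's Python, as it is in the official binaries.

```python
# Sketch: run inside pvpython of the build you want to test (assumes mpi4py
# is bundled, as in the official binaries). MPI >= 4.0 is needed for the
# large-count code path that lifts the 2147483647-object limit mentioned above.
from mpi4py import MPI

print(MPI.Get_library_version())   # e.g. "MPICH Version: 4.1.2 ..."
print(MPI.Get_version())           # MPI standard version, e.g. (4, 0)
```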