MPI Integer Limit Error

Hi everyone,

I'm coming back to an older issue regarding the MPI error

"vtkOutputWindow.cxx:86 WARN| Generic Warning: In /home/paraview_5.8.1/ParaView-v5.8.1/VTK/Parallel/MPI/vtkMPICommunicator.cxx, line 220
This operation not yet supported for more than 2147483647 objects "

that was discussed by someone else back in 2017 (https://public.kitware.com/pipermail/paraview/2017-June/040405.html).

We are using a multiblock structured mesh with about 1.3e9 cells. We have implemented a parallel HDF5 reader for ParaView based on vtkMultiBlockDataSet, and everything runs smoothly for smaller meshes in client-server sessions. However, similar to the 2017 issue, with this larger mesh the file can be opened and the data even loads, such that the outlines are displayed, but when switching to the ‘Surface’ representation I get the error above. This is independent of the number of cores I use on the server. We use ParaView 5.8.1.

Has anyone experienced the same and might have a remedy?

Best wishes
Marian

Please try with the latest release of ParaView.

That fixed it, thank you!

I am getting a similar issue when trying to read a VTM file written by pvbatch (on 1008 cores) into ParaView 5.9.1. It reads fine in a serial instance of ParaView, but when AutoMPI is turned on it hangs (on Windows the pvservers crash; on Linux it just keeps outputting error messages). The data set is 2.36 GB, split across 3,018 VTU objects.

( 74.052s) [paraview ] vtkOutputWindow.cxx:86 WARN| Generic Warning: In /builds/gitlab-kitware-sciviz-ci/build/superbuild/paraview/src/VTK/Parallel/MPI/vtkMPICommunicator.cxx, line 206
This operation not yet supported for more than 2147483647 objects
( 74.066s) [paraview ] vtkOutputWindow.cxx:76 ERR| ERROR: In /builds/gitlab-kitware-sciviz-ci/build/superbuild/paraview/src/VTK/IO/Legacy/vtkDataReader.cxx, line 544
vtkGenericDataObjectReader (0x172c32a0): Unrecognized file type: for file: (Null FileName)

Please try with the latest release of ParaView, 5.10-RC1.

The latest release doesn’t even have AutoMPI enabled (and it may be deprecated?). Also, in 5.9.1 and 5.10, when you try to set up a localhost client-server connection, you can’t enter a command into the server setup dialog without ParaView instantly crashing, on both Windows and Linux. I had to write an external batch/bash script to start a server and automatically connect to it. The files then read in successfully.

The discussion about the removal of AutoMPI happened here:

Also, in 5.9.1 and 5.10, when you try to set up a localhost client-server connection, you can’t enter a command into the server setup dialog without ParaView instantly crashing, on both Windows and Linux.

This issue has already been fixed in master and will not be present in ParaView 5.10.

I had to write an external batch/bash script to start a server and automatically connect to it. The files then read in successfully.

You can write a .pvsc server configuration file instead.
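For reference, here is a minimal sketch of what such a .pvsc file can look like when using the CommandStartup mechanism, so the client launches the local server and then connects to it by itself (the server name, port, and startup delay below are placeholders, not values from this thread):

  <Servers>
    <Server name="localhost-auto" resource="cs://localhost:11111">
      <CommandStartup>
        <!-- Launch a local pvserver, wait a few seconds, then connect. -->
        <Command exec="pvserver" timeout="0" delay="5">
          <Arguments>
            <Argument value="--server-port=11111"/>
          </Arguments>
        </Command>
      </CommandStartup>
    </Server>
  </Servers>

With CommandStartup the client runs the given command itself before connecting, which replaces the external batch/bash script; the file can then be imported from the File > Connect dialog.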

Good news about the server setup dialog being fixed, thank you. As to MPI not being required, I think that is just not true: creating contours, slices, and stream tracers all speeds up with the benefit of MPI. None of those calculations are threaded, as far as I can tell? The only obviously threaded process is rendering, using OSPRay.

Angus,
MPI is not required, in the sense that the filters you mentioned still work without it. If MPI is desired, you can always build and connect to a remote server, and enjoy the performance improvements it brings. With regard to the builtin server (i.e., the configuration that just works out of the box), the solution is threading. This is on our plate, and will be done as time and resources become available.
Alan

Also, MPI has not been removed; AutoMPI has been removed, which is not the same thing.

The only obviously threaded process is rendering, using OSPRay.

This is definitely not true.

I am facing a similar error using version 5.11.0.
How can I get around this limitation?

  ( 289.635s) [pvserver.0      ]vtkPVDataDeliveryManage:262   INFO| .   } 0.010 s: move-data: a.foam(UnstructuredGridRepresentation)/SelectionRepresentation/Geometry
  ( 289.635s) [pvserver.0      ]vtkPVDataDeliveryManage:262   INFO| .   { move-data: a.foam(UnstructuredGridRepresentation)/SurfaceRepresentation
  ( 289.635s) [pvserver.0      ]     vtkMPIMoveData.cxx:628   INFO| .   .   { gather-to-0
  ( 312.227s) [pvserver.0      ] vtkMPICommunicator.cxx:204   WARN| .   .   .   This operation not yet supported for more than 2147483647 objects
  ( 314.992s) [pvserver.0      ]      vtkDataReader.cxx:566    ERR| .   .   .   vtkGenericDataObjectReader (0x111193240): Unrecognized file type:  for file: (Null FileName)
  ( 317.729s) [pvserver.0      ]      vtkDataReader.cxx:566    ERR| .   .   .   vtkGenericDataObjectReader (0x111193240): Unrecognized file type:  for file: (Null FileName)
  ( 320.527s) [pvserver.0      ]      vtkDataReader.cxx:566    ERR| .   .   .   vtkGenericDataObjectReader (0x111193240): Unrecognized file type:  for file: (Null FileName)
  ( 320.528s) [pvserver.0      ]vtkGenericDataObjectRea:356    ERR| .   .   .   vtkGenericDataObjectReader (0x111193240): Could not read file
  ( 320.528s) [pvserver.0      ]       vtkExecutive.cxx:740    ERR| .   .   .   vtkCompositeDataPipeline (0x1129bdc00): Algorithm vtkGenericDataObjectReader (0x111193240) returned failure for request: vtkInformation (0x111187a70)
    Debug: Off
    Modified Time: 14900719
    Reference Count: 1
    Registered Events: (none)
    Request: REQUEST_DATA
    FROM_OUTPUT_PORT: 0
    ALGORITHM_AFTER_FORWARD: 1
    FORWARD_DIRECTION: 0

Hi @mwestphal!

I want to echo my colleague @vimalraj’s questions, as we are facing the same error. We would greatly appreciate your thoughts on the matter. I would also like to add some more details that may help.

Many thanks in advance!

ParaView installation

I installed ParaView using Spack. The installed version is 5.11.0.
spack install paraview +qt +python use_vtkm=on +mpi ^openmpi@4.1.5

ParaView execution

Our HPC cluster has the usual setup of login nodes and compute nodes.
The model opens fine when ParaView is run on a login node without MPI.
We want to run ParaView in client-server mode, so we did the following:

  1. Started pvserver on a remote compute node:
    mpirun pvserver --mpi --server-port=11111
  2. Connected to the server from the client by loading a server definition from a .pvsc file (a minimal sketch of such a file follows below).
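For reference, a minimal sketch of a .pvsc entry for this kind of manual setup, where the server has already been started with mpirun on the compute node; the server name is a placeholder, and the host and port are taken from the log excerpt below:

  <Servers>
    <Server name="hpc-compute" resource="cs://lvpn1071:11111">
      <!-- The server is started manually (step 1), so the client only connects. -->
      <ManualStartup/>
    </Server>
  </Servers>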

Error log excerpt

I cannot share the exact model that was used, but as you may have already noticed in @vimalraj’s error message above, it is an “a.foam” OpenFOAM case file.

Loading paraview-5.11.1-gcc-11.3.0-rztrj5t
  Loading requirement: <loads dependencies>
Waiting for client...
Connection URL: cs://lvpn1071:11111
Accepting connection(s): lvpn1071:11111
Client connected.
( 301.212s) [pvserver.0      ] vtkMPICommunicator.cxx:204   WARN| This operation not yet supported for more than 2147483647 objects
( 303.685s) [pvserver.0      ]      vtkDataReader.cxx:566    ERR| vtkGenericDataObjectReader (0x110f346b0): Unrecognized file type:  for file: (Null FileName)
( 306.166s) [pvserver.0      ]      vtkDataReader.cxx:566    ERR| vtkGenericDataObjectReader (0x110f346b0): Unrecognized file type:  for file: (Null FileName)
( 308.637s) [pvserver.0      ]      vtkDataReader.cxx:566    ERR| vtkGenericDataObjectReader (0x110f346b0): Unrecognized file type:  for file: (Null FileName)
( 308.637s) [pvserver.0      ]vtkGenericDataObjectRea:356    ERR| vtkGenericDataObjectReader (0x110f346b0): Could not read file 
( 308.638s) [pvserver.0      ]       vtkExecutive.cxx:740    ERR| vtkCompositeDataPipeline (0x10f6e8c50): Algorithm vtkGenericDataObjectReader (0x110f346b0) returned failure for request: vtkInformation (0x110f286b0)
  Debug: Off
  Modified Time: 14876267
  Reference Count: 1
  Registered Events: (none)
  Request: REQUEST_DATA
  FROM_OUTPUT_PORT: 0
  ALGORITHM_AFTER_FORWARD: 1
  FORWARD_DIRECTION: 0

...

( 308.644s) [pvserver.0      ]       vtkExecutive.cxx:740    ERR| vtkCompositeDataPipeline (0x10f71c930): Algorithm vtkAppendCompositeDataLeaves (0x110f28520) returned failure for request: vtkInformation (0x10f71c660)
  Debug: Off
  Modified Time: 14876512
  Reference Count: 1
  Registered Events: (none)
  Request: REQUEST_DATA_OBJECT
  FROM_OUTPUT_PORT: 0
  ALGORITHM_AFTER_FORWARD: 1
  FORWARD_DIRECTION: 0
...

Questions

We see that we are hitting the vtkMPICommunicator limit on 32-bit integer message counts (2147483647 = 2^31 − 1), which earlier users in this thread circumvented by upgrading ParaView.

  1. What would be an alternative way to fix this issue?
  2. If this is a limit imposed by the MPI communicator, can you suggest a workaround?

Attachments

  1. The Spack dependency list of the ParaView installation
  2. The full error log (I ran the workload on two processors to simplify the trace)
    paraview_spack_info (5.5 KB)
    error log.txt (9.5 KB)

I’m afraid I have no insight; you may want to reach out to Kitware support, though: https://www.kitware.eu/get-support/

Hi @vimalraj,

Could you try with ParaView 5.12.0-RC3? We added support for the longer messages available with MPI 4.0, and the binaries from www.paraview.org/download are built with MPICH 4.1.2.
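For context on what the large-message support refers to: MPI 4.0 introduced large-count variants of the communication routines (the "_c" functions, which take an MPI_Count instead of a 32-bit int count), and that is what lifts the 2147483647-element ceiling. Below is a minimal standalone sketch, not ParaView code, with an arbitrary illustrative buffer size; it needs an MPI 4.0 implementation such as MPICH 4.x and two ranks:

  /* Sketch of the MPI 4.0 large-count API. Run with: mpirun -np 2 ./a.out */
  #include <mpi.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* A count that does not fit in a 32-bit int; the classic MPI_Send
     * cannot express it, but an MPI_Count can. */
    MPI_Count count = (MPI_Count)3000000000LL;
    char *buffer = (char *)malloc((size_t)count);
    if (!buffer)
    {
      MPI_Abort(MPI_COMM_WORLD, 1);
    }

    if (rank == 0)
    {
      /* Large-count variant of MPI_Send added in MPI 4.0. */
      MPI_Send_c(buffer, count, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
    }
    else if (rank == 1)
    {
      MPI_Recv_c(buffer, count, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    free(buffer);
    MPI_Finalize();
    return 0;
  }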
