Leveraging MPICH ABI compatibility for distributed binaries

Note: this discussion only applies to Linux, and not Windows or macOS

Anyone who has used ParaView in HPC environments should be very familiar with this MPI-specific issue: if you want to use the MPI implementation provided by your HPC system (which is most likely tuned for your HPC hardware), you must make sure your application executables are built against the same MPI implementation. For most ParaView HPC site administrators and users, this translates to the aphorism: for HPC, build your own ParaView from source; don’t use ParaView binaries from paraview.org.

The MPICH ABI compatibility initiative, announced at SC13 and now supported by several mainstream MPICH-compatible implementations, has the potential to make this advice obsolete. In other words, paraview.org binaries can potentially work with your HPC-provided MPI implementation without the need to compile from source!

The ParaView superbuild (as of ParaView 5.8) already uses a compatible version of MPICH (3.3). Thus, technically, the ParaView binaries already support this. The problem is that since the package includes libmpi.so.12 under the standard library search path, the loader loads the libmpi we include in the package rather than the one provided by the platform (one could simply delete it by hand, but that’s hardly an elegant solution).
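To make the shadowing concrete, here is a small self-contained sketch (POSIX sh; the directories are made up stand-ins for the package and system lib dirs) of how a left-to-right search over the library path causes the packaged libmpi.so.12 to win over the system copy:

```shell
# Hypothetical demo of library search-path shadowing. The temp dirs
# stand in for <paraview>/lib (pkg) and the system lib dir (sys).
pkg=$(mktemp -d)
sys=$(mktemp -d)
touch "$pkg/libmpi.so.12" "$sys/libmpi.so.12"

# The dynamic loader searches path entries left to right and takes the
# first match -- just like this loop does.
search="$pkg:$sys"
IFS=:
for dir in $search; do
  if [ -e "$dir/libmpi.so.12" ]; then
    echo "would load: $dir/libmpi.so.12"
    break
  fi
done
unset IFS

# The package dir comes first, so its libmpi shadows the system copy.
[ "$dir" = "$pkg" ] && echo "packaged libmpi wins"
rm -rf "$pkg" "$sys"
```

This is why simply shipping libmpi.so.12 inside the package makes the system copy unreachable without manual intervention.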

One possible solution is to follow the pattern we already use for choosing between Mesa GL and system GL. For GL, by default, the ParaView executables use system GL. To use Mesa instead, one runs the launcher paraview-mesa, which sets appropriate environment variables to load the Mesa GL libraries instead of the system ones. We could follow the same pattern for MPI: let ParaView executables use system MPI by default, but provide a new launcher, paraview-mpi, that lets one use the MPI implementation packaged with ParaView instead. The advantage of having two launchers is that they can be combined as needed, e.g. to launch paraview using Mesa and the packaged MPI, one could run paraview-mesa --backend swr paraview-mpi paraview or paraview-mpi paraview-mesa --backend swr paraview. The disadvantage is that the command line gets quite confusing.

A second option is to provide a single launcher, paraview-launcher, that can set up the paths needed to use the packaged Mesa GL or MPI implementation.

However, both of these approaches have a serious disadvantage when used for MPI implementation selection. While for GL it was reasonable to expect most systems to have a GL implementation, the same cannot be expected for MPI. If no compatible MPI implementation is available on the system, the ParaView binaries will simply fail to start unless the launcher executable is used.
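One could imagine probing at launch time for a compatible system MPI and falling back to the packaged copy when none is found. A rough illustrative sketch of such a probe (not something ParaView actually does) using the dynamic linker cache:

```shell
# Illustrative probe only: the MPICH ABI initiative fixes the soname at
# libmpi.so.12, so we can ask the linker cache whether the system has one.
msg="no compatible system MPI; would fall back to packaged MPICH"
if ldconfig -p 2>/dev/null | grep -q 'libmpi\.so\.12'; then
  msg="compatible system MPI found; could use it"
fi
echo "$msg"
```

Which branch runs obviously depends on the machine; the point is only that the check is cheap enough to do at startup.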

A third option is to change the package so that, by default, the standard executables, i.e. paraview, pvserver, pvpython, etc., are launchers themselves. They set up paths such that the packaged MPI is used. By passing optional command-line arguments, e.g. --system-mpi, they can be made to skip that path setup and use the system MPI instead. The same mechanism can handle Mesa GL vs. system GL, with system GL being the default behavior.
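A minimal sketch of what the argument handling for such a launcher might look like (the install path and the exact flag names are illustrative assumptions, not the real implementation, which lives in the superbuild):

```shell
# Hypothetical option-3 launcher logic: packaged MPI by default,
# --system-mpi / --mesa toggle which library dirs get prepended.
# PKG_ROOT and the lib subdirectory names are made up for this sketch.
PKG_ROOT="/opt/paraview"

launch_env() {
  use_pkg_mpi=1 use_mesa=0
  for arg in "$@"; do
    case "$arg" in
      --system-mpi) use_pkg_mpi=0 ;;
      --mesa)       use_mesa=1 ;;
    esac
  done
  paths=""
  [ "$use_pkg_mpi" = 1 ] && paths="$PKG_ROOT/lib/mpi"
  [ "$use_mesa" = 1 ]    && paths="${paths:+$paths:}$PKG_ROOT/lib/mesa"
  # A real launcher would prepend this to LD_LIBRARY_PATH and exec the
  # actual binary; here we just report the computed path list.
  echo "${paths:-<system defaults>}"
}

launch_env                        # packaged MPI, system GL (defaults)
launch_env --system-mpi --mesa    # system MPI, packaged Mesa
```

The first call prints the packaged MPI lib dir; the second drops it and adds the Mesa dir, matching the defaults described above.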

A fourth option is to simply provide separate downloads: one with the MPI libraries included in the package, and another without them for users who want to use a compatible system MPI implementation (similar to what we do for Windows).

Thoughts? Any other suggestions?

Thanks!


I think that any of the proposed options would work well.

  • I slightly like number 3 the best. “pvserver”. Hmmm… what switches do I need? Oohh, that one didn’t work; let’s reverse that switch.
  • Whatever is done, document well with examples.
  • Realize that generally speaking, users will be figuring out these commands once or twice a year, and putting them into scripts. So, length of command doesn’t matter, clarity does.
  • This part of the client/server connect is so much simpler than the default_servers.pvsc, I don’t think it matters.

+1 for number 3 as well. tbh I’m not sure I follow the reasons that made us switch the mesa flag to a dedicated executable in the first place.

The most important thing is that the default behavior stays the same.


@ben.boeckel, do you want to comment on that? Personally, I too prefer the --mesa command-line option to the new paraview-mesa executable. If we had continued with the --mesa argument, adding another --system-mpich (or some such) argument would have been a natural extension.

Rather than have a switch to select whether to use MPI or not, would it make sense to have separate binary distributions for MPI and non-MPI? That’s basically what we do for the Windows distributions.

That’s option 4 in Utkarsh’s post. It would get a little confusing, as it would not be a non-MPI and an MPI version, but rather a packaged-MPI and a system-MPI version.

While option 4 is indeed a reasonable one, I worry we may end up with too many binary variants. Soon we’ll have binaries with EGL (to support X-less hardware-accelerated rendering), and we’d need MPI variants for those too. Same for the OSMesa binaries which, if I am not mistaken, we already distribute.

+1 for option 3.

The command line for option 2 is quite strange, and option 4 can make it complicated for users to keep track of which binaries they downloaded and need to launch.

+1 for option 3.

Strong objection to option 4. It would take too many build resources, verifying each binary would be too time consuming, and, as Dan said, it would be confusing to determine which one to download.

Looks like the consensus is option 3. I’ll proceed in that direction. Thanks all!

Sounds good to me.

Glad I put my thumb on the scale early on! The more I read everyone’s posts, I’m also strongly against 2 and 4. Option 3 wins in my mind still.

This was done when we removed the forward executables. Calling it paraview-mesa rather than paraview-launcher could probably have been avoided, but it’s something that seems to have been resolved with the new launchers project in the superbuild. We had issues where the LD_LIBRARY_PATH we used for ParaView itself interfered with the applications ParaView launched (IIRC, the superbuild’s libfontconfig made PDF viewers crash). Without the forward executable, a separate launcher had to be made anyway, and to keep it simple, I made it a dedicated tool rather than a full wrapper.

Just to close the loop, these changes have now been implemented in master and will be included in ParaView 5.9. Below is the output generated using --help by the pvserver executable from the nightly binaries. The launcher options are listed first, followed by the standard executable options (in this case, pvserver options).

> ./bin/pvserver --help
Launcher options:
  --print       Print modified environment.
  --system-mpi  Use MPI implementation available on the system.
  --mesa        Use Mesa GL for rendering.
  --backend <backend>  Specify mesa backend.

Available backends:
    llvmpipe
    swr

pvserver options:

  --client-host=opt
  -ch=opt  Tell the data|render server the host name of the client, use with -rc.

  --connect-id=opt  Set the ID of the server and client to make sure they match. 0 is reserved to imply none specified.
...

You should be able to test these out using the latest nightly binaries for Linux.


So using system MPI would be:

mpirun -np 4 ./bin/pvserver --system-mpi

?


yes
