Issues with ParaView 5.9 on HPC

Hi,

I’m setting up ParaView 5.9.1 to run on our HPC systems. I keep getting an error when the server connects to the client indicating that the server display is not accessible. I am using the --force-offscreen-rendering flag. I also noted that the --system-mpi flag does not seem to trigger loading a different MPI. Are there additional steps, or does the system MPI need to be located somewhere specific?
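
For reference, this is roughly how I’m launching the server (the path and process count here are placeholders):

```
# Placeholder path and process count; this is roughly the launch command in use
mpirun -np 4 /path/to/ParaView-5.9.1/bin/pvserver --force-offscreen-rendering --system-mpi
```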

Thanks,

Erin

Alas, you’ll need to figure out why the display is not accessible on the server side. Is it expected to be accessible? Does your HPC system have an X server that users can use? If so, you’ll need to figure out how to set that up. Something as simple as glxgears can be used to ensure X is set up correctly.
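
For example, from an interactive session on a compute node (assuming DISPLAY is already set by your site’s X setup):

```
# Assumes DISPLAY is set; both tools ship in the usual mesa-utils packages
glxinfo | grep "OpenGL renderer"   # should name a GPU/driver rather than erroring out
glxgears                           # should open a window and print a frame rate
```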

If X is not supposed to be accessible or is not available, then you’ll need OSMesa-enabled ParaView binaries for the server. These are built differently from the standard ParaView binaries and don’t need X for rendering; they use OSMesa instead.
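
For example, grabbing the headless package from paraview.org and running it directly (the tarball name below is a guess; use whatever the download page lists for 5.9.1):

```
# Hypothetical tarball name; pick the "osmesa" headless package for your version
tar xzf ParaView-5.9.1-osmesa-MPI-Linux-Python3.8-64bit.tar.gz
ParaView-5.9.1-osmesa-MPI-Linux-Python3.8-64bit/bin/pvserver   # no DISPLAY/X required
```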

--system-mpi only works with an MPICH-ABI-compatible MPI implementation. What MPI is available on your system? If it’s not MPICH-compatible, you’ll have to build ParaView from source.
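
A quick way to check (the lib path below is a placeholder; libmpi.so.12 is the soname the MPICH ABI compatibility initiative standardizes on):

```
mpirun --version                  # shows which MPI implementation is first on your PATH
ls /path/to/mpi/lib/libmpi.so.12  # placeholder path; MPICH-ABI-compatible MPIs ship this soname
```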

Not really; the system MPI just needs to be available in the standard library search paths. If it’s not already, you may need to set up LD_LIBRARY_PATH etc. to ensure that when the pvserver executable launches, it finds the MPI implementation.
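
Something like this before launching, for example (the prefix is a placeholder; a site module load that does the same thing is equivalent):

```
# Placeholder install prefix; point at wherever your MPI's shared libraries live
export LD_LIBRARY_PATH=/opt/mpi/lib:$LD_LIBRARY_PATH
mpirun -np 4 /path/to/ParaView/bin/pvserver --system-mpi
```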

@utkarsh.ayachit

Thanks for the fast response! As far as the X server goes, I am using RGS on the front end with ParaView and OSMesa on the server side. I plan on testing with RDP today. Our HPC system is headless, so that is why I am using the OSMesa version.

For --system-mpi, I have Intel MPI loaded, which is MPICH-ABI compatible. It is in the LD_LIBRARY_PATH.

Thanks - any additional pointers would be fantastic!

If you’re getting “server display is not accessible” errors, then I don’t think you’re using the correct binaries. You’ll need the binaries suffixed with -osmesa specifically. The standard Linux binaries do come with Mesa, but that’s onscreen Mesa, which requires X.

In that case, set LD_DEBUG=libs, run the program, and see what it says for MPI libraries. The output should have hints about where it’s looking for MPI (and other) libraries and which ones were found. That should help diagnose this issue.
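
For example:

```
# Loader diagnostics go to stderr; filter for MPI so the output stays readable
LD_DEBUG=libs /path/to/pvserver 2>&1 | grep -i mpi
```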

@utkarsh.ayachit - thanks! I installed the ParaView Server for Headless Machines - OSMesa version, and none of the binaries are suffixed with -osmesa. Is that what I should be seeing?

Yes, that’s okay as long as the original tarball name included osmesa.

@utkarsh.ayachit ok - I think I have it now. LD_DEBUG was super helpful and definitely helped me work out the server issues. It appears to be loading Intel MPI; however, when I set I_MPI_DEBUG=5, it does not pull up the logging information I would expect to see. Is there anything else I should look for?

Sorry, I am not too familiar with the specifics of Intel MPI, so I am not sure how to proceed. Maybe someone else with the knowledge can chime in.

@utkarsh.ayachit, I_MPI_DEBUG is a logging flag for Intel MPI. With many applications, the logging just appears on standard out, but perhaps ParaView is doing something to redirect or turn off logging?
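
For reference, this is the sort of invocation where I’d normally expect to see that output (process count is a placeholder):

```
# I_MPI_DEBUG=5 normally prints rank placement/pinning info to stdout at startup
I_MPI_DEBUG=5 mpirun -np 4 ./pvserver --system-mpi
```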

@utkarsh.ayachit so from a stack trace it looks like Intel MPI isn’t being picked up despite using --system-mpi and placing it first in the LD_LIBRARY_PATH. Do I need to get it into LD_PRELOAD?

NM - got it working!

What was the issue? Would be good to share if you can for future reference.