Memory Inspector not reflecting what is requested on pvserver

Hello,

I am using a reverse connection to connect to the server using:

mpirun -np 16 --hostfile /etc/openmpi/openmpi-default-hostfile --host 141.217.21.148: ./pvserver -ch=localhost -rc --force-offscreen-rendering

It’s supposed to request 16 processes, but the actual use (in the Memory Inspector) is less.

[Screenshot from 2022-10-17 13-45-49: Memory Inspector]

What could be the cause of that?

Hi @Hussein_Kokash,

Which version of ParaView are you using?

Did you check that there are actually 16 pvserver processes running on your hosts?

Was pvserver built with MPI support?

What does Help → About → Remote show once connected?
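
For the first two points, a rough check on the server host could look like this (standard Linux tools, not commands taken from this thread; if pvserver in your package turns out to be a wrapper script, run ldd on the real binary instead):

$ pgrep -c -x pvserver          # count the pvserver ranks actually running
$ ldd ./pvserver | grep -i mpi  # a build with MPI support should link an MPI library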

Best,

The version is 5.9.1
Yes, I checked (on the server, through glances) and it is not showing that pvserver is using those 16 processors.
About the third question I am not sure, but I used the MPI package (ParaView-5.9.1-MPI-Linux-Python3.8-64bit) on the server.

This is the remote connection information from About:

Client Information:
Version: 5.9.1
VTK Version:
Qt Version: 5.12.9
vtkIdType size: 64bits
Embedded Python: On
Python Library Path: /home/hf9098/ParaView-5.9.1-MPI-Linux-Python3.8-64bit/lib/python3.8
Python Library Version: 3.8.8 (default, May 17 2021, 15:58:51) [GCC 7.3.1 20180303 (Red Hat 7.3.1-5)]
Python Numpy Support: On
Python Numpy Path: /home/hf9098/ParaView-5.9.1-MPI-Linux-Python3.8-64bit/lib/python3.8/site-packages/numpy
Python Numpy Version: 1.19.2
Python Matplotlib Support: On
Python Matplotlib Path: /home/hf9098/ParaView-5.9.1-MPI-Linux-Python3.8-64bit/lib/python3.8/site-packages/matplotlib
Python Matplotlib Version: 3.2.1
Python Testing: Off
MPI Enabled: On
Disable Registry: Off
Test Directory:
Data Directory:
OpenGL Vendor: Intel Open Source Technology Center
OpenGL Version: 4.6 (Core Profile) Mesa 20.0.8
OpenGL Renderer: Mesa DRI Intel(R) UHD Graphics (CML GT2)

Connection Information:
Remote Connection: Yes
Separate Render Server: No
Reverse Connection: Yes
Number of Processes: 1
Disable Remote Rendering: Off
IceT: Off
Tile Display: Off
vtkIdType size: 64bits
Embedded Python: On
Python Library Path: /home/hf9098/ParaView-5.9.1-osmesa-MPI-Linux-Python3.8-64bit/lib/python3.8
Python Library Version: 3.8.8 (default, May 17 2021, 15:56:25) [GCC 7.3.1 20180303 (Red Hat 7.3.1-5)]
Python Numpy Support: On
Python Numpy Path: /home/hf9098/ParaView-5.9.1-osmesa-MPI-Linux-Python3.8-64bit/lib/python3.8/site-packages/numpy
Python Numpy Version: 1.19.2
Python Matplotlib Support: On
Python Matplotlib Path: /home/hf9098/ParaView-5.9.1-osmesa-MPI-Linux-Python3.8-64bit/lib/python3.8/site-packages/matplotlib
Python Matplotlib Version: 3.2.1
OpenGL Vendor: VMware, Inc.
OpenGL Version: 3.3 (Core Profile) Mesa 18.2.2
OpenGL Renderer: llvmpipe (LLVM 7.0, 256 bits)
Headless support: OSMesa

Then you are either not using the mpirun command you shared above, or you have a big issue with your MPI installation.

For comparison on my machine:

$ mpirun -np 4 ./bin/pvserver &
[1] 7048
Waiting for client...
Connection URL: cs://frollo:11111
Accepting connection(s): frollo:11111
$ ps -ef | grep pvserver
glow        7048    1498  1 09:08 pts/0    00:00:00 mpirun -np 4 ./bin/pvserver
glow        7053    7048  4 09:08 pts/0    00:00:00 ./bin/pvserver
glow        7054    7048 89 09:08 pts/0    00:00:05 ./bin/pvserver
glow        7055    7048 90 09:08 pts/0    00:00:05 ./bin/pvserver
glow        7056    7048 89 09:08 pts/0    00:00:05 ./bin/pvserver
glow        7071    1498  0 09:08 pts/0    00:00:00 grep pvserver

Best,

Hello Mathieu,

What makes it confusing is that when I use the mpirun command with ./pvserver, it connects to ParaView on my local machine (reverse connection). Since the connection is accepted, it is even more confusing why it did not reserve those processors.

I will keep looking into this and see what could be the cause.

Before doing any kind of connection, you want to check that your pvserver processes are actually distributed.

mpirun -np 4 ./pvserver

This should not output any errors, and you should be able to see the four processes running in active waiting.
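
As a rough sketch (standard Linux tools assumed, same idea as the ps listing shared earlier), the check could look like this; because MPI ranks busy-wait, idle pvserver processes typically show high CPU usage:

$ mpirun -np 4 ./pvserver &
$ ps -ef | grep [p]vserver      # should list four ./pvserver processes
$ top -b -n 1 | grep pvserver   # idle ranks busy-wait, so CPU usage is usually high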

It seems that there is indeed a problem with setting up parallel processing.
When this command is used, “./pvserver -ch=localhost -rc --server-port=11111 --force-offscreen-rendering”, there is absolutely no problem.

I want to ask something: I completed the setup on my local machine (in terms of the ParaView build), while on the server I just copied the folder “ParaView-5.9.1-MPI-Linux-Python3.8-64bit” and pasted it there.

Does a compilation step need to be done on the server side, given that MPI and OSMesa are working there?

Thanks again Mathieu!

Use the mpiexec provided in the package:

./mpiexec -np 4 ./pvserver
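
Combined with the reverse-connection flags from the first post, the full launch would look roughly like this (a sketch assuming mpiexec sits next to pvserver in the package's bin directory, using whichever package directory is actually installed on the server):

$ cd ParaView-5.9.1-MPI-Linux-Python3.8-64bit/bin   # or the osmesa package, as applicable
$ ./mpiexec -np 16 ./pvserver -ch=localhost -rc --force-offscreen-rendering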

Finally!!!

Thank you Mathieu!

[Screenshot from 2022-10-22 00-15-01: Memory Inspector]