The version is 5.9.1
Yes, I checked on the server (through glances), and it does not show pvserver using those 16 processors.
I am not sure about the third question, but I used the MPI build (ParaView-5.9.1-MPI-Linux-Python3.8-64bit) on the server.
This is the remote connection info from About:

Client Information:
Version: 5.9.1
VTK Version:
Qt Version: 5.12.9
vtkIdType size: 64bits
Embedded Python: On
Python Library Path: /home/hf9098/ParaView-5.9.1-MPI-Linux-Python3.8-64bit/lib/python3.8
Python Library Version: 3.8.8 (default, May 17 2021, 15:58:51) [GCC 7.3.1 20180303 (Red Hat 7.3.1-5)]
Python Numpy Support: On
Python Numpy Path: /home/hf9098/ParaView-5.9.1-MPI-Linux-Python3.8-64bit/lib/python3.8/site-packages/numpy
Python Numpy Version: 1.19.2
Python Matplotlib Support: On
Python Matplotlib Path: /home/hf9098/ParaView-5.9.1-MPI-Linux-Python3.8-64bit/lib/python3.8/site-packages/matplotlib
Python Matplotlib Version: 3.2.1
Python Testing: Off
MPI Enabled: On
Disable Registry: Off
Test Directory:
Data Directory:
OpenGL Vendor: Intel Open Source Technology Center
OpenGL Version: 4.6 (Core Profile) Mesa 20.0.8
OpenGL Renderer: Mesa DRI Intel(R) UHD Graphics (CML GT2)

Connection Information:
Remote Connection: Yes
Separate Render Server: No
Reverse Connection: Yes
Number of Processes: 1
Disable Remote Rendering: Off
IceT: Off
Tile Display: Off
vtkIdType size: 64bits
Embedded Python: On
Python Library Path: /home/hf9098/ParaView-5.9.1-osmesa-MPI-Linux-Python3.8-64bit/lib/python3.8
Python Library Version: 3.8.8 (default, May 17 2021, 15:56:25) [GCC 7.3.1 20180303 (Red Hat 7.3.1-5)]
Python Numpy Support: On
Python Numpy Path: /home/hf9098/ParaView-5.9.1-osmesa-MPI-Linux-Python3.8-64bit/lib/python3.8/site-packages/numpy
Python Numpy Version: 1.19.2
Python Matplotlib Support: On
Python Matplotlib Path: /home/hf9098/ParaView-5.9.1-osmesa-MPI-Linux-Python3.8-64bit/lib/python3.8/site-packages/matplotlib
Python Matplotlib Version: 3.2.1
OpenGL Vendor: VMware, Inc.
OpenGL Version: 3.3 (Core Profile) Mesa 18.2.2
OpenGL Renderer: llvmpipe (LLVM 7.0, 256 bits)
Headless support: OSMesa
What makes it confusing is that when I use the mpirun command with ./pvserver, it still connects to ParaView on my local machine (reverse connection). Since the connection is accepted, it is even more puzzling why those processors were not reserved.
I will keep looking into this and see what could be the cause.
It seems that there is indeed a problem with setting up parallel processing.
When I use the command `./pvserver -ch=localhost -rc --server-port=11111 --force-offscreen-rendering`, there is absolutely no problem.
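For reference, the parallel launch I am attempting looks roughly like this (a sketch only: the rank count of 16 matches my server, and I reuse the same pvserver flags as above, but the exact mpirun flags may need adjusting for your MPI):

```shell
# Hypothetical sketch: launch 16 pvserver ranks as one MPI job,
# reverse-connecting to the client through the forwarded port.
mpirun -np 16 ./pvserver -ch=localhost -rc --server-port=11111 --force-offscreen-rendering
```

If this launch is what silently falls back to a single process, the About panel would still report "Number of Processes: 1", which matches what I see.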
I want to ask something: on my local machine I completed the full setup (an actual ParaView build), while on the server I just copied the ParaView-5.9.1-MPI-Linux-Python3.8-64bit folder and pasted it there. Is any compilation needed on the server side, given that MPI and OSMesa are already working there?
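From what I understand, the official binary release should not require compiling on the server, since it bundles its own MPI runtime. A quick sanity check I plan to try (a sketch; the mpiexec path assumes the layout of the binary release, and the banner behavior is my assumption about how a broken MPI job would look):

```shell
# Launch two ranks with the mpiexec shipped inside the ParaView folder.
# If the ranks form a single MPI job, "Waiting for client..." should
# print once; if each rank prints its own banner, MPI is not wired up
# and every process thinks it is rank 0.
cd ParaView-5.9.1-MPI-Linux-Python3.8-64bit/bin
./mpiexec -np 2 ./pvserver --force-offscreen-rendering
```

Using the bundled mpiexec instead of the system mpirun would also rule out a mismatch between the system MPI and the MPI the binaries were built against.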