Currently I’m using ParaView through an SSH tunnel that I create manually in the terminal. It works, but I know ParaView has several options for setting up its own tunnel, so I’m checking whether I can reach a more streamlined flow that I could share with colleagues at the same institution.
At my institution I first have to request a visualization node from the HPC system, which is unavoidable: I have to do this myself and wait for the resources. Let’s assume that from my resource request I can get a visualization node named either viznode1 or viznode2.
Then I have to open a terminal and create an SSH tunnel through a login node that we’ll call loginnode (to which I have already pushed a public key, and which I added to my SSH config file to simplify access) in the following way:
ssh -L 11111:viznode1:11111 loginnode
or ssh -L 11111:viznode2:11111 loginnode
depending on which visualization node I’m given.
Once this is done, I still have to SSH into viznode1 (or viznode2), run “module load paraview”, and then run “pvserver --mesa”; or, to fully benefit from the visualization node, I should run the MPI counterpart, “mpiexec pvserver --mesa”.
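For reference, the whole manual flow above can be collapsed into a single command. This is just a sketch of what I do today, assuming (as in my setup) that loginnode is in my SSH config and can itself SSH to the viz node without a password prompt:

```shell
# Open the tunnel through loginnode and start pvserver on the viz node,
# all in one command. -t forces a TTY so Ctrl-C locally also stops the
# remote pvserver. Replace viznode1 with whichever node was allocated.
ssh -t -L 11111:viznode1:11111 loginnode \
    ssh -t viznode1 'module load paraview && mpiexec pvserver --mesa'
```

With this running, the local end of the tunnel (localhost:11111) reaches pvserver on the viz node, as before.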
Now I can open ParaView on my computer and connect to cs://localhost:11111 to access the server. I can see in the Memory Inspector that I’m connected to 24 processes. This works great! It’s much snappier than accessing viznode1 interactively over the web. It’s a really great feature that I regret having ignored until now.
From the docs it seems it should be possible to greatly simplify this process using a .pvsc server-configuration file; however, I’m stuck on how to achieve it. In particular, I couldn’t figure out from the guidelines how to get an effect equivalent to my SSH tunnel, where I go through loginnode and open a tunnel to the viz node (this could be a case 18, I suppose).
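To show where I’m stuck, here is my rough attempt at such a .pvsc, pieced together from the examples in the docs. The Option/Command syntax and the $VIZNODE$ substitution are my guesses from those examples, so this may well be wrong; it is meant to prompt ParaView for the node name and then run my one-shot ssh command:

```xml
<Servers>
  <Server name="HPC viz node" resource="cs://localhost:11111">
    <CommandStartup>
      <Options>
        <!-- Ask which viz node was allocated; my guess at the syntax -->
        <Option name="VIZNODE" label="Visualization node" save="true">
          <Enumeration default="viznode1">
            <Entry value="viznode1" label="viznode1"/>
            <Entry value="viznode2" label="viznode2"/>
          </Enumeration>
        </Option>
      </Options>
      <!-- My intent: reproduce the manual tunnel + pvserver launch -->
      <Command exec="ssh" delay="5">
        <Arguments>
          <Argument value="-L"/>
          <Argument value="11111:$VIZNODE$:11111"/>
          <Argument value="loginnode"/>
          <Argument value="ssh"/>
          <Argument value="$VIZNODE$"/>
          <Argument value="module load paraview &amp;&amp; mpiexec pvserver --mesa"/>
        </Arguments>
      </Command>
    </CommandStartup>
  </Server>
</Servers>
```

I don’t know whether running ssh directly from the Command element is the intended approach, or whether the dedicated SSH-tunnel options in the configuration format should be used instead.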
If some ParaView guru can help me with this, it would be greatly appreciated.