How to ensure remote rendering is done by server-side GPU


I have access to a remote server with a high-end GPU, which I’m using for visualization. I have already set up pvserver and can connect to it from the ParaView client. The only issue is that when I connect to pvserver, I get a warning in my ParaView client saying that remote rendering is disabled, or something to that effect. Still, everything works fine and I can use ParaView. I just want to make sure that the server-side GPU is indeed being used for rendering.

On a related note, I have a deeper question about how rendering works in ParaView. More specifically, I’d like to know whether ParaView automatically takes advantage of the available GPU(s) or whether some manual configuration is needed. If the former, how does ParaView balance the work between CPU and GPU? The reason I’m asking is that my dataset is very large, and although it fits into the server’s memory, I’d rather have the dataset entirely in GPU memory. Since the server GPU has ~50 GB of memory, that’s sufficient for my cases.

Sorry for the long post; I would really appreciate it if someone could shed some light on this.


Enable Edit -> Settings -> Render View -> Advanced (cogwheel, top right) -> Show Annotations.

You will then get feedback in the render view indicating whether remote rendering is currently being used.

There are many options related to remote rendering, but the defaults are fine.
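As a quick sanity check outside ParaView, you can also watch the GPU on the server while you interact with the view. This is a sketch, assuming an NVIDIA GPU (so `nvidia-smi` is available) and a standard pvserver build; the port is the default and may differ in your setup:

```shell
# On the server: start pvserver (11111 is the default port).
# --force-offscreen-rendering makes the server render off-screen,
# which is typical for headless GPU nodes.
pvserver --server-port=11111 --force-offscreen-rendering

# In a second shell on the server, watch GPU utilization and memory
# while you rotate/zoom the scene in the connected client:
nvidia-smi --loop=1
```

If remote rendering is active, you should see a pvserver process in the `nvidia-smi` process list with nonzero GPU memory, and utilization should spike as you interact with the view.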