Exclude a single GPU from use when starting ParaView

ParaView 5.8.1, installed from the precompiled binaries distributed on the ParaView website, works like a breeze on the computer I installed it on.
However, at start-up ParaView launches the VisRTX component and lets it grab all the GPUs in the computer as graphics devices. The screen output is:

VisRTX 0.1.6, using devices:
0: Quadro GP100 (Total: 17.1 GB, Available: 3.8 GB)
1: Quadro GP100 (Total: 17.1 GB, Available: 13.0 GB)
2: Quadro P600 (Total: 2.1 GB, Available: 2.1 GB)

Consistently, the GPU management tool nvidia-smi reports that the ParaView processes have taken memory on devices 0 and 1. (Device 2 is used as the graphics card for local screen rendering, I presume; I am connected to the computer in question over the network.)
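For reference, this is how I keep an eye on the processes per GPU while testing (plain nvidia-smi refreshed with watch; the one-second interval and the query fields are just examples, nothing ParaView-specific):

# live overview of per-GPU memory use and processes
watch -n 1 nvidia-smi

# or list the compute processes per device explicitly
nvidia-smi --query-compute-apps=pid,process_name,used_gpu_memory --format=csv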

For reasons of task management, I want to keep one GPU for computing only (device 1 in the example above). I have not found an option in the ParaView command line (paraview --help) that tells ParaView which GPU to use.

How can I launch ParaView and tell it to use only device 0?

As a naive attempt, taking inspiration from https://github.com/NVIDIA/VisRTX#multi-gpu, I launch ParaView with the augmented command

CUDA_VISIBLE_DEVICES='0' [path to paraview binaries]/paraview

the response is

VisRTX 0.1.6, using devices:
0: Quadro GP100 (Total: 17.1 GB, Available: 3.2 GB)

as wished. However, unlike on the first launch, nvidia-smi shows that the GPU did not load any ParaView process or memory share. So I wonder whether fixing the one thing has spoiled the other: I can see the ParaView process working on the CPU, alas.

Take note: this post is the opposite of Select which GPU to use when starting ParaView, in which the fellow poster wanted to enable a device. I want to disable one.

Try setting the environment variable VTK_DEBUG_SKIP_VISRTX_CHECK (ref). That should skip VisRTX and hopefully avoid this “device-grabbing”.
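If you want to test it for a single launch only, the variable can also be prefixed to the command itself (a sketch; the binary path placeholder mirrors the one used above):

VTK_DEBUG_SKIP_VISRTX_CHECK=1 [path to paraview binaries]/paraview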

Thanks for the tip.
I have set

export VTK_DEBUG_SKIP_VISRTX_CHECK=1

and launched ParaView. VisRTX is no longer loaded, as expected.

However, the system falls back to the CPUs, regardless of whether the pvNVIDIAIndeX plugin (version 2.4) is autoloaded. The ParaView tasks are multithreaded CPU processes visible with pstree or the like; nothing shows up in nvidia-smi.
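For completeness, this is the kind of check I mean (a sketch; the pattern matched by pgrep is an assumption about how the process name shows up on your system):

# show the thread tree of the first process whose command line matches "paraview"
pstree -p $(pgrep -f paraview | head -n 1)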

Please note that the aim is not to turn off GPU use, but to steer ParaView towards a specific device out of several, so as not to interfere with the calculations going on there. Ideally, the greedy behaviour of VisRTX ought to be tamed, not suppressed.

If ParaView falls back to the CPU, that is suboptimal in another way.

Note, VisRTX only comes into play when you’re using ray tracing… are you doing that? If not, whether VisRTX is enabled or not should have no effect on whether the GPU is used or not.

Whether the GPU is used for rasterized rendering depends on your OpenGL drivers. Check the Help > About dialog; it should show the OpenGL vendor, something like NVIDIA.

[screenshot: Help > About dialog showing the OpenGL vendor]

Also, just loading a plugin has no effect. Are you showing any rendering that uses IndeX in your visualization setup? If not, it need not show up in nvidia-smi.

Thanks for bringing in more context.

I am not using ray tracing.
This is the information on the OpenGL drivers:
[screenshot: About dialog with the OpenGL driver information]
and this is the process overview according to nvidia-smi after starting ParaView without any flag specifications:

[screenshot: nvidia-smi process list]

As long as ParaView is guaranteed not to take memory from GPU 1, nor to impose any transactions with the CPU there, I am fine.
However, when I used the same set of GPUs as servers for a remote connection, I could steer the rendering jobs towards either GPU and see from nvidia-smi that pvserver was working hard on one device while leaving the other alone. I felt this was a guarantee.

By linking the two situations, am I perhaps comparing apples and pears? The objective is to have ParaView use one GPU out of the two. Thanks for any additional clarifications.
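For context, this is roughly how I pinned the remote rendering to one device in that earlier setup (a sketch under assumptions: the device index and port are examples, and whether your pvserver build uses EGL or an X display determines which form applies):

# EGL-enabled pvserver build: select the rendering device explicitly
./pvserver --egl-device-index=1 --server-port=11111

# X-based build: point the server at the screen attached to the desired GPU
DISPLAY=:0.1 ./pvserver --server-port=11111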

From the About dialog information, your OpenGL drivers are indeed not GPU-enabled. ParaView is not using the GPU for anything, from what I can tell. I do not know what nvidia-smi is referring to; maybe VisRTX allocates some buffers during init, I do not know. In any case, like I said before, set the VisRTX skip flag since you are not using ray tracing at all, so it has no impact whatsoever.

Next, fix your OpenGL drivers so that ParaView can use a GPU for rendering. Currently it is not using any GPU at all, since the OpenGL driver is Mesa / software.

Thanks. This clears up the confusion. So ParaView is not able to recognise out of the box that GPUs are available on that machine. I will study the matter further.

Note that’s not specific to ParaView. Any rendering-capable application on your system that uses OpenGL for rendering is not using the GPU. Try glxinfo or glxgears, for example; they should indicate the same thing.
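For a quick terminal check (glxinfo typically ships in a mesa-utils or similar package; the grep pattern is just a convenience to pull out the relevant lines):

# report which OpenGL vendor/renderer is actually in use
glxinfo | grep -E "OpenGL (vendor|renderer)"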

The fix is typically as simple as installing the OpenGL drivers provided by your GPU vendor, in your case NVIDIA.
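As a sketch of what that can look like on an Ubuntu-style system (the distribution, tooling and driver choice are assumptions; NVIDIA's own installer or your distribution's packages are equally valid routes):

# list the driver packages recommended for the detected GPUs
ubuntu-drivers devices

# install the recommended proprietary driver, then re-check with nvidia-smi and glxinfo
sudo ubuntu-drivers autoinstall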