I create animations from my calculations with ParaView and save the images as .png files.
My computer has two graphics cards:
- Nvidia Quadro K620 for the display.
- Nvidia Tesla K40c for CUDA calculations (no display attached).
For CUDA calculations the Tesla K40c is always used automatically. For rendering, however, ParaView always uses the Quadro K620.
This is much slower and makes the desktop very sluggish.
So far I have been using a script via pvserver as a workaround:
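Roughly along these lines (a minimal sketch, not my exact script; the EGL device flag and the client connection option may differ between ParaView versions and builds):

```bash
#!/bin/bash
# Headless render server on the Tesla K40c via an EGL-enabled pvserver build.
# Device index 1 is an assumption -- it depends on how the driver enumerates the cards.
./pvserver --egl-device-index=1 --server-port=11111 &

# Regular GUI client (non-EGL build) connecting to that local server.
./paraview --server-url=cs://localhost:11111
```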
Unfortunately that link doesn't help me, since I want to render locally. I found another, similar link:
I can't do anything with that link either. As I wrote in the first post, I can use the workaround via pvserver with EGL.
But this is exactly what I want to avoid because of the overhead.
The precompiled binaries with “egl” do not include a “paraview” client, and the versions with the “paraview” client do not include EGL support.
Therefore I need the exact procedure for starting (or configuring) the “paraview” client so that it can choose which graphics card is used locally for rendering.
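To make concrete what I am looking for: something along these lines, started locally without a separate server process (the option below is purely hypothetical and only illustrates the kind of switch or setting I am asking about):

```bash
# Hypothetical: launch the local GUI client and force rendering onto the
# second GPU (Tesla K40c). "--render-device" is a made-up flag for
# illustration; as far as I know no such option exists, which is my question.
./paraview --render-device=1
```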