I am currently using Ubuntu 18.04 + OpenFOAM 4.1 + ParaView 5.0.1 (compiled from source). I can run a Python script with pvpython XX.py, but the interactive window appears while it runs.
So I want to use a headless version of ParaView. I have found several ways to get one, such as downloading a headless build directly from the official website, or compiling a headless version myself with EGL or OSMesa. But I am still confused about how to use it.
Can anyone give me some suggestions? My local graphics device is an NVIDIA GeForce GTX 1660.
Thanks for the information! My current ParaView 5.0.1 was compiled and installed together with OpenFOAM 4.1 on Ubuntu 18.04. Can I just upgrade it, or do I need to download the latest ParaView and reinstall it on Linux?
Thanks! I will download the latest version from the official website.
No, you want to use offscreen rendering. With the latest ParaView, this is pvpython --force-offscreen-rendering
By saying that, do you mean that headless ParaView (without graphics and an X server) and ParaView with --force-offscreen-rendering are the same thing?
Hi all,
I would just like to jump into this question, since I have a similar issue: I don’t want the interactive window to appear when I run pvpython XXX.py. However, I am running ParaView 5.10.0 on a cluster/supercomputer, which has strict limitations on rendering. So I wonder what steps I should take to build a headless ParaView that renders offscreen without requiring an accessible X server. Thank you!
You have a graphical environment (Xorg) but do not want the window to show → offscreen
You do not have a graphical environment, or even no GPU at all → headless
To render offscreen, use the standard ParaView release, but instead of pvpython, run with pvbatch.
To render headless, use the EGL (GPU required) or the OSMesa (no GPU needed) release.
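To make this concrete, here is a sketch of the corresponding invocations. myScript.py is a placeholder name, and the exact EGL/OSMesa package names should be checked on the official download page:

```
# Offscreen: standard ParaView release, an X server is available but no
# window should be shown. pvbatch never opens an interactive window.
pvbatch myScript.py

# Headless: the invocation looks the same, only the ParaView build differs.
#   - EGL build:    requires a GPU, no X server needed
#   - OSMesa build: pure software rendering, no GPU needed
pvpython myScript.py
```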
Dear other paraview-on-supercomputers-users,
dear @catiocean ,
Just fool ParaView and pretend it uses an X server:
xvfb-run -a -s "-screen 0 3840x2160x24" pvpython myScript.py
The tool xvfb-run is not uncommon on our machines. In case you want to run many scripts in parallel (e.g. with an & at the end of the line above), this might cause problems. Have a look at xvfb-run unreliable when multiple instances invoked in parallel - Stack Overflow
The following command, however, lets you run many post-processing scripts in parallel.
xvfb-run-safe -a -s "-screen 0 3840x2160x24" pvpython myScript.py
Edit: Of course, resolutions other than 4K are possible as well.
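Note that xvfb-run-safe is not a standard tool; the linked Stack Overflow thread discusses wrappers along these lines. Below is a minimal sketch of the idea as I understand it (my assumption, not the exact script from that answer): probe X display numbers under an exclusive flock so that parallel invocations do not race for the same display.

```shell
# Sketch of an xvfb-run-safe style wrapper (hypothetical; adapted from the
# flock-based approach discussed in the linked Stack Overflow thread).
xvfb_run_safe() {
    local n=0 rc
    while [ "$n" -lt 99 ]; do
        (
            exec 9>"/tmp/.xvfb-safe-$n.lock"   # one lock file per display
            flock -n 9 || exit 125             # 125 = display busy, try next
            xvfb-run -n "$n" "$@"              # run the command on display :$n
        )
        rc=$?
        # Anything but our sentinel means the command actually ran; note that
        # a wrapped command exiting 125 itself would be misread as "busy".
        [ "$rc" -ne 125 ] && return "$rc"
        n=$((n + 1))
    done
    echo "xvfb_run_safe: no free display found" >&2
    return 1
}

# Usage, analogous to the xvfb-run line above:
# xvfb_run_safe -s "-screen 0 3840x2160x24" pvpython myScript.py
```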
And you get very bad rendering performance, because you are using Mesa instead of GPU-accelerated EGL.
It could be considered an alternative to OSMesa, but I would not expect good performance compared to OSMesa either.
The performance is bad, I agree.
Reading my own answers below, I can state: the performance is good.
I don’t have a GPU on the compute nodes (most of them).
Typically on a supercomputer you submit a compute job and have to wait a couple of hours or days until the job starts. The additional delay due to rendering performance absolutely does not hurt.
BTW: the performance is actually not that bad! Depending on the viz pipeline, reading the result files from the file system consumes more time than the actual rendering. So file system I/O is the bottleneck here.
I simply start many jobs in parallel if I have several different visualization tasks.
I can create images on a per-frame basis, so I can introduce additional parallelism by creating one image per CPU core. RAM usage does not hurt me, as I have plenty of it.
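The per-frame parallelism described above could be sketched like this. Here makeFrame.py is a hypothetical script that takes a frame number as an argument and writes one image per frame; adapt it to your own pipeline:

```
# Render 100 frames, up to one process per CPU core, each frame in its own
# virtual framebuffer (xvfb-run-safe avoids display-number clashes).
seq 0 99 | xargs -P "$(nproc)" -I{} \
    xvfb-run-safe -s "-screen 0 3840x2160x24" pvpython makeFrame.py {}
```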