How to build headless ParaView locally?

Hi all,

I am currently using Ubuntu 18.04 + OpenFOAM 4.1 + ParaView 5.0.1 (compiled from source). I can run a Python script with pvpython XX.py, but the interactive window appears while it runs.

So I want to use the headless version of ParaView. I have found some ways to get it, such as downloading the headless version directly from the official website, or compiling a headless build yourself with EGL or OSMESA. But I am still confused about how to use it.

Can anyone give me some suggestions? My local graphics device is an NVIDIA GeForce GTX 1660.

Thanks!

Hi @Ying,

That version is super old; please use the latest release from https://www.paraview.org/download/

I want to use the headless version of ParaView,

No, you want to use offscreen rendering. With the latest ParaView this is pvpython --force-offscreen-rendering.
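For example, assuming your script is named myScript.py:

pvpython --force-offscreen-rendering myScript.py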

Hi @mwestphal,

Thanks for the information! My current ParaView 5.0.1 was compiled and installed together with OpenFOAM 4.1 on Ubuntu 18.04. Can I just upgrade it, or do I need to download the latest ParaView and reinstall it on Linux?

Thanks!

I don't think so. There is no reason to use the version of ParaView built with OpenFOAM anyway.

Thanks! I will download the latest version from the official website.

No, you want to use offscreen rendering. With the latest ParaView this is pvpython --force-offscreen-rendering.

By this, do you mean that the headless ParaView (without graphics and an X server) and ParaView with --force-offscreen-rendering are the same thing?

I mean that they are not the same thing at all. Read this:

https://kitware.github.io/paraview-docs/v5.9.0/cxx/Offscreen.html

Hi all,
I would like to jump in on this question since I have a similar issue: I don't want the interactive window to appear when I run pvpython XXX.py. However, I am running ParaView 5.10.0 on a cluster/supercomputer, which has strict limitations on rendering. So I wonder what steps I should take to build a headless ParaView that renders offscreen without requiring an accessible X server. Thank you!

Offscreen and headless are not the same thing.

You have a graphical environment (Xorg) but do not want the window to show → offscreen
You do not have a graphical environment, or even no GPU at all → headless

To render offscreen, use the standard ParaView release, but run your script with pvbatch instead of pvpython.
To render headless, use the EGL (GPU required) or OSMESA (no GPU needed) release.
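For example, with a script named myScript.py, offscreen rendering with the standard release is simply:

pvbatch myScript.py

With a headless (EGL or OSMESA) release, you run the same command through that release's binaries, e.g. (the install path here is hypothetical):

/opt/paraview-osmesa/bin/pvbatch myScript.py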

Thank you very much, Mathieu! We will explore the most suitable solution for our case.

Dear other paraview-on-supercomputers users,
dear @catiocean,
just fool ParaView and pretend it has an X server :wink:

xvfb-run -a -s "-screen 0 3840x2160x24" pvpython myScript.py

The tool xvfb-run is not uncommon on our machines. In case you want to run many scripts in parallel (e.g. with a & at the end of the line above), this might cause problems; have a look at xvfb-run unreliable when multiple instances invoked in parallel - Stack Overflow.
The following command, however, lets you run many post-processing scripts in parallel (a sketch of the wrapper follows below):

xvfb-run-safe -a -s "-screen 0 3840x2160x24" pvpython myScript.py
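Note that xvfb-run-safe is not a standard tool but a small wrapper script. A minimal sketch of one possible implementation, adapted from the idea in the linked Stack Overflow answer (serialize the choice of display number with flock); the lock directory and display range here are assumptions, and the version in the linked answer is more complete:

#!/bin/bash
# xvfb-run-safe (sketch): pick a free X display number under an exclusive
# flock, so parallel invocations do not grab the same display.
mkdir -p "$HOME/.xvfb-locks" || exit 1
for n in $(seq 99 599); do
    # skip displays that already have an X lock file
    [ -f "/tmp/.X${n}-lock" ] && continue
    exec 9>"$HOME/.xvfb-locks/${n}" || continue
    if flock -x -n 9; then
        # pin the display and pass all remaining arguments through
        # (a fuller wrapper would filter out a caller-supplied -a,
        # which would otherwise override -n inside xvfb-run)
        xvfb-run -n "$n" "$@"
        exit $?
    fi
done
echo "xvfb-run-safe: no free display found" >&2
exit 1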

Edit: Of course, resolutions other than 4K are possible as well :wink:

And get very bad rendering performance, because you are using Mesa instead of GPU-accelerated EGL.
It could be considered an alternative to OSMESA, but I would not expect good performance compared to OSMESA either.

The performance is bad, I agree. Although, rereading my own points below, I can state that the performance is good enough for us:

  1. I don't have a GPU on the compute nodes (most of them).
  2. Typically on a supercomputer you submit a compute job and have to wait a couple of hours or days until the job starts. The additional delay due to rendering performance does not hurt at all :wink:
     BTW: the performance is actually not that bad! Depending on the viz pipeline, reading result files from the file system consumes more time than the actual rendering, so file-system I/O is the bottleneck here.
  3. I simply start many jobs in parallel if I have a couple of different visualization tasks.
  4. I can create images on a per-frame basis, so I can additionally introduce parallelism by creating one image per CPU core (see the sketch after this list). RAM usage does not hurt me, as I have enough of it.
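A minimal sketch of that per-frame parallelism, assuming a hypothetical myScript.py that takes the frame number as its only argument and writes one image per call:

# render frames 0..99, running at most one pvpython job per CPU core
seq 0 99 | xargs -P "$(nproc)" -I{} \
    xvfb-run-safe -a -s "-screen 0 3840x2160x24" pvpython myScript.py {}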