ParaView benchmarks without a display

Hi all,

I'm trying to run the ParaView benchmarks on a Rocky Linux 8 server that has NVIDIA GPUs. They run fine when a display is set, but crash when one isn't, which makes sense, since a window with the rendering appears during a run. Is there a way to run these benchmarks without a display available, or are there other benchmarks that can be run purely from the command line?
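To be concrete: by "without a display" I mean the DISPLAY environment variable is not even set on these compute nodes. A tiny check along these lines (plain Python, hypothetical helper) comes back False for them:

```python
import os

def has_display() -> bool:
    """True if an X11 display is configured in the environment."""
    return bool(os.environ.get("DISPLAY"))

print(has_display())
```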

The benchmark script crashes in the Render() call. The following minimal script (the same Sphere/Show/Render sequence as in the interactive session further down) also fails under pvpython if a display is not configured:

from paraview.simple import *

Sphere()
Show()
Render()  # crashes here

This seems to be a fairly common question according to Google searches, but most of the results are at least a few years old, so I was curious whether there have been any updates, or whether there are additional benchmarks that don't need the Render() call.

Maybe there is a View that I can pass to the Render() function?

Things I’ve tried

One person pointed out that headless and off-screen rendering are different, which was helpful: How to build headless paraview in local? - #8 by mwestphal. However, that approach appears to require code changes to the benchmarks, which I was hoping to avoid. I'd rather use the official benchmarks than write my own, especially since I'm far from a ParaView expert.

Does anyone have any thoughts or suggestions on benchmarking without a display? I have a bunch of compute nodes that I'd like to test, and I don't want to set up a display on every one of them each time I want to run the benchmark/testing suite. Thanks!

You never mention in your post how you installed ParaView, but it looks to me like you are using the binary release from the ParaView download page.

On that page, you may notice the "ParaView Server for Headless Machines" section. Use that: the EGL version if you have a GPU, or the osmesa version if you don't.
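As a sketch, a headless run splits into two command lines: the headless pvserver build does the rendering, and a pvpython client drives it. The helper below just assembles those command lines; pvserver's --server-port and pvpython's --force-offscreen-rendering are real ParaView options, while the script name "benchmark.py" is a placeholder.

```python
# Sketch: command lines for a headless client/server run.
import shlex

def server_cmd(port: int = 11111) -> list:
    """pvserver from the EGL/osmesa 'Headless' package does the rendering."""
    return ["pvserver", "--server-port=%d" % port]

def client_cmd(script: str) -> list:
    """pvpython client, forced not to open a window of its own."""
    return ["pvpython", "--force-offscreen-rendering", script]

print(shlex.join(server_cmd()))                # pvserver --server-port=11111
print(shlex.join(client_cmd("benchmark.py")))
```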


I’ll try that. Thanks Mathieu.

So I have a pvserver launched, but I’m not sure how to use it to do headless rendering. pvpython has the -s/--server flag, so I tried that, but it didn’t seem to change anything.

I did also try via the interactive pvpython console. I was able to connect, but I was still getting a segfault.

>>> from paraview.simple import *
>>> Connect("localhost")
Connection (cs://localhost:11111) [2]
>>> Sphere()
<paraview.servermanager.Sphere object at 0x7fafb4bf2610>
>>> Show()
VisRTX 0.1.6, using devices:
 0: Quadro P6000 (Total: 25.6 GB, Available: 25.4 GB)
 1: Quadro P6000 (Total: 25.6 GB, Available: 25.4 GB)
<paraview.servermanager.GeometryRepresentation object at 0x7fafb4e9ebb0>
>>> Render()
error: exception occurred: Segmentation fault

I must be missing something here.

Maybe this issue: I'm using Ubuntu 22 and ParaView osmesa binary crashes on rendering?

Are you using the osmesa or EGL version?

$ /lib64/
GNU C Library (GNU libc) stable release version 2.28.

I have GPUs and believe I’m using the EGL version.

Can you try this:

Using the EGL binary release I get the following:

'NVIDIA Corporation'
'4.6.0 NVIDIA 550.54.14'
'NVIDIA GeForce GTX 1660/PCIe/SSE2'

What do you get?

Looks like we’re running Mesa. I thought we were running EGL since we’ve got GPUs for acceleration. I’ll go back through the notes and try things based on Mesa instead of EGL.

What makes you say that ?

I ran your commands and I was told it was Mesa 🙂

>>> from paraview.simple import *
>>> openGLInfo = GetOpenGLInformation()
>>> openGLInfo.GetVendor()
>>> openGLInfo.GetVersion()
'4.5 (Core Profile) Mesa 23.1.4'
>>> openGLInfo.GetRenderer()
'llvmpipe (LLVM 16.0.6, 256 bits)'

Then you need to fix that: you are currently not using your GPUs at all.
I suppose you used pvpython from the EGL binary release?
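For anyone hitting this later, the telltale sign is the renderer string. A small pure-Python check (a hypothetical helper, not part of ParaView) captures the rule of thumb: llvmpipe/softpipe/swrast means a CPU rasterizer, i.e. the GPUs are idle.

```python
def is_software_rendering(renderer: str) -> bool:
    """True if the OpenGL renderer string indicates a CPU rasterizer
    (llvmpipe/softpipe/swrast) rather than a real GPU driver."""
    renderer = renderer.lower()
    return any(s in renderer for s in ("llvmpipe", "softpipe", "swrast"))

print(is_software_rendering("llvmpipe (LLVM 16.0.6, 256 bits)"))   # True
print(is_software_rendering("NVIDIA GeForce GTX 1660/PCIe/SSE2"))  # False
```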

We’ve been using VirtualGL to intercept ParaView and get some acceleration. I’m not totally sure why we haven’t been using the EGL version. I’ll have to ask around.


Indeed, the title of this thread mentioned “without display”, which does not seem to be the case here.

I think I meant headless. I won’t have a graphical display when I run the benchmarks.

Yes? But you use VirtualGL anyway. I’m a bit confused here.

Sorry for the confusion; I’m starting to put the pieces together as we’ve been talking, and I think I understand what you’re saying. I need to go back to the drawing board and outline more clearly how to do what we want. The benchmarks aren’t a very good test if we’re running Mesa without VirtualGL.
