I use a set of scripts to export a standard set of pictures for specific simulation types. The scripts run just fine and produce the expected pictures, as intended (so great job on that, devs!).
The number of simulations has recently increased, so I thought I would include the post-processing inside the job running on our cluster, using the osmesa build available from the downloads page.
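For context, the kind of job I had in mind looks roughly like this (a sketch only: I am assuming a SLURM scheduler here, `export_pictures.py` is a placeholder for my actual script, and the install path is just where the build is unpacked on our cluster):

```shell
#!/bin/bash
#SBATCH --job-name=sim-postproc
#SBATCH --ntasks=1

# Run the simulation as usual, then render the standard set of
# pictures headlessly with the osmesa pvbatch inside the same job.
# (Script name is a placeholder; adjust the install path as needed.)
/cm/shared/apps/paraview/5.9.0-osmesa/bin/pvbatch export_pictures.py
```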
Unfortunately, that did not give the expected result. It produces a SIGSEGV when the script creates a new render view, with the message:
[pvbatch ]vtkOpenGLRenderWindow.c:458 ERR| vtkOSOpenGLRenderWindow (0x16807780): GLEW could not be initialized: Missing GL version
I tried on a compute node (so no X server, I guess), but also on another node with an X server present, where the regular ParaView version works.
Not sure what I am missing; I thought the osmesa build was supposed to be made for exactly such cases.
Can anyone give me a clue about what I am doing wrong?
Is there a subtlety about the osmesa build that I am missing that could help fix this issue?
I just tried the ParaView 5.9.1 osmesa binary release without an X server and it is working great.
Can you share how you run it?
I am not really doing anything special:
Download the osmesa build.
Unzip it on the cluster I use.
Run the pvbatch binary from the bin folder with my Python script.
I do not know if it is of interest, but here is the output of ldd:
libvtksys-pv5.9.so.1 => /cm/shared/apps/paraview/5.9.0-osmesa/bin/../lib/libvtksys-pv5.9.so.1 (0x0000155554b58000)
libdl.so.2 => /lib64/libdl.so.2 (0x0000155554954000)
libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00001555545bf000)
libm.so.6 => /lib64/libm.so.6 (0x000015555423d000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x0000155554025000)
libc.so.6 => /lib64/libc.so.6 (0x0000155553c63000)
libGL.so.1 => /lib64/libGL.so.1 (0x00001555539d0000)
libX11.so.6 => /lib64/libX11.so.6 (0x000015555368c000)
libXext.so.6 => /lib64/libXext.so.6 (0x0000155553479000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x0000155553259000)
libGLX.so.0 => /lib64/libGLX.so.0 (0x0000155553027000)
libGLdispatch.so.0 => /lib64/libGLdispatch.so.0 (0x0000155552d6b000)
libxcb.so.1 => /lib64/libxcb.so.1 (0x0000155552b42000)
libXau.so.6 => /lib64/libXau.so.6 (0x000015555293e000)
Definitely not the right binary:
[glow@arch ~/work/paraview/others/ParaView-5.9.1-osmesa-MPI-Linux-Python3.8-64bit/bin]$ ldd pvbatch
libvtksys-pv5.9.so.1 => /home/glow/work/paraview/others/ParaView-5.9.1-osmesa-MPI-Linux-Python3.8-64bit/bin/./../lib/libvtksys-pv5.9.so.1 (0x00007f343ba0b000)
libdl.so.2 => /usr/lib/libdl.so.2 (0x00007f343b9e5000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x00007f343b7cf000)
libm.so.6 => /usr/lib/libm.so.6 (0x00007f343b68b000)
libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x00007f343b670000)
libc.so.6 => /usr/lib/libc.so.6 (0x00007f343b4a4000)
/lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007f343bc68000)
[glow@arch ~/work/paraview/others/ParaView-5.9.0-osmesa-MPI-Linux-Python3.8-64bit/bin]$ ldd pvbatch
libvtksys-pv5.9.so.1 => /home/glow/work/paraview/others/ParaView-5.9.0-osmesa-MPI-Linux-Python3.8-64bit/bin/./../lib/libvtksys-pv5.9.so.1 (0x00007f6b163f3000)
libdl.so.2 => /usr/lib/libdl.so.2 (0x00007f6b163cd000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x00007f6b161b7000)
libm.so.6 => /usr/lib/libm.so.6 (0x00007f6b16073000)
libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x00007f6b16058000)
libc.so.6 => /usr/lib/libc.so.6 (0x00007f6b15e8c000)
/lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007f6b16650000)
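You can see the difference directly in the ldd output: a headless (osmesa) build should link neither libGL nor libX11, so grepping for them flags a wrong binary immediately. A quick check (the path is just the cluster install path quoted above; adjust as needed):

```shell
# Path from the cluster install quoted above; adjust as needed.
PVBATCH=/cm/shared/apps/paraview/5.9.0-osmesa/bin/pvbatch

# A true osmesa build links neither libGL nor libX11; any match here
# means the system OpenGL/X11 stack is being pulled in instead.
if ldd "$PVBATCH" 2>/dev/null | grep -E 'libGL|libX11'; then
    echo "system GL/X11 libraries are linked in"
else
    echo "no GL/X11 dependencies found"
fi
```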
Did you download it from the headless section here:
That’s supposed to be the case.
I just checked the md5sum of the archive I used:
It is the same as the one available from the downloads page:
The output I copied came from a machine with an X server installed. Could that be the cause of the different ldd output?
Can you try downloading and extracting again?
I also have an Xorg server running when running ldd.
I re-downloaded 5.9.0 and 5.9.1, checked the md5sums, and extracted both, but ldd still shows the same libraries. I do not know why.
The md5sums of pvbatch itself:
I have the same md5sum. I checked on another computer and I get the same ldd output. No idea what is happening. Please try on another computer.
I tried a couple of things again this morning to get this to work, and succeeded!
VirtualGL is used on our cluster to accelerate 3D rendering, and its libraries are explicitly declared in the LD_PRELOAD variable.
Unsetting LD_PRELOAD before starting pvbatch fixed the ldd output, and the binary too!
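Concretely, the fix amounts to clearing LD_PRELOAD for the pvbatch process (the install path and script name below are just examples, not the real ones):

```shell
# The VirtualGL libraries reach the process through LD_PRELOAD, so
# launching pvbatch with that variable removed from its environment
# restores the osmesa rendering stack.
# (Path and script name are examples; substitute your own.)
env -u LD_PRELOAD /cm/shared/apps/paraview/5.9.0-osmesa/bin/pvbatch export_pictures.py
```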
Right, I should have thought of that.