I’m trying to compare the GPU-accelerated ParaView 5.11.0 in client/server mode to the CPU version under various conditions (number of GPUs, number of cores, etc.). The test server is a dual-GPU Linux machine (Ubuntu 20.04 LTS) with two Xeon processors and 256 GiB of RAM.
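For reference, each run is launched along these lines; the install paths, host name, and rank counts are just placeholders, and the `--displays` usage reflects my understanding of how ranks get mapped to GPUs:

```
# Server side: start pvserver with N MPI ranks; vary -np to change the core count.
# For the GPU (EGL) runs, --displays is (as I understand it) how ranks are assigned
# to the two GPUs; for single-GPU runs I drop the second index.
mpiexec -np 8 ./ParaView-5.11.0-egl/bin/pvserver --server-port=11111 --displays=0,1

# Client side (workstation): connect to the remote pvserver.
./paraview --server-url=cs://testserver:11111
```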
The 5.11.0 EGL version (downloaded from the ParaView website) seems to work fine on my ~15 GiB volumetric test data. However, I can’t seem to find a command-line switch to disable GPU acceleration, so I have no way to measure “pure CPU” performance to compare against the GPU-accelerated timings.
I therefore tried the 5.11.0 Mesa variant (again downloaded from the ParaView website), and when I try to load the same test data I get the following message:
```
ERROR: OpenGL MAX_3D_TEXTURE_SIZE is 2048
( 107.550s) [pvserver        ] vtkVolumeTexture.cxx:943   ERR| vtkVolumeTexture (0x15847020): Invalid texture dimensions [6180, 4135, 623]
```
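In case it’s relevant, I assume the limit advertised by a software-rendered context can be checked with something like the following (assuming an X display is available for glxinfo; note the ParaView binaries bundle their own Mesa, so the system llvmpipe queried here may report a different value):

```
# Force Mesa's software rasterizer (llvmpipe) and list its OpenGL limits.
LIBGL_ALWAYS_SOFTWARE=1 glxinfo -l | grep -i MAX_3D_TEXTURE_SIZE
```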
Given that I’m faking the graphics hardware via Mesa, is there a way to disable this limitation? I’ve got plenty of RAM to use, after all!