I’m trying to compare the GPU-accelerated ParaView 5.11.0 in client/server mode to the CPU version under various conditions (number of GPUs, number of cores, etc.). The test server is a dual-GPU Linux machine (Ubuntu 20.04 LTS) with two Xeon processors and 256 GiB of RAM.
The 5.11.0 EGL version (downloaded from the ParaView website) seems to work fine on my ~15 GiB volumetric test data. However, I can’t find a command-line switch to disable GPU acceleration so I can measure “pure CPU” performance against the GPU-accelerated timings.
I therefore tried the 5.11.0 Mesa variant (again downloaded from the ParaView website), and when I try to load the test data I get the following message:
If you wanted to, you could raise the limit either by submitting a change to the mesa code base or by editing mesa locally and compiling both it and ParaView yourself. That’s the only way to raise it.
@cory.quammen, can you point to the specific place where this can be changed? I am trying OpenSWR on a CPU-only system and getting this error. I tried modifying this section, but no luck so far.
A timely question; I was just looking into this recently. The limit is sadly not an arbitrary hard-coded value, as I had hoped, but is set to match the indexing range of the 32-bit integers used in the llvmpipe driver. Note the helpful comment above the code you linked to:
* 2GB is the actual max currently (we always use 32bit offsets, and both
* llvm GEP as well as avx2 gather use signed offsets).
So I’m afraid it would take far deeper changes in this driver to support image volumes larger than 2 GB.
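To make the arithmetic concrete, here is a small standalone sketch (my own back-of-the-envelope check, not llvmpipe code) of why signed 32-bit offsets cap a single texture at 2 GiB, and why a ~15 GiB volume like the one in the original post cannot fit:

```c
/* Sketch: the 2 GiB ceiling implied by signed 32-bit byte offsets.
   Plain arithmetic for illustration only, not driver code. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Largest byte offset reachable through a signed 32-bit index. */
    const long long max_bytes = INT32_MAX; /* 2^31 - 1 = 2147483647 */

    /* The ~15 GiB volume from the original post. */
    const long long vol_bytes = 15LL * 1024 * 1024 * 1024;

    printf("offset limit : %lld bytes\n", max_bytes);
    printf("volume needs : %lld bytes -> %s\n", vol_bytes,
           vol_bytes <= max_bytes ? "fits" : "exceeds the limit");
    return 0;
}
```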
Ok, thanks. I wasn’t sure how to interpret the comment. That 2 GB limit is quite restrictive for work at this scale. It seems the individual texture dimensions can be raised by editing
#define LP_MAX_TEXTURE_3D_LEVELS 12 /* 2K x 2K x 2K for now */
and I successfully managed to render volumes of 2560x2560x130, but the hard memory limit remains a blocker.
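For reference, here is how I understand the relationship between that define and the per-axis texture size (my reading of mesa’s lp_limits.h: the maximum edge length is 2^(levels − 1), so 12 levels means 2048 texels per axis), and why the 2560x2560x130 case can still fit under the 2 GiB offset cap:

```c
/* Sketch of how LP_MAX_TEXTURE_3D_LEVELS bounds the per-axis size.
   Assumption (my reading of lp_limits.h): max edge = 2^(levels - 1). */
#include <stdio.h>

#define LP_MAX_TEXTURE_3D_LEVELS 12 /* 2K x 2K x 2K for now */

int main(void) {
    const unsigned max_edge = 1u << (LP_MAX_TEXTURE_3D_LEVELS - 1); /* 2048 */
    printf("max 3D texture edge: %u texels per axis\n", max_edge);

    /* A 2560 x 2560 x 130 volume needs a raised per-axis limit, but
       its total footprint can stay under the 2 GiB offset cap
       (e.g. ~1.7 GB at an assumed 2 bytes per voxel), which would
       explain why it renders. */
    const long long vox = 2560LL * 2560 * 130;
    printf("2560x2560x130 at 2 B/voxel: %lld bytes\n", vox * 2);
    return 0;
}
```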