What are the current (known) limitations for volume rendering (VR) of voxel data in PV?
For example, one of our datasets has about 23e9 voxels (3000 x 3000 x 2607, i.e. ~23.4 GB), for which I cannot get a VR at all (the render is completely blank). With PV 5.4.1, something becomes visible when I reduce the extent to 2000 x 1000 x 2607 (~5.2 GB), but only with OSPRay as the rendering backend, and even then the contrast is wrong (inverted, nearly all black). With 1000 x 1000 x 2607 (~2.6 GB) the contrast is correct, but only if I restart PV after it has rendered with the wrong contrast. “Smart”, “GPU Based” and “Ray Cast Only” only work at 1000 x 1000 x 1000 and below, and “Ray Cast Only” gives very poor quality (low resolution).
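For reference, this is how I arrive at the sizes quoted above, assuming one byte per voxel (e.g. unsigned char scalars); the helper function is just my own illustration, not ParaView API:

```python
# Hypothetical helper to estimate raw voxel-data size in GB,
# assuming 1 byte per voxel (unsigned char); adjust bytes_per_voxel
# for other scalar types (e.g. 2 for unsigned short, 4 for float).
def raw_size_gb(nx, ny, nz, bytes_per_voxel=1):
    return nx * ny * nz * bytes_per_voxel / 1e9

print(raw_size_gb(3000, 3000, 2607))  # full dataset (the ~23.4 GB quoted above)
print(raw_size_gb(2000, 1000, 2607))  # reduced extent, ~5.2 GB
print(raw_size_gb(1000, 1000, 2607))  # further reduced, ~2.6 GB
```

Even the full dataset at one byte per voxel is roughly twice the 12 GB of VRAM we have, which is why I would expect the GPU mapper to struggle but not the CPU path.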
Using PV 5.6.0, I do not see anything at all with any of the reductions described above.
The same problems occur when using pvpython with https://github.com/romangrothausmann/ParaView_scripts/blob/master/render-view.py
We have 512 GB of RAM and a Titan X with 12 GB of VRAM.
Is “Ray Cast Only” still the CPU volume renderer from VTK?
Shouldn’t at least the CPU VR be able to handle the full dataset, since it has 512 GB of RAM at its disposal?
Any help or hints are very much appreciated.