With ParaView 5.11.1 (official binaries) on a 72-core node (Xeon Platinum 8360Y @ 2.40GHz, Linux 4.18 x86_64, 512 GB memory), I get extremely sluggish slice plane filter performance; or at least nothing seems to happen for a long time (see below). This is with a uniform rectilinear dataset of dimensions 862x759x1526, containing 4 float32 arrays, loaded from a 41GB .vti file.
Applying a slice filter, touching no options and pressing Apply, takes roughly 19 seconds. The weird thing is that for more or less the first 18 seconds no updates happen in the UI; only in the final second is the progress bar updated and the slice plane shown. The same thing happens when I move the slice plane and press Apply again.
Looking at some other operations on the data: applying a contour filter on the same dataset takes roughly 11 seconds, with the progress bar updating continuously from the moment I press Apply. I find it bizarre that the contour is so much faster, given that it computes a whole lot more than a simple slice.
A Clip to half the dataset takes around 5 seconds.
Any idea why the slice filter can be so slow? Would it matter that one of the data arrays is a 6-dimensional float array?
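For reference, here is my rough estimate of the in-memory size of the dataset (assuming the "6-dimensional" array means 6 components per point, and the other three arrays are single-component; both are assumptions on my part):

```python
# Rough in-memory size estimate for the dataset described above.
# Assumption: three single-component float32 arrays plus one
# 6-component float32 array over a 862x759x1526 uniform grid.
npoints = 862 * 759 * 1526        # ~1.0e9 points
bytes_per_float32 = 4
components = 3 * 1 + 1 * 6        # three scalar arrays + one 6-component array
total_bytes = npoints * bytes_per_float32 * components
print(round(total_bytes / 1e9))   # ~36 GB, consistent with the 41 GB .vti file
```

So the slice filter is working over roughly 36 GB of point data, which may be relevant to the timings.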
I’m the only user on the node, btw. I am running this in a VNC server under VirtualGL (as I don’t have any other GPU server with the required amount of memory), but the OpenGL info in Help > About is correct and shows an A100 being used, and the four A100s in the node are properly detected judging from the console output. Also, when I use a Python trace to apply a slice filter and run it under pvbatch, I see the same slow times.
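This is roughly the script I used for the pvbatch timing, reduced from the Python trace (a sketch; `data.vti` stands in for the actual 41 GB file, and the slice settings are the GUI defaults as far as I can tell):

```python
# Sketch of timing a default slice under pvbatch, based on a Python trace.
# 'data.vti' is a placeholder for the actual dataset.
import time
from paraview.simple import XMLImageDataReader, Slice

reader = XMLImageDataReader(FileName=['data.vti'])
reader.UpdatePipeline()            # force the file to be read first

t0 = time.time()
sl = Slice(Input=reader)           # default plane through the dataset center
sl.UpdatePipeline()                # equivalent to pressing Apply in the GUI
print('slice took %.1f s' % (time.time() - t0))
```

This reports the same ~19 seconds as the GUI, so the slowness doesn't seem related to VNC/VirtualGL or rendering.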