Unable to get the D3 filter to work on even modest-size unstructured meshes

I have a large unstructured volume mesh (~400M points) in CGNS format that I want to visualize in parallel on a distributed HPC cluster. I have no problem launching an MPI pvserver and connecting to it. The CGNS file is 270 GB and I have 500 GB of memory available; however, for reasons I do not understand, I exceed 500 GB just trying to load the mesh into ParaView.
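For reference, this is roughly how I start the server and load the file; the host name, rank count, and file path below are placeholders, not my exact setup:

```python
# On the cluster, the server is started with MPI, e.g.:
#   mpiexec -np 128 pvserver --server-port=11111
# Then, from pvpython (or the GUI) on the client:
from paraview.simple import *

Connect('compute-node-01', 11111)   # connect to the running pvserver
reader = OpenDataFile('mesh.cgns')  # ParaView selects the CGNS reader
Show(reader)
Render()
```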

So, just out of curiosity, I tried my workflow on a much smaller mesh (available here: https://hlpw5.s3.amazonaws.com/hlpw5_grids/re_study/3.R.01/CDS_PW_3p0_Lvl-A-ReyNo-1p05M.cgns.gz) that is only 3 GB after unzipping. I was able to load this mesh easily, and the memory inspector shows the load requires about 3 GB of memory, which is what I would expect; it still doesn't explain why the big mesh requires so much more memory than the size of its CGNS file.

Anyway, after loading this smaller mesh, I clipped a subset and ran the D3 filter on that clip to partition the data and distribute it to the parallel processes I have running; however, the status bar showed “Compute spatial partitioning (33%)” for ten hours before the scheduler killed my job for exceeding the wall-time limit.
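For what it's worth, the pvpython equivalent of what I'm doing in the GUI looks roughly like this (the clip settings are placeholders, not my exact values):

```python
from paraview.simple import *

reader = OpenDataFile('CDS_PW_3p0_Lvl-A-ReyNo-1p05M.cgns')

clip = Clip(Input=reader)   # clip out a subset of the volume
clip.ClipType = 'Plane'

d3 = D3(Input=clip)         # repartition across the pvserver ranks
d3.UpdatePipeline()         # this is the step that stalls at 33%
```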

Can anyone shed any light on these issues? Is there any alternative to using the D3 filter to partition a mesh?

In current versions of ParaView, the Redistribute DataSet filter is preferred over D3 for repartitioning datasets.
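If you are driving this from pvpython, a minimal sketch would look something like the following; the filter should be exposed as RedistributeDataSet in paraview.simple, and the file path is just a placeholder:

```python
from paraview.simple import *

reader = OpenDataFile('mesh.cgns')

# Redistribute DataSet partitions the data across the pvserver MPI ranks
redistribute = RedistributeDataSet(Input=reader)
redistribute.UpdatePipeline()
```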

That’s good to know, thanks. Any thoughts on why a 270 GB mesh would require more than 500 GB of memory just to load into ParaView?

Somewhere in the loading process, data must be getting duplicated or auxiliary data structures created, perhaps in the reader. I don’t know the CGNS reader internals, so I can only speculate. It would take some memory profiling to pinpoint where the memory pressure is highest and what is going on.

I’ve been able to run Redistribute DataSet on multiple test grids and get it to work properly; however, it is segfaulting on my big 400M-point grid. The pvserver is killed with a generic message:

error: exception occurred: Segmentation fault

I’m trying this on a high-memory node with 2 TB of RAM. I am monitoring memory usage and am nowhere near capacity (it dies at 500 GB of 2000 GB). Is there a known memory bug with this filter?

I’m not aware of a known issue.