I have a large unstructured volume mesh (~400M points) in CGNS format that I want to visualize in parallel on a distributed HPC cluster. I have no problem launching an MPI pvserver and connecting to it. The CGNS file is 270 GB and I have 500 GB of memory available; however, for reasons I do not understand, I exceed 500 GB just trying to load the mesh initially in ParaView.
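In case it helps, here is roughly what the load step looks like when I express it as a headless pvbatch script instead of the GUI workflow (just a sketch: the file name is a placeholder, and I'm assuming the generic OpenDataFile call picks up the CGNS reader from the extension):

```python
# Minimal pvbatch sketch of the load step (file path is a placeholder).
# Run with something like: mpiexec -np <N> pvbatch load_cgns.py
from paraview.simple import *

# OpenDataFile should select the CGNS reader based on the .cgns extension.
reader = OpenDataFile('big_mesh.cgns')

# Force the reader to actually execute so the memory cost is incurred.
reader.UpdatePipeline()

# Report what got loaded (counts are gathered from the server ranks).
info = reader.GetDataInformation()
print('points: %d, cells: %d' % (info.GetNumberOfPoints(), info.GetNumberOfCells()))
```

This step alone is where the memory blows past 500 GB for the big mesh.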
So, just out of curiosity, I tried my workflow on a much smaller mesh (available here: https://hlpw5.s3.amazonaws.com/hlpw5_grids/re_study/3.R.01/CDS_PW_3p0_Lvl-A-ReyNo-1p05M.cgns.gz) that is only 3 GB after unzipping. I was able to load this mesh easily, and the memory inspector shows the load requires about 3 GB of memory, which is what I would expect, but it doesn't explain why the big mesh requires so much more memory than the size of the CGNS file itself.

Anyway, after loading this smaller mesh, I clipped a subset and tried to run the D3 filter on that clip to partition the data and distribute it to the parallel processes I have running; however, the status bar showed "Compute spatial partitioning (33%)" for ten hours before the scheduler killed my job for exceeding the wall-time limit.
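For concreteness, the clip + D3 part of the workflow looks roughly like this in script form (again only a sketch: I'm assuming the D3 filter is exposed to pvpython under the proxy name D3, and I'm leaving the clip at its default plane since the exact clip settings don't seem to matter here):

```python
# Sketch of the clip + D3 step on the small test mesh from the link above.
from paraview.simple import *

reader = OpenDataFile('CDS_PW_3p0_Lvl-A-ReyNo-1p05M.cgns')

# Clip with the default plane through the dataset center; in the GUI I set
# this interactively, so the exact plane is not important.
clip = Clip(Input=reader)

# D3 should compute a spatial partition and redistribute the cells across
# the pvserver ranks. This is the step that stalls at 33% for me.
d3 = D3(Input=clip)
d3.UpdatePipeline()

info = d3.GetDataInformation()
print('after D3: %d points, %d cells' % (info.GetNumberOfPoints(), info.GetNumberOfCells()))
```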
Can anyone shed any light on these issues? Is there any alternative to using the D3 filter to partition a mesh?