The operation works well for a small dataset but fails on larger data, even on very large-memory systems. With distributed parallelism (2 nodes) it fails even sooner.
I load the data from Exodus files: about 500 million cells and points, 85 GB in total. I use MergeBlocks to convert it to an UnstructuredGrid, on the theory that most filters work well with that.
I load the surface (PolyData saved as .pvd, 282 MB with no fields attached), run AppendReduce (18 GB), then Extract Surface (because AppendReduce returns an UnstructuredGrid). The result has 475M cells and 17.6 GB.
I’m running this on two large-memory nodes (~2TB each).
It fails with out-of-memory errors. I would appreciate any advice on how to get this to work.
Thanks for sharing the screenshot from the memory inspector. It looks like you have plenty of RAM available before the Programmable Filter runs, so I don't suspect the Python in the Programmable Filter itself is causing the OOM error. Does ParaView's Clip filter with a Plane (or another implicit function) complete on this same volume dataset? If not, then the problem is with clipping in general: the output of clipping is tetrahedra, so if you start with hexahedra, for instance, and those get converted to tetrahedra, memory use goes way up because of the additional cell-connectivity data.
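To give a feel for the scale, here is a back-of-the-envelope estimate of connectivity growth when hexes are tetrahedralized. The numbers are illustrative assumptions (64-bit cell ids, 5 tets per hex), not measurements of this dataset:

```python
# Rough estimate of cell-connectivity growth when hexahedra are
# tetrahedralized by Clip. All numbers are illustrative assumptions,
# not measurements of the dataset in this thread.

N_CELLS = 500_000_000   # ~500M cells, as in the original post
ID_BYTES = 8            # vtkIdType is typically 64-bit

hex_ids_per_cell = 8    # a hexahedron references 8 points
tets_per_hex = 5        # a hex splits into 5 (sometimes 6) tets
tet_ids_per_cell = 4    # a tetrahedron references 4 points

hex_connectivity_gb = N_CELLS * hex_ids_per_cell * ID_BYTES / 1e9
tet_connectivity_gb = N_CELLS * tets_per_hex * tet_ids_per_cell * ID_BYTES / 1e9

print(f"hex connectivity: ~{hex_connectivity_gb:.0f} GB")   # ~32 GB
print(f"tet connectivity: ~{tet_connectivity_gb:.0f} GB")   # ~80 GB
```

So the connectivity arrays alone can more than double, before counting the extra points and field arrays the clip also produces.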
If Clip works otherwise, then my suspicion is that the additional cell-locator objects allocated inside vtkImplicitPolyDataDistance are spiking memory usage.
Try replacing
function = vtk.vtkImplicitPolyDataDistance()
function.SetInput(inp)
with
function = vtk.vtkPlane()
function.SetOrigin(0, 0, 0)
function.SetNormal(0, 1, 0)
and see if that completes. If so, then we’ve got the problem narrowed down to excessive memory usage in vtkImplicitPolyDataDistance.
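For reference, a minimal Programmable Filter body for this test might look like the sketch below. It assumes the script runs inside ParaView's Programmable Filter with the volume as the first input, and that the clip itself is done with vtkClipDataSet; adjust to match however your actual script drives the clip. It is not runnable outside ParaView:

```python
# Sketch of a ParaView Programmable Filter body for the vtkPlane test.
# Only runs inside ParaView's Programmable Filter, where `self` and the
# pipeline input/output objects are provided by ParaView.
import vtk

inp = self.GetInputDataObject(0, 0)

# Substitute a cheap implicit function for vtkImplicitPolyDataDistance
# to check whether the distance function is the memory hog.
function = vtk.vtkPlane()
function.SetOrigin(0, 0, 0)
function.SetNormal(0, 1, 0)

clipper = vtk.vtkClipDataSet()
clipper.SetInputData(inp)
clipper.SetClipFunction(function)
clipper.Update()

self.GetOutputDataObject(0).ShallowCopy(clipper.GetOutput())
```

If this completes on the full dataset while the vtkImplicitPolyDataDistance version does not, the distance function is the likely culprit.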