I’m working with a multiblock dataset of rectilinear grids (1024 blocks, with blanking) that has two cell data arrays: one unsigned char and one float. The dataset has 8,245,157,400 cells in total, and I’m running pvserver with 32 MPI processes across 8 nodes (4 processes per node). The memory inspector reports pvserver using less than 6% / 6 GiB of the system memory on each node.

When I apply the Resample to Image filter with Sampling Dimensions of [1500, 1200, 1200], which produces 2,154,963,899 cells, the memory inspector reports pvserver using between 27% / 34 GiB and 52% / 66 GiB of memory. Note that I’m still using the outline representation throughout, so no rendering-related operations should be taking a significant amount of memory anywhere.

Any ideas on how to reduce the memory footprint of the Resample to Image filter?
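For context, here is my back-of-envelope estimate of the resampled image's payload. This is just a sketch under my assumptions: that Resample to Image emits point data for the float array and the unsigned char array, plus a vtkValidPointMask unsigned char array for the blanked/uncovered points.

```python
# Rough size estimate for the Resample to Image output.
# Assumption: per output point we pay for a float array (4 B), an
# unsigned char array (1 B), and a vtkValidPointMask uchar (1 B).
dims = (1500, 1200, 1200)
n_points = dims[0] * dims[1] * dims[2]                  # sampled points
n_cells = (dims[0] - 1) * (dims[1] - 1) * (dims[2] - 1) # 2,154,963,899

bytes_per_point = 4 + 1 + 1
total_gib = n_points * bytes_per_point / 2**30

print(f"{n_points:,} points, {n_cells:,} cells, "
      f"~{total_gib:.1f} GiB aggregate across all ranks")
```

By that estimate the output arrays alone come to roughly 12 GiB aggregate, far less than what the memory inspector shows per process, so I assume the overhead is coming from somewhere else in the filter (intermediate structures, input redistribution, or similar), but I don't know where.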