SPHVolumeInterpolator, big dataset and parallel rendering

Hi,

I’m using ParaView to view outputs from Smoothed Particle Hydrodynamics (SPH) simulations. The data is in H5Part format, containing arrays of X, Y, Z and other physical quantities of our SPH particles. I’m able to visualise some of the files on my machine, either as raw particles, or by interpolating onto a volume with SPHVolumeInterpolator and rendering the volume or isosurfaces.

Now, for higher-resolution simulations (i.e. more particles), and/or a finer volume to interpolate onto, the data no longer fits in the computer memory.

To get around this, I thought I could run ParaView on the cluster across multiple nodes to spread the memory usage. But I’m not sure whether I’m doing it correctly, or whether it’s actually possible.

To test it, here is what I tried (simply on my machine first to validate the setup, the goal being to later run a pvbatch script on the cluster with the big dataset):

  1. run `mpiexec -np 4 pvserver` to start multiple parallel server processes.
  2. run paraview and connect to the pvserver.
  3. open the h5part file.
  4. display the process ID: the data already looks spread across the various processes.
  5. (optionally?) use a RedistributeDataSet filter so that spatially contiguous particles end up on the same node.
  6. (optionally?) use a Ghost Cells filter.
  7. set up an SPHVolumeInterpolator (see the pvbatch sketch below).
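
For reference, here is a rough pvbatch sketch of steps 3-7. The proxy names are assumptions based on the GUI filter names and may differ between ParaView versions; the file names are placeholders, and recording a trace in the GUI (Tools > Start Trace) is the safest way to get the exact names for a given build.

```python
# Hypothetical pvbatch version of steps 3-7; run with:
#   mpiexec -np 4 pvbatch interpolate.py
from paraview.simple import *

# step 3: open the h5part file (placeholder path)
particles = OpenDataFile("particles.h5part")

# step 4: tag each particle with the rank that owns it, to check the distribution
with_ids = ProcessIdScalars(Input=particles)

# step 5: make each rank hold a spatially contiguous chunk of particles
redistributed = RedistributeDataSet(Input=with_ids)

# step 6 (optional): generate ghost cells across rank boundaries
# (proxy called GhostCellsGenerator in older versions, GhostCells in newer ones)
# ghosts = GhostCells(Input=redistributed)

# step 7: interpolate the particles onto a regular volume
# (proxy name assumed from the GUI filter name; the kernel and array
# properties would be set here exactly as in the GUI)
interpolated = SPHVolumeInterpolator(Input=redistributed)

# save the result so it can be inspected later
SaveData("interpolated.vti", proxy=interpolated)
```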

Trying different configurations of 5/6/7, it seems to me that SPHVolumeInterpolator supports multithreading but not distributed parallelism. Is this correct?

If so, is there a way to achieve what I’m trying to do? The only other option I see is getting access to a machine with a lot more memory than the one I have at hand, where I might be able to pull this off locally.

Thanks

@Francois_Mazen

Hi @nabajour!

Yes, you are correct: distributed parallelism is not supported yet, so running the process with MPI will not help.

To limit memory overhead, you could try using the VTK filters directly, outside of ParaView, via Python or C++ in batch mode, and save the resampled result to a .vti file that you can open in ParaView later.
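
For instance, something along these lines (a minimal sketch, assuming the H5Part file stores its particle arrays under Step#0 with names like x, y, z and density; the file path, grid resolution and smoothing length are placeholders to adjust for your data):

```python
# Rough batch sketch: load SPH particles with h5py, resample them onto a
# regular grid with VTK's vtkSPHInterpolator (the class behind the SPH
# interpolation filters), and write the result to a .vti file.
import h5py
import numpy as np
import vtk
from vtk.util.numpy_support import numpy_to_vtk

# --- load the particles (H5Part layout assumed: /Step#0/x, /Step#0/y, ...) ---
with h5py.File("particles.h5part", "r") as f:
    step = f["Step#0"]
    xyz = np.column_stack([step["x"][:], step["y"][:], step["z"][:]])
    density = np.asarray(step["density"][:], dtype=np.float64)

points = vtk.vtkPoints()
points.SetData(numpy_to_vtk(np.ascontiguousarray(xyz), deep=True))
particles = vtk.vtkPolyData()
particles.SetPoints(points)
rho = numpy_to_vtk(density, deep=True)
rho.SetName("density")
particles.GetPointData().AddArray(rho)

# --- define the volume to interpolate onto ---
nx = ny = nz = 128                      # placeholder resolution
b = particles.GetBounds()               # (xmin, xmax, ymin, ymax, zmin, zmax)
volume = vtk.vtkImageData()
volume.SetDimensions(nx, ny, nz)
volume.SetOrigin(b[0], b[2], b[4])
volume.SetSpacing((b[1] - b[0]) / (nx - 1),
                  (b[3] - b[2]) / (ny - 1),
                  (b[5] - b[4]) / (nz - 1))

# --- SPH interpolation ---
kernel = vtk.vtkSPHQuinticKernel()
kernel.SetSpatialStep(0.1)              # placeholder: your smoothing length

interpolator = vtk.vtkSPHInterpolator()
interpolator.SetInputData(volume)       # grid to interpolate onto
interpolator.SetSourceData(particles)   # the SPH particles
interpolator.SetKernel(kernel)
interpolator.SetDensityArrayName("density")
interpolator.Update()

# --- write the resampled volume for later use in ParaView ---
writer = vtk.vtkXMLImageDataWriter()
writer.SetFileName("interpolated.vti")
writer.SetInputConnection(interpolator.GetOutputPort())
writer.Write()
```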

However, we can definitely improve these SPH interpolation filters with distributed parallelism. Please reach out to Kitware if you are interested!