How to pass different blocks of memory to ParaView Catalyst?


(Chou) #1

Hi everyone,

I am working on migrating a simulation from VisIt Libsim to ParaView Catalyst. In this simulation, each process manages a set of blocks (called domains in VisIt) that are usually non-contiguous. VisIt Libsim has native support for this case and allows a grid to be introduced as a set of blocks.

I was reading the ParaView Catalyst user guide to find the proper solution for this case in ParaView. What I have found so far is to use a MultiPiece or MultiBlock dataset and put each block in a separate grid.

I just want to ask: is using MultiBlock and MultiPiece datasets good practice in this case? Are there any other features in ParaView Catalyst to handle such cases?

Thanks in advance,
Chou


(Andy Bauer) #2

Hi Chou,

It depends on what your data looks like. If each block has the same set of arrays and you only have a single block per process (also assuming you don't have AMR-type data), then I would suggest going with the most appropriate class that derives from vtkDataSet. The reason is that this then allows you to directly use certain filters that don't work on multiblock datasets (the Ghost Cells Generator filter, for example) because they can't make assumptions about relations between the different blocks of a multiblock dataset.

Best,
Andy


(Chou) #3

Hi Andy,

Thanks for your response.
As I mentioned in my question, the problem is that every process manages more than one block and the blocks are not contiguous. The global grid is rectilinear. To me, this means that I cannot use a single rectilinear grid in this case and I should go for a multi-piece dataset. Am I right?

Thanks
Chou


(Andy Bauer) #4

Yes, you’ll want to first create a vtkMultiBlockDataSet which will have one block. That block will be a vtkMultiPieceDataSet which will contain your vtkRectilinearGrids.


(Chou) #5

Thanks Andy,

I implemented the case and everything is working fine. But there is still one issue that I cannot understand. Why, when setting the number of blocks on the vtkMultiBlock dataset, should the number of blocks be the number of local items (i.e. 1 in my case), while for the vtkMultiPiece dataset the number of pieces should be equal to the number of ALL pieces for ALL processes?

p.s. The example on page 33 of the Catalyst User Guide is written exactly this way.

Chou


(Mark) #6

For vtkMultiPiece data, each piece corresponds to an MPI rank. So you fill in the data for your particular process at the proper location, and VTK knows that it should manage the parallel synchronization of the rest.

For vtkMultiBlock, the blocks represent a higher level organization of the data structure. In our own particular catalyst adaptor we can have top-level blocks representing different parts of the simulation domain (fluid, solid, some other fluid). Each of the top-level blocks contains sub-blocks (sub-block 0: internal mesh, sub-block 1: boundaries). The boundaries are further organized into sub-sub-blocks for each boundary region.

The lowest-level blocks are the ones that actually contain data (think of a file system). These are the ones that then use multi-piece data, to allow for multiple processes to contribute.

  • mesh region0 [block]
    • internal mesh [block], using multi-piece within the block
    • boundary part [block]
      • boundary0 [block], using multi-piece within the sub-block
  • mesh region1 [block]
    • …

(Chou) #7

Thanks Mark,

Is it necessary for the piece indices to be consecutive globally? Finding a consecutive index for each block when each process handles a different number of blocks is not that easy.

p.s. In VisIt the piece indices only need to be unique, not consecutive.

Chou


(Andy Bauer) #8

Chou, the piece indices for the vtkMultiPiece dataset have to be unique, starting at 0 and less than the total number of global pieces. They do not need to be consecutively numbered on an MPI process, but it often works out to be easiest to do it that way. For the multiblock dataset, the tree hierarchy needs to look the same on each MPI process, with the only difference being that each leaf node will be non-empty on exactly one process.

Mark, FYI there can be more than one piece per MPI rank.