2 processes reading 3 files into a vtkMultiBlockDataSet cannot display correctly in client-server mode

(sharon) #1

Dear Experts,
I distribute 3 files across 2 processes to build a vtkMultiBlockDataSet with 3 blocks, but the ParaView UI cannot display it successfully…
I tried two situations:
<1> Situation 1

Set numberOfBlocks=3 on both process_0 and process_1
process_0 reads file_0 and file_1 and generates block_0 and block_1
process_1 reads file_2 and generates block_2

In this situation, I cannot see block_2 in the render window; only the MultiBlock Inspector shows 3 blocks, but when I click block_2, nothing happens…

<2> Situation 2

process_0 reads file_0 and file_1 and generates block_0 and block_1
process_1 reads file_2 and generates block_0 (*** here the block index is set to 0, different from <1> ***)

In this situation, I can see 3 blocks in the render window, BUT the MultiBlock Inspector only displays block_0 and block_1; when I toggle block_0, it seems to control both block_0 and block_2.
ParaView also pops up the error message below:
ERROR: In paraview\VTK\Common\DataModel\vtkDataObjectTree.cxx, line 377
vtkMultiBlockDataSet (000001D72A9B4E20): Structure does not match. You must use CopyStructure before calling this method.
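
To make the two per-rank layouts concrete, here is a minimal plain-Python sketch (no VTK involved; the block names are just labels) of what each process holds in the two situations, and why the second one trips a structure mismatch:

```python
# Per-rank block layouts; None marks a slot that the rank does not fill.

# Situation 1: both ranks declare 3 blocks, each fills only its own.
situation_1 = {
    "process_0": ["block_0", "block_1", None],
    "process_1": [None, None, "block_2"],
}

# Situation 2: process_1 declares a single block at index 0,
# so the two ranks no longer agree on the structure.
situation_2 = {
    "process_0": ["block_0", "block_1"],
    "process_1": ["block_2"],
}

def structures_match(layouts):
    """ParaView expects the same number of blocks on every rank."""
    return len({len(blocks) for blocks in layouts.values()}) == 1

print(structures_match(situation_1))  # True
print(structures_match(situation_2))  # False -> "Structure does not match"
```

The `False` case corresponds to the "Structure does not match" error quoted above: the ranks disagree on how many blocks the dataset has.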

I am confused by these results…
How do I implement a parallel read correctly when using vtkMultiBlockDataSet?
Could you please help me out!

Thanks and Best Regards,
Sharon

(Mathieu Westphal (Kitware)) #2

There is clearly a problem in the block structures, but we do not have enough information to help you here.

I would suggest first showing us the structure you want to achieve, and then comparing it to the code you use to create that structure.

(sharon) #3

Hi Mathieu,
Thank you for your reply!

After some experimenting, I found the problem:
in parallel mode, the vtkMultiBlockDataSet is required to have the same structure (the same number and layout of blocks) on all the CPUs, or else there will be display issues.
I fixed this issue by declaring the same number of blocks on all the CPUs, then letting each CPU know which blocks it should handle and which blocks it should simply ignore (leave empty).
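
For completeness, here is a small plain-Python sketch of that fix (the `read_file` stub and the file-to-rank assignment are illustrative, not ParaView API): every rank declares the same number of block slots, fills only the ones it owns, and leaves the rest empty. In actual VTK code this corresponds to calling `SetNumberOfBlocks(3)` on every rank, but calling `SetBlock(i, data)` only for the locally owned indices and leaving the other blocks unset (nullptr).

```python
N_FILES = 3

# File-to-rank assignment as in this thread:
# process_0 owns file_0 and file_1, process_1 owns file_2.
ASSIGNMENT = {0: {0, 1}, 1: {2}}

def read_file(index):
    """Stand-in for the real reader; returns a dummy dataset label."""
    return f"dataset_from_file_{index}"

def build_blocks(rank):
    """Every rank declares the SAME N_FILES block slots; only the
    locally owned slots are filled, the rest stay None (i.e. unset)."""
    owned = ASSIGNMENT[rank]
    return [read_file(i) if i in owned else None for i in range(N_FILES)]

blocks_rank0 = build_blocks(0)
blocks_rank1 = build_blocks(1)
print(blocks_rank0)  # ['dataset_from_file_0', 'dataset_from_file_1', None]
print(blocks_rank1)  # [None, None, 'dataset_from_file_2']
```

Both ranks end up with 3 slots, so the composite structure matches everywhere, while each piece of data still lives on exactly one rank.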

Best Regards,
Sharon