ParaView extremely slow to load VTK multiblock files with many blocks

I’m working with a dataset containing 160,000 blocks in ParaView and have experimented with various VTK formats, including VTM, VTPC, and VTKHDF. Running pvserver on multiple cores hasn’t improved performance with any of these formats, even though parallelization should be straightforward with VTM and VTPC. ParaView remains extremely slow with all of them, and I don’t understand why.

During this time, the CPU usage is near 0%, RAM usage is less than 30%, and the disk is idle. What is ParaView doing while I’m waiting for hours?

To ensure it’s not a rendering issue, I’m loading the files in spreadsheet view. Additionally, the total size of the dataset is quite small, around 1.5 GB.


even though parallelization should be straightforward with VTM and VTPC.

Did you distribute the data ?

What is ParaView doing while I’m waiting for hours?
Additionally, the total size of the dataset is quite small, around 1.5 GB.

Please share data and steps to reproduce

Did you distribute the data?

I have a name.vtpc file along with a name folder containing all the vtp files. Doesn’t ParaView automatically distribute the reading of these files across different cores?

Please share data and steps to reproduce

Alright, I’ll create a shareable and reproducible example and then share it with you.


Mathieu, I suspect the problem is the number of blocks; it’s well known that adding blocks slows ParaView down. I don’t have a ready-made reproducer.

You can create a simple VTM file with multiple blocks using the following code:

import pyvista as pv
import numpy as np

# Create a MultiBlock dataset
multi_block = pv.MultiBlock()

# Number of spheres
num_spheres = 60000

# Add spheres with random centers and sizes to the MultiBlock dataset
for i in range(num_spheres):
    center = np.random.rand(3) * 100  # Random center within a 100x100x100 cube
    radius = np.random.rand()
    sphere = pv.Sphere(center=center, radius=radius)
    multi_block.append(sphere)

# Save the MultiBlock dataset to a file
multi_block.save("multiblock_spheres_random.vtm")

I’m not uploading the zip file because it is large and takes a long time to deal with (compressing/extracting).

Based on CPU load observations, roughly 40% of the time is spent by the server loading the data and 60% by the client populating the interface.

Hello, I am also working with multiblock files containing a very large number of blocks, and I face the same issue.

Both with my multiblocks and with the random bunch of blocks from @wilove’s code, ParaView is very slow to open the VTM file, taking multiple hours, while CPU usage is basically zero.

@wascott, @mwestphal, would you have any insight into what is making ParaView this slow?


We are aware that datasets with tens of thousands of blocks or more can make ParaView crawl. Fortunately, we are currently investigating this issue and have so far identified and corrected two bottlenecks that will speed up ParaView’s processing of composite datasets. We’ll be working to improve this further prior to the ParaView 6.1 release, due out early next year.


@cory.quammen thank you for the swift update! Looking forward to playing with 6.1.

@cory.quammen If you notify me when the fix is in the nightly builds, I’ll be happy to test it out 🙂