ParaView 5.6.0 - distributed volume rendering


We have ParaView installed on a client/server distributed rendering cluster, set up to do parallel rendering on 69 nodes (we followed all the instructions in
It works for isosurfaces. However, we found that it doesn't render volume data properly.

When ParaView is not connected to the server, it renders the data as shown here:


But when it’s connected to the server, we get these outlined transparent voxels:


Our theory is that there is a problem with how the data is handled on the different nodes. Maybe ParaView sends all the data to every node instead of dividing it into chunks and distributing them across the system.
The data is only 6.8 MB, so personally I don't think this is related to RAM/VRAM. Maybe there is an option we need to enable, or an extra library ParaView needs to be built with.
Also, this is a Linux environment, and every node has a Quadro K5000 card.

Are we missing something?
Is there a best-practices manual on how to handle this distributed rendering case?


Are you getting any dialog boxes that pop up when you are connected to the server, indicating that remote rendering is not possible because the display is inaccessible? If remote rendering is not being used, you'll see only outlines on the client for the Volume representation, since we don't deliver the volume data to the client for rendering. To confirm that remote rendering is indeed being used, open the Settings dialog and, on the Render View tab, check the Show Annotation box. If remote rendering is being used, you should see the text Remote/parallel rendering: yes on the view.

Secondly, are you in CAVE mode, by any chance, or just a regular client-server mode where your server connection merely happens to be named cave? If you're in CAVE mode, then the visualization shown in the application is only representative, and hence you won't see the volume rendering in the desktop application.

Hello, Thanks for replying.

This is the generated output when the system connects from the desktop to the server:

Generic Warning: In /gpfs/runtime/opt/paraview/5.6.0_OpenGL2/src/Paraview/ParaViewCore/ClientServerCore/Core/vtkPVServerInformation.cxx, line 784

vtkPVServerInformation::GetOGVSupport was deprecated for ParaView 5.5 and will be removed in a future version.

The other nodes don't throw any error messages related to displays or connections.

If you’re in CAVE mode, then the visualization shown in the application is only representative, and hence you won’t see the volume rendering in the desktop application.

I see, that would explain the outlined voxels in the desktop application! On the CAVE display nodes we see parts of the volume distributed all over the place, and some parts are not even rendered; it's really hard to take a picture of it to show you.
So it seems the description I gave in the main post is wrong. We are running a CAVE system using ParaView in immersive mode. I apologize for the mistake.

I enabled the Show Annotation option, and I noticed two things. When loading vertex data (or just displaying the default empty scene), it prints: Remote/parallel rendering: no. On the other hand, when loading volume data and switching the data representation from outline to volume, it prints: Remote/parallel rendering: yes.



Is there anything we should try?

Additionally, we just realized there is an NVIDIA plugin for ParaView:

By default this plugin is not built with the ParaView source. Is it required for CAVE systems?

Thanks for the help.

In CAVE mode, I am afraid volume rendering is not currently supported. CAVE mode requires non-distributed rendering, which in turn requires all data to be cloned on all ranks. Without going into implementation details, the code that moves data around doesn't support moving whole volumes between ranks, and since CAVE mode requires cloning of the data, you're seeing the issue where only part of the data is shown on the different screens.

Thank you for the info!
Is this ‘move data across the system’ issue at the VTK level, or just in ParaView?
If the latter, then we can write a C++ application and use the VTK library to render volumes, right?
Do you have any suggestions on how we can approach volume rendering on a cave system?

Again, thank you.

It would be most helpful to get any suggestions on how to do direct volume rendering on a multi-display machine like a CAVE. We understand that volume rendering in CAVE mode is not supported in ParaView.

Is there any way we can add code to replicate our volume data so that it will work?

If we use VTK to volume render, will that have the same limitation in terms of not replicating the data? We run VTK using our own port to a CAVE-like display, so I'm pretty sure that isn't subject to the same limitation.

We are evaluating ParaView as a primary tool for our visualization needs in our CAVE. This initial attempt with volume data is a bit disappointing. Are there other significant gaps in CAVE-mode functionality?


-David Laidlaw
Brown University
Computer Science Dept

@dhl let me ping the VTK folks who may have ideas here.

@Aashish_Chaudhary, @martink any suggestions ?

Take a look at

This uses Vrui, but you can adapt it to another immersive toolkit.

The following Programmable Filter scripts can be used to force ParaView to duplicate the data on all ranks. To do that, create a Programmable Filter connected to your ctVolume.vti source and add the following scripts for the three script inputs on the Properties panel (you may need to toggle the advanced properties on the Properties panel to see all of them), in addition to checking the Copy Arrays checkbox.


Script

# just for debugging, can be left empty
input0 = inputs[0]
print("local extents are: ", input0.GetExtent())

RequestInformation Script

executive = self.GetExecutive()
outInfo = executive.GetOutputInformation(0)

inInfo = executive.GetInputInformation(0, 0)
we = inInfo.Get(executive.WHOLE_EXTENT())
outInfo.Set(executive.WHOLE_EXTENT(), we[0], we[1], we[2], we[3], we[4], we[5])

RequestUpdateExtent Script

executive = self.GetExecutive()
inInfo = executive.GetInputInformation(0, 0)
we = inInfo.Get(executive.WHOLE_EXTENT())
inInfo.Set(executive.UPDATE_EXTENT(), we[0], we[1], we[2], we[3], we[4], we[5])
inInfo.Set(executive.UPDATE_NUMBER_OF_PIECES(), 1)
inInfo.Set(executive.UPDATE_PIECE_NUMBER(), 0)
inInfo.Set(executive.EXACT_EXTENT(), 1)
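To illustrate what the RequestUpdateExtent script above accomplishes: by default each rank asks the pipeline for its own piece of the whole extent, whereas forcing piece 0 of 1 makes every rank request the entire volume. Below is a pure-Python sketch of a simplified extent splitter (an illustration only; VTK's actual extent translator is more sophisticated, and this one just splits along the z axis):

```python
def split_extent(whole, piece, num_pieces):
    """Return the sub-extent for `piece` of `num_pieces`, splitting the
    whole extent (x0, x1, y0, y1, z0, z1) along the z (slowest) axis.
    A simplified stand-in for VTK's extent translator."""
    x0, x1, y0, y1, z0, z1 = whole
    nz = z1 - z0 + 1
    # Divide the z range as evenly as possible among the pieces.
    base, extra = divmod(nz, num_pieces)
    start = z0 + piece * base + min(piece, extra)
    size = base + (1 if piece < extra else 0)
    return (x0, x1, y0, y1, start, start + size - 1)

whole = (0, 63, 0, 63, 0, 63)
# Default distributed behavior: each of 4 ranks gets a distinct slab.
print([split_extent(whole, p, 4) for p in range(4)])
# What the RequestUpdateExtent script forces: piece 0 of 1 == whole extent.
print(split_extent(whole, 0, 1))  # → (0, 63, 0, 63, 0, 63)
```

With piece 0 of 1 forced on every rank, each rank ends up holding the full volume, which is what CAVE-mode cloning needs.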

In my test, done using this cave.pvx (413 Bytes) and the Wavelet source or the ironProtein dataset from the ParaView data, I can see the volume mostly okay except from a few angles; I suspect the clipping planes are not set up correctly due to a bounds mismatch or something like that. I can debug that further, but I want to see if this is a reasonable approach before investigating it much further.

Here is the state file and the dataset I used for the test: ironprotein.7z (68.4 KB)

To clarify, in CAVE mode, the client will never show the volume rendering, only outlines. But you should be able to see the full volume rendering on the server windows.

Hello again @utkarsh.ayachit,

Sorry it took me a while to reply to this thread.
So, we were able to load volume data in our CAVE system, but it seems we got the same errors you mentioned. Part of the data gets clipped out when you are outside the volume, and moving the head tracker has the same effect as slicing the data.

Please check the image and the video (they are rotated 90 degrees :confused:):

and the following video:


Do you know what might be happening?
Is there something we can do from our side (writing a plugin, modifying the ParaView source)?


Hi @utkarsh.ayachit

I had some trouble reading the video, so I wanted to describe what’s going on. The volume rendering is being moved across boundaries between projectors. As it moves across those blended regions, you can see that the front clipping is different on the two projectors. The still image shows that same issue with the volume rendering straddling two projector images.

Is this consistent with what you were seeing? Any suggestions? Ben and Camilo tried changing the front and back clipping, but that didn’t seem to have any effect.



I am afraid not. My issue was simply that I could rotate the volume, and at certain rotation angles it would just get clipped incorrectly.

Where was the plane changed? Try hard-coding the bounds to the dataset bounds in vtkPVRenderView::ResetCameraClippingRange directly. Does that change anything?
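For context, a ResetCameraClippingRange-style computation boils down to projecting the corners of the dataset's bounding box onto the view direction from the camera position, then clamping the near plane so it stays positive. Here is a pure-Python sketch of that geometry (an illustration of the idea only, not the actual VTK code, and the clamp factor is an arbitrary assumption):

```python
from itertools import product

def clipping_range(bounds, position, direction):
    """Compute a near/far clipping range by projecting the 8 corners of
    an axis-aligned bounding box (x0, x1, y0, y1, z0, z1) onto the unit
    view direction, measured from the camera position."""
    x0, x1, y0, y1, z0, z1 = bounds
    dists = []
    for cx, cy, cz in product((x0, x1), (y0, y1), (z0, z1)):
        # Signed distance of this corner along the view direction.
        d = ((cx - position[0]) * direction[0] +
             (cy - position[1]) * direction[1] +
             (cz - position[2]) * direction[2])
        dists.append(d)
    far = max(dists)
    # Keep the near plane positive and not too close to the eye
    # (0.001 is an arbitrary clamp factor for this sketch).
    near = max(min(dists), 0.001 * far)
    return near, far

# Camera on the -z axis looking toward +z at a unit cube.
print(clipping_range((0, 1, 0, 1, 0, 1), (0.5, 0.5, -10), (0, 0, 1)))
# → (10.0, 11.0)
```

The point of hard-coding the bounds is to make this computation produce the same near/far pair on every rank, regardless of which sub-extent a rank happens to hold.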

Hi Utkarsh

That sounds like what we are seeing. The volume is being clipped incorrectly. And the incorrectness is different on different nodes, which have different camera settings to match where their monitors are relative to the viewer.

I’ve asked Ben Knorlein to explain the clipping edits he tried.

Your approach to copy the data, though, seems like it worked perfectly. It just doesn’t quite render correctly because of the clipping issue. Any further thoughts on that?



@dhl, my intuition tells me it's indeed the clip planes. We just need to make sure we pass the same bounds on all ranks. Let's try hard-coding them, as I mentioned earlier. If that doesn't help, we can see what else it could be.


We changed the clipping planes in vtkCamera::ComputeOffAxisProjectionFrustum(), where the projection matrix is defined:

// Back and front are not traditional near and far.
// Front (aka near)
double F = E[2] - 10000.0;
// Back (aka far)
double B = E[2] - .1;

As you suggested, I hard-coded the bounds in the vtkPVRenderView::ResetCameraClippingRange method as follows:

double bounds[6];
bounds[0] = 0;
bounds[1] = 10000;
bounds[2] = 0;
bounds[3] = 10000;
bounds[4] = 0;
bounds[5] = 10000;

This is the result we got:

So, it seems to work (with or without the tracker). However, when we tried to apply transformations to the volume (we wanted to move it to the center of the scene), we got this:
And sometimes the volume disappears from the scene (I assume it just gets culled out).

Are those the right values to test with?
Without them we can translate and rotate the data without any problem, but we get the clipping plane problem again.

Thanks a lot for the feedback.

If you transform the volume, you’ll have to apply the same transformation to the bounds being passed to the ResetCameraClippingRange calls.
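Since an affine transform of an axis-aligned bounding box is generally no longer axis-aligned, the usual approach is to transform all eight corners and take the axis-aligned box that encloses them. A pure-Python sketch of that (plain 4x4 row-major matrices; the names here are hypothetical, not the VTK API):

```python
from itertools import product

def transform_bounds(bounds, matrix):
    """Transform an axis-aligned bounding box (x0, x1, y0, y1, z0, z1)
    by a 4x4 row-major affine matrix and return the axis-aligned box
    that encloses the 8 transformed corners."""
    x0, x1, y0, y1, z0, z1 = bounds
    pts = []
    for cx, cy, cz in product((x0, x1), (y0, y1), (z0, z1)):
        p = (cx, cy, cz, 1.0)  # homogeneous coordinates
        pts.append(tuple(sum(matrix[r][c] * p[c] for c in range(4))
                         for r in range(3)))
    xs, ys, zs = zip(*pts)
    return (min(xs), max(xs), min(ys), max(ys), min(zs), max(zs))

# Translate the unit cube by (5, 0, 0).
T = [[1, 0, 0, 5],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]
print(transform_bounds((0, 1, 0, 1, 0, 1), T))
# → (5.0, 6.0, 0.0, 1.0, 0.0, 1.0)
```

Feeding bounds updated this way (instead of the fixed 0..10000 values) into the ResetCameraClippingRange call should keep the clipping range valid as the volume is moved or rotated.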

Folks, any update on this? I am in the middle of revamping a lot of code dealing with remote rendering/tile displays and CAVE. It may be easier to track down any issues now while it's fresh in my mind.

Hi Utkarsh

We are stuck, and I’m not sure how best to proceed. Thanks for the ping!

Let me go over the history and then suggest a path forward. Here’s my very terse summary:

  1. Brown opened this thread because cave volume rendering didn’t work
  2. PV shared a way to force data to all nodes
  3. Brown tried that with our own dataset and got some promising results, but had trouble with disappearing pieces, mismatched transforms on different displays, and other visual artifacts
  4. PV suggested clipping planes/extents
  5. Brown tried that, and had better results, but transforming the camera or volume led to problems. Since we are in a head-tracked environment, the camera is being moved and rotated constantly.

It is not clear to me that we are managing the clipping planes or extents correctly. It is also not clear to me that we are doing whatever transformations are necessary when the volume and/or camera are moved.

I’m not sure what the best next step is. Does this thread have enough details for you, Utkarsh, to provide any more input? Do you need us to provide video or image documentation of problems that we are seeing? Should we try to create a test case that fails that you would be able to reproduce? Would you want to share a simple case with us that should work, and we can try it and document any issues?

I should also mention that Camilo was in touch with someone else at Kitware a couple weeks ago. I believe it was Cory Quammen. I (David) was not involved, and I’m not sure what was discussed except for a time logger that might help with performance issues.

I also have what may be a completely different issue, but it would be super helpful to be able to manipulate objects in the cave using the tracked hand controllers. Right now we holler back to someone at the console to move, rotate, or scale what we are looking at. A few minutes later, sometimes something happens. It’s not exactly interactive.

Please let us know what you think would be the most productive/helpful thing for us to try.



@dhl, can you send me your pvx file? Let me see if I can reproduce it locally with a standard dataset like the Wavelet.

Hi Utkarsh

I will see if we can create something with a provided volume dataset. Do you have a multi-display setup that you can use interactively, perhaps with head tracking or other control over the camera or objects?