Object-sized screenshot

Dear everyone,

If I have a plane (vtkImageData) which is exactly X cells by Y cells, is it possible to export a screenshot of exactly X pixels by Y pixels, such that each cell in the vtk object corresponds to a pixel in the screenshot?

For example, in this picture I have a render view with 300x300 resolution and the object is a vtkImageData of 300x300 cells. Is there any way to fit the vtkImageData to the view such that the resulting screenshot is a 300x300-pixel picture, with each pixel corresponding to a cell of the vtkImageData?

You can try vtkPNGWriter or vtkJPEGWriter and pass your vtkImageData as input.

Thank you! I am currently trying this, but since I am building the object with a programmable source I don't quite know how to access the vtkImageData to pass as input.

I tried looking at the types of the objects in the Python script to see which one of them is a vtkImageData, but couldn't find any; it probably needs to be done outside the programmable source (?)

I also tried the Save Data button, which should call vtkPNGWriter as well, but it gives a warning about the inputs not being unsigned short ints or chars. I suppose that's because it expects values from 0 to 255 for the RGB channels. Is there an additional step needed to export a view in such a format?
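The warning does suggest the writer only accepts 8-bit (or 16-bit) integer scalars, so one option is to rescale float values into the 0-255 range first. A minimal pure-Python sketch (the helper name is mine, not a ParaView API):

```python
def to_uchar(values, vmin=None, vmax=None):
    """Linearly rescale a list of floats to 0-255 integers,
    suitable for unsigned-char image scalars."""
    vmin = min(values) if vmin is None else vmin
    vmax = max(values) if vmax is None else vmax
    span = (vmax - vmin) or 1.0  # avoid division by zero on flat data
    return [int(round(255 * (v - vmin) / span)) for v in values]
```

In a real pipeline you would apply the same rescaling to the array before handing the vtkImageData to the writer (with NumPy this is a one-line vectorized operation).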

Load Image → CopyStructure(), GetArray() and AddArray() → ‘Save Data…’ cannot work?? (#17151) · Issues · ParaView / ParaView · GitLab (kitware.com) — this old GitLab issue may actually solve the problem. I am having some difficulty implementing it, even though I think I set the active scalars correctly.

What you are showing is the result of surface LIC rendering. Since the LIC representation is computed at render time, the only way to export the result is indeed to take a screenshot; image writers will not be able to help you there.

I think the way to go would be to use the Python Shell to:

  1. Get the screenspace bounding box of the representation of your plane (I don’t know how to do it)
  2. Make the camera zoom to this screenspace box (it is possible using the Zoom to Box feature interactively, but I don’t know if it’s possible in Python).

TLDR: I’m not sure it is possible. Maybe @mwestphal has an idea.

You will need to position the camera perfectly and save a screenshot.

Thank you all!

Well, it doesn’t sound unfeasible; it will probably take some trial and error. Since my camera will probably always need to stay in the same place, there should be three parameters to tune to get it right, I guess…

Is there a way to get the borders of an object or the size of it to offset the camera properly?

> Is there a way to get the borders of an object or the size of it to offset the camera properly?

No but it should be easy to compute using the extent of the data and the size of the window.
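Following that suggestion, here is a hedged pure-Python sketch (function name and defaults are mine) of computing the camera focal point and parallel scale from a vtkImageData extent, assuming an axis-aligned plane and a parallel projection:

```python
def camera_for_extent(extent, spacing=(1.0, 1.0, 1.0)):
    """Compute an orthographic camera setup that frames an extent exactly.
    extent = (xmin, xmax, ymin, ymax, zmin, zmax) in index space."""
    xmin, xmax, ymin, ymax, zmin, zmax = extent
    height = (ymax - ymin) * spacing[1]
    # Focal point at the center of the plane.
    center = ((xmin + xmax) / 2 * spacing[0],
              (ymin + ymax) / 2 * spacing[1],
              (zmin + zmax) / 2 * spacing[2])
    # CameraParallelScale is half the viewport height in world units.
    parallel_scale = height / 2
    return center, parallel_scale
```

With the view size set to exactly width x height pixels, this should map one world unit (one cell at unit spacing) to one pixel.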

Fixing the view size to exactly the size of the object and pointing the camera at its center does the job!
I still need to tweak the zoom, which apparently is the difficult part, but it seems it’s going to work. Thank you!

I might have found something to fix the zoom, keeping the position fixed with:

renderView1.CameraPosition = [x, y, z]
renderView1.CameraParallelScale = scale

where x and y point to half of the size of the object.

Now I have questions about how CameraParallelScale works. Just looking at the default values ParaView puts in when you take a trace, it seems like a weird scale.

For a 500x500 pixel grid, it is quite close using a scale of ~250, so I suppose it is also tied to half of the grid size.

Does anyone have a hint on how to set it, or how that specific property works?
I couldn’t find anything; from the source code I just found that it is set here:

void pqRenderView::setParallelScale(double scale)
{
  vtkSMProxy* viewproxy = this->getProxy();
  pqSMAdaptor::setElementProperty(viewproxy->GetProperty("CameraParallelScale"), scale);
}

here for the source code

From the C++ documentation:

   * Set/Get the scaling used for a parallel projection, i.e. the half of the height
   * of the viewport in world-coordinate distances. The default is 1.
   * Note that the "scale" parameter works as an "inverse scale" ---
   * larger numbers produce smaller images.
   * This method has no effect in perspective projection mode.
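Reading that documentation the other way around: with a parallel projection the viewport spans 2 × CameraParallelScale world units vertically, so the pixels-per-world-unit ratio is the view height divided by twice the scale. A quick sanity check (pure Python, the helper name is hypothetical):

```python
def pixels_per_world_unit(view_height_px, parallel_scale):
    """With parallel projection, the viewport spans 2*parallel_scale
    world units vertically, so this ratio must be 1.0 for a
    pixel-exact screenshot of unit-spaced cells."""
    return view_height_px / (2.0 * parallel_scale)
```

This matches the observation above: a 500-pixel-tall view with a scale of 250 gives exactly one pixel per cell.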


As to how to set it: I did not find any GUI interface for this property, probably because it is handled internally by ParaView. But you should be able to set it manually using the Python Shell.