Hello and thanks! I want to map/warp image data (temperature from an infrared camera) onto a mesh surface, and I'm looking for the best approach.
I have the exact spatial camera position relative to the mesh, and can get an 'undistorted' resampled image using OpenCV tools. I'm thinking there are a couple of computational ways to tackle this, but I don't know whether they are implemented/possible with existing ParaView filters.
Some ideas were to:
- Render the surface mesh into a virtual camera with depth from the camera as a scalar, at the same resolution/coordinates as the real camera, then use OpenCV's reprojectImageTo3D method to get XYZ + T
- Calculate a ray for each XY pixel in the camera, find the first intersection with a mesh facet, then add that point to an array of XYZ + T
- Place the 2D image in space, then extrude it into a truncated-pyramid volume, which might allow resampling onto the mesh coordinates with the Resample With Dataset filter
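For reference, the core of idea #2 is just generating a world-space ray per pixel from the camera model. Here's a minimal NumPy sketch, assuming a pinhole camera with the usual convention x_cam = R @ x_world + t; the function and parameter names are mine, not from any library:

```python
import numpy as np

def pixel_rays(K, R, t, width, height):
    """Unit ray direction (world frame) for every pixel of a pinhole camera.

    K: 3x3 intrinsics, R/t: world-to-camera rotation and translation.
    Returns (height*width, 3) directions; the shared ray origin is the
    camera center C = -R.T @ t.
    """
    # Pixel centers in homogeneous image coordinates (u right, v down).
    u, v = np.meshgrid(np.arange(width) + 0.5, np.arange(height) + 0.5)
    pix = np.stack([u.ravel(), v.ravel(), np.ones(u.size)], axis=1)
    # Back-project through the intrinsics into the camera frame.
    dirs_cam = pix @ np.linalg.inv(K).T
    # d_world = R^T d_cam (row-vector form: d_cam @ R).
    dirs_world = dirs_cam @ R
    return dirs_world / np.linalg.norm(dirs_world, axis=1, keepdims=True)
```

Each of these rays would then be intersected with the mesh; whatever intersection routine is used, this part is cheap and fully vectorized.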
Let me know if you think of things to try or can comment on the viability of these ideas, really appreciate the help!
Could you use the Programmable Filter (or some other method) to compute texture coordinates on the mesh surface points that correspond to where each mesh point projects into the camera? From there you can just use the Display parameters to map the camera image onto the surface.
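Roughly, the per-point projection such a filter would compute looks like this. A NumPy sketch, assuming a pinhole camera x_cam = R @ x + t; the function name and the top-left image origin convention are my assumptions:

```python
import numpy as np

def texture_coords(points, K, R, t, width, height):
    """Project mesh points (N, 3) through a pinhole camera and return
    (N, 2) texture coordinates in [0, 1].

    Points outside the camera frame simply land outside [0, 1]."""
    cam = points @ R.T + t               # world -> camera frame
    uv = cam @ K.T                       # apply intrinsics
    uv = uv[:, :2] / uv[:, 2:3]          # perspective divide -> pixel coords
    # Normalize to [0, 1]; v is flipped because image row 0 is the top,
    # while texture coordinate t = 0 is conventionally the bottom.
    return np.stack([uv[:, 0] / width, 1.0 - uv[:, 1] / height], axis=1)
```

Inside a Programmable Filter I believe you would attach the result with something like output.GetPointData().SetTCoords(...) after converting via vtk.util.numpy_support.numpy_to_vtk, though I haven't verified the exact calls.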
Oh, that looks really promising; all the examples are simple iterations over points or the points of cells. I should be able to do idea #1 with this (map facets into the image), but maybe not #2 (cast rays out to the mesh) without writing code to check for triangle intersections. Any intuition on how fast these filters run? My image is small relative to my mesh, so I'd prefer #2 if there's an efficient intersection method.
I’ll mark as solved once I’ve got an example to post, I’d appreciate other answers that use built in filters if there are any that work for this. Thanks!
See “Image On Topography Source” ParaView plugin here: GitHub - mobigroup/ParaView-plugins: ParaView plugins
Thanks MBG, this has been a great resource for learning how to interact with VTK and write plugins. It doesn't directly solve my problem because it's just interpolating with xarray, but after exploring the options I think I can use the vtkProjectedTexture class. I saw an example from this thread, but it appears to have been taken down: thread. I'm still learning how to navigate the docs and feed this class the data it needs; any help or other examples are welcome, and I'll post what works.
I also got the built-in vtkCellLocator to work per this example, casting rays out from a virtual camera to get mesh intersections. However, it's horribly slow because I'm looping over 300k pixels in Python, and I had to downsample by 64x. Line on mesh example
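For anyone hitting the same wall: the per-pixel Python loop is usually the bottleneck, not the locator itself. One way around it is to batch the geometry test in NumPy, e.g. a Möller-Trumbore test of many rays against one triangle at a time (a sketch under my own naming, not a VTK API):

```python
import numpy as np

def intersect_rays_triangle(origin, dirs, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore test of many rays (shared origin) against one triangle.

    dirs: (N, 3) ray directions; v0, v1, v2: triangle vertices (3,).
    Returns (N,) hit distances along each ray, np.inf for misses."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(dirs, e2)                     # (N, 3)
    det = p @ e1                               # (N,) determinant per ray
    ok = np.abs(det) > eps                     # reject rays parallel to the plane
    inv = np.where(ok, 1.0 / np.where(ok, det, 1.0), 0.0)
    s = origin - v0
    u = (p @ s) * inv                          # first barycentric coordinate
    q = np.cross(s, e1)                        # (3,)
    v = (dirs @ q) * inv                       # second barycentric coordinate
    dist = (e2 @ q) * inv                      # distance along the ray
    hit = ok & (u >= 0) & (v >= 0) & (u + v <= 1) & (dist > eps)
    return np.where(hit, dist, np.inf)
```

Looping over triangles instead of pixels, and keeping the minimum distance per ray, trades the 300k-iteration Python loop for a loop over mesh facets. I'd also try VTK's vtkStaticCellLocator in place of vtkCellLocator, which I understand is faster to build and query, though I haven't benchmarked it for this case.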
@j-c The plugin produces an image on a 3D surface like this:
So you are able to map your image onto a sphere or any other surface, and do it fast. Maybe I didn't understand your question.
Sorry if I wasn't clear: the mesh you import is already coregistered with the image and just needs to be interpolated (i.e., both have XY coordinates that map to one another). What I'm trying to do is take an image with a camera and then 'reverse ray trace' it back onto mesh objects in a virtual scene. This means I need a frustum perspective projection like vtkProjectedTexture.
Actually, as I look at other VTK examples, it looks like they're down as well. Is there a site issue? other example