Hello and thanks! I want to map/warp image data (temperature from an infrared camera) onto a mesh surface, and am looking for the best approach.
I have the exact spatial camera position relative to the mesh, and can get an 'undistorted' resampled image using OpenCV tools. I'm thinking there are a couple of computational ways to tackle this, but I don't know whether they are implemented/possible using existing ParaView filters.
Some ideas were to:
- Render the surface mesh from a virtual camera matching the real camera's resolution and pose, with depth from the camera as a scalar. Then use OpenCV's `reprojectImageTo3D` method to get XYZ + T
- Calculate a ray for each XY pixel in the camera, find its first intersection with a mesh facet, and append that point to an array of XYZ + T
- Place the 2D image in space, then extrude it into a truncated-pyramid (frustum) volume, which might allow resampling onto the mesh coordinates with the ResampleWithDataset filter
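For the first idea, one thing to note: `reprojectImageTo3D` expects a stereo disparity map plus a Q matrix, so with a rendered depth image it may be simpler to back-project each pixel directly through the pinhole model. A minimal NumPy sketch of that step, assuming placeholder intrinsics `fx, fy, cx, cy` and a depth image giving camera-frame Z per pixel (the rigid transform into mesh coordinates would still need to be applied afterwards):

```python
import numpy as np

def depth_to_xyz_t(depth, temperature, fx, fy, cx, cy):
    """Back-project a per-pixel depth image to camera-frame 3D points,
    carrying the temperature value of each pixel along.

    depth, temperature : (H, W) arrays at the camera resolution
    fx, fy, cx, cy     : pinhole intrinsics (placeholders here)
    Returns (N, 3) points and (N,) temperatures for valid pixels.
    """
    h, w = depth.shape
    # Pixel coordinate grids: u along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    temps = temperature.reshape(-1)
    # Keep only pixels that actually hit the mesh (finite, positive depth).
    valid = np.isfinite(pts[:, 2]) & (pts[:, 2] > 0)
    return pts[valid], temps[valid]
```

The resulting XYZ + T point cloud could then be loaded into ParaView as a point set and interpolated onto the mesh.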
Let me know if you think of things to try or can comment on the viability of these ideas. Really appreciate the help!