Hi everyone,

My teammate and I have an idea for an FE post-processing plugin for ParaView, and we'd like to hear from you whether it is at all feasible.

A little background first: the problem concerns comparing results obtained from FE numerical computations. Analysis engineers who intend to use ParaView need to compare the results of many computations performed on similar, but not exactly identical, FE models. Sometimes the FE model changes only slightly, in terms of mesh (different element size, local mesh refinement, etc.), but as you know this is enough for the results to differ. Other times the changes are more pronounced: the boundary conditions change, or even the geometry (it might vary slightly between versions of the model due to design optimizations and such).

The usual, low-tech, naïve solution is to try to "eyeball" the difference in results, possibly by putting the two models / sets of results in two separate ParaView layouts with linked cameras, so the user can view both models from the same vantage point. But this gives only a general idea of the difference between the two sets of results.

The perfect solution would be an explicit difference computation between the two meshes (akin to the ANSYS load case operation), but this is out of the question due to the differences in mesh or even in the underlying geometry. Our initial idea was to map the results from the "differing" mesh onto an exact copy of the first mesh, and then perform a straightforward node-by-node difference computation. But this proved more difficult than expected: ParaView doesn't seem to map results between non-congruent meshes (maybe there is a way of doing this and we simply don't know it? Any suggestions?).
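To make the mapping idea concrete, here is a minimal sketch of the simplest possible field transfer: nearest-neighbour lookup from the source mesh's nodes onto the target mesh's nodes, followed by a node-by-node difference. The function name and the toy 2-D "meshes" are our own illustration, not any ParaView or VTK API, and a real implementation would use proper interpolation within elements rather than brute-force nearest neighbours:

```python
import numpy as np

def map_nearest(src_nodes, src_values, dst_nodes):
    """Map a nodal field from a source mesh onto the nodes of a
    destination mesh by brute-force nearest-neighbour lookup."""
    # squared distance from every destination node to every source node
    d2 = ((dst_nodes[:, None, :] - src_nodes[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)          # index of closest source node
    return src_values[nearest]

# two tiny "meshes" with slightly different node positions
src_nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
src_temp  = np.array([20.0, 25.0, 30.0, 35.0])   # e.g. temperatures
dst_nodes = np.array([[0.05, 0.0], [0.95, 0.05], [0.0, 1.0], [1.0, 0.9]])
dst_temp  = np.array([21.0, 24.0, 31.0, 34.0])

mapped = map_nearest(src_nodes, src_temp, dst_nodes)
diff = dst_temp - mapped   # node-by-node difference on the target mesh
```

Even this crude transfer shows why congruence matters: the quality of the difference field depends entirely on how well nodes of one mesh can be associated with locations on the other.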

The approach we'd like to try is something that has been implemented in our in-house viewer/postprocessor. The trick used in our old tool is based on mesh rendering, and visualizing the difference goes like this: the user loads the two models and positions them in space so that both overlap. This is done by hand and is of course never perfect (due to sometimes large, but local, differences in geometry). The results from the two models are then displayed in the normal manner, but with a catch: the final stage is an image-processing trick. The difference presented to the user is simply the difference in pixel value at a given location on a projection plane, with the pixel-value difference mapped back so the resulting numerical value makes physical sense to the user (e.g. a difference in surface temperature). This approach has obvious problems (like the computed difference changing when the model is rotated and the surfaces no longer overlap, because all of this happens in real time), but it turned out to be quite useful.
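The "map pixel difference back to physical units" step can be sketched in a few lines. Assuming the scalar field was colour-mapped linearly to 8-bit grey levels over a known range (VMIN/VMAX here are made-up values for illustration), the per-pixel intensity difference converts directly back into the scalar's units:

```python
import numpy as np

# Assumption: the renderer mapped scalar values linearly to 8-bit grey,
#   value = vmin + (pixel / 255) * (vmax - vmin)
VMIN, VMAX = 20.0, 120.0   # colour-map range, e.g. temperature in °C

def pixel_diff_to_physical(img_a, img_b, vmin=VMIN, vmax=VMAX):
    """Per-pixel difference of two rendered images, converted back to
    the physical units of the colour-mapped scalar."""
    scale = (vmax - vmin) / 255.0
    return (img_a.astype(float) - img_b.astype(float)) * scale

# two tiny 2x2 "renderings" of the overlapping models
img_a = np.array([[0, 51], [102, 255]], dtype=np.uint8)
img_b = np.array([[0, 102], [51, 204]], dtype=np.uint8)
delta = pixel_diff_to_physical(img_a, img_b)   # in the scalar's units
```

This only makes physical sense when both images use the same colour map and range, which is exactly the condition our old tool enforced.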

The question is: could we replicate something like this in ParaView? Our idea is to load the two models and let the user apply transformations to make them overlap. The results could then be displayed "at the same time", but with some kind of image-processing filter already available in ParaView performing the actual "difference computation"; we have vtkImageDifference in mind. Would this work? What's important to our users: is it possible to run this filter in "real time", i.e. can the visualization pipeline be updated simply by rotating the camera, producing a new difference image each time the camera orientation changes?
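Structurally, what we are after is an observer-driven loop: every camera modification re-renders both models and recomputes the pixel difference. The following is a pure-Python toy of that control flow only; `Camera`, `render`, and the plain subtraction standing in for vtkImageDifference are hypothetical stand-ins, not ParaView or VTK API:

```python
import numpy as np

class Camera:
    """Toy camera that notifies observers when it is modified."""
    def __init__(self):
        self.azimuth = 0.0
        self._observers = []

    def on_modified(self, callback):
        self._observers.append(callback)

    def rotate(self, degrees):
        self.azimuth += degrees
        for callback in self._observers:
            callback()

def render(model_offset, azimuth):
    """Stand-in renderer: a 2x2 'image' that depends on the view."""
    return np.full((2, 2), model_offset + azimuth)

latest = {}
camera = Camera()

def update_difference():
    # re-render both models from the current camera and diff the images
    img_a = render(10.0, camera.azimuth)
    img_b = render(12.5, camera.azimuth)
    latest["diff"] = img_b - img_a   # where vtkImageDifference would sit

camera.on_modified(update_difference)
camera.rotate(30.0)   # a camera change refreshes the difference image
```

Whether this callback-per-render pattern is fast enough inside ParaView's actual pipeline is exactly what we'd like to hear about.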

We'd really appreciate any comments or suggestions on how to approach this problem. My teammate and I have some experience building ParaView plugins; we just don't know whether our solution would work (the real-time aspect of difference generation is the crucial one). Thank you!