Idea for a plugin - difference of results between non-identical meshes

Hi everyone,
my teammate & I have an idea for an FE postprocessing plugin for ParaView, and we’d like to hear from you whether this is at all feasible.

A little background first: the problem concerns comparing results obtained from FE numerical computations. Analysis engineers who intend to use ParaView need to compare the results of many computations performed on similar, but not exactly identical, FE models. Sometimes the FE model changes only slightly in terms of mesh (different element size, local mesh refinement, etc.), but as you know this is enough for the results to differ. Other times the changes are more pronounced, e.g. the boundary conditions change, or even the geometry (it might vary slightly between versions of the model due to design optimizations and such).

The usual, low-tech, naïve solution is to try & “eyeball” the difference in results, possibly by putting the two models / sets of results in two separate ParaView layouts with linked cameras, so the user can view both models from the same vantage point. But this gives only a general idea of the difference between the two sets of results.
The perfect solution would be to perform an explicit difference computation between two meshes (akin to the ANSYS load case operation), but this is out of the question due to the differences in mesh or even in the underlying geometry. Our initial idea was to map the results from the “differing” mesh onto a mesh that is an exact copy of the first mesh, and then perform a straightforward node-by-node difference computation. But this proves to be more difficult: ParaView doesn’t seem to map results between meshes that are not congruent (maybe there is a way of doing this and we simply do not know how? Any suggestions?).
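For concreteness, the kind of pipeline we are hoping for would look roughly like this in pvpython, using the “Resample With Dataset” filter we came across (property names below are the ParaView 5.8+ ones and differ in older versions; file and array names are made up, and we are not sure this filter behaves the way we need on non-congruent meshes):

```python
from paraview.simple import *

mesh_a = OpenDataFile('model_a.vtu')   # reference model (hypothetical file)
mesh_b = OpenDataFile('model_b.vtu')   # modified model (hypothetical file)

# Probe mesh B's fields at mesh A's points; points of A that fall outside
# B are flagged in the vtkValidPointMask array.
b_on_a = ResampleWithDataset(SourceDataArrays=mesh_b, DestinationMesh=mesh_a)

# Node-by-node difference on the now-shared point layout (the Python
# Calculator accepts multiple inputs, exposed as inputs[0], inputs[1]).
diff = PythonCalculator(Input=[mesh_a, b_on_a])
diff.Expression = "inputs[0].PointData['Temperature'] - inputs[1].PointData['Temperature']"
diff.ArrayName = 'TemperatureDifference'
Show(diff)
Render()
```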

The approach that we’d like to try is something that’s been implemented in our in-house viewer/postprocessor. The trick used in our old tool is based on mesh rendering, and the process of visualizing the difference goes like this: the user would load two different models and then position them in space so that both models overlap. This would be done by hand and of course would never be perfect (due to sometimes big – but local – differences in geometry). Then the results from these two models would be displayed in the normal manner, but with a catch: the final stage is an image processing trick. The difference presented to the user is simply the difference in pixel value at a given location on a projection plane. This pixel-value difference would be mapped so that the resulting numerical value makes physical sense to the user (e.g. the difference in surface temperature). This approach has obvious problems (like the computed difference changing as the model is rotated and the surfaces no longer overlap, since all of this was done in real time), but it turned out to be quite useful.

The question is: could we replicate something like this in ParaView? Our idea is to load two models and allow the user to apply transformations to make the models overlap. Then the results could be displayed “at the same time”, but we’d use some kind of image processing filter already available in ParaView to perform the actual “difference computation”. We have the vtkImageDifference filter in mind. Would this work? What’s important to our users: is it possible to run this filter in “real time”, meaning can the visualization pipeline be updated simply by rotating the camera, producing a new difference image each time the camera orientation changes? A rough sketch of what we mean is below.
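Something like the following, in plain VTK Python rather than a finished plugin (render_win_a and render_win_b stand for two render windows showing the two overlapped models from the same camera; all names are placeholders):

```python
import vtk

def grab_frame(render_window):
    # Capture the window's current frame buffer as a vtkImageData source.
    w2i = vtk.vtkWindowToImageFilter()
    w2i.SetInput(render_window)
    w2i.ReadFrontBufferOff()
    w2i.Update()
    return w2i

def difference_image(render_win_a, render_win_b):
    # Both windows must have the same size for the comparison to be valid.
    frame_a = grab_frame(render_win_a)
    frame_b = grab_frame(render_win_b)
    diff = vtk.vtkImageDifference()
    diff.SetInputConnection(frame_a.GetOutputPort())
    diff.SetImageConnection(frame_b.GetOutputPort())
    diff.Update()
    # diff.GetOutput() is a per-pixel error image that could be shown to
    # the user; GetThresholdedError() gives a single scalar summary.
    return diff
```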

We’d really appreciate any comments or suggestions on how to approach this problem. My teammate & I have some experience building ParaView plugins; we just don’t know if our solution would work (the real-time aspect of the difference generation is a crucial one). Thank you!

This is a very hard problem to solve in the generic sense; maybe @Charles_Gueunet or @jfausty can help.

In any case, I want to mention the FlipBookPlugin (distributed with ParaView), which takes the “eyeball” idea to the next level and lets you use retinal persistence to find differences visually.

Hth,

Hi @k_herdzik77,

Thanks for bringing up this idea. Indeed, it seems like something that could be of use to a lot of different users.

From the mathematical perspective, it seems to me that the most natural approach is to find a parameterization of both meshes from the same base space (i.e. find a mapping from some subset of \mathbb{R}^n onto mesh 1 and mesh 2). You can then perform an L2 difference calculation by subtracting the relevant fields, and either perform an integral to get a global value or visualize the difference in that base space to get a sense of the localization of the variations.
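Concretely (notation is ours, just to make the idea explicit): with parameterizations \phi_1, \phi_2 : \Omega \subset \mathbb{R}^n \to M_1, M_2 and fields f_1, f_2 on the two meshes, the global L2 difference would read

$$
\left\lVert f_1 \circ \phi_1 - f_2 \circ \phi_2 \right\rVert_{L^2(\Omega)}
= \left( \int_\Omega \big( f_1(\phi_1(x)) - f_2(\phi_2(x)) \big)^2 \, \mathrm{d}x \right)^{1/2},
$$

and plotting the integrand over \Omega shows where the variations localize.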

As you aptly described, the difficulty is in algorithmically finding this parameterization/mapping between the two meshes. Using the rendering process as an effective 2D parameterization is an interesting idea indeed. It should be relatively easy to perform, and you can probably even prototype it by taking screenshots and subtracting them from one another.
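Such a prototype could look something like this in pvpython (a rough sketch only: the file names are placeholders, and mapping the pixel difference back to physical units would require knowledge of the color map used to render the field):

```python
from paraview.simple import SaveScreenshot, GetActiveView
import numpy as np
from vtk import vtkPNGReader
from vtk.util.numpy_support import vtk_to_numpy

def read_png(path):
    # Load a screenshot and return it as an integer numpy array.
    reader = vtkPNGReader()
    reader.SetFileName(path)
    reader.Update()
    img = reader.GetOutput()
    nx, ny, _ = img.GetDimensions()
    arr = vtk_to_numpy(img.GetPointData().GetScalars())
    return arr.reshape(ny, nx, -1).astype(np.int16)  # int16 so we can subtract

# Capture the two renderings from the same camera position
# (hide model B for the first shot, model A for the second).
SaveScreenshot('model_a.png', GetActiveView())
SaveScreenshot('model_b.png', GetActiveView())

# Per-pixel absolute difference.
diff = np.abs(read_png('model_a.png') - read_png('model_b.png'))
print('max per-channel pixel difference:', diff.max())
```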

A caveat that I can think of right now is that, while this would work well for surface meshes, it seems that most FE meshes of practical use are 3D and this methodology would not allow for visualizing important differences that might be internal to the fields.

If your meshes are indeed geometrically very close to one another, have you thought of warping one onto the other by deforming the mesh along its surface normal field (see the sketch below)? Also, if the meshes represent the exact same geometry but have different cell connectivities / numbers of nodes, a worthwhile approach is to use a more refined comparison mesh together with the resampling capabilities of ParaView to effectively interpolate your fields of interest onto it. You can actually resample your data sets onto any other data set, and this by itself might also provide some differentiation of your results.
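For the warping idea, a minimal pvpython sketch (filter names as in recent ParaView versions; the file name and scale factor are placeholders you would tune for your geometry):

```python
from paraview.simple import *

# Work on the outer surface of the volume mesh.
surface = ExtractSurface(Input=OpenDataFile('model_b.vtu'))

# Compute point normals on that surface.
with_normals = GenerateSurfaceNormals(Input=surface)

# Nudge the surface along its normals; tune ScaleFactor until the two
# surfaces roughly coincide.
warped = WarpByVector(Input=with_normals,
                      Vectors=['POINTS', 'Normals'],
                      ScaleFactor=0.5)
Show(warped)
Render()
```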

Hope that helps!

Regards,

Julien

Hi everyone, and thanks a lot for your answers and insights into this problem.
Just wanted to give you a quick update on the status of the plugin.
We decided to pursue the 2D rendering method first, and my colleague & I are currently working on it. In principle it looks easy, but it actually requires some trickery (i.e. using your own custom event processing loop) if you want the process to run in (quasi) real time. Since we’re no experts, progress is slow. Nevertheless, it seems that we’re on the right track, and I hope we’ll have something we can show you pretty soon.
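For anyone curious, here is a stripped-down illustration of the event-driven route we are also testing (plain VTK Python, no ParaView specifics; the update_difference body is just a placeholder for the capture-and-diff step discussed above):

```python
import vtk

# Minimal scene; in the real plugin this would be ParaView's render view.
renderer = vtk.vtkRenderer()
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

def update_difference():
    # Placeholder: re-capture both renderings and re-run vtkImageDifference.
    print('camera changed; the difference image would be recomputed here')

guard = {'busy': False}  # rendering itself may modify the camera again

def on_camera_modified(caller, event):
    if guard['busy']:
        return
    guard['busy'] = True
    try:
        update_difference()
    finally:
        guard['busy'] = False

# Re-run the diff whenever the user rotates/pans/zooms the camera.
renderer.GetActiveCamera().AddObserver('ModifiedEvent', on_camera_modified)
interactor.Initialize()
window.Render()
interactor.Start()
```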

Regards,
Krzysztof Herdzik
