Size-dependent computational costs of Delaunay3D


I have two XMLUnstructuredGrid data sets, each consisting of a set of points (actually a regular grid…) carrying a scalar field.

The two data sets are nearly homothetic to one another: they contain the same number of points/cells (within a 10% variation) and have the same memory footprint, but with different inter-point spacings: for instance 1 for one data set and 3 for the other.

Yet, the computational costs of the Delaunay3D filter (used to turn each discrete data set into a continuous one before extracting iso-surfaces) in ParaView 5.4.1 are completely different.

There is a ~10× ratio in the memory footprint of the Delaunay3D objects created in the pipeline (5.1 MB for the one stemming from the “small” grid, 52 MB for the “big” one), together with a 26× increase in the number of cells (but the same number of points).

And there is a ~200× ratio in the execution time of my workflow…

(The good news is that the final iso-surfaces are pretty much homothetic, as expected.)

I’m kind of a beginner and ran Delaunay3D with its default parameters. Maybe there is some dimensional attribute, but I could not find one… (I understand from the documentation that the Tolerance attribute is not a candidate, because it is a ratio?)
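For what it’s worth, if I read the vtkDelaunay3D documentation correctly, Tolerance is indeed a ratio: it is specified as a fraction of the length of the bounding-box diagonal, so the absolute point-merging distance it implies scales with the data. A plain-Python sketch of that arithmetic (the 0.001 value is the documented default, and the 20³ grids with spacings 1 and 3 are made-up stand-ins for my actual data sets):

```python
import math

def absolute_merge_distance(bounds, tolerance=0.001):
    """Absolute point-merging distance implied by vtkDelaunay3D's
    relative Tolerance (a fraction of the bounding-box diagonal).
    tolerance=0.001 is the default documented by VTK."""
    xmin, xmax, ymin, ymax, zmin, zmax = bounds
    diagonal = math.sqrt((xmax - xmin) ** 2
                         + (ymax - ymin) ** 2
                         + (zmax - zmin) ** 2)
    return tolerance * diagonal

# Hypothetical 20x20x20 grids, spacing 1 vs spacing 3 (illustration only):
small = absolute_merge_distance((0, 19, 0, 19, 0, 19))
big = absolute_merge_distance((0, 57, 0, 57, 0, 57))

# The absolute distances scale with the grid (ratio ~ 3), so the
# point merging sees the same *relative* geometry in both cases --
# Tolerance alone cannot explain a size-dependent behavior.
print(small, big, big / small)
```

So a relative tolerance should make the filter scale-invariant, which is consistent with it not being the culprit here.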

What is the reason for this size- (or unit-) dependent behavior?
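One plausible explanation (my own reading, not an authoritative one): a regular grid is a degenerate input for a Delaunay triangulation, because the eight corners of every cubic cell lie exactly on a common sphere. The in-sphere test that drives the algorithm is then mathematically zero, and its computed sign is pure floating-point round-off, which is absolute while the coordinates differ between the two grids. A stdlib-only sketch of that predicate:

```python
def insphere(a, b, c, d, e):
    """4x4 in-sphere determinant: zero iff e lies exactly on the
    circumsphere of tetrahedron (a, b, c, d); the sign distinguishes
    inside from outside."""
    rows = []
    for p in (a, b, c, d):
        dx, dy, dz = p[0] - e[0], p[1] - e[1], p[2] - e[2]
        rows.append([dx, dy, dz, dx * dx + dy * dy + dz * dz])

    def det3(m):
        # determinant of a 3x3 matrix by cofactor expansion
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    # expand the 4x4 determinant along the first column
    d4 = 0.0
    for i in range(4):
        minor = [r[1:] for j, r in enumerate(rows) if j != i]
        d4 += (-1) ** i * rows[i][0] * det3(minor)
    return d4

# Five corners of a unit grid cell all lie on one circumsphere, so the
# predicate is exactly zero: the tetrahedralization is ambiguous and
# the outcome hinges on round-off at the given coordinate magnitude.
print(insphere((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)))  # 0.0

# A point strictly inside the sphere gives a decisively nonzero value.
print(insphere((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (0.9, 0.9, 0.9)))
```

If that reading is right, two homothetic grids can legitimately be tetrahedralized very differently, which would match the cell-count discrepancy above.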


A VTK code example is at

Turning the initial data set from an unstructured grid into a structured one (as it should have been from the start…) actually removed the need for a Delaunay3D filter before applying the Contour filter (which runs smoothly on a structured grid, unlike on an unstructured one).
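For anyone landing here with the same problem, the reindexing idea can be sketched without VTK: since the points are known to lie on a regular lattice, they can be placed into a 3D array by their coordinates, after which contouring can rely on the implicit structured connectivity instead of a triangulation. (Plain-Python sketch; the point/value layout is a made-up stand-in for the actual XMLUnstructuredGrid contents.)

```python
def to_structured(points, values):
    """Reindex scattered (x, y, z) points that lie on a regular grid
    into a nested [i][j][k] array of scalar values.
    Assumes the lattice is complete: every (x, y, z) combination of the
    observed coordinates is present exactly once."""
    xs = sorted({p[0] for p in points})
    ys = sorted({p[1] for p in points})
    zs = sorted({p[2] for p in points})
    lookup = {p: v for p, v in zip(points, values)}
    return [[[lookup[(x, y, z)] for z in zs] for y in ys] for x in xs]

# Tiny made-up example: a 2x2x2 lattice with spacing 3, scalar = x + y + z.
pts = [(x, y, z) for x in (0, 3) for y in (0, 3) for z in (0, 3)]
vals = [x + y + z for (x, y, z) in pts]
grid = to_structured(pts, vals)
print(grid[1][0][1])  # value at point (3, 0, 3) -> 6
```

In ParaView terms this corresponds to feeding the scalars to a structured data set (e.g. image data) whose implicit connectivity the Contour filter can use directly.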

All in all, my problem is solved, even though this Delaunay3D behavior (on a case outside its validity domain, admittedly) remains a little obscure to me.