Operations on multi-block dataset with Programmable Filter

Hi Everyone,

A post-processing script I made redefines a variable that may be wrongly defined by my solver. However, this value depends on another scalar that is also present in the dataset.

For example:

if scalar2 < 0.8:
    scalar1 = 50 * scalar3
else:
    scalar1 = 6000 * scalar3

I initially tried to do this using the calculator filter:

if(scalar2<0.8,50*scalar3,6000*scalar3)

But it turned out that the created variable may disappear from the dataset … (I have to report this bug)

So I tried to do it with a Programmable Filter, but I am getting stuck when I want to overwrite the values in the array. My dataset is usually a multi-block dataset, and that does not seem to help.

Here is the script I have so far:

import vtk
import vtk.numpy_interface.dataset_adapter as dsa
import vtk.numpy_interface.algorithms as algs
import numpy as np

data = dsa.WrapDataObject(inputs[0])

scalar1 = data.CellData["scalar1"]
scalar2 = data.CellData["scalar2"]
scalar3 = data.CellData["scalar3"]

newArray = scalar1 * 0
for i, val in enumerate(scalar1):
    if scalar2[i] < 0.8:
        newArray[i] = 50*scalar3[i]
    else:
        newArray[i] = 6000*scalar3[i]

output.CellData.append(newArray, 'newVal')

This obviously fails, partly because my “newArray” is a composite array…

Could anyone help with this?

Thanks in advance

I think you can do this with a Python Calculator filter. Set the expression to

np.where(scalar2 < 0.8, 50 * scalar3, 6000 * scalar3)

Change the Array Name parameter to “scalar1”, and you should get the desired behavior.

I tried your suggestion, but unfortunately I have the feeling that the condition I give always returns True, whatever I ask …

np.where(scalar2 < 0.8, 50 * scalar3, 6000 * scalar3)
gives the same result as
np.where(scalar2 > 0.8, 50 * scalar3, 6000 * scalar3)

But that should not be the case …
By the way, scalar2 is defined between 0 and 1.

There should definitely be a change when you switch from < to >. What is your output array named and how are you checking the results?

I am creating a new variable to check my result.

I made another check: merging all the blocks together, and this leads to the expected result!
So there seems to be something going on with multi-block datasets.

Ah, indeed. The usual numpy functions do not work on the VTKCompositeDataArray objects you get from multi-block datasets, but the numpy-like algorithms provided by vtk.numpy_interface.algorithms do. However, the where function provided by vtk.numpy_interface.algorithms is only the single-argument version, not the three-argument version I suggested, so merging the blocks is currently the only way to use this numpy expression.
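
If you want to avoid merging the blocks, one possible workaround is to encode the condition as mask arithmetic in the Python Calculator, which only needs the elementwise operators that composite arrays do support. A sketch (untested on my side), assuming elementwise comparisons are defined per-block for composite arrays:

# Python Calculator expression: (scalar2 < 0.8) is 1 where true and 0
# where false, so each branch selects itself and the terms never overlap
50 * scalar3 * (scalar2 < 0.8) + 6000 * scalar3 * (scalar2 >= 0.8)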

Here’s a way to fix your Programmable Filter script to work with composite arrays:

import paraview.vtk.numpy_interface.dataset_adapter as dsa
import paraview.vtk.numpy_interface.algorithms as algs
import numpy as np

data = dsa.WrapDataObject(inputs[0])

cd_scalar1 = data.CellData["scalar1"]
cd_scalar2 = data.CellData["scalar2"]
cd_scalar3 = data.CellData["scalar3"]

# Start from a composite array with the right shape on every block
cd_newArray = cd_scalar1 * 0

# Iterate over the per-block arrays of all four composite arrays in lockstep
for scalar1, scalar2, scalar3, newArray in zip(cd_scalar1.Arrays, cd_scalar2.Arrays, cd_scalar3.Arrays, cd_newArray.Arrays):
    # You may check any of the scalar*/newArray arrays with
    # `foo is dsa.NoneArray` to handle blocks with the array missing, if any.
    for i in range(len(scalar1)):
        if scalar2[i] < 0.8:
            newArray[i] = 50 * scalar3[i]
        else:
            newArray[i] = 6000 * scalar3[i]

output.CellData.append(cd_newArray, 'newVal')
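
As a side note: each entry of the .Arrays lists is a plain numpy-compatible VTKArray, so inside the per-block loop the three-argument np.where does work; it is only the top-level composite array that the regular numpy functions cannot handle. A vectorized sketch of the same loop, under that assumption:

# Sketch: vectorized per-block variant of the index loop above.
# np.where works here because these are per-block numpy arrays.
for scalar2, scalar3, newArray in zip(cd_scalar2.Arrays, cd_scalar3.Arrays, cd_newArray.Arrays):
    newArray[:] = np.where(scalar2 < 0.8, 50 * scalar3, 6000 * scalar3)

This avoids the Python-level loop over cells and should be considerably faster on large blocks.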