I'm trying to add new field data from a text file using a Programmable Filter.
My Programmable Filter (below) shows the following errors when run in parallel under `mpiexec -np 4 pvserver &`, while it works fine when not connected to the server.

```
Traceback (most recent call last):
  File "", line 22, in
  File "", line 69, in RequestData
IndexError: index 6495 is out of bounds for axis 0 with size 6495
('g_x',)
```

```python
import numpy as np

kk = 0
s_file_data = f'/home/sbkim/Work/urban/case6/results/gx_z00175_a{int(kk*10):04d}.csv'

data = np.genfromtxt(s_file_data,
                     dtype=None, names=True, delimiter=',', autostrip=True)

input0 = inputs[0]
u = input0.PointData["U"]
g_x = u/2

n_x0 = g_x.GetSize()
n_x = len(data['g_x'])
for i in range(n_x):
    g_x.Arrays[0][i] = [data['g_x'][i], 0, 0]
```

Welcome to the ParaView Discourse and thanks for posting!

It looks to me like the input data to your programmable filter is distributed across processes when run in parallel, while the array read from your text file always holds the total number of values.

Allow me to elaborate:

When not running in parallel: your data set exists as a whole and the `U` array has as many values as there are points in the total data set. When you read from your csv file, your `g_x` data has the matching size, one element per point.

When running in parallel: your input data set is most likely distributed over your 4 running processes, and therefore so is the `U` array (`n_x0` is roughly `n_x`/4 on average). When you read your csv file, you read it on all 4 processes at the same time, so there are 4 full copies of your `data['g_x']` numpy array, one on each process. When you then try to write it into your data set, you get an out-of-bounds error, because at least one process holds fewer values locally than exist globally.
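The mismatch can be reproduced outside ParaView with plain NumPy. The sizes below are illustrative (6495 rows split over 4 ranks is an assumption, not taken from your data):

```python
import numpy as np

# Toy reproduction of the failure mode: the CSV supplies one value per
# GLOBAL point, while each of the 4 ranks only owns roughly a quarter
# of the points.
n_global = 6495                     # rows in the CSV, i.e. len(data['g_x'])
n_local = n_global // 4             # points owned by one pvserver rank

local_gx = np.zeros((n_local, 3))   # per-rank output array, like g_x on one process
csv_gx = np.zeros(n_global)         # stands in for data['g_x'] (full copy on every rank)

raised = False
try:
    for i in range(n_global):                # loops over the GLOBAL row count...
        local_gx[i] = [csv_gx[i], 0.0, 0.0]  # ...but writes into the LOCAL array
except IndexError as exc:
    raised = True
    print(exc)   # index 1623 is out of bounds for axis 0 with size 1623
```

This is exactly the `IndexError: index N is out of bounds for axis 0 with size N` pattern in your traceback: the loop bound comes from the global CSV, the array being written comes from the local piece.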

If you want to run your filter and do parallel IO, you will most likely need to account for this distribution and write a parallel-aware algorithm, e.g. one where each process only writes the csv rows corresponding to the points it owns.
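One common scheme, assuming each point carries a global ID (e.g. from a "Generate Global Ids" filter upstream), is to gather each rank's rows from the csv by those IDs instead of looping over the global count. The snippet below sketches only the indexing logic with plain NumPy; `global_ids` and `csv_gx` are hypothetical stand-ins for the input's global-ID point array and `data['g_x']`, and the exact array name depends on how the IDs were generated:

```python
import numpy as np

# Parallel-safe indexing sketch: select this rank's csv rows by the
# global IDs of the points it owns, rather than looping 0..n_global.
csv_gx = np.arange(8, dtype=float)     # one csv row per global point (toy data)
global_ids = np.array([2, 3, 6, 7])    # IDs of the points this rank owns (hypothetical)

# Vectorized gather: picks exactly this rank's rows, no Python loop,
# and never indexes past the local piece.
local_gx = np.zeros((len(global_ids), 3))
local_gx[:, 0] = csv_gx[global_ids]
print(local_gx[:, 0])                  # [2. 3. 6. 7.]
```

Each rank then fills an array sized to its own piece, so the sizes always agree no matter how the data set is partitioned.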