Partition specific global variables (Exodus)

Dear All,
I am a developer of a parallel CFD code, and I am using the Exodus II format for my outputs (e.g. in files output.e.0000 through output.e.__n).

I have certain variables that are partition-specific and I would like to visualize them as well. For example, I would like to plot the MPI wait times in each partition to guide a more balanced domain decomposition. I tried using the “Global Variables” option in Exodus and wrote these partition-specific variables into each of the Exodus files in the series. However, when I load the file series in ParaView (I am using v4.3.0) and plot this global variable, only the value from the first partition (output.e.0000) is used for the entire domain. Is this a bug or the intended behavior?
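
Roughly, each rank declares and writes the variable into its own file along these lines (a minimal sketch against the Exodus II C API; the variable name, file handle, and function split are illustrative, not my exact code):

#include <exodusII.h>

/* Called once per rank, right after this rank's output.e.NNNN file is created. */
void declare_wait_time_var(int exoid)
{
  char *names[1] = {"mpi_wait_time"};
  ex_put_var_param(exoid, "g", 1);        /* one global variable */
  ex_put_var_names(exoid, "g", 1, names); /* name it */
}

/* Called at every output step; each rank writes its own value. */
void write_wait_time(int exoid, int step, double time, double wait)
{
  ex_put_time(exoid, step, &time);
  ex_put_glob_vars(exoid, step, 1, &wait);
}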

I could store these partition-specific variables as element- or node-level data, but that seems like a lot of overhead. Any guidance is much appreciated.

Thank you very much!
Best,
Badri

You might try asking this on the SEACAS project page; SEACAS is the project responsible for Exodus.

Thanks Greg. I will ask Greg Sjaardema (SEACAS) as well, but I think this is related to the ExodusIIReader in ParaView. When I use the SEACAS tool grope to query the individual Exodus files in the series, I do see different values stored in each one. It seems that when ParaView loads the global variables, it ignores the values from every partition other than the root (output.e.0000).

Badri

I just asked Greg to take a look at this thread. He appears to still be on vacation.

Alan

If you are using the file-per-processor option, then in theory there can be a different value for a global variable in each individual file. However, all of the recombination programs that I am aware of (epu, nem_join) assume that the global variables are the same on all partitions and take the value from processor 0 as the “canonical” value. This seems to be the same behavior that ParaView is exhibiting.

This would also be the behavior in a program that did N->1 output (all processors writing to a single file) or N->M (N processors writing to M files).

There is not really a good method for storing processor-specific data in an Exodus file, as the format is designed to represent a single model; the parallel “file-per-processor” mode was an expedient to provide better I/O speeds in cases where parallel I/O to/from a single file was not performant.

There isn’t currently a good way to store processor-specific “global” data that would survive recombination or the parallel-I/O (N->1) use case. I can see the usefulness, but I don’t have any good workarounds with the current format.

Greg,
Thank you for your prompt response and for the explanation. I will store the partition-specific global data at the element level and move on. It’s not very efficient or elegant, but it seems to be the only way forward at the moment.
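
For anyone who finds this thread later, the workaround amounts to something like the following (a sketch against the Exodus II C API; the function name, block id, and element count are placeholders, and it assumes the element variable was declared and named at setup via ex_put_var_param(exoid, "e", 1) and ex_put_var_names):

#include <exodusII.h>
#include <stdlib.h>

/* Write this partition's wait time as an element variable that is constant
   over every element the partition owns, so the value survives recombination
   with epu and shows up per-partition in ParaView. */
void write_wait_time_as_elem_var(int exoid, int step, int elem_blk_id,
                                 int num_elem, double wait)
{
  double *vals = malloc(num_elem * sizeof(double));
  for (int i = 0; i < num_elem; i++)
    vals[i] = wait;                        /* same value on every element */
  ex_put_elem_var(exoid, step, 1, elem_blk_id, num_elem, vals);
  free(vals);
}
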
Best,
Badri Hiriyur