Information is out of date when using get_execute_params() inside a Python trigger for extractors

A few years ago, we added Python triggers to ParaView Catalyst extractors. Extractors can run a piece of code that returns True or False to determine whether they should be triggered. We also implemented paraview.simple.catalyst.get_execute_params(), which reads the state/parameters node in the Conduit tree.

These two features were intended to be used together: the simulation solver passes trigger information through the parameters node, and the extractor trigger script fetches that information using get_execute_params().

Here’s the thing, though: when the extractor evaluates the trigger script, the pipeline is out of date. That’s actually the whole point, since we are still trying to figure out whether we need to update. But if the pipeline is out of date, then the Conduit source is out of date too, and get_execute_params() fetches the previous parameters node, not the current one. The workaround is to force an UpdatePipeline() on the Conduit source in catalyst_execute(info).
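To make the staleness concrete, here is a minimal, self-contained mock (not the actual ParaView API; the class and method names are invented for illustration) of a source whose downstream output is cached until the pipeline updates. Reading parameters before the update returns the previous node, which is exactly the behavior the workaround addresses:

```python
# Illustrative mock only: models a Catalyst Conduit source whose output
# is cached until the pipeline is updated. Names are hypothetical.
class MockConduitSource:
    def __init__(self):
        self._incoming = {}   # parameters the solver just sent (this timestep)
        self._output = {}     # parameters visible downstream (stale until update)

    def set_params(self, params):
        # Solver side: a new timestep's parameters arrive.
        self._incoming = dict(params)

    def update_pipeline(self):
        # The workaround: forcing an update propagates the new parameters.
        self._output = dict(self._incoming)

    def get_execute_params(self):
        # What the trigger script reads: only the last *updated* state.
        return self._output

src = MockConduitSource()
src.set_params({"trigger": True})
stale = src.get_execute_params()   # pipeline not updated yet: previous (empty) node
src.update_pipeline()              # force the update, as in the workaround
fresh = src.get_execute_params()   # now the current parameters are visible
```

The mock shows the mechanism, not the real classes: in an actual trigger script the update would be an UpdatePipeline() call on the Conduit source before get_execute_params().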

Is this an issue? As in, did we miss this during the implementation? Or are we happy with this workaround? I only just realized I’ve been using these two features incorrectly for the past two years or so. It’s a pretty sneaky subtlety.

By the way, the issue this combination of features is trying to solve is that there is no way to call catalyst_execute() from the simulation code for only some of the scripts or pipelines: catalyst_execute() always runs all the scripts registered in catalyst_initialize().

I’m curious about the community’s thoughts on this.

Alexandre

FYI @Louis_Gombert @Francois_Mazen @nicolas.vuaille

I have what I think is a similar need for Catalyst. That is, before executing Catalyst for a given timestep, I want to be able to check with the script what is needed that time step. Things I’m interested in figuring out are whether anything in the Catalyst scripts will actually get computed that time step and, if yes, which channels and potentially which fields in each of those channels. This way, before catalyst_execute() is called from the adaptor for a given time step, I know what information (e.g. channels, fields, etc.) needs to be constructed by the adaptor and passed in to catalyst_execute(). Is this similar to the functionality that you’re thinking about? This type of functionality was available in Catalyst V1 but not yet in the V2 API.

My need is the other way around. My adaptor knows exactly what needs to be done and will construct only a few channels. However, I have no way to tell ParaView Catalyst which pipeline to run. It will run all the pipelines, even though some pipelines don’t make sense for the channels the adaptor sent. ParaView Catalyst always runs SaveAndExtract on all pipelines, so the only way for this to end up not triggering a pipeline is to disable all extractors on that pipeline.

It would be much easier if we could pass which pipelines need to be executed when the adaptor calls catalyst_execute().

Ah, I think that is available in ParaView Catalyst V2. If you do something like:

      node["catalyst/scripts/filename1"].set_string(catalyst_script1);
      node["catalyst/scripts/filename2"].set_string(catalyst_script2);

during your catalyst_initialize() step, and then the following when calling catalyst_execute() from the adaptor:

  node["catalyst/state/pipelines/0"] = "filename1";

that should execute only the extractors specified in catalyst_script1. I may be slightly wrong about the syntax, but that’s the general idea.

Hmm, I did not know that. I’ll check it out, thanks.