New Catalyst API

We have been mulling updates to the Catalyst API – the API and patterns developers use when instrumenting a simulation to work with Catalyst – for a while now, and we’ve had discussions with several folks. This is a proposal that puts together some of the ideas discussed.


Looking at various existing adaptor implementations, it becomes clear that most of them are an arbitrary collection of C functions that are passed initialization parameters, simulation data structures, and metadata. This tends to produce functions with very large argument lists that get confusing very quickly. For example, here’s a function from a real-life CTH Catalyst adaptor:

void pvspy_sta(int block_id, int allocated, int active, int level, int max_level, int bxbot,
  int bxtop, int bybot, int bytop, int bzbot, int bztop, int npxma11, int npxma21, int npxma12,
  int npxma22, int npyma11, int npyma21, int npyma12, int npyma22, int npzma11, int npzma21,
  int npzma12, int npzma22, int npxpa11, int npxpa21, int npxpa12, int npxpa22, int npypa11,
  int npypa21, int npypa12, int npypa22, int npzpa11, int npzpa21, int npzpa12, int npzpa22,
  int nbxma11, int nbxma21, int nbxma12, int nbxma22, int nbyma11, int nbyma21, int nbyma12,
  int nbyma22, int nbzma11, int nbzma21, int nbzma12, int nbzma22, int nbxpa11, int nbxpa21,
  int nbxpa12, int nbxpa22, int nbypa11, int nbypa21, int nbypa12, int nbypa22, int nbzpa11,
  int nbzpa21, int nbzpa12, int nbzpa22);

So the first question becomes: can we give the API some structure so that we avoid this kind of adaptor code, which is tedious to maintain and debug?

It’s fair to say that ParaView/Catalyst changes more frequently than the simulation code. That being the case, for each release of ParaView, the simulation needs to be rebuilt against the updated version of ParaView. This is burdensome. Can we support a use case where the simulation doesn’t need to rebuild or re-link whenever there’s a new version of ParaView? A corollary: can we support run-time selection of which version of ParaView/Catalyst to use? That way it’s easy to try multiple versions of ParaView. This enables simulations to easily update to the latest version of ParaView and to fall back to an earlier stable version in case of regressions.

Another common challenge encountered when using Catalyst is debugging. Sometimes a filter (or some other component in the in situ analysis and visualization pipeline) fails when running in situ with the simulation, but the problem is hard to reproduce using just ParaView or pvbatch. Can we simplify this debugging use case, i.e., make it possible to recreate the environment within the simulation without having to run the simulation?


With these questions in mind, let’s enumerate the key aspects of a design that can address them:

  • Use a data structure to pass data and metadata from the simulation to the adaptor: something that lets the simulation pass named parameters by value or reference, e.g. a dictionary. In that case, the pvspy_sta(...) function in the example above could be rewritten as pvspy_sta(params). Instead of every adaptor instrumentation defining its own API, this also helps us standardize the adaptor interface. The API, for example, can comprise just three calls: catalyst_initialize(params), catalyst_execute(params), and catalyst_finalize(params), where the parameters for each call are passed through that dictionary-like data structure.
  • With the aforementioned change, the API that the simulation uses to set up and execute Catalyst is fixed and limited. It consists only of the API related to creation/assignment/cleanup of the dictionary and the three catalyst_.. calls. If we keep the dictionary data structure opaque, we can provide an ABI-stable adaptor API. Make it a C API instead of C++, and we make it even more stable and easier to use from a multitude of languages. We can then provide a trivial stub implementation of this API, with no external dependencies, that simulations can link against. This stub will do nothing by default, thus introducing no overhead for simulations. At runtime, one can swap this stub for a custom adaptor implementation that is specific to the simulation and uses a chosen version of ParaView. Since the adaptor implementations will be ABI compatible, this should be easy to do using standard environment modules or by updating LD_LIBRARY_PATH (or DYLD_LIBRARY_PATH).
  • This standardized adaptor API will be provided in its own separate source repository/package with no external dependencies. Linking and building against it will be kept simple. One should not need a CMake-based build system or anything fancy at all; something as simple as -I $root/include -L $root/bin -l catalyst_adaptor added to the compiler/linker flags should suffice.
  • Finally, since all exchange between the simulation and the adaptor happens via the params dictionary, serializing the dictionary should make it possible to recreate the state for debugging purposes. To support this, we can provide an implementation of the stub adaptor that dumps out all data passed to the three catalyst_... calls. Then all we need is a small miniapp/driver that can load these dumps and play them back to recreate the state for debugging later on.
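To make the shape of this concrete, here is a minimal C sketch of the three-call pattern with a do-nothing stub. The toy `catalyst_params` struct and all names are illustrative stand-ins for the opaque dictionary described above, not the real API:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of the proposed three-call adaptor API.
 * "catalyst_params" is a toy stand-in for the opaque dictionary;
 * a real implementation would hide the layout behind an opaque handle. */
typedef struct {
    char keys[8][64];
    double values[8];
    int count;
} catalyst_params;

static void params_set(catalyst_params *p, const char *key, double value) {
    strncpy(p->keys[p->count], key, 63);
    p->values[p->count] = value;
    p->count++;
}

/* Stub implementations: do nothing by default. A debugging variant could
 * instead dump each params node to disk so a run can be replayed later. */
static int g_calls = 0;
void catalyst_initialize(const catalyst_params *p) { (void)p; g_calls++; }
void catalyst_execute(const catalyst_params *p)    { (void)p; g_calls++; }
void catalyst_finalize(const catalyst_params *p)   { (void)p; g_calls++; }
```

The point of the sketch is that the simulation-facing surface is just the dictionary helpers plus the three calls; everything simulation-specific travels inside `params`.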


  • The dictionary can represent a hierarchical structure where the key is a path rather than just a flat string. This makes it possible for the adaptor developer to devise a schema to conveniently pass simulation data structures and metadata to the adaptor. One possible choice for this dictionary data structure is Conduit.
  • ParaView can define a standard schema for all supported VTK data types. Simple simulations can use this standard schema directly and don’t need a custom adaptor at all.
  • A typical adaptor implementation in catalyst_execute(..) will take the params dictionary passed to it and hand it to a data-producer vtkAlgorithm subclass that can be connected in a Catalyst pipeline. This vtkAlgorithm subclass will implement RequestData, where VTK data objects are created from the simulation-provided parameters. Since the VTK data object creation happens in RequestData, simulation data structures are not converted to VTK until the analysis pipeline requests them. Thus, for timesteps where the analysis pipeline is not executed, we won’t waste any cycles converting simulation data to VTK.
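As a toy illustration of the path-keyed dictionary idea from the first bullet (a hypothetical mock, not Conduit’s actual API), keys like `coordinates/values/x` encode a hierarchy in a flat store, and the adaptor can pass array payloads by reference:

```c
#include <string.h>
#include <stddef.h>

/* Toy sketch of a path-keyed dictionary (the role Conduit would play).
 * Keys like "coordinates/values/x" encode a hierarchy in the key itself. */
#define MAX_ENTRIES 16
typedef struct {
    char paths[MAX_ENTRIES][128];
    const void *refs[MAX_ENTRIES];   /* by-reference payloads */
    int count;
} node;

static void node_set(node *n, const char *path, const void *data) {
    strncpy(n->paths[n->count], path, 127);
    n->refs[n->count] = data;
    n->count++;
}

static const void *node_get(const node *n, const char *path) {
    for (int i = 0; i < n->count; ++i)
        if (strcmp(n->paths[i], path) == 0) return n->refs[i];
    return NULL;
}

/* Count entries under a prefix, i.e. treat the flat store as a tree. */
static int node_children(const node *n, const char *prefix) {
    int hits = 0;
    size_t len = strlen(prefix);
    for (int i = 0; i < n->count; ++i)
        if (strncmp(n->paths[i], prefix, len) == 0 && n->paths[i][len] == '/')
            ++hits;
    return hits;
}
```

A schema then becomes just an agreed-upon set of paths, which is what makes a standard, adaptor-free protocol possible.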

Thoughts? Comments?


We are generally taking a minimalist approach to upgrading/maintaining the adaptor at ./Adaptors/Pagosa. This means that having figured out how to build the adaptor and run it from a Fortran simulation code, the job is done (and usable by any other simulation code). Under the current Catalyst configuration, the changes with new versions are almost invisible to us, so there is no trouble keeping up with new releases. Hence, as long as the merge process ensures backward compatibility, there is no maintenance issue for us.

So, in our very narrow (Pagosa-centric) opinion, the present Catalyst API is perfectly fine. If, however, an overhaul of the Catalyst API somehow makes it easier for others to use the power of Catalyst, then by all means please go ahead with it, and we will be sure to help with any Pagosa-specific issues that may arise.


I suggest taking a good look at how the ASCENT project allows simulations to define data (publish in ASCENT-speak).

I don’t think we should necessarily grab the Conduit API and mimic it exactly. (Conduit uses C++, which is probably contrary to the goal of making it easier to bind directly to Fortran and other languages.) But a lot of thought has gone into its protocol for the specification of data.

What I particularly like about it is that you specify a mesh with a series of calls where you specify simple attributes. The attribute names are just strings, but have special separator characters (e.g. /) that allow you to create hierarchies of attributes. So a complex structure could be built by calling the same function multiple times. That way you can have a small set of simple functions rather than several complex functions trying to capture every possible way to specify a mesh.

So let’s say the API has a simple function named catalyst_set_attribute that takes a name and a value. Borrowing from an example for ASCENT, an unstructured mesh might be defined like this:

catalyst_set_attribute("coordinates/type", "explicit");
catalyst_set_attribute("coordinates/values/x", x_coords);
catalyst_set_attribute("coordinates/values/y", y_coords);
catalyst_set_attribute("coordinates/values/z", z_coords);
catalyst_set_attribute("topologies/mesh/type", "unstructured");
catalyst_set_attribute("topologies/mesh/elements/shape", "hex8");
catalyst_set_attribute("topologies/mesh/elements/connectivity", cell_array);
catalyst_set_attribute("fields/pressure/topology", "mesh");
catalyst_set_attribute("fields/pressure/association", "point");
catalyst_set_attribute("fields/pressure/values", pressure_array);

Another nice benefit of declaring meshes as a set of attributes with hierarchical names is that it is easy to replicate the interface across language boundaries, and you can even specify the same information in data formats like YAML, JSON, or XML.

  type: "explicit"
    x: [...]
    y: [...]
    z: [...]
    type: "unstructured"
      shape: "hex8"
      connectivity: [...]
    topology: "mesh"
    association: "points"
    values: [...]

Indeed, that’s exactly what I was thinking for the “standard schema” we could support out of the box. Adopting the Mesh Blueprint (or a subset of it) as the schema is definitely a good idea.

Conduit does have a C API too. Our public interface could indeed be limited to that C API, or a Catalyst-ized version of it that internally forwards to Conduit. So simply using Conduit as the parameter collection is indeed a viable option to explore.

For the most part, I haven’t seen a lot of changes in the Catalyst adaptors myself either. Most of that is because the adaptors are about 90% focused on creating a vtkDataObject, which hasn’t changed too much. Sure, there have been improvements for zero-copy, but those haven’t been necessary, only beneficial. The other, bigger changes, which I haven’t seen too much of, would be:

  • ghost information and compatibility between simulation and use inside of Catalyst
  • on-node parallelism compatibility between simulation and use inside of Catalyst

These are things that I just haven’t had enough experience with at this point to feel comfortable with. My point, though, is that something as “simple” as vtkImageData can still get quite complex once you factor in the order in which to loop through the points/cells (e.g. X then Y then Z, like VTK, or maybe Z then Y then X), potentially with ghost cells, and then with differing ways of storing tensor quantities: e.g. for a vector, storing all of the X components, then all of the Y components, then all of the Z components; or tuple-based like VTK, but with multiple arrays stored together (e.g. velocity and density combined so the tuple contains both, i.e. a 4-component tuple). Things start getting really messy when describing how to iterate over a zero-copied array.
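To make the layout concern concrete, here is a small C sketch (illustrative only) of the same 3-component field for N points stored AOS (xyzxyz...) versus SOA (xxx...yyy...zzz...); an adaptor handed a raw pointer has to know which accessor applies:

```c
/* The same 3-component velocity field for N points in two memory layouts.
 * AOS: components of each point are adjacent.
 * SOA: each component is stored in its own contiguous block. */
#define N 3
static const double aos[N * 3] = { 1, 2, 3,   4, 5, 6,   7, 8, 9 };
static const double soa[N * 3] = { 1, 4, 7,   2, 5, 8,   3, 6, 9 };

/* Component c of point i, AOS layout: stride 3, offset c. */
static double get_aos(const double *a, int i, int c) { return a[i * 3 + c]; }

/* Component c of point i, SOA layout: component blocks of length N. */
static double get_soa(const double *a, int i, int c) { return a[c * N + i]; }
```

Both accessors yield the same logical field, which is exactly why a zero-copy description has to carry the layout (stride/offset) alongside the pointer.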

As far as zero-copying unstructured grid cell connectivity, at this point I just assume that it won’t be done.

The next thing: being able to switch between Catalyst versions. I do like this in concept, but there needs to be an improvement in the Catalyst pipeline information so it is also compatible between versions. Generally, if you’re just doing data extracts it will likely work. If you’re doing image output and trying to use the Python Catalyst scripts, in the current setting it won’t. If you use the state mechanism for going from an older version to a newer version, which can “fix up” inconsistencies in parameters between ParaView versions, this could make it functional, but often the resulting view seems to be off by just a little bit. That’s OK, IMHO, when working in the GUI, since you can correct these issues for a nice final screenshot, but committing significant resources to a simulation run and having the output screenshots be a bit “off” would be quite disappointing to me. Doing this for Cinema would probably be even worse.

I guess what I’m saying is that I’d be worried about the API becoming too complex in order to support all of the edge use cases. Or maybe it’s a compact API that doesn’t make it easy to do zero-copy. And then there are two interfaces to Catalyst to support. I’m assuming here that the old interface isn’t going away, right?

It would lower the barrier to entry though, which would be a good thing. There certainly were mistakes made on the first pass, and incremental fixes were somewhat hard with a decided-on interface to work from.

Good points @Andy_Bauer, however, I’d argue they are largely independent of the Catalyst API itself.

Indeed. Here we are talking about capabilities within the Catalyst library itself. It’s probably best to explicitly report these as issues so we can tackle them one at a time.

This is where adopting something like Conduit to describe the mesh may be extremely handy. Conduit, from what I understand, already supports transforms that can help with such conversions. Of course, transformations are not zero-copy, but they do make it easy to get things plugged together. Ideally these could be VTK-m arrays, since they support more flexible memory layouts, making transformations obsolete in the future. The beauty of this design is that when that happens, simulation codes will simply get the optimizations for free, without having to update any adaptor code, provided they adopt the standard mesh protocol.

Sure, but this is again a separate question: how do we maintain reproducibility of state files between ParaView versions? It is not impacted by the Catalyst API itself.

I do not follow. In fact, it is the current Catalyst API that is susceptible to such changes. For example, in a current adaptor, if you wanted to pass ghost array information, you had to change the API to accept pointers for the ghost arrays; in the proposed API, you simply add that to the params dictionary (or Conduit node), so the API is not impacted at all. The nice thing is that even if you update your simulation to add this new ghost array, you can still use older versions of the adaptor that ignore the new arrays, as well as a newer version, to compare and contrast with ease.
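A toy sketch of that forward-compatibility argument (all names hypothetical): the simulation adds a new entry to the params dictionary, and an “old” adaptor that never looks for that key behaves exactly as before:

```c
#include <string.h>
#include <stddef.h>

/* Toy params dictionary mapping keys to array pointers (illustrative). */
#define MAX_KEYS 8
typedef struct {
    char keys[MAX_KEYS][64];
    const double *arrays[MAX_KEYS];
    int count;
} params;

static void params_set(params *p, const char *key, const double *a) {
    strncpy(p->keys[p->count], key, 63);
    p->arrays[p->count++] = a;
}

static const double *params_get(const params *p, const char *key) {
    for (int i = 0; i < p->count; ++i)
        if (strcmp(p->keys[i], key) == 0) return p->arrays[i];
    return NULL;
}

/* "Old" adaptor: written before ghost arrays existed. It only reads the
 * keys it knows about and silently ignores everything else. */
static double old_adaptor_first_pressure(const params *p) {
    const double *pr = params_get(p, "fields/pressure");
    return pr ? pr[0] : -1.0;
}
```

No function signature changed when the ghost data appeared; a newer adaptor simply looks up the extra key.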

The biggest advantage of this new approach is the ability to generate an in situ trace, if you will, that can be re-run in a post-hoc fashion to debug issues.

Ok, it just wasn’t 100% clear that this was a non-zero-copy solution. In that case, that clears up a lot of my questions, and I can ignore the issues about the API, edge use cases, and such.

Other issues I brought up are certainly orthogonal to the API.

This is not necessarily true; this design supports zero-copy. A closer look at the Conduit API reveals that you can indeed pass raw array pointers for things like field arrays, connectivity, etc. Whether the adaptor actually forwards these raw pointers to the internal VTK data object or has to do a deep copy is independent of the API itself. This is an advantage, since it allows the adaptor code to evolve to support zero-copy as the VTK data model gets richer over time, without having to change anything in the simulation code.
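A minimal sketch of the external-pointer versus deep-copy distinction (a mock, not Conduit’s actual API): the entry can reference the simulation’s array directly or own a copy, and the adaptor can switch strategies later without touching simulation code:

```c
#include <stdlib.h>
#include <string.h>

/* Toy array entry that either references the simulation's memory
 * (external, zero-copy) or owns a deep copy. Names are illustrative. */
typedef struct {
    const double *data;   /* what consumers read */
    double *owned;        /* non-NULL only when deep-copied */
    size_t n;
} array_entry;

/* Zero-copy: just remember the simulation's pointer. */
static void entry_set_external(array_entry *e, const double *a, size_t n) {
    e->data = a;
    e->owned = NULL;
    e->n = n;
}

/* Deep copy: allocate and duplicate the data. */
static void entry_set_copy(array_entry *e, const double *a, size_t n) {
    e->owned = malloc(n * sizeof *a);
    memcpy(e->owned, a, n * sizeof *a);
    e->data = e->owned;
    e->n = n;
}
```

The external variant sees later mutations of the simulation’s array; the copy does not, which is exactly the trade-off the adaptor gets to make internally.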

I did some investigation, and Conduit fits the bill as a mechanism to describe the “params”. Here’s what I am thinking currently:

  • We create a new repo named “Catalyst”. This project will have Conduit as an internal third-party package. By default, this project will build the ABI-stable stub library. It will also provide CMake files that other projects can use to build implementations of the Catalyst library API.
  • ParaView will add this Catalyst project as a submodule (or a third-party import, not really sure yet) and then use the CMake glue to implement a Catalyst API that supports the Conduit-defined mesh protocol, or a subset or extension of it. We can incorporate other schema specifications, e.g. ADIS, but that’s not necessary to get us started, so we can ignore it for now. When ParaView is built, it will include its own implementation of the Catalyst library (following the pattern from the MPICH ABI Compatibility Initiative). This library will be ABI compatible with the library built by the stub “Catalyst” project itself and hence can be used in its stead.
  • A simulation that does not want to use (or can’t use) the standard schemas/protocols supported by ParaView can define its own schema/protocol. For that, you develop a new implementation of the Catalyst API using the building blocks provided. Most likely, you’d just do a find_package(ParaView COMPONENTS Catalyst) or something like that and then develop the custom Catalyst API implementation; an example will help iron out the details and make things clear.

I have a first draft of the prototype implementation here along with some initial documentation.

The next step is to implement ParaView’s Catalyst API implementation (or, as the docs call it, the ParaView Catalyst library) that handles the Mesh Blueprint and ADIS to support various schemas for describing data.

As a reference for Blueprint-to-VTK conversion, you can take a look at the Ascent ParaView support:

Ah nice! Exactly what we’ll need soon enough. I am currently working on creating the ParaViewCatalyst library, trying to decide whether it makes sense to piggyback on the vtkCP* classes or to create a new set. Once that’s sorted out, the next step will be to create a reader that is given a Conduit Node following the Mesh Blueprint. I’ll ping you when I get to that, and we can figure out the next steps.


The Catalyst API, along with the “stub” implementation, is now available in this project, with docs.

ParaView Catalyst – the Catalyst API implementation that uses ParaView for in situ processing – is under development here. The vtkMeshBlueprintSource – a data producer that takes a conduit::Node describing a computational mesh per the Conduit Mesh Blueprint – is very basic right now; I only have it working for a trivial case. That needs to be filled in and the examples updated. I also need to add validation, plus docs describing the blueprint/schema ParaView Catalyst supports for communicating which scripts to load, etc.

Should the producer be called vtkConduitSource rather than vtkMeshBlueprintSource? I see the rationale behind the current name, but I had to do a double-take before I understood the meaning.

Sure… I doubt there’ll be competing Conduit blueprints for defining meshes, so just calling it vtkConduitSource should be sufficient.

Update: I converted an existing Catalyst example to the new API here. It’s quite trivial to describe unstructured grids the way that example does. I also like that Conduit makes it super easy to describe SOA and AOS arrays.

The merge request is pretty close. The vtkConduitSource now supports a reasonable set of mesh types, enough to call it a “preview” release. I’ve also converted a few C/C++ examples to the new style; to me they look much simpler than before, especially since you don’t have to worry about creating various vtkCP... classes or creating VTK datasets and arrays explicitly. Instead, you just describe the data using a fairly intuitive Conduit mesh description.

A few build related things to iron out and then we should be able to merge this (pending dashboard cleanups, of course).