However, my simulation doesn’t use MPI, as we parallelize on a single node using TBB directly. When I try to run my simulation code with catalyst support, I get the following error:
*** The MPI_Comm_rank() function was called before MPI_INIT was invoked.
*** This is disallowed by the MPI standard.
*** Your MPI job will now abort.
[222f3f06c5dd:33189] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
Is it possible to run AdiosCatalyst without MPI? How should I do that?
In your case, you may want to use ParaViewCatalyst directly. This can be built from ParaView sources.
The (only) advantage of AdiosCatalyst over ParaViewCatalyst is its in-transit capability, i.e. running the analysis on a separate MPI node, so it does not make much sense for a single-node run.
That said, I think only one thread should communicate with Catalyst, but I may be wrong here.
I have the simulation code running on one machine and the ParaView GUI running on another machine, and I am using Live Visualization. In that setup, is it still recommended to use ParaViewCatalyst directly?
If you have only one (node on one) machine to run the simulation, then ParaViewCatalyst is the way to go: AdiosCatalyst was definitely designed for multi-node analysis with MPI.
AdiosCatalyst and ParaViewCatalyst are (in effect) ParaView servers running a pipeline fed by the simulation (similar to a pvbatch executable).
Live Visualization allows your ParaView client to connect to this remote server and see the running pipeline.
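For illustration, the Live connection is configured from the Catalyst Python pipeline script itself; here is a minimal sketch of the relevant options, assuming the ParaView client listens on port 22222 (host and port are placeholders):

```python
# Fragment of a Catalyst pipeline script: enable Live and point it at the
# machine/port where the ParaView client accepts the connection
# (Catalyst > Connect... in the GUI). Values below are placeholders.
from paraview import catalyst

options = catalyst.Options()
options.EnableCatalystLive = 1
options.CatalystLiveURL = 'localhost:22222'
```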
Thanks for the explanation. I was actually quite confused: I thought Catalyst, running on my remote simulation machine, was sending the data to ParaView running on my local machine, and that the pipeline was executed inside ParaView.
But I now see that the simulation and the ParaView pipeline are both executed on the remote machine, and that on the local machine I can just open the ParaView GUI and connect to that remote instance.
In my simulation I just want to dump the meshes (vtkUnstructuredGrid) and see how they deform, and perhaps store them on disk.
Probably out of the scope of the original question, but how would you recommend storing the data on disk? Right now I am using an HDF5 file with a custom format, similar to what Conduit does with the mesh blueprint. Can I just store Conduit nodes in some format and then read them back in ParaView?
Write a VTKHDF file from your simulation code. You will need to implement the specification from here: VTK File Formats - VTK documentation. This is the rising format for ParaView, so there is no need for Catalyst here (a rough h5py sketch follows below).
Set up a Catalyst script with a Data Extractor (VTU for unstructured grids). Of course, this requires building and running with Catalyst, but it allows some pre-processing (cleaning and so on) as well as live visualization (an example script follows below).
I think there is some ParaViewCatalyst option to configure the I/O directly from the Conduit node (i.e. without a Python script), but I cannot find the resources right now.
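Regarding the first suggestion, here is a rough sketch of the VTKHDF unstructured-grid layout written with h5py; the group, dataset, and attribute names follow my reading of the specification linked above, and the single-tetrahedron data and file name are made up for illustration, so double-check the details against the current VTKHDF documentation.

```python
import h5py
import numpy as np

# Toy data: one tetrahedron with a point-data array (made up for illustration).
points = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=np.float64)
connectivity = np.array([0, 1, 2, 3], dtype=np.int64)
offsets = np.array([0, 4], dtype=np.int64)   # NumberOfCells + 1 entries
cell_types = np.array([10], dtype=np.uint8)  # 10 == VTK_TETRA
temperature = np.linspace(0.0, 3.0, 4)

with h5py.File('mesh.vtkhdf', 'w') as f:
    root = f.create_group('VTKHDF')
    root.attrs.create('Version', np.array([1, 0], dtype=np.int64))
    # The reader expects 'Type' as an ASCII string attribute.
    root.attrs.create('Type', b'UnstructuredGrid',
                      dtype=h5py.string_dtype('ascii', len('UnstructuredGrid')))

    # One entry per partition; a serial write has a single partition.
    root.create_dataset('NumberOfPoints', data=np.array([len(points)]))
    root.create_dataset('NumberOfCells', data=np.array([len(cell_types)]))
    root.create_dataset('NumberOfConnectivityIds', data=np.array([len(connectivity)]))

    root.create_dataset('Points', data=points)
    root.create_dataset('Connectivity', data=connectivity)
    root.create_dataset('Offsets', data=offsets)
    root.create_dataset('Types', data=cell_types)

    root.create_group('CellData')
    root.create_group('FieldData')
    root.create_group('PointData').create_dataset('Temperature', data=temperature)
```

If the layout matches the specification, ParaView should open mesh.vtkhdf directly with its VTKHDF reader.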
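Regarding the second suggestion, a minimal Catalyst pipeline script with a VTU extractor could look like the sketch below; the channel name 'grid', the registration names, and the output paths are assumptions, so adapt them to your adaptor.

```python
# pipeline.py -- the script the simulation passes to Catalyst at initialization.
from paraview.simple import *
from paraview import catalyst

# registrationName must match the channel name used in the simulation's
# Conduit node ('grid' is an assumption here).
producer = TrivialProducer(registrationName='grid')

# Write the unstructured grid to disk at every time step.
extractor = CreateExtractor('VTU', producer, registrationName='VTU1')
extractor.Trigger = 'TimeStep'
extractor.Writer.FileName = 'grid_{timestep:06d}.vtu'

options = catalyst.Options()
options.ExtractsOutputDirectory = 'datasets'
# Add the Live options from the earlier snippet here if you also want
# live visualization while the extracts are written.
```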
When I started developing my own format, I had a look at VTKHDF and discarded it because it didn't support temporal data or hierarchies (vtkMultiBlockDataSet). I see it supports temporal data now, but it still lacks hierarchies, so I guess I am still out of luck.
TBH, I am not even sure if Catalyst supports hierarchies, or if the Conduit hierarchy is correctly converted to vtkMultiBlockDataSets. Ideally, I would have Catalyst and an Extractor saving to disk in that VTKHDF format, but I guess the support is not there yet.
Indeed, only temporal data is currently supported in VTKHDF. For the missing hierarchy support, we proposed a design a few weeks ago: Composite Data Sets for the VTKHDF format - Development - VTK. I have started to implement it, so a first version will be available in the near future.
For Catalyst:
"I am not even sure if Catalyst supports hierarchies"
You can use hierarchical datasets with Catalyst2 thanks to the multimesh protocol:
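For illustration, here is a rough sketch of how such a multimesh channel could be laid out, written with Conduit's Python bindings; the channel name 'grid', the block names, and the geometry are made up, and the exact protocol should be checked against the ParaViewCatalyst Conduit blueprint documentation.

```python
import numpy as np
import conduit

# Node handed to catalyst_execute(); the usual catalyst/state entries
# (time, timestep, ...) are omitted for brevity.
node = conduit.Node()

# Declare the channel as 'multimesh': each child of data/ is a complete
# Mesh Blueprint mesh, exposed to the pipeline as one composite dataset.
node['catalyst/channels/grid/type'] = 'multimesh'

for name in ('block0', 'block1'):   # block names are made up
    base = 'catalyst/channels/grid/data/' + name
    node[base + '/coordsets/coords/type'] = 'explicit'
    node[base + '/coordsets/coords/values/x'] = np.array([0.0, 1.0, 0.0, 0.0])
    node[base + '/coordsets/coords/values/y'] = np.array([0.0, 0.0, 1.0, 0.0])
    node[base + '/coordsets/coords/values/z'] = np.array([0.0, 0.0, 0.0, 1.0])
    node[base + '/topologies/mesh/type'] = 'unstructured'
    node[base + '/topologies/mesh/coordset'] = 'coords'
    node[base + '/topologies/mesh/elements/shape'] = 'tet'
    node[base + '/topologies/mesh/elements/connectivity'] = np.array([0, 1, 2, 3], dtype=np.int32)
```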