@Andy_Bauer @utkarsh.ayachit @mwestphal @Francois_Mazen @nicolas.vuaille @cory.quammen
A huge shoutout to the people on the ParaView Discourse for all the help. I have been able to implement Catalyst for in-situ visualization in PLUTO, a fixed-grid, finite-volume astrophysical magnetohydrodynamics code. I’m a beginner with the Catalyst, Conduit and Ascent frameworks, and this was an exciting week of learning to get it working. It wouldn’t have been possible without the timely help from the people here.
What I had to do in CatalystAdaptor.h (10.1 KB):
- The rectilinear and uniform mesh blueprints for Conduit worked well for setting up the grid data in Cartesian coordinates (I’ll recheck the rectilinear case once the VTK bug with Conduit I/O for rectilinear grids is fixed in the next ParaView release). A rough sketch of the uniform blueprint is included after this list.
- With spherical/cylindrical coordinates, I had to transform to Cartesian and use an unstructured grid (code attached). The standard example provided assumes an x-y-z data ordering when defining the connectivity; I had to rewrite it for the z-y-x ordering, as highlighted below, and a sketch of how these arrays are then handed to Conduit follows after this list. As long as the grid is static (or simply expanding/contracting), this connectivity needs to be computed only once for the entire simulation and stored as a static array, saving redundant calculations.
for (k = 0; k < grid->np_tot[KDIR]; k++)
{
  for (j = 0; j < grid->np_tot[JDIR]; j++)
  {
    for (i = 0; i < grid->np_tot[IDIR]; i++)
    {
      /* The 8 corner-point indices of the hexahedral cell (i,j,k):
         first the four corners on the i face, then the matching four on the i+1 face.
         Point index = k*nPts_j*nPts_i + j*nPts_i + i (i fastest, i.e. z-y-x data order). */
      CellConn[counter++] = k     * numPoints[JDIR]*numPoints[IDIR] + j     * numPoints[IDIR] + i;
      CellConn[counter++] = (k+1) * numPoints[JDIR]*numPoints[IDIR] + j     * numPoints[IDIR] + i;
      CellConn[counter++] = (k+1) * numPoints[JDIR]*numPoints[IDIR] + (j+1) * numPoints[IDIR] + i;
      CellConn[counter++] = k     * numPoints[JDIR]*numPoints[IDIR] + (j+1) * numPoints[IDIR] + i;
      CellConn[counter++] = k     * numPoints[JDIR]*numPoints[IDIR] + j     * numPoints[IDIR] + i+1;
      CellConn[counter++] = (k+1) * numPoints[JDIR]*numPoints[IDIR] + j     * numPoints[IDIR] + i+1;
      CellConn[counter++] = (k+1) * numPoints[JDIR]*numPoints[IDIR] + (j+1) * numPoints[IDIR] + i+1;
      CellConn[counter++] = k     * numPoints[JDIR]*numPoints[IDIR] + (j+1) * numPoints[IDIR] + i+1;
    }
  }
}
/* Defining the mesh intersection points: node (i,j,k) takes the left cell edge,
   except at the outer boundary where the right edge of the last cell is used;
   (x1,x2,x3) = (r, theta, phi) is then mapped to Cartesian. */
for (k = 0; k <= grid->np_tot[KDIR]; k++){
  for (j = 0; j <= grid->np_tot[JDIR]; j++){
    for (i = 0; i <= grid->np_tot[IDIR]; i++){
      x1 = (i != grid->np_tot[IDIR]) ? grid->xl[IDIR][i] : grid->xr[IDIR][i-1];
      x2 = (j != grid->np_tot[JDIR]) ? grid->xl[JDIR][j] : grid->xr[JDIR][j-1];
      x3 = (k != grid->np_tot[KDIR]) ? grid->xl[KDIR][k] : grid->xr[KDIR][k-1];
      xp[counter]   = x1*sin(x2)*cos(x3);
      yp[counter]   = x1*sin(x2)*sin(x3);
      zp[counter++] = x1*cos(x2);
    }
  }
}
- The easiest way to skip data copies and hand the pre-existing pointers to Catalyst was to include the ghost cells as well, since these overlapping cells across processors hold the same MPI-communicated values. This keeps the computational overhead to a minimum and doesn’t change anything as far as the visualization is concerned (see the last sketch below).
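For the Cartesian runs the mesh description is just the standard uniform blueprint. As a rough sketch only (assuming a conduit_node* called mesh for the channel data, with x_origin/y_origin/z_origin and dx/dy/dz as placeholders for the local block’s origin and cell sizes):

conduit_node *mesh = conduit_node_create();

/* uniform coordset: point counts (cells + 1, ghost layers included), origin and spacing of the local block */
conduit_node_set_path_char8_str(mesh, "coordsets/coords/type", "uniform");
conduit_node_set_path_int64(mesh, "coordsets/coords/dims/i", grid->np_tot[IDIR] + 1);
conduit_node_set_path_int64(mesh, "coordsets/coords/dims/j", grid->np_tot[JDIR] + 1);
conduit_node_set_path_int64(mesh, "coordsets/coords/dims/k", grid->np_tot[KDIR] + 1);
conduit_node_set_path_float64(mesh, "coordsets/coords/origin/x", x_origin);  /* placeholder: left edge of local block */
conduit_node_set_path_float64(mesh, "coordsets/coords/origin/y", y_origin);
conduit_node_set_path_float64(mesh, "coordsets/coords/origin/z", z_origin);
conduit_node_set_path_float64(mesh, "coordsets/coords/spacing/dx", dx);      /* placeholder: uniform cell sizes */
conduit_node_set_path_float64(mesh, "coordsets/coords/spacing/dy", dy);
conduit_node_set_path_float64(mesh, "coordsets/coords/spacing/dz", dz);

/* matching uniform topology */
conduit_node_set_path_char8_str(mesh, "topologies/mesh/type", "uniform");
conduit_node_set_path_char8_str(mesh, "topologies/mesh/coordset", "coords");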
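For the spherical/cylindrical case, the same mesh node instead gets the transformed points as an explicit coordset and CellConn as an unstructured hex topology, all passed by reference. Again just a sketch: nPointsTot and nCellsTot are placeholders for the local point and cell counts, and CellConn is assumed to be declared as conduit_int64 so the matching typed setter applies (otherwise use the set_path_external_*_ptr call for its actual type):

/* explicit point coordinates: the xp/yp/zp arrays filled above, zero-copy */
conduit_node_set_path_char8_str(mesh, "coordsets/coords/type", "explicit");
conduit_node_set_path_external_float64_ptr(mesh, "coordsets/coords/values/x", xp, nPointsTot);
conduit_node_set_path_external_float64_ptr(mesh, "coordsets/coords/values/y", yp, nPointsTot);
conduit_node_set_path_external_float64_ptr(mesh, "coordsets/coords/values/z", zp, nPointsTot);

/* unstructured topology: 8 point ids per hexahedral cell, taken from CellConn above */
conduit_node_set_path_char8_str(mesh, "topologies/mesh/type", "unstructured");
conduit_node_set_path_char8_str(mesh, "topologies/mesh/coordset", "coords");
conduit_node_set_path_char8_str(mesh, "topologies/mesh/elements/shape", "hex");
conduit_node_set_path_external_int64_ptr(mesh, "topologies/mesh/elements/connectivity",
                                         CellConn, 8 * nCellsTot);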
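And this is roughly how a cell-centred field is attached without a copy, which is where including the ghost cells pays off: assuming PLUTO’s usual contiguous Vc[nv][k][j][i] allocation, d->Vc[RHO][0][0] already points at the whole local block, ghost zones included, and its length matches the np_tot-based cell count used for the connectivity above (nCellsTot and the field name "rho" are again placeholders):

/* cell-centred density, passed by reference, ghost zones included */
conduit_node_set_path_char8_str(mesh, "fields/rho/association", "element");
conduit_node_set_path_char8_str(mesh, "fields/rho/topology", "mesh");
conduit_node_set_path_external_float64_ptr(mesh, "fields/rho/values",
                                           d->Vc[RHO][0][0], nCellsTot);

The mesh node then goes in under catalyst/channels/<channel-name>/data of the conduit node passed to catalyst_execute(), along the lines of the standard Catalyst C examples.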
Here’s what I got with slice plots in spherical geometry (the Cartesian data was shared in this previous post) for the different setups I was trying. There is a minor typo in the colorbar label that I need to fix (a bad find-and-replace pitfall):
However, I have one small doubt. In the Python Catalyst state script, when I keep the view of the 3D data on as an outline representation to give better context to the slices, instead of one outline for the entire domain I get an outline for each subdomain on each MPI process. This makes the visualization look very cluttered, especially when the simulation runs on a large number of processors on an HPC system. Is there a way around this so that only the global 3D domain gets an outline? To make clear what I mean, I’m attaching two more videos where the local sub-domains are outlined.