To be more precise about what Zhe is doing: we wanted to (1) check whether we could provide our own implementation of a VTK communicator, and (2) see which collective communication functions get called when running some Catalyst scripts.
We copied the vtkMPICommunicator and vtkMPIController implementations from the VTK source, renamed them to MochiCommunicator and MochiController (replacing “vtkMPI” with “Mochi” everywhere in the source), and then, in our Catalyst adaptor, we do the following before creating the vtkCPProcessor:
// Install our renamed controller as the global controller before the
// vtkCPProcessor is created:
MochiCommunicator *communicator = MochiCommunicator::New();
MochiController *controller = MochiController::New();
controller->SetCommunicator(communicator);
controller->Initialize(nullptr, nullptr, 1); // third argument: MPI was initialized externally
vtkMultiProcessController::SetGlobalController(controller);
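The idea is that any VTK or Catalyst code that obtains its controller through vtkMultiProcessController::GetGlobalController() should then communicate through our classes.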
We added a print statement to each function in these two classes so that we can see which ones get called.
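For illustration, the instrumentation looks roughly like this; this is just a sketch, the real bodies being the copied vtkMPICommunicator ones with a print added at the top (SendVoidArray is the vtkCommunicator virtual that the typed Send() overloads funnel into):

int MochiCommunicator::SendVoidArray(const void* data, vtkIdType length,
                                     int type, int remoteProcessId, int tag)
{
  // Trace that this communication function was reached.
  std::cout << "MochiCommunicator::SendVoidArray called" << std::endl;
  // ... copied vtkMPICommunicator body follows ...
}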
The Python script that Zhe is using does an isocontour rendering of a Mandelbulb fractal. When distributed across multiple processes, each process computes a region of this fractal.
What we noticed is that we do see calls to the functions that initialize the controller and communicator, as well as calls to functions such as Duplicate, all of which correspond to our own setup of the controller and communicator. But during the actual co-processing we don’t see any calls to communication functions such as sends and receives. Our guess is that either (1) the rendering algorithm bypasses the controller and uses MPI functions directly, or (2 - more likely) somewhere down the line, something (maybe the Python script itself?) resets the global controller to an MPI one, overriding the one we installed.
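One way to test hypothesis (2), sketched under the assumption that the adaptor keeps a pointer to the controller it installed: after each co-processing call, compare the current global controller against ours and print its class name if it changed.

vtkMultiProcessController *global = vtkMultiProcessController::GetGlobalController();
if (global != controller)
{
  // The global controller is no longer the one we installed; report what
  // replaced it (e.g. a vtkMPIController here would confirm the reset).
  std::cerr << "Global controller was replaced by an instance of "
            << (global ? global->GetClassName() : "nullptr") << std::endl;
}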