I’ve been working on putting together a containerized version of ParaView using Apptainer, based on the old Singularity build script. It runs into trouble when trying to actually build ParaView:
[ 96%] Performing configure step for 'paraview'
Reaping winning child 0x5603fb382160 PID 272611
Live child 0x5603fb382160 (superbuild/paraview/stamp/paraview-configure) PID 272612
Not searching for unused variables given on the command line.
Ignoring extra path from command line:
CMake Error: The source directory "/home/pv-user/pvsb/build/superbuild/paraview/src" does not appear to contain CMakeLists.txt.
Specify --help for usage, or press the help button on the CMake GUI.
CMake Error at /home/pv-user/pvsb/build/superbuild/sb-paraview-configure.cmake:47 (message):
Failed with exit code 1
The CMake command used is
cmake --debug-output -C /home/pv-user/pvsb/src/cmake/sites/Apptainer-Rocky8_6.cmake "-GUnix Makefiles" ../src
The referenced .cmake file is located here: https://gitlab.kitware.com/woodscn/paraview-superbuild/-/blob/v5.11.1_apptainer/cmake/sites/Apptainer-Rocky8_6.cmake, but I think it’s just a clone of the Docker one at this point.
I’d love some insight into how to go about debugging this.
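In case it helps with the debugging: the error says `/home/pv-user/pvsb/build/superbuild/paraview/src` contains no `CMakeLists.txt`, which usually means the superbuild’s download/extract step for the `paraview` project never populated the source tree. A first round of checks might look like this (paths are taken from the log above; the stamp-file and target names are assumptions based on typical ExternalProject-style superbuilds, so verify them against your build tree):

```shell
# 1. Confirm the source tree really is empty -- the configure error
#    claims CMakeLists.txt is missing there.
ls /home/pv-user/pvsb/build/superbuild/paraview/src

# 2. Inspect the stamp directory; a stale download/extract stamp can make
#    the superbuild skip the fetch even though the sources are missing.
ls /home/pv-user/pvsb/build/superbuild/paraview/stamp

# 3. Remove the download-related stamps and re-run, so the fetch step
#    executes again (stamp names are a guess -- check what step 2 shows).
rm -f /home/pv-user/pvsb/build/superbuild/paraview/stamp/paraview-download*
make paraview
```

If the download step is failing silently (for example, no network access during `apptainer build`), its log files under the stamp directory should say so.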
Potentially relevant variables from the build script:
Out of curiosity, why don’t you bundle our EGL build into Apptainer directly?
That is how we’ve been dealing with Docker images and ParaView recently, and it is working great.
Also, the pre-built version allows you to provide your own MPI implementation via ENV setup.
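If I understand the ENV-based MPI injection correctly, with Apptainer it amounts to something like the following (the `/opt/hpc-mpi` path and the image name are placeholders for your site’s MPI install and the prebuilt EGL image, not a documented interface):

```shell
# Bind the host MPI into the container and point the dynamic loader at it,
# so pvserver picks up the site MPI instead of the bundled one.
# /opt/hpc-mpi is a placeholder for wherever the host MPI lives.
apptainer exec \
  --bind /opt/hpc-mpi:/opt/hpc-mpi \
  --env LD_LIBRARY_PATH=/opt/hpc-mpi/lib:$LD_LIBRARY_PATH \
  paraview-egl.sif \
  pvserver --server-port=11111
```

This only works when the injected MPI is ABI-compatible with the one ParaView was linked against, which is the caveat discussed further down in this thread.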
Quick thing, I’ve noticed you were trying to build with Python 2. Is that expected?
Thanks for getting back to me. The general guidance I’ve received has been that almost everyone should be building their own server implementations of ParaView (for instance, the “Visualizing Large Models” chapter of the ParaView 5.11.0 documentation). It would be great if that’s no longer necessary.
Python 2 was used in the older example (Scripts/singularity/Singularity.egl on the master branch of ParaView-Superbuild on GitLab). I don’t know that there was a better reason than that.
If you need to build your own server, yes. But if you can use the prebuilt one, then not really.
The main reasons why someone would need to build their own server are (or were):
- To use the proper MPI library (though if it can be injected, this reason falls apart)
- Custom code built for Catalyst (and with Catalyst v2, that need has disappeared)
- The need for specific HDF5 or other I/O libraries (or specific system libraries)
I might be missing something, but those were the reasons you would bother building ParaView yourself to get the most out of your system. In your case, since you are wrapping it inside a Singularity container anyway, I don’t see why the built-in EGL/OSMesa wouldn’t be enough, unless you cannot use your HPC MPI library because it is not ABI-compatible with ours.
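For reference, wrapping the prebuilt EGL release in an Apptainer container can be as simple as a definition file along these lines (a sketch only: the base image choice is an assumption, and the tarball URL is deliberately left as placeholders to be filled in from the ParaView download page):

```
Bootstrap: docker
From: nvidia/opengl:1.2-glvnd-runtime-ubuntu20.04

%post
    apt-get update && apt-get install -y curl
    # Placeholder URL: substitute the actual EGL tarball for your version
    # from the ParaView download page.
    curl -L -o /tmp/paraview.tar.gz \
        "https://www.paraview.org/files/<version>/<egl-tarball>.tar.gz"
    mkdir -p /opt/paraview
    tar -xzf /tmp/paraview.tar.gz -C /opt/paraview --strip-components=1
    rm /tmp/paraview.tar.gz

%environment
    export PATH=/opt/paraview/bin:$PATH
```

Built with `apptainer build paraview-egl.sif paraview-egl.def`, this gives you `pvserver` without touching the superbuild at all.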
It sounds like we can probably use the pre-built stuff, which is nice. I wish I hadn’t spent as much time as I did down that rabbit hole, though.
FYI, I recently played around with prebuilt ParaView in containers with Slurm + enroot + MPI + PMIx. Here’s a repo with container build scripts and docs that you may find useful:
repo: GitHub - utkarshayachit/container-playground: Experiments with Containers + MPI + GPU
docs: MPI + GPUs in Containerized Applications on AzHOP | container-playground
Unfortunately, our principal MPI is OpenMPI, which isn’t ABI-compatible with MPICH. Still, we do have others available, so I’ll pursue both paths for now.
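For the OpenMPI path, the usual pattern with Slurm’s PMIx support is to launch one container instance per rank and let the host-side `srun` coordinate them (the flags shown are standard Slurm/Apptainer options, but whether this works depends on the OpenMPI inside the container having PMIx support, which is an assumption here):

```shell
# One container per MPI rank; the host's PMIx wires the ranks together,
# so the in-container MPI never needs to match the host's mpirun.
# --nv exposes the host GPUs for the EGL build.
srun --mpi=pmix -n 4 \
  apptainer exec --nv paraview-egl.sif \
  pvserver --force-offscreen-rendering
```

`srun --mpi=list` on the login node will tell you whether the cluster’s Slurm actually exposes a `pmix` plugin before you go down this road.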
I’ll take a look at your repo. I guess this will be my incentive to figure out how Dockerfiles work, since I started learning with Singularity.