Quick guide to using PyTorch in ParaView

When it comes to using machine learning frameworks alongside ParaView, PyTorch has proven to be much more flexible than alternatives such as TensorFlow.

This post walks you through using the PyTorch machine learning framework with ParaView (tested on Linux and Windows). It also covers how to use modules from a virtual environment, thanks to a handy script included in ParaView.

Being able to use PyTorch in a ParaView plugin or custom filter is, for the most part, equivalent to being able to successfully execute import torch in ParaView's embedded Python shell, which is itself equivalent to executing import torch in the pvpython interpreter.
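As a quick sanity check (assuming the pvpython from your ParaView build is on your PATH), you can run a tiny script through pvpython:

```shell
# Write a one-line check script and run it with ParaView's interpreter.
# If this prints a version number, plugins and filters can import torch too.
printf 'import torch\nprint(torch.__version__)\n' > /tmp/check_torch.py
pvpython /tmp/check_torch.py
```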

Prerequisites

ParaView

To let ParaView know which modules are available on your system, it has to be built against the right version of Python, typically the Python that sits on your system under /usr/bin/python. The pre-compiled binaries will not let you import your own modules, because they were compiled against a very basic Python environment containing only NumPy.

This is why building from source is essential to let ParaView use the Python interpreter that exists on your system. To build ParaView from source, follow the guide and make sure that the right CMake options are set:

  • PARAVIEW_USE_PYTHON needs to be set to ON
  • Python3_EXECUTABLE must point to the desired Python interpreter on your system
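Put together, a minimal configure step might look like this (the source and build directories and the interpreter path are placeholders for your own setup):

```shell
# Example configure step; adjust the paths to your checkout and interpreter.
cmake -S paraview -B paraview-build \
  -DPARAVIEW_USE_PYTHON=ON \
  -DPython3_EXECUTABLE=/usr/bin/python3
```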

CUDA

CUDA is the parallel computing platform that allows PyTorch to run on your GPU. To make sure that CUDA is available to PyTorch on your system, run the following command:

python -c "import torch; print(torch.cuda.is_available())"

The output should be True. Otherwise, make sure that your system has a CUDA-capable GPU and that your PyTorch installation was built with CUDA support.

When building ParaView, the CMake option PARAVIEW_USE_CUDA might catch your attention. Note that this option does not need to be set to ON to use CUDA from a filter that uses PyTorch.
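In practice, a filter script that uses PyTorch typically picks the GPU when CUDA is available and falls back to the CPU otherwise, so the same script runs on machines with or without a GPU. A minimal sketch (the model and tensor sizes are placeholders):

```python
import torch

# Use the GPU when CUDA is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(3, 1).to(device)   # placeholder model
points = torch.rand(10, 3, device=device)  # placeholder point data
result = model(points)
print(result.shape)  # torch.Size([10, 1])
```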

Using a Virtual Environment

Many projects use virtual environments to manage their module dependencies, so it is crucial to be able to use them in ParaView. Thankfully, it is not necessary to rebuild ParaView every time you want to change your virtual environment settings: a handy script is included by default in ParaView. There are two ways to indicate which virtual environment you wish to use:

  • Set the environment variable PV_VENV=/path/to/venv/
  • Add the --venv /path/to/venv/ argument when launching ParaView
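For example, assuming a venv at /path/to/venv and the paraview binary on your PATH, either of these launches works:

```shell
# Option 1: point ParaView at the venv via an environment variable.
export PV_VENV=/path/to/venv
paraview

# Option 2: pass the venv path on the command line instead.
paraview --venv /path/to/venv
```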

Then open ParaView and type the following command in the Python shell:
from paraview.web import venv

The output should be: ParaView is using venv: /path/to/venv/

Done!

You now have everything set up to use PyTorch in ParaView. If you need inspiration from projects that have used PyTorch or other ML frameworks in ParaView in the past, feel free to take a look at these
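As a starting point, here is a hedged sketch of the kind of Script you could paste into a Programmable Filter: it computes a per-point scalar with PyTorch. The inputs/output names are only defined by ParaView inside the filter, so a NumPy stand-in is used here to make the snippet self-contained:

```python
import numpy as np
import torch

# Stand-in for the point coordinates ParaView would hand you via
# inputs[0]; inside a Programmable Filter, use the real point array.
coords = np.random.rand(100, 3).astype(np.float32)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
pts = torch.from_numpy(coords).to(device)

# Example computation: distance of each point from the origin.
magnitude = pts.norm(dim=1).cpu().numpy()
print(magnitude.shape)  # (100,)

# Inside ParaView, you would then attach the result to the output:
# output.PointData.append(magnitude, "magnitude")
```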


This extensive guide helped me properly learn how to use PyTorch in ParaView. The detailed instructions were a great help throughout the whole process, and it has been quite fruitful for me.


It also worked on my side.

First, make sure to enable ParaView Web when compiling ParaView before using this, because it is not enabled by default:
cmake -DPARAVIEW_ENABLE_WEB=ON

I created an issue regarding that: https://gitlab.kitware.com/paraview/paraview/-/issues/21797

It works on my side with Python 3.9 and ParaView 5.11.1 on Ubuntu 22.04.
For the CMake command when building ParaView, add the arguments -DPARAVIEW_ENABLE_WEB=ON -DPython3_EXECUTABLE=/home/…/anaconda3/envs/your_conda_env/bin/python3.9, following Loic's comment that PARAVIEW_ENABLE_WEB is necessary.

Another thing to note is that when you use Anaconda as your venv, you need to link libstdc++.so.6 into your venv manually before you build ParaView.

sudo ln -sf /usr/lib/x86_64-linux-gnu/libstdc++.so.6 /home/…/anaconda3/envs/paraview/lib/libstdc++.so.6

The command above works for me.