Timing performance through Python?

I have an application that uses ParaView through Python via the simple module, and I am trying to determine where the performance bottlenecks are. Is there a way to access the filter timing logs from Python, or some other way to get this information?

Import paraview.benchmark; the logbase.py module inside it gives you access to the internal timer logs, which you can serialize, print, and so on.

Be warned that the logparse.py functions, which used to interpret the contents into a nice summary of the per-frame and per-filter stats, seem to have degraded over the years and are not currently usable.
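
For example, here is a minimal sketch of that workflow, assuming the maximize_logs/get_logs/print_logs helpers in paraview.benchmark.logbase behave as they do in recent ParaView releases (the Wavelet/Contour pipeline below is just a placeholder for real work):

    from paraview.simple import *
    from paraview.benchmark import logbase

    # Enlarge the internal timer-log buffers before doing any work so
    # that earlier entries are not dropped.
    logbase.maximize_logs()

    # Placeholder pipeline; substitute your own sources and filters.
    wavelet = Wavelet()
    contour = Contour(Input=wavelet)
    contour.ContourBy = ['POINTS', 'RTData']
    contour.Isosurfaces = [150.0]
    contour.UpdatePipeline()

    # Gather the timer logs from all processes, then dump them to stdout.
    logbase.get_logs()
    logbase.print_logs()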

Another option, perhaps more verbose, is as follows:

> mpirun -np 2 ./bin/pvbatch -l /tmp/logs.txt,TRACE -- /tmp/sample.py

This will generate /tmp/logs.txt.0, /tmp/logs.txt.1, …, which give more information about various inner workings. You can restrict the log to only print filter-execution info, for example, using something like the following:

> env PARAVIEW_LOG_EXECUTION_VERBOSITY=INFO \
>     mpirun -np 2 ./bin/pvbatch -l /tmp/logs.txt -- /tmp/sample.py

The available variables are documented in the ParaView documentation.

Thank you Dave and thank you Utkarsh! This sounds like exactly what I was looking for and I will investigate.

I have been playing with benchmark.logbase.print_logs, but I don’t understand how to use it. If I print the logs, then do more work, and then print again, I see the same log (identical data) as the first print (i.e., there isn’t any additional data). Any clues?
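
Roughly the sequence I mean, as a simplified sketch (the real pipeline is our application's; the Wavelet/Contour filters here are only placeholders):

    from paraview.simple import *
    from paraview.benchmark import logbase

    logbase.maximize_logs()

    wavelet = Wavelet()
    wavelet.UpdatePipeline()
    logbase.get_logs()
    logbase.print_logs()    # first print

    # ...do more work...
    contour = Contour(Input=wavelet)
    contour.ContourBy = ['POINTS', 'RTData']
    contour.Isosurfaces = [150.0]
    contour.UpdatePipeline()
    logbase.get_logs()
    logbase.print_logs()    # I expected new entries here, but I see the same data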

The trace is way more verbose, but because it's so verbose, it's hard to read.

Did you try limiting the output to a specific category using the environment variables? I tend to prefer those, since they give a good sense of the bottlenecks.

I was able to add the env var export to our launch setup. It's nice to see the output in the console, since you can match what you see with what you did. Thanks!