Release testing

We need better testing procedures for releases. This would replace our current, ambiguous instruction to test some of the features in the classroom tutorials. These tests are designed to catch issues that are not found in the nightly testing. Here is my proposal; I figured I would post it here before adding it as a task.

For every major and minor release (5.*.0, e.g. 5.13.0) and every RC1,

1 For every version of ParaView (a rough pvpython sketch of these checks follows the list),

  • Help/ Example Visualizations. Open every example.
  • disk_out_ref. Color by Temp. Volume render.
  • can.exo. Save state. Load state. Save screenshot (.png). Save animation (.avi).
  • Help/ About.
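
For reference, the scripted part of these checks could look roughly like the pvpython sketch below. The file names, the Temp array, and .avi writer availability are assumptions about the package under test; this is a sketch, not the official test.

    # Rough pvpython sketch of the section 1 checks (paths are placeholders).
    from paraview.simple import *

    # disk_out_ref: color by Temp and switch to volume rendering.
    disk = OpenDataFile('disk_out_ref.ex2')
    view = GetActiveViewOrCreate('RenderView')
    disp = Show(disk, view)
    ColorBy(disp, ('POINTS', 'Temp'))
    disp.SetRepresentationType('Volume')
    Render()

    # can: save/load state, then write a screenshot and an animation.
    can = OpenDataFile('can.ex2')
    Show(can, view)
    GetAnimationScene().UpdateAnimationUsingDataTimeSteps()
    SaveState('release_check.pvsm')
    LoadState('release_check.pvsm')           # proxies created above are stale after this
    view = GetActiveView()
    SaveScreenshot('release_check.png', view)
    SaveAnimation('release_check.avi', view)  # .avi support depends on the package

    # Rough stand-in for Help/ About: print the version being tested.
    print(GetParaViewVersion())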

2 For one version of ParaView (sketch after the list),

  • disk_out_ref. Plot Over Line.
  • disk_out_ref. Select a group of cells; Extract Selection.
  • disk_out_ref. Find Data dialog. Find cell 100.
  • Start trace. Open disk_out_ref. Clip. Create screenshot. Create animation. Stop trace. Save as macro. Reset session. Delete the screenshot and animation. Run the macro. Check the screenshots and animations.
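
The trace/macro steps are inherently GUI-driven, but the filter and Find Data parts could look roughly like this in pvpython. The line endpoints are arbitrary, and the PlotOverLine property names changed around ParaView 5.10, so treat this as a sketch only.

    # Rough pvpython sketch of the Plot Over Line / selection / Find Data checks.
    from paraview.simple import *

    disk = OpenDataFile('disk_out_ref.ex2')
    view = GetActiveViewOrCreate('RenderView')
    Show(disk, view)

    # Plot Over Line (older releases exposed the endpoints as line.Source.Point1/2).
    line = PlotOverLine(Input=disk)
    line.Point1 = [0.0, 0.0, -10.0]
    line.Point2 = [0.0, 0.0, 10.0]
    Show(line, CreateView('XYChartView'))

    # Find Data equivalent: query cell id 100, then extract the selection.
    SetActiveSource(disk)
    QuerySelect(QueryString='id == 100', FieldType='CELL')
    extracted = ExtractSelection(Input=disk)
    Show(extracted, view)
    Render()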

3 For one version of ParaView, running a remote server (sketch after the list).

  • Help/ About. Check both the client and the server information.
  • disk_out_ref. Memory Inspector.
  • disk_out_ref. Change opacity to 0.3.
  • can.exo. Save animation (.avi).
  • Settings/ Set the rendering threshold to 0 (to force remote rendering), then repeat:
  • disk_out_ref. Change opacity to 0.3.
  • can.exo. Save animation (.avi).
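
Help/ About and the Memory Inspector are Qt panels, but the client/server part could be exercised from pvpython along these lines. The server address, file paths, .avi writer, and the RenderViewSettings/RemoteRenderThreshold names are assumptions and may vary between versions.

    # Rough pvpython sketch of the client/server checks against a running pvserver.
    from paraview.simple import *

    Connect('localhost', 11111)          # pvserver started separately

    disk = OpenDataFile('disk_out_ref.ex2')
    view = GetActiveViewOrCreate('RenderView')
    disp = Show(disk, view)
    disp.Opacity = 0.3                   # translucent surface, local rendering
    Render()

    can = OpenDataFile('can.ex2')
    Show(can, view)
    GetAnimationScene().UpdateAnimationUsingDataTimeSteps()
    SaveAnimation('remote_check.avi', view)

    # Force remote rendering (the GUI setting "rendering threshold" == 0);
    # the settings proxy/property names may differ between ParaView versions.
    GetSettingsProxy('RenderViewSettings').RemoteRenderThreshold = 0
    Render()
    SaveAnimation('remote_check_remote.avi', view)

    Disconnect()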

4 For the Linux version of ParaView (a minimal script follows the list),

  • pvpython. Create a cone.
  • pvbatch. Run the trace created above, both without and with MPI.
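
A throwaway script such as the one below (the file name and the mpiexec launcher are assumptions) is enough to check that pvpython and pvbatch start and render; the trace saved in section 2 is what the checklist actually runs under pvbatch.

    # cone_check.py - minimal smoke test for the pvpython / pvbatch checks.
    # Run it, for example, as:
    #   pvpython cone_check.py
    #   pvbatch cone_check.py
    #   mpiexec -np 4 pvbatch cone_check.py
    from paraview.simple import *

    cone = Cone()
    Show(cone)
    Render()
    SaveScreenshot('cone_check.png')
    print('ParaView', GetParaViewVersion())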

For every major and minor release (not release candidates, as the links aren't in place yet),

1 For every version of ParaView

  • Help/ Click every entry in the Help menu.

For every point release (e.g. 5.13.1),

1 For every version of ParaView,

  • Help/ Example Visualizations. Open every example.

@mwestphal @dcthomp @cory.quammen @spyridon97 @johnt @utkarsh.ayachit @Kenneth_Moreland @boonth

@wascott A lot of these tests seem like they should be automated, image-based tests. What is it they test that we cannot automate? The kinds of things we cannot currently automate are things like testing that Qt is actually displaying the UI elements that the automations activate by function calls. If that is what you intend to cover (which would be good), we should document what testers should be looking for in particular (missing/disabled buttons, glitchy behavior for small variations, etc.).

Looking through this list, I would say that about half could be (and probably already are) automated. But many probably cannot be easily automated. For example, for anything that writes out a screenshot or animation, it is tricky to verify automatically that the output is as expected. When adding a macro, it's hard to check that the UI updates as expected. It's hard to check that clicking on help brings up the expected help.
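
(For the screenshot case, a pixel diff against a stored baseline is one common way to approach it, roughly what existing image-regression tests do; the sketch below uses made-up paths and a made-up threshold, just to show the idea.)

    # Sketch of a baseline-image comparison; paths and threshold are placeholders.
    import numpy as np
    from PIL import Image

    def images_close(test_png, baseline_png, max_mean_diff=2.0):
        test = np.asarray(Image.open(test_png).convert('RGB'), dtype=float)
        base = np.asarray(Image.open(baseline_png).convert('RGB'), dtype=float)
        if test.shape != base.shape:
            return False
        return float(np.abs(test - base).mean()) <= max_mean_diff

    assert images_close('release_check.png', 'baselines/release_check.png')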

Although a lot of these tasks are simple and already automated, I think the point is to verify that something doesn't go wonky with the application. I think @wascott's intention is to run through some basic operations with a human watching, as a final sanity check before release.

Ken nailed it. The point is to basically install every download for the release (the actual release file, not just a nightly build), check the Help menu links, check that it works with a remote server, make sure volume rendering and opacity work, make sure the trace recorder works correctly, make sure save and load state work, and make sure Python works (it silently "hides" with a remote server unless you run Help/About or run something Pythonic). If these can be automated, great. That's an implementation detail. But they should still be tested.

Did we miss specific issues in the past because these tests were not in place?
In any case, what can be automated should be automated, and certain tests should be put in the superbuild in order to test the actual released packages.

Manual tests should focus on what cannot be tested automatically.

Regarding the list above, I see almost only things that can be tested automatically, either in CI or in the superbuild CI. Some are probably already tested, but I did not check. (A sketch of what a packaged-binary smoke test could look like follows the list.)

  • Example Visualizations: Superbuild CI
  • VolumeRender: CI
  • Save/Load: CI
  • Help/About: CI
  • PlotOverLine: CI
  • Selection: CI
  • FindData: Worth testing in superbuild CI
  • Trace: Worth testing in superbuild CI
  • Client/Server: CI
  • MemoryInspector: CI
  • Remote/Local rendering with opacity: CI
  • pvpython/pvbatch: superbuild CI
  • Help: superbuild CI
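
As an illustration of the superbuild CI idea, a smoke test run with the packaged pvpython could be as small as the following; the script name, output path, and exit-code convention are assumptions, not an existing test.

    # smoke_test.py - sketch of a packaged-binary check a superbuild CI job could
    # run with the installed pvpython or pvbatch.
    import os
    import sys
    from paraview.simple import *

    # "Help/ About"-style check: print the version of the package under test.
    print('ParaView', GetParaViewVersion())

    # Minimal pipeline + screenshot to catch missing libraries in the package.
    sphere = Sphere()
    Show(sphere)
    Render()
    SaveScreenshot('smoke.png')

    # Fail loudly so CI notices.
    sys.exit(0 if os.path.exists('smoke.png') else 1)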

However, what is missing IMO is testing of the integration of the release into the desktop, especially on Windows and macOS, which come as installers. We can see in the "release" issue template that this is tested, though:

https://gitlab.kitware.com/paraview/paraview/-/blob/master/.gitlab/issue_templates/new-release.md?ref_type=heads#validating-binaries

As we can see, there are even specific steps to check some of the scenarios you are highlighting.

So unless I’m missing something, the way forward is to patch testing holes that may be present in the CI or superbuild CI, but not to add more manual testing.