ParaView texture buffer size (or driver) issue

Hello,

I installed ParaView with “sudo apt install paraview” on Ubuntu 20.04, running under WSL on Windows 11 Pro. It worked well for several weeks, but since today, when I try to load an OpenFOAM case, ParaView gives me the following error: “Attempt to use a texture buffer exceeding your hardware’s limits. This can happen when trying to color by cell data with a large dataset. Hardware limit is 65536 values while 161058 was requested.”

I assumed it was a driver issue, though I must say I’m fairly new to Ubuntu and ParaView. I checked my drivers without knowing exactly what I was doing. I ran the command:

glxinfo | egrep "OpenGL version|OpenGL vendor|OpenGL renderer"

The output is:

OpenGL vendor string: Microsoft Corporation
OpenGL renderer string: D3D12 (NVIDIA GeForce RTX 3050 Ti Laptop GPU)
OpenGL version string: 3.3 (Compatibility Profile) Mesa 22.0.3 - kisak-mesa PPA
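
As far as I understand, the “hardware limit is 65536 values” in the error corresponds to the driver’s GL_MAX_TEXTURE_BUFFER_SIZE limit, which can be checked directly with glxinfo. The sketch below greps a captured sample of that output so it runs anywhere; the sample values are illustrative, not from my machine. On a live system, pipe the real `glxinfo -l` instead:

```shell
# On a real system (with mesa-utils installed), run:
#   glxinfo -l | grep GL_MAX_TEXTURE_BUFFER_SIZE
# Here we grep a captured sample of such output (illustrative values):
sample='GL_MAX_TEXTURE_BUFFER_SIZE = 65536
GL_MAX_TEXTURE_SIZE = 16384'
echo "$sample" | grep GL_MAX_TEXTURE_BUFFER_SIZE
```

If the reported value matches the 65536 in the ParaView error, the limit is coming from the Mesa D3D12 driver rather than the NVIDIA card itself.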

The inxi -F command gives the following output about graphics:

Graphics:
Device-1: Microsoft driver: dxgkrnl v: N/A
Device-2: Microsoft driver: dxgkrnl v: N/A
Display: wayland server: Microsoft Corporation X.org driver:
gpu: dxgkrnl,dxgkrnl resolution: 1920x1080~60Hz
OpenGL: renderer: D3D12 (NVIDIA GeForce RTX 3050 Ti Laptop GPU)
v: 3.3 Mesa 22.0.3 - kisak-mesa PPA

Also, the following command gives no output:

sudo ubuntu-drivers devices

I also tried update, upgrade, and even upgrading to Ubuntu 22.04; the problem still stands. I know it is not an “unsupported graphics card” issue, since it was working fine until now.

I kindly request some urgent help to solve this issue.

I would like to revive this thread, because I am now getting the same error with ParaView 5.12-RC1 on Windows 11 and an NVIDIA A4500, for which I installed the latest driver.

I get this when turning on the “Surface With Edges” representation for a large dataset (about 200 million points). What causes this, and how can we avoid it?

You are getting the same error as above? That’s mysterious for two reasons:

1. Your NVIDIA driver seems not to be in use, as this was a hardware limit in the Mesa implementation.

2. The bug in Mesa was fixed prior to ParaView 5.12.0’s release.

Running through WSL could be causing resource size limitations like this. We have seen such issues when running CUDA programs with pinned memory on WSL. I consider this an experimental setup that we don’t officially support.

Thanks for the reply, Cory. I am running natively on Windows 11, installed through the ordinary installer (with MPI and Python, if I remember correctly). I have access to the OptiX pathtracer in the ray tracing menu; is that a good indication that my NVIDIA card is being picked up?