I am currently using ParaView 5.8.1 on a Windows 10 laptop. The computer is a Dell Precision 3551 with the following specifications:
Windows 10 Pro, 64-bit
32 GB RAM (2 × 16 GB), Intel Core i7
NVIDIA Quadro P620 (4 GB GDDR5)
The solution I want to open is a *.vtu file from the CONVERGE software.
It is approximately 5 GB in size and contains about 5 million cells.
When I click File > Open and tick the field I want to see (I only tick one field at a time), ParaView starts loading the data and then crashes at about 40% of the progress bar.
I have no idea what the problem is, please help!
Best regards
Maxime
Note: I could open this solution without any problem on my former company's supercomputing server, which means the solution itself is probably not corrupted (no NaN values or anything like that…)
Unfortunately, I cannot access that server anymore.
I can see memory usage through the Memory Inspector, which tells me that ParaView uses 3% of total memory when I open my field.
How can I monitor the memory?
I do not recall the exact version, but I'm pretty sure it was ParaView 5.x.
Also, I saw on the ParaView download page that some MPI versions are available. My version (the one that crashes on my laptop) is not an MPI one.
What is the difference between an MPI and a non-MPI version?
Re:
I launched ParaView from the command line of MobaXterm (a Linux-style terminal for Windows).
This gave me some feedback from ParaView while it was loading my data, namely: Segmentation fault
Hi Dan,
Unfortunately, that did not solve the problem.
After several tests, it appears that I encounter this problem only with large files…
Smaller files open with no problem.
Is it possible to limit the number of digits ParaView reads from the solution, so that less memory is needed?
Best regards
Maxime
If your data has several attribute arrays, you can read fewer of them to save memory; that is the only control available to reduce memory use when reading a data file. If the data is very large, reading it on an HPC machine is the solution (the data is then chunked, and each node reads its own chunk).
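To get a feel for how much each unticked array saves, here is a rough back-of-envelope estimate. This is only a sketch: it assumes the ~5 million cells mentioned above and double-precision storage, while CONVERGE may actually write single-precision or compressed data, so treat the numbers as orders of magnitude.

```python
# Back-of-envelope memory estimate for cell-data arrays of an
# unstructured grid. Assumptions (not from the original post's file):
# 5 million cells, double-precision (8-byte) values per component.
N_CELLS = 5_000_000
BYTES_PER_DOUBLE = 8

def array_mib(n_components=1, n_cells=N_CELLS):
    """Approximate in-memory size (MiB) of one cell-data array."""
    return n_cells * n_components * BYTES_PER_DOUBLE / 2**20

scalar_mib = array_mib()                # e.g. a pressure or temperature field
vector_mib = array_mib(n_components=3)  # e.g. a velocity field

print(f"one scalar array  : {scalar_mib:.1f} MiB")
print(f"one 3-vector array: {vector_mib:.1f} MiB")
# Every array left unticked in the reader's Properties panel saves
# roughly this much memory (on top of points and connectivity, which
# are always loaded).
```

In the GUI this corresponds to unticking arrays under Cell/Point Arrays in the reader's Properties panel before hitting Apply; in pvpython the same selection can be made through the reader's CellArrayStatus / PointArrayStatus properties.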