I noticed that the EnSight reader has trouble (it is slower and throws a "non-manifold triangulation" error) reading ANSYS Fluent polyhedral meshes.
I use ANSYS Fluent to run my CFD cases, and the meshes in these cases are created with the ANSYS Fluent Mesher polyhexcore method. After my cases are calculated, I export the results to the EnSight Case Gold format, which I then read into ParaView. I have had no real problems with this workflow for tet and hex meshes.
Recently, I've used the polyhexcore meshing scheme in Fluent, which adds polyhedral cells in any mesh transition region. When I read my EnSight Case Gold files into ParaView, it takes about an hour for a mesh with 20 million elements. In the past, a >20 million element tet mesh would take about 20 minutes. Is there anything in the EnSight Case Gold reader that could be limiting the speed of reading polyhedral meshes?
I tried to build test cases, one mesh with all tets and another with the polyhexcore meshing. But the small cases I built did not show any difference in loading times; I am thinking one would have to scale the mesh up to the 10-20 million element mark to see a difference. I am also seeing that some readers have the option to read in polyhedral meshes.
I am TOTALLY not an expert on ANSYS .case files. However, as I read your description, it almost sounded like you were running out of memory. On most computers, if you run out of memory you start swapping to and from disk, which is tens to hundreds of times slower. Big data on a single workstation would do this. Check the size of your swap file after loading your data.
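Not from the thread itself, but one quick way to check the suggestion above on a Linux machine is to look at swap usage in /proc/meminfo. A minimal sketch (the SwapTotal/SwapFree field names are the standard Linux ones; the sample text is made up so the snippet also runs on systems without /proc):

```python
# Sketch: report swap usage by parsing /proc/meminfo (Linux).
# Falls back to sample data on systems without /proc so the logic still runs.
import re

SAMPLE = """SwapTotal:       8388604 kB
SwapFree:        2097152 kB
"""

def swap_usage_kb(meminfo_text):
    """Return (used_kb, total_kb) parsed from /proc/meminfo-style text."""
    fields = dict(re.findall(r"^(\w+):\s+(\d+) kB", meminfo_text, re.M))
    total = int(fields.get("SwapTotal", 0))
    free = int(fields.get("SwapFree", 0))
    return total - free, total

try:
    with open("/proc/meminfo") as f:
        text = f.read()
except OSError:
    text = SAMPLE  # not on Linux; demonstrate with the sample data above

used, total = swap_usage_kb(text)
print(f"swap used: {used} kB of {total} kB")
```

If swap usage grows substantially while the dataset loads, memory pressure is a likely culprit.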
By "single" workstation, do you mean a single core? I should have clarified that I configured ParaView to work on multiple cores.
You are not on a supercomputer/cluster. You are running on a standalone computer/MacBook/Windows machine/etc.
Configuring ParaView to work on multiple cores can lead to significant slowdowns. Further, newer versions of ParaView are threaded and use the available cores automatically. That is why the option has been removed from newer versions.
What version of ParaView are you using?
My statement still applies. Are you running out of memory, and starting to thrash in cache?
- I am running ParaView on an HPC cluster
- I am parallelizing ParaView to run with 8 cores (`mpiexec -np 8 pvserver`)
- I have tested, and this slowdown happens on ParaView 5.10 and 5.11
I did some research, and I don't think the issue is that I'm running out of RAM. I'm not a system admin/architect, so it could well be that I'm running out of some other compute resource.
To note one more thing from my original post: I did not have this slowdown problem with a similarly sized tet mesh.
From your post, I can try two things:
- investigate whether I'm "starting to thrash in cache"
- run ParaView without the parallelization option
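One way to test the first item, not mentioned in the thread: on POSIX systems a process that is thrashing accumulates major page faults (memory accesses that had to be serviced from disk). A minimal sketch using only the Python standard library's `resource` module; the "load the dataset here" comment marks a hypothetical placeholder, not actual ParaView code:

```python
# Sketch: count major page faults incurred by this process, a symptom of
# swapping/thrashing. POSIX-only (the resource module is unavailable on Windows).
import resource

def major_faults():
    """Major page faults for this process so far (each required disk I/O)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_majflt

before = major_faults()
# ... load the dataset here (placeholder for the expensive operation) ...
after = major_faults()
print(f"major page faults during load: {after - before}")
```

A count that climbs into the thousands while the reader runs would point at memory pressure rather than the reader itself.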