Exodus reader fails in client/server mode

I have a user who can open Exodus files with his local desktop build, and with pvbatch/pvpython on a remote supercomputer using 1 process on 1 node. But if he runs pvserver and connects the desktop client to the pvserver (1 proc on 1 node), he gets an error:

Exodus Library Warning/Error: [vtkexodusII_ex_check_valid_file_id]
ERROR: In "ex_inquire_internal", the file id -1 was not
obtained via a call to "ex_open" or "ex_create".
It does not refer to a valid open exodus file.
Aborting to avoid file corruption or data loss or other potential problems.

I can't replicate the error from my own Mac or Linux client connecting to the same server. He is using the ParaView 5.5.2 Linux client binary download. It breaks for him with the ParaView "can" dataset as well as with his own Exodus files. I've inspected the local client environment without seeing anything obvious. I don't have an account or root on his Linux box. It sure looks like it's associated with his client setup… Any suggestions for how to troubleshoot? What changes the Exodus reader behavior? Thanks! --John.

Try setting LD_DEBUG=libs and then look at the generated output for any funky libraries. Maybe there are multiple HDF5 or netCDF libraries getting loaded, or something like that.
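For example, something along these lines (the pvserver path and log file name are just placeholders; adjust to your installation):

```shell
# Capture the dynamic loader's decisions to a log file.
# LD_DEBUG output goes to stderr and is a glibc feature.
LD_DEBUG=libs /path/to/pvserver --help 2> ld_debug.log || true

# Filter for the libraries of interest: any HDF5 or netCDF variant.
grep -E 'hdf5|netcdf' ld_debug.log
```

The "calling init" lines in that log show which file on disk each library was actually loaded from.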

I seem to remember this kind of error when you do module loads of packages in your .bashrc. Some rocket scientist was doing so with Cubit, which changed the default system C++ libraries. Maybe have him move his .bashrc or .cshrc file to the side and re-login?

I tried both suggestions, from @utkarsh.ayachit and @wascott, to no avail.

I cleaned the client side up as much as possible: removed old ParaView installations, emptied out LD_LIBRARY_PATH settings in any .*shrc files, and even moved .bashrc and .bash_profile out of the way. I even skipped the module load and ran from a local installation of ParaView. On the server side, I removed all my .*shrc files. I still have the problem.

I looked at the LD_DEBUG output on the client and server sides and did not see anything obvious. On the SERVER side, hdf5 libs are being loaded from /lib64 (I looked at the "calling init" lines), while libvtknetcdf* libs are being loaded from the ParaView library path. No raw libnetcdf libs are being loaded on the server, only libvtknetcdf* libs. On the CLIENT side, libhdf5* and libnetcdf* libs are being loaded from its ParaView installation lib directory. On the server side, after I do a module load of ParaView, LD_LIBRARY_PATH points to the gcc/6.4.0, openmpi/2.1.2, and python/2.7.3 library paths. On the client side, LD_LIBRARY_PATH is empty.
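For anyone comparing library resolution on the two sides later: ldd gives a quicker static view than wading through LD_DEBUG logs (the binary path below is a placeholder; point it at your actual pvserver or paraview executable):

```shell
# Show which HDF5/netCDF libraries the dynamic loader would resolve
# for a given binary, and from which directories.
ldd /path/to/pvserver | grep -Ei 'hdf5|netcdf'
```

Running this on both the client and server binaries makes it easy to spot a library being picked up from an unexpected system path like /lib64.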

The odd thing is that I can do a client/server load if I log in from two other clients (one Redhat Enterprise and one Ubuntu), so it does seem be a problem with THIS SPECIFIC CLIENT. However, I cannot for the life of me figure out how this client is influencing the behavior of the server and its loading of a netcdf file.

I can attach outputs if requested but I didn’t want to be THAT guy who sticks reams of logs into his posts.

Just to be anal, and I bet you did this above, but remove your configuration files. The easiest way (with 5.5.2 and later) is Edit / Reset to Default Settings. Be sure to save these files, so if this fixes things, we can look for what went wrong. Exit ParaView. Restart ParaView. Can you remotely open can.exo now?

No, I hadn't, and resetting to default settings fixed the problem (YAY!)!

Yesterday I searched high and low among my hidden files for a ParaView config file or directory and just couldn't locate it. So I continued painstakingly searching through the reams of output from LD_DEBUG=libs. Doh!

Thanks for your help!
Rao

Awesome it worked!

Cory, you want these configuration files, right?

Could you please send the config files to Kitware? They should be found in ~/.config/ParaView/ParaView5...ini.bak.