Runtime X error

Hello,

I am running into X errors that I have been unable to debug. I'd appreciate any pointers.

[paraview-test] [kavalurav@zeus020 cleanslate]$ srun -n1 paraview -v 5
( uptime  ) [ thread name/id ]                   file:line     v| 
(   0.127s) [paraview        ]             loguru.cpp:606      1| arguments: /home/kavalurav/Downloads/Tickets/spack/paraview/cleanslate/spack/var/spack/environments/paraview-test/.spack-env/view/bin/paraview -v 5
(   0.127s) [paraview        ]             loguru.cpp:609      1| Current dir: /home/kavalurav/Downloads/Tickets/spack/paraview/cleanslate
(   0.127s) [paraview        ]             loguru.cpp:611      1| stderr verbosity: 5
(   0.127s) [paraview        ]             loguru.cpp:612      1| -----------------------------------
XIO:  fatal IO error 0 (Success) on X server ":11"
      after 400 requests (400 known processed) with 0 events remaining.
(   0.643s) [paraview        ]             loguru.cpp:485      1| atexit
srun: error: zeus020: task 0: Exited with exit code 1

Welcome to the ParaView community, Adityakavalur!

Let me start by saying I have never seen paraview run with an srun command. ParaView isn’t run on a cluster, but on a Linux workstation (or login node). What are you trying to do?

Just type the following on the login node, does this work?

./paraview

Alan

Hi Alan,

Thanks for taking a look at the question. We use cgroups and provide a virtual desktop environment, so it's not a pure login node in the conventional sense. Without srun I get the same error, unfortunately.

[paraview-test] [kavalurav@zeus020 cleanslate]$ paraview
XIO:  fatal IO error 0 (Success) on X server ":11"
      after 399 requests (399 known processed) with 0 events remaining.
[paraview-test] [kavalurav@zeus020 cleanslate]$

I should probably add that I only see the above XIO errors for the Spack-built versions. The precompiled binaries provided on paraview.org for the same version (5.11.2) work right off the bat.
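In case it helps narrow this down, one thing I was planning to compare is what each binary actually resolves for X and OpenGL at runtime (the grep pattern below is just my guess at the relevant libraries, not something I have confirmed matters here):

# Check which X/OpenGL libraries the Spack-built client loads
ldd $(which paraview) | grep -Ei 'libX|libGL|libEGL'

Running the same check against the paraview.org binary and diffing the output might show where the two builds diverge.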

Unfortunately, this is over my head. It sounds to me like you have a good environment and a bad Spack build. Why are you trying to use your own builds as opposed to a Kitware build?

Thanks for taking a look at it. I'll try to see if I can figure out the root cause. We are trying to see the difference in performance between self-compiled and pre-built versions. I wonder if combining the GUI from the pre-compiled version with a headless build from Spack might be the only option.

Um … why? What performance difference? Do you think your builds are faster than Kitware’s?

Hi Alan,

Yes, there is always a good chance that building from source will give you better performance than precompiled binaries. For instance, when building by hand you can target the appropriate ISA extensions, such as AVX and SSE, for your CPU, whereas the precompiled binaries probably use a lower baseline so that they don't error out on older hardware. While some of this can be compiled with dynamic dispatch so that several instruction sets are supported at runtime, my understanding is that not all of it is handled this way. This also does not consider other things that can be unique to a machine, such as the parallel filesystem, burst buffer, high-speed network, etc.
Of course, all of this only matters on the server side and not the client side, which is why it might be OK to split the builds between the two like I mentioned above, although that is not ideal.
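To make the ISA point concrete, here is a rough sketch of what I mean (the exact compiler flags and the Spack target name are just examples, not the spec we actually used):

# Dump the ISA macros the compiler defines when targeting the local CPU
gcc -march=native -dM -E - </dev/null | grep -E 'AVX|SSE'

# A Spack spec can also pin the microarchitecture explicitly, e.g.
spack spec paraview@5.11.2 target=zen2

The first command shows which vector extensions a native build would assume; a generic prebuilt binary has to settle for whatever baseline its packager chose.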

Has this not been your experience?

As you say, the vast majority of CPU work ends up on the server side. So, with regard to paraview (the client side), if your goal is performance, I would just stick to the Kitware builds. ParaView is not trivial to build, as you are finding out. As an aside, I use the Superbuild for all of my builds, and generally speaking it does a great job.
With regard to the server side, you are running the wrong binary above. You want to srun pvserver, not paraview. See the excellent remote-server section of the ParaView User's Guide / Reference Manual. Note that, generally speaking and in my experience, large ParaView jobs are limited first by memory, then by I/O on/off disk, then by interprocess communication (especially IceT image compositing, which generates a lot of MPI traffic). You probably want to build your own pvserver to use a locally customized MPI. But again, Kitware binaries work fine.
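A typical remote setup looks roughly like this (the rank count is a placeholder; 11111 is pvserver's default port):

# On the cluster: launch the parallel server, built against the site MPI
srun -n 8 pvserver --server-port=11111

# On your workstation: start the Kitware-provided GUI and connect to the
# server node via File -> Connect (tunnel the port over SSH if needed)
paraview

That way the GUI stays a stock Kitware build while the heavy lifting happens in your own pvserver.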