I was testing out the OpenVR plugin in the final release of ParaView 5.8.0, and it seems to be more solid than under 5.7.0. One of the issues we often had under 5.7.0 and earlier was that all of the Qt panels and widgets would go black-on-black, making them very hard to use; this would generally happen after going into VR mode, or sometimes after bringing up the control panel in VR. This didn't happen in the 15-20 minutes of testing I did.
There are a couple of quirks, but overall I think it’s pretty solid and thus usable.
The two quirks I noticed are:
The grip buttons used for scaling worked about 80% of the time (not bad, but I'm not sure why it isn't 100%).
The in-VR panel always displays over the data, even when the data seems to be closer to the viewer. This is especially odd when flying because the panel then moves slower than the data, but appears to be closer to the viewer since it occludes the data.
One other thing I would like to see, which came up in a thread a week or two ago ([Running a script in the background? / VR black screen]), is to have the OpenVR plugin available on macOS as well, even if it's just a shared-object library that we download separately and add to our copy of ParaView.
Has anyone successfully compiled ParaView for macOS with the OpenVR plugin?
Actually, I'm willing to try it myself if anyone can give hints on the ParaView build process for macOS; I usually only do builds on Linux.
I do have one other feature request for the OpenVR plugin, but I’ll save that for another day.
Thanks Bill! The grip scaling is handled as a multitouch event. It looks at the first bit of movement to decide what is going on (scale versus translate, etc.). So it may help to start the scaling motion before pressing the grips, so that the movement isn't mistaken for a translate.
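For anyone curious, that kind of first-movement disambiguation could look something like the minimal sketch below; this is my own illustration with made-up names, not the plugin's actual code.

```python
import math

def classify_gesture(prev, curr):
    """Hypothetical sketch: decide scale vs. translate from the first bit of
    movement. `prev` and `curr` each hold the (x, y, z) positions of the two
    controllers at successive frames: ((x1, y1, z1), (x2, y2, z2))."""
    # Change in separation between the controllers suggests a pinch/spread (scale).
    separation_change = abs(
        math.dist(curr[0], curr[1]) - math.dist(prev[0], prev[1])
    )

    # Movement of the pair's midpoint suggests a common translation.
    def midpoint(pair):
        return tuple((a + b) / 2 for a, b in zip(pair[0], pair[1]))

    translation = math.dist(midpoint(prev), midpoint(curr))

    # Whichever component dominates at the start wins.
    return "scale" if separation_change > translation else "translate"
```

A rule like this would also be consistent with the ~80% success rate: if the grips are pressed before any spreading motion has started, the first frames of movement can look like a translate.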
The VR panel behavior is a design decision: I felt it was most important to be able to see the popup GUI regardless of where you were in the data. Yes, it does create an odd sensation when the panel is drawn on top of objects that are actually in front of it. Open to ideas, of course, but that was the thought behind it.
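For context, one common VTK technique for this kind of always-on-top drawing is to put the overlay in a second renderer layer, which is rendered after (and therefore over) the main scene. A minimal sketch of the idea, as my own illustration rather than necessarily how the plugin implements it:

```python
import vtk

# Sketch: a two-layer render window where layer 1 always draws over layer 0.
render_window = vtk.vtkRenderWindow()
render_window.SetNumberOfLayers(2)

scene_renderer = vtk.vtkRenderer()
scene_renderer.SetLayer(0)      # the normal 3D scene

overlay_renderer = vtk.vtkRenderer()
overlay_renderer.SetLayer(1)    # rendered after layer 0, so it occludes it
overlay_renderer.InteractiveOff()

render_window.AddRenderer(scene_renderer)
render_window.AddRenderer(overlay_renderer)
# Any actor added to overlay_renderer now appears on top of the data,
# regardless of its actual depth in the scene.
```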
Many thanks for this great plugin. It’s amazing.
Yes, I can confirm this observation, not only for scaling but also for translation. Grabbing with the trigger, on the other hand, works 100% of the time; however, it only applies to one dataset if multiple are loaded.
“It looks at the first bit of movement to decide what is going on (scale versus translate etc).”
I wonder if that is documented somewhere? I.e., what else is there (the “etc”)? I found out about these gestures only by trial and error, and I have not found a gesture for rotating the scene content, i.e. the camera/all datasets.
A feature we are missing is the ability to save the current VR camera/headset position (including the camera clipping-plane parameters) in order to reconstruct that view in ParaView without VR, e.g. for creating high-resolution screenshots. Ideally this would be bound to the trigger of the left controller, i.e. doing a save instead of loading a view. Alternatively, some means to take screenshots of the VR view while in VR; using the SteamVR options for this is very cumbersome.
If this is not yet available, how much work would it be to implement? @martink @mwestphal, can you give us some pointers on where to start and how best to integrate it into the current sources of the VR plugin?
We do not yet have a screenshot option in VR, but others have requested it. It would be nice to just take a picture of what you are seeing in VR from within VR. So it is on the list, but it may be a while. If you want to give it a shot, I would suggest adding a button to the Qt panel that, when pressed, closes the panel and then takes a screenshot, saving the image as a PNG. Might be a tad tricky, not sure.
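For the image-saving half of that suggestion, ParaView's Python API can already write the active render view to a PNG; the in-VR part (closing the panel and capturing the headset's view) is what would need new plugin code. A sketch, assuming pvpython with a pipeline already loaded and a hypothetical output file name:

```python
from paraview.simple import GetActiveView, SaveScreenshot

# Save the active render view to a PNG at a fixed resolution.
view = GetActiveView()
SaveScreenshot("vr_view.png", view, ImageResolution=[3840, 2160])
```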
Many thanks @martink for your reply. Would it be simpler to add functionality to save the headset/camera position when pressing the trigger of the left controller (which currently seems to load camera positions)? That way we could avoid the hassle of dumping the VR image or fiddling around with adding an extra button to the panel. We could then use those positions to take screenshots outside VR that correspond to what was seen in VR. The only discrepancy I see so far is the distance of the camera clipping plane, which I cannot find in the ParaView camera settings.
Can you give us a pointer to where the functionality of the trigger of the left controller is defined in the sources?
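The non-VR half of that workflow can already be sketched in pvpython: capture the active view's camera to a file, then restore it later for a high-resolution screenshot. This is my own illustration with a hypothetical file name; the headset pose itself would still have to be exported by the plugin:

```python
import json
from paraview.simple import GetActiveView, Render

view = GetActiveView()

# Capture the current camera parameters of the active render view.
state = {
    "position": list(view.CameraPosition),
    "focal_point": list(view.CameraFocalPoint),
    "view_up": list(view.CameraViewUp),
    "view_angle": view.CameraViewAngle,
}
with open("camera_state.json", "w") as f:
    json.dump(state, f)

# Later (e.g. outside VR), restore the view before taking a screenshot.
with open("camera_state.json") as f:
    state = json.load(f)
view.CameraPosition = state["position"]
view.CameraFocalPoint = state["focal_point"]
view.CameraViewUp = state["view_up"]
view.CameraViewAngle = state["view_angle"]
Render()
```

Note that the clipping range is normally recomputed automatically on render, which would explain why the clipping-plane distance does not show up among the camera settings.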
Hello everyone,
We recently bought two Oculus headsets to visualize our results in ParaView. They work great on Windows in single-user mode.
I am wondering, though, about the collaboration options within OpenVR. I have watched the video ParaView VR development - YouTube by @martink; these options are mentioned from 7:05 onward.
I am particularly confused about running the VR server. Could you share some more comments on how to achieve this potentially very powerful setup? Is it just connecting to a ParaView session on another machine? Or is it running a server somewhere with two connected clients? Or is it something to do with pvserver? I am quite familiar with that from remote visualization, but I have never tried to connect multiple clients to one server. Could you please share more hints on this?
Thank you very much!
Thanks again @martink for the detailed guide. We finally gave it a try, following exactly the steps above. It worked like a charm!
The only slight problem we saw was that the data appeared slightly shifted for the two users. As a result, my colleague could see my laser when I pointed at something, but in his view it was off by quite a lot, roughly the size of the whole model, and vice versa. What is causing this? Are we hitting the accuracy limits of the position sensors of the Oculus Rift S headsets? Or is some kind of calibration needed/possible so that we could make our positions with respect to the data more precise? Thank you very much; I am really excited by these great features of ParaView.
Dear @martink, after a long while I wanted to view some data in VR in collaboration mode again. However, I can no longer connect to vrserver.kitware.com. Is the scheme above still the way to set up collaboration in VR, or is there a new way I am not aware of? It used to work perfectly.
Thank you very much!
Hi Jakub,
Unfortunately the project that was supporting the VR collaboration server ended, and that server was taken offline. However, it would certainly be possible to run the server yourself if you are interested.
Basically,
Hi Aron,
Thank you very much for this advice! Running the server locally makes a lot of sense.
I had a few issues with the compilation, likely due to a different ZeroMQ version. However, with a few minor changes to the source code, I built the collaboration_server on my Linux machine successfully.
I have just tried running it and connecting to it from ParaView at our two workstations. It works great!
So, will the collaboration module make it into the production version of ParaView? Or is there another way to collaborate in VR in ParaView now? I like the way this collaboration module works.
Hey, I wanted to close the loop and let you know that there was further interest both externally and at Kitware, and the server at vrserver.kitware.com has been restored; it will likely be supported by another project. We are also going to work on better instructions, and possibly on including collaboration_server in the Linux distribution of ParaView. We'll keep the group posted.
Hey, thank you for the descriptions of the collaboration mode. I was wondering whether the VR server is still online? I have downloaded and run the binary as described, but I am not able to connect to the server.