I have two sets of E and H EM-field data, one file for every timestep.
I want to build a Poynting vector animation, and other analytical animations, based on both datasets.
How can I do this?
Technically, the exact time for the H-field is shifted by a negligible delay, but ParaView ignores time and loads both sets with exactly the same integer timestep indices.
The mesh grids of the two datasets seem to be the same, but a small shift is also possible.
I guess I first need to ensure the data resides in the same discrete space, e.g. by applying spatial interpolation. But again, how can I interpolate one set onto the XYZ coordinates extracted from the other set? And then take the cross product of the two fields?
See the Append Attributes filter. It takes the point or cell data from the two data sets and combines them in the output. Append Attributes assumes the mesh topology is the same, so as long as the points are exactly (or very nearly) co-located, this should work reasonably well. The output mesh topology will be the same as the first input.
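If it turns out the points are not exactly co-located, one option (an assumption on my part about your data) is to first run one data set through ParaView's Resample With Dataset filter, using the other data set as the destination mesh. Conceptually, resampling does something like the following pure-Python sketch (nearest-neighbor shown for brevity; ParaView actually interpolates within the source cells):

```python
# Conceptual sketch of resampling one point cloud's data onto another
# set of points (nearest-neighbor for brevity; ParaView's
# "Resample With Dataset" filter interpolates within cells instead).

def resample_nearest(src_points, src_values, dst_points):
    """For each destination point, take the value at the closest source point."""
    def dist2(a, b):
        # Squared Euclidean distance between two 3-D points.
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    out = []
    for p in dst_points:
        nearest = min(range(len(src_points)),
                      key=lambda i: dist2(src_points[i], p))
        out.append(src_values[nearest])
    return out

# Hypothetical H-field samples on a slightly shifted grid:
h_points = [(0.01, 0.0, 0.0), (1.01, 0.0, 0.0)]
h_values = [(0.0, 1.0, 0.0), (0.0, 2.0, 0.0)]
# The E-field grid we want the H values on:
e_points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
print(resample_nearest(h_points, h_values, e_points))
```

This is only meant to show the idea; in practice the filter handles the interpolation for you within the pipeline.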
Once your field arrays are combined, you can derive new quantities with the Calculator or Python Calculator filters.
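For the Poynting vector specifically, once the E and H arrays live on the same points, the Python Calculator expression would be something like `cross(E, H)` (the array names here are assumptions; use whatever your readers call them). As a sanity check of what that expression computes per point, here is a minimal pure-Python sketch:

```python
# Minimal sketch: Poynting vector S = E x H for one sample point.
# In ParaView's Python Calculator this is just an expression like
# cross(E, H) applied to whole point-data arrays (names assumed).

def cross(e, h):
    """Cross product of two 3-vectors given as (x, y, z) tuples."""
    ex, ey, ez = e
    hx, hy, hz = h
    return (ey * hz - ez * hy,
            ez * hx - ex * hz,
            ex * hy - ey * hx)

# Example: E along x, H along y -> S along z (direction of energy flow).
E = (1.0, 0.0, 0.0)
H = (0.0, 2.0, 0.0)
S = cross(E, H)
print(S)  # (0.0, 0.0, 2.0)
```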
Ah, thank you! Yesterday I got to this filter, but I was wondering what the difference is from Append Datasets, because common (linguistic) sense tells me the latter is appropriate, but experience tells me the opposite…
And I could not confirm it myself yet… It looks like the combined data space becomes so huge that the vector visualization filter crashes ParaView.
I don’t know what to do. I have no option other than ParaView, but it was not working with Intel CPU graphics, and it still keeps crashing with an NVIDIA 1030 board. Maybe that is because of a crappy Windows compiler? Are things better on Linux? Or is the 5.x branch bad?
An NVIDIA 1030 should be reasonably capable with its 2 GB of video memory. Maybe you could provide some more information about your data. How many points, for example? Can you provide a data set with instructions for reproducing the crash?
ParaView is quite capable of rendering millions of points and cells on laptops these days, and there is no appreciable difference between the Visual Studio compiler and gcc on Linux in terms of how much data can be rendered.
Hmm… new users can’t upload attachments, so here is a link on a free file-exchange service: https://dropmefiles.com/yaQ0v
This is just one timestep. I usually finish with 500–700 steps.
I use a Windows 7, i5-4590 based mini-desktop from Dell with 32 GB of RAM. I really don’t know why it is always crashing, while community people keep saying it is pretty stable. Maybe the problem is Windows? Before, I was using CPU graphics, and it was crashing at startup. With NVIDIA I can render at least small datasets with just one vector filter.
And of course I tried the same on other Win7-based PCs: a Nehalem-era laptop and a Sandy Bridge-E based workstation. All the same.
Now I am trying to render the dual dataset (both fields), as I said above.
Did you get the sample and the video? No one seems to believe me, and you already know how I feel.
At least some confirmation would help, or confirmation that the problem is in Windows and that I should maybe use Linux(?)
Thanks for sending the video and data. I was able to view the video and see the crash you are experiencing. I was also able to load the data file. However, while replicating the steps in your video, I did not experience a crash. This was on my MacBook Pro with ATI graphics acceleration. It wouldn’t hurt to try a Linux system. I can’t think of why it would crash on your Windows system…
One thing to check: make sure ParaView is in fact using your NVIDIA graphics card. To do this, go to the NVIDIA Control Panel and, under Manage 3D Settings, set your “Preferred graphics processor” to “High-performance NVIDIA processor”. To confirm it is being used by ParaView, click on the Help menu and select “About”. This will display some information about your graphics environment in a dialog box. Feel free to post a screenshot of this dialog.
Well, it takes time to install Linux before I can report back… BTW, which distro do you recommend for the “best experience” with ParaView? Debian? Some kind of SUSE? If not Linux, can you recommend FreeBSD?
In Windows I don’t see a “Preferred graphics processor” setting. Perhaps you mean something OpenGL-related? Here is my screenshot:
P.S.
While it does not display a crash message, I found two entries in the system log:
1)
Faulting application name: paraview.exe, version: 0.0.0.0, time stamp: 0x5b11e122
Faulting module name: MSVCR120.dll, version: 12.0.21005.1, time stamp: 0x524f83ff
Exception code: 0x40000015
Fault offset: 0x0000000000074a46
Faulting process id: 0xa1c
Faulting application start time: 0x01d4076eba6f8f07
Faulting application path: C:\Program Files\ParaView\bin\paraview.exe
Faulting module path: C:\Program Files\ParaView\bin\MSVCR120.dll
Report Id: 0f63a287-7362-11e8-88be-1866da0a996f
2)
Fault bucket 692551478, type 285030038
Event Name: APPCRASH
Response: Not available
Cab Id: 0
These files may be available here:
C:\Users\user\AppData\Local\Microsoft\Windows\WER\ReportArchive\AppCrash_paraview.exe_4abd6167dcdb8ffacfe217df62066228af9a38_0452946f
Analysis symbol:
Rechecking for solution: 0
Report Id: 0f63a287-7362-11e8-88be-1866da0a996f
And here is the above-mentioned WER crash report: Report.wer (43.5 KB)
There is a lot of interesting reading on Google about similar crashes: https://www.google.com/search?q=MSVCR120.dll+error+40000015
Looks like most cases involve ported/GNU software that is unaware of the differences in MSVC runtime library usage between versions.
Ah, I didn’t realize you would be installing it on the same workstation. Indeed, that can take some time. I don’t know if there is a “best” Linux for ParaView. I run Ubuntu 16.04; others run Red Hat, Arch, or CentOS. The important thing is that your OS enables the use of your NVIDIA graphics card.
By the way, thanks for providing the info about your graphics environment. ParaView is indeed using your NVIDIA card, which rules out some unknown problem with your onboard Intel graphics.
The good news is that I can reproduce your crash on Windows 10 using your data set. That means the problem is not with your system, so I wouldn’t bother installing Linux if I were you.
I have filed a bug report with steps to reproduce the issue.
Well, on Linux, at least with the software renderer, ParaView does not crash in the same situations where it always crashes on Windows.
I guess the ParaView team just skips beta-testing of Windows releases and dismisses Windows-related bug reports, “confirming” them on Linux as false. I have no other explanation. As I said, I tried several Windows machines, and this situation has continued for the last two years.
In this case, you may have found a bug in the type of data you use commonly that exposes a gap in our testing. This is not unusual, and I can empathize with your frustration. When time and resources allow, we will investigate this further.
Ohh… today my boss told me he is closing the project. I could not deliver the visualization in time… It wasn’t entirely futile, but it took a hell of a lot of time just for the simulation, and then the same again for rendering, until it collapsed. And today, when I got it working in VirtualBox with the software renderer, it is even slower… I hope we can come back to the problem later… if I am still there. Maybe the bug will be eradicated by then.
BTW, now I can feel how cool ParaView is when it is not crashing. Well, it is a bit slow; multithreading doesn’t work for some reason in my installation. But the visualization capabilities are so great!