I’m attempting to use the ParticleTracer filter, together with TemporalParticlesToPathlines, to trace the paths of particles injected into a vector field and show a short trail behind each particle as it moves over time. This works fine for the toy dataset I’m using. However, I want to apply it to a much larger dataset and render an animation. Since the path tracing will take much longer, I will need to stop and restart the computation (I’ll be running on a scheduled cluster, where I can’t simply run until it finishes). I doubt I’m the first person to want to do this, so I’m wondering whether there is a standard approach.
I tried saving a state file (in .pvsm format) so that I could reload it and pick up where I left off, but this doesn’t seem to work, for a couple of reasons. Based on the attached images from my toy dataset, it appears that while the state file records the current time step, it does not record the current particle positions in the ParticleTracer; instead, the particles are recomputed from the beginning. For these images I seeded the ParticleTracer with a PointSource, which generates a different set of random seed points when the state is reloaded. That part can be fixed by seeding with a SphereSource instead, but the trace is still recalculated from scratch on reload.
The other issue is that TemporalParticlesToPathlines does not preserve the state of the trails when the state file is saved; it, too, starts over from the beginning.
Is there a better way to achieve what I’m attempting here? Any advice would be appreciated.