Hardware Recommendations


I am putting together components for a workstation and I am at a crossroads on a couple of decisions. I primarily use ParaView to visualize OpenFOAM cases (reconstructed) that have been run on a remote cluster. These cases are rather large (~40-60 million grid points with 5 calculated values at each, over a couple million timesteps). From what I understand, ParaView uses all available CPU threads, but it can also use a GPU; in that case, ParaView primarily uses the CPU to load each timestep and the original model, while the GPU does the heavy lifting of rendering.

As such, I have tentatively chosen the Ryzen 9 7950X3D for the CPU and an RTX 4090 for the GPU. I am trying to decide whether it is worth it to instead go for a server CPU with more cores but lower clock speeds (about 2.5 GHz base instead of 4.5) and more RAM, get an A6000 instead of a 4090, or whether the components I have chosen will do the job just fine. I would appreciate any input!

For reference, the system I have chosen is: Ryzen 9 7950X3D, RTX 4090, 128 GB of 5200 MHz DDR5 RAM.

The GPU has little/no influence on how fast the data and filters are processed.
It helps with two things: how fluidly you can interact with the model, and ray tracing with OptiX (which requires an Nvidia GPU). Additionally, the GPU memory determines how big of a model you can have visible. For meshes with 50 million cells, 16 GB of VRAM should be plenty; an RTX 4090 has 24 GB.
So you could save some money here by getting a lower-tier GPU with enough VRAM. Or if you don’t need to use OptiX, you could also use an AMD GPU.

40-60 million grid points and 5 calculated values at each for a couple million timesteps

I hope you don’t intend to do processing for every time step in ParaView; that is about 1 GB of raw data per time step. Even if we are talking about a reasonable interval between saved time steps, a large and fast NVMe SSD should definitely be part of the package.
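A quick back-of-the-envelope check of that figure (a minimal sketch; it assumes single-precision scalars, and the 12 GB/s SSD read speed is just an example value — double precision would double all the sizes):

```python
# Rough size of one saved time step: 50 million grid points,
# 5 scalar values per point, 4 bytes per value (single precision).
points = 50_000_000
values_per_point = 5
bytes_per_value = 4  # float32; use 8 for double precision

step_gb = points * values_per_point * bytes_per_value / 1e9
print(f"per time step: {step_gb:.1f} GB")  # -> 1.0 GB

# Best-case sequential load time from a ~12 GB/s NVMe SSD:
ssd_gb_per_s = 12
print(f"load time: {step_gb / ssd_gb_per_s * 1000:.0f} ms")  # -> 83 ms
```

So even with a top-end drive, stepping through thousands of saved time steps means reading on the order of a terabyte from disk.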

7950X3D: If we want to stick to consumer-grade hardware for maximum single-threaded performance, I would go with the regular 7950X. Having the extra L3 cache on only one of the two dies probably won’t help much here, and the reduced core frequency will hurt more.

Thank you for the reply!

Okay, so for my use case, would it be worth it to opt for a server CPU with more than 32 threads and pair that with, for example, a 4080?

Also, I have a 4 TB NVMe picked out with 12 GB/s read speed - not sure if this will suffice. But if I go with the upgraded CPU, I am also going to upgrade the 128 GB of RAM to reduce the bottleneck at the SSD.

Only if you know that whatever you are doing with ParaView scales exceptionally well with thread count.
You will need to test that first before buying the hardware.

I recently did a quick test with the ParaView pipeline I typically use: I got a 2.1x speedup on 4 threads and a 3.2x speedup on 16 threads.
So if your pipelines scale similarly to mine, a desktop CPU with a medium core count but maximum per-core performance would be much better than a server CPU with a high core count.
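Those two measurements can be plugged into Amdahl's law to estimate how much of such a pipeline actually runs in parallel, and what more cores would buy (a rough sketch; it assumes the simple Amdahl model, i.e. a fixed serial fraction and perfect scaling of the rest):

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
# where p is the parallel fraction of the work and n the thread count.
def speedup(p, n):
    return 1 / ((1 - p) + p / n)

# Solve for p from the observed 2.1x speedup on 4 threads:
# 1/2.1 = (1 - p) + p/4  =>  p = (1 - 1/2.1) / (1 - 1/4)
p = (1 - 1 / 2.1) / (1 - 1 / 4)
print(f"parallel fraction: {p:.2f}")  # -> 0.70

# Predicted speedups at higher thread counts:
for n in (16, 32, 64):
    print(f"{n} threads: {speedup(p, n):.1f}x")
# -> 16 threads: 2.9x (close to the observed 3.2x)
# -> 32 threads: 3.1x
# -> 64 threads: 3.2x
```

With roughly 30% of the work serial, even quadrupling the core count barely moves the needle, which is why per-core performance wins here.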

Apart from VERY specific use cases, increasing the amount of system memory beyond a certain point doesn’t help at all. That point is when all the data currently needed fits into physical memory. With 50 million cells and 5 values per cell, 128 GB of RAM is more than enough.
One of the cases where more might be better is holding results for more than one time step in memory, because you have to compute quantities that depend on multiple time steps. But realistically: if you need to do that for large cases and many time steps, you are better off doing it outside of ParaView :wink:
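For a sense of scale, here is a minimal sketch of how many time steps would fit in memory at once (assuming 50 million points, 5 values per point, double precision once loaded, and an arbitrary 25% headroom for the OS, ParaView itself, and filter output copies):

```python
points = 50_000_000
values_per_point = 5
bytes_per_value = 8  # double precision in memory

step_gb = points * values_per_point * bytes_per_value / 1e9
print(f"per time step in memory: {step_gb:.0f} GB")  # -> 2 GB

ram_gb = 128
usable_gb = ram_gb * 0.75  # assumed headroom factor
print(f"~{int(usable_gb // step_gb)} time steps fit in {ram_gb} GB")  # -> ~48
```

That is plenty for any multi-step filter, but nowhere near "a couple million timesteps" — which is the point: bulk temporal processing belongs on the cluster, not in the visualization session.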

Thank you very much! I believe I will stick with the consumer chip because of the clock speed (and 32 threads is nothing to sneeze at either). I have not specifically checked my use case for parallel scaling, but for my budget, and with your input, I think the 7950X is the right choice. Thank you again!