I have searched the forum and could not find anything addressing this in detail; I hope I haven’t overlooked anything. If I have, please excuse me.
Here goes my question:
Consider a 1D problem, say water flow down a river. To simulate it, I am using a numerical model that also discretizes the vertical coordinate (into so-called levels). This gives me information about the flow parameters along both the cross-section and the depth. The water surface (the air/water interface) is free to fluctuate up and down, depending on some hydraulic constraints.
The model writes all results into one single MATLAB file (*.mat), containing all modelled variables, all time steps and all vertical levels (so, strictly speaking, it is a 2D problem, as there are two dimensions: x and z). I am pre-processing those data in Python and converting them to netCDF to be visualized in ParaView.
For each given quantity (q), and for each layer (k), I find myself having two corresponding layer interfaces, upper and lower, between which that q is valid. In other words, q is defined over k layers, but I always have k+1 layer interfaces.
An example of q would be the u-component of the flow velocity, at layer 12, for time 12 seconds and 500 milliseconds.
The structure of my data looks something like:
----------------------- zk_0
        qk_1
----------------------- zk_1
        qk_2
----------------------- zk_2
        ...
        qk_end
----------------------- zk_end
Now, the way I see it, the above means that I have the value defined at the cell centre (or cells, along x), and the cell’s vertical extent is bounded by an upper and a lower cell interface (or “layer interface”).
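To make the staggering concrete, here is a minimal NumPy sketch of how I picture the arrangement (the array names and sizes are just illustrative assumptions, not my actual code): the interfaces carry k+1 coordinates, the quantity carries k values, and the cell-centre elevations are the interface midpoints.

```python
import numpy as np

# k = 4 layers, so k+1 = 5 layer-interface elevations (illustrative values)
z_ifc = np.array([0.0, 1.0, 2.5, 4.0, 5.0])   # interface elevations, shape (k+1,)
q = np.array([0.1, 0.3, 0.2, 0.4])            # layer quantity (e.g. u-velocity), shape (k,)

# Cell-centre elevation of each layer: midway between its two interfaces
z_centre = 0.5 * (z_ifc[:-1] + z_ifc[1:])     # shape (k,), lines up with q
```

This makes the mismatch explicit: q naturally pairs with the k midpoints, not with the k+1 interfaces.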
My question is how to “bake” my data so that they are defined as cell data (instead of at the vertices/points).
What I did so far, is:
- for all layers, except bottom and top, define zTrue = (zHigh + zLow)/2 (the actual vertical coordinate is sandwiched midway between the upper and lower interface) and assign that vertical coordinate to my quantity q.
- for the top and bottom, duplicate the q value of, respectively, the layer closest to the highest and lowest zTrue, and assign it to the top and bottom interface coordinates, to fill the gap left empty.
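The two steps above might look like this in NumPy (a sketch of my current workaround; array names are assumptions):

```python
import numpy as np

z_ifc = np.array([0.0, 1.0, 2.5, 4.0, 5.0])   # k+1 layer interfaces
q = np.array([0.1, 0.3, 0.2, 0.4])            # k layer values

# Step 1: zTrue, the cell-centre coordinate between each pair of interfaces
z_true = 0.5 * (z_ifc[:-1] + z_ifc[1:])       # shape (k,)

# Step 2: pad with the outermost q values at the bottom and top interfaces,
# so q and z line up again -- at the cost of two artificial layers (k+2 total)
z_pad = np.concatenate(([z_ifc[0]], z_true, [z_ifc[-1]]))
q_pad = np.concatenate(([q[0]], q, [q[-1]]))
```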
In essence, this means that I am adding two artificial layers, one at the top and one at the bottom, to pad my data and fill the gap arising from the fact that I have more layer interfaces than layers. By doing so, however, I end up with an increased number of layers (2 more than the model output). I would prefer to avoid that and, instead, persuade ParaView to read my data as cell-centred values and not as values at the cell edges. Providing ParaView with the vertical positions of my interfaces should let the software treat my data as cell-centred (since it then knows between which levels my data are valid).
I am sorry for the lengthy post.
I hope someone can enlighten me and resolve my doubt.