Parametrization of Large-Scale Condensation #82
Conversation
src/humidity.jl
Qsat = zeros(nlon, nlat) # Saturation specific humidity

@inbounds for j = 1:nlat, i = 1:nlon
We also need to discuss whether it makes sense to parallelize the parametrizations in the horizontal or in the vertical. Technically, all parametrizations should act only on a given atmospheric column, which would motivate a horizontal domain decomposition. However, the arrays are currently laid out in memory as lon, lat, (leapfrog), and then vert, so that the dynamical core can act as independently as possible across the vertical. If we want to parallelize single-column style, we may instead want a function that takes in a vector, which we could obtain from `view(variable, i, j, :)`, rather than baking in the lat-lon loop.
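As a rough sketch of the single-column idea (all names here are illustrative, not the package's actual API, and the humidity formula is a placeholder):

```julia
# Hypothetical sketch: a saturation-humidity kernel acting on one column,
# fed with views so no copies are made. Names and formula are illustrative.
function saturation_specific_humidity_column!(qsat_col::AbstractVector,
                                              temp_col::AbstractVector)
    for k in eachindex(qsat_col, temp_col)
        # placeholder Clausius-Clapeyron-like formula, not the real one
        qsat_col[k] = 0.622 * 6.108 * exp(17.27 * (temp_col[k] - 273.15) /
                                          (temp_col[k] - 35.86)) / 1000
    end
    return qsat_col
end

nlon, nlat, nlev = 4, 3, 8
temp = fill(288.0, nlon, nlat, nlev)    # absolute temperature [K]
qsat = zeros(nlon, nlat, nlev)
for j in 1:nlat, i in 1:nlon            # the lat-lon loop stays outside
    saturation_specific_humidity_column!(view(qsat, i, j, :),
                                         view(temp, i, j, :))
end
```

The kernel itself never sees the horizontal dimensions, so the same function would work regardless of how the horizontal loop is eventually decomposed.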
src/humidity.jl
Compute the saturation specific humidity.
"""
function get_saturation_specific_humidity(
    T::Array{NF,3}, # Absolute temperature
My idea was to move every vertical loop to src/distributed_vertical.jl, so that we have one place where, in the end, `@distributed` will distribute the vertical layers across processors for parallelisation. However, the question is whether we actually want to do that, or whether the parametrizations should be distributed in the horizontal. In the latter case I'd suggest creating a src/distributed_horizontal.jl as a central place for the horizontal domain decomposition. Tricky!
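A minimal sketch of the vertical version, assuming hypothetical names (with a single process `@distributed` simply runs locally; for real multi-process parallelism the output would need to be a `SharedArray` or `DArray` so workers' writes are visible to the main process):

```julia
using Distributed  # stdlib providing @distributed

# Hypothetical sketch of a vertical domain decomposition: each loop
# iteration handles one vertical layer independently.
function scale_layers!(out::AbstractArray{<:Real,3}, inp::AbstractArray{<:Real,3})
    nlev = size(inp, 3)
    @sync @distributed for k in 1:nlev
        out[:, :, k] .= 2 .* inp[:, :, k]   # placeholder per-layer work
    end
    return out
end

out = scale_layers!(zeros(2, 2, 3), ones(2, 2, 3))
```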
src/diagnostic_variables.jl
cloud_top ::Array{Int,2} # Cloud-top
precip_large_scale ::Array{NF,2} # Large-scale precipitation
humid_tend ::Array{NF,3} # Humidity tendencies due to physics
temp_tend ::Array{NF,3} # Temperature tendencies due to physics
We have both `humid_tend` and `temp_tend` also defined in `Tendencies`. Would it be possible to use those instead? I am not quite sure whether it'll be enough to have one tendency per prognostic variable (we probably have to split things into dynamics and physics, for example, or we may want to have the tendencies from different terms independently and then a step where we add them all up...)
So the `humid_tend` and `temp_tend` which are already defined in `Tendencies` are in spectral space.

My original preference for the parameterisations was to simply add up the tendencies as we went along, rather than allocating a new array for each tendency and parameterisation and summing them at the end. However, the problem I ran into here (which may be repeated for other parameterisations) is that the humidity tendency due to LSC gets re-used to calculate precipitation. So you have to allocate it either way.

For now, I think I would just pre-allocate a new array for each tendency and parameterisation (`humid_tend_lsc`, `temp_tend_lsc`, ...) and then at the end we can optimise, once we have a clearer picture of how the parameterisations interact?
> However, the problem I ran into here (which may be repeated for other parameterisations) is that the humidity tendency due to LSC gets re-used to calculate precipitation. So you have to allocate it either way.
I think this is exactly where we have to start thinking about working in column vectors rather than horizontal fields. That would solve many problems. I reckon it might actually be best to start this restructuring very early, from vert x lon x lat to lon x lat x vert (outer to inner loop). Because then there's no problem allocating a bunch of vectors for intermediate calculations that get reused for the next horizontal grid point. That saves memory and gives us more flexibility to reuse stuff.
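To illustrate the reuse idea (names and the arithmetic are placeholders): a single scratch column is allocated once and overwritten for every horizontal grid point, instead of allocating a full lon x lat x vert intermediate field per parametrization.

```julia
# Hypothetical sketch: one scratch column reused across all grid points.
nlon, nlat, nlev = 4, 3, 8
humid_grid = fill(0.01, nlon, nlat, nlev)
humid_tend = zeros(nlon, nlat, nlev)
scratch = zeros(nlev)                      # allocated once, reused per column
for j in 1:nlat, i in 1:nlon
    humid_col = view(humid_grid, i, j, :)
    scratch .= 0.5 .* humid_col            # placeholder intermediate result
    view(humid_tend, i, j, :) .= scratch   # reuse it, e.g. for precipitation
end
```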
Yeah that's a great point, no problem allocating an 8-element vector.
@unpack temp_grid, humid_grid, pres_surf_grid = Diag.grid_variables
@unpack temp_tend, humid_tend, sat_spec_humidity, sat_vap_pressure, cloud_top, precip_large_scale = Diag.parametrization_variables

pres = exp.(pres_surf_grid) # Normalised surface pressure - TODO(alistair): check pressure units throughout
Ideally we preallocate `pres` somewhere else and reuse it here. As pressure is always surface pressure, maybe we should also call these `pres_log_grid` and `pres_grid` to better distinguish between the logarithm of surface pressure and the non-log version?
I would be in favour of renaming `pres_surf_grid` to `pres_log_grid`, because I found it hard to figure out what to call `exp.(pres_surf_grid)` 😅

Question about units: is pressure always hPa?
Also, if we preallocate `pres_grid` somewhere else, then we have to make sure to update it every time `pres_log_grid` changes. Where exactly should we do that? So we don't accidentally take the exponential twice, or something.
> Question about units: is pressure always hPa?

I don't know. For 16 bits we'll probably introduce some scaling anyway, so I wouldn't mind sticking to hPa for the time being.

> Also, if we preallocate pres_grid somewhere else, then we have to make sure to update it every time pres_log_grid changes. Where exactly should we do that?

Given that pressure is a prognostic variable, I'd assume that we calculate the transform to grid space at the beginning of each time step, then `exp` for the parametrizations. But maybe that should be done on a per-column basis instead, as I don't think the non-log pressure is used in the dynamics.
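As a sketch of the once-per-time-step variant (function and variable names follow the renaming discussed above but are hypothetical): a single update function, called right after the spectral-to-grid transform, gives one place where `exp` is applied and removes the risk of exponentiating twice.

```julia
# Hypothetical sketch: recompute non-log surface pressure exactly once per
# time step, right after the spectral -> grid transform.
function update_surface_pressure!(pres_grid::AbstractMatrix,
                                  pres_log_grid::AbstractMatrix)
    pres_grid .= exp.(pres_log_grid)   # units assumed hPa for the time being
    return pres_grid
end

pres_log_grid = zeros(4, 3)            # log(1 hPa) = 0 everywhere
pres_grid = similar(pres_log_grid)
update_surface_pressure!(pres_grid, pres_log_grid)
```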
Sorry @white-alistair, with the merging of #87 there's now a conflict. In
@milankl another thing which I forgot to mention: you can see that I included references to the formulas in the original Speedy documentation, where applicable. I would expect to remove these in the future, probably to be replaced by more up-to-date documentation.
Yeah, just stick to the existing references for now; once we've documented our speedy version we can update those references in the comments. In the end I expect us to do things somewhat differently to Fortran speedy, so there should be a small section in the documentation describing how we did it. Hence we also have the freedom to rename variables as we go along for consistency/clarity etc, as we'll document it again anyway.