
Does spreading sea ice freshwater flux over multiple layers reduce sensitivity to resolution? #38

adele-morrison opened this issue Apr 5, 2024 · 49 comments


@adele-morrison
Collaborator

adele-morrison commented Apr 5, 2024

@pedrocol has been finding strong sensitivity in his basal freshwater runs to the depth range from which the freshwater flux for sea ice formation is extracted.

Check out this paper, which experiments with applying the sea ice brine rejection in NEMO over a fixed depth instead of just the top layer, Barthélemy et al. 2015: https://www.sciencedirect.com/science/article/pii/S1463500314001966

I think this could be worth testing. @pedrocol has already implemented this scheme to distribute the sea ice freshwater fluxes over depth, so we could rerun the control in ACCESS-OM2-01 with this vertical distribution, and then repeat the 1 m vs 5 m top-layer vertical thickness experiments. If the DSW is no longer sensitive to the top layer thickness when sea ice fluxes are spread over depth, then this could be a nice recommended solution for future model configs to avoid the spurious behaviour we've been seeing.

@adele-morrison
Collaborator Author

This paper (Sidorenko et al. 2018) is also relevant.

@pedrocol

After discussion we decided to distribute the wfiform array uniformly over the first 5 meters of depth.
Should we limit this in latitude? In my experiments I limit the extent to 60°S, which is where the iceberg array is inserted.

@pedrocol

I think I've seen some Arctic plots, so I didn't limit this to 60°S. The executable is

/home/552/pc5520/access-om2/bin/brine5m_wilton/fms_ACCESS-OM_97e3429-modified_libaccessom2_1bb8904-modified.x

You'll need to include these two lines in your diag_table file for the 2D and 3D brine rejection arrays:

"ocean_model","brine_fwflx","brine_fwflx", "ocean_monthly_3d_basal","all",.true.,"none",2
"ocean_model","brine_fwflx2d","brine_fwflx2d", "ocean_monthly_2d_basal","all",.true.,"none",2

@willaguiar
Owner

That was fast - thanks! I believe the important regions would be along the coast, where DSW is formed, so 60°S would be alright (I imagine FWF at the northern boundary of the Southern Ocean sea ice would have very low impact here).

@adele-morrison
Collaborator Author

adele-morrison commented Apr 19, 2024 via email

@pedrocol

Give it a try for a few months; if you see any differences at the Equator (far from sea-ice formation regions), we should make sure we are using the exact same code version.

@willaguiar
Owner

willaguiar commented Apr 19, 2024

@pedrocol, the model has been looking for the file INPUT/ocean_month_output_GPC010_calving_365_final.nc. I imagine that if this is a prescribed FWF from calving (vertically/horizontally distributed) then we don't want these fluxes in our simulations, so we can match our previous OM2 case?

@pedrocol

They were set to zero, but I forgot to comment out the import of the files; it should work now. Still the same filename for the exe file:

/home/552/pc5520/access-om2/bin/brine5m_wilton/fms_ACCESS-OM_97e3429-modified_libaccessom2_1bb8904-modified.x

@willaguiar
Owner

Thanks! Now it's looking for INPUT/basal_fw.nc (you might have to comment that one out too :) )

@pedrocol

Sorry about this. I double-checked the logicals and they work fine now; you just need to add the following lines to your ocean namelist:
&ocean_basal_tracer_nml
use_basal_module = .false.
use_icb_module = .false.
use_brine_module = .true.
/
It should work now

@willaguiar
Owner

No worry - and thanks!

@pedrocol

I just tested the new module and it is fine now; there was still a problem in reading the namelist in get_ocean_sbc.

[Figure check_brine: single time step diagnostics for Control (left) and Control+brine (right)]

As we can see, Control pme (-2.6e11) = pme (8.86e11) + brine (-1.14e12), which means that the part of the code that splits pme and fills the brine array works correctly.
Also, in Control+brine, Mass from sources = brine, which means that the brine module works fine and no other sources (like basal or icb) are accounted for.
Note as well that basal and icb = 0.

Sanity checks (a rough verification sketch follows the list):

  • Check that wfiform = brine2d
  • Check that sum in the vertical of brine3d = brine2d
  • Check that brine3d is uniform in the first 5 meters (plot a vertical profile of brine3d for a single point)
  • Last, check that the sst in the control+brine run equals the control run in regions far from sea-ice formation (check that the code version is the same)
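
For reference, here is a rough xarray sketch of how checks 2 and 3 could be scripted (file names and point indices are hypothetical; the variable names come from the diag_table lines above, and check 1 additionally needs wfiform from the standard output):

import xarray as xr

# Hypothetical file names; the fields are the diag_table entries above.
ds2d = xr.open_dataset("ocean_monthly_2d_basal.nc")
ds3d = xr.open_dataset("ocean_monthly_3d_basal.nc")

# Cast to double before reducing, since the fluxes are O(1e-4) or smaller.
brine2d = ds2d["brine_fwflx2d"].astype("float64")
brine3d = ds3d["brine_fwflx"].astype("float64")

# Check 2: the vertical sum of brine3d should recover brine2d.
resid = brine3d.sum("st_ocean") - brine2d
print("max |sum_z(brine3d) - brine2d|:", float(abs(resid).max()))

# Check 3: the profile at a single (hypothetical) point should be uniform
# per metre of layer thickness over the top levels.
print(brine3d.isel(time=0, yt_ocean=300, xt_ocean=400).values)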

@pedrocol

I tested this for 1 month: sum(brine2d) = sum(wfiform) = -177.17 kg m^-2 s^-1.

However, sum(basal3d) = 183.2 kg m^-2 s^-1 (a 3% difference). The reason for this may be that we are dealing with very small numbers (of order 1e-4) and that the output precision is single precision (8 digits), or that Python handles single precision by default.
Moreover, most of the differences seem to be coming from places where numbers are smaller:

[Image diffs: map of the brine2d/brine3d differences]

Then I checked brine3d/dz for a specific point; the profile is linear and it only spans the first 3 points.

[Image profile: vertical profile of brine3d/dz at a single point]

I checked the brine3d ncfile, and it only has values for the first 3 vertical levels.

So, I think this is validated and ready to run. @willaguiar, you just have to check SST or SSS far from sea-ice formation regions to verify that the code version is the same.

@willaguiar
Owner

willaguiar commented Apr 26, 2024

I tested this for 1 month, sum(brine2d)=sum(wfiform)=-177.17 kg m2/s...
So, I think this is validated and ready to run, @willaguiar you just have to check sst or sss far from sea-ice formation regions to check that the code version is the same.

Hi Pedro, thanks for checking that. I did some analysis after running the model for a few months, and it doesn't seem to me that the difference between brine2d and brine3d drifts over time, so the precision explanation seems reasonable to me.

One thing I notice though is that the brine is distributed over the first 3 levels, which makes it reach down to 3.6 m (interface depth) instead of the 5.06 m that we wanted. So ideally it would be distributed over the upper 4 levels. (Sorry if I wasn't clear that the limit was 5.06 m rather than a hard 5 m.) Could we change that?

@pedrocol

Cool that everything works as expected.

I changed the code so it distributes brine over the first 4 levels instead of setting a depth threshold; the executable is in

/home/552/pc5520/access-om2/bin/brine_kmax4_wilton

@pedrocol

[Screenshot from 2024-04-28]

@willaguiar
Owner

Thanks again @pedrocol

I have now run the brine-redistributed run for over a year. For that run, the difference between brine2d and brine3d is very small from Jan-Jun, but it increases a lot between Jun-Dec. In December especially the difference is 100%: brine2d has no fluxes while brine3d does. Any idea why that is? The difference seems to be bigger away from the shelf.

Plots below are for the sum of brine fluxes (plots are similar if I multiply by the surface area to get kg/s).
[Figure FWF_brine: sums of the brine fluxes]

Could that be linked to the non-distributed sea ice melting, or is it something related to the way brine2d/brine3d are defined?

@pedrocol

Hi Wilton, can you please point me to where the outputs are? The Antarctic plot is brine3d - brine2d, right?

@pedrocol

By the way, why do you compute the mean and not the sum of the fluxes?

@willaguiar
Owner

Hi, yes... they should be in /scratch/v45/wf4500/simulations/RYF_1mtop_FWFdist/wf4500/access-om2/archive/ryf_FWFdist

And sorry, wrong title: they are actually the sum, not the mean (only the lower plot, along time, is a mean).

@pedrocol

I checked the outputs a bit. I guess you are evaluating this in the Antarctic domain and not in the whole domain, and this is the reason why you have brine = 0 in summer. I took a close look at the files with ncview; it is really easy to find brine values of 1e-08, even 1e-10. This suggests the data is saved with double precision, but if you check the data in Python, for example:

[Screenshot from 2024-04-30: Python session inspecting the data]

you can see that the data is handled with single precision.

This means that the sum is not performed correctly. The reason why you have larger differences in winter is that brine covers a larger area, and therefore there are more small numbers. I think you should try double precision calculations in Python.
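
For example, something like this (a minimal sketch, assuming xarray and the file/variable names used above):

import xarray as xr

ds = xr.open_dataset("ocean_monthly_2d_basal.nc")  # assumed filename

# xarray keeps the stored float32 dtype, so a plain .sum() accumulates in
# single precision; casting first makes the accumulation run in double.
total32 = ds["brine_fwflx2d"].sum()
total64 = ds["brine_fwflx2d"].astype("float64").sum()
print(float(total32), float(total64))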

@adele-morrison
Collaborator Author

Is this brine2d/brine3d variable the equivalent of wfiform? i.e. we're getting more sea ice formation in the new run?

@pedrocol

brine2d = wfiform; brine3d is brine2d distributed over depth. brine2d values are already very small, and therefore brine3d values are even smaller. I don't think there is a problem with the new module coded in MOM5; I think it is just a precision issue.

@adele-morrison
Collaborator Author

@willaguiar what does the change in SWMT over the shelf look like? We were expecting less dense water formation when we spread the formation fluxes over the top 5m, but this seems to be making more sea ice. Does wfimelt also change though in an opposing way?

@willaguiar
Owner

@adele-morrison I haven't looked at the difference between the runs yet, though, as I found this difference while validating the FWFdist run (between the 2D and 3D variables of the same run).

@pedrocol, I tried converting the output to double precision after importing it in Python, so the calculations are carried out with higher precision. I still get the same plots, though. I checked the ncfile, and the brine variables are actually saved as single, not double:

"ocean_model","brine_fwflx2d","brine_fwflx2d", "ocean_monthly_2d_basal","all",.true.,"none",2

I can rerun December saving the brine as double to see if it changes the results - do you think it would be worth it?

@adele-morrison
Collaborator Author

Oh ok, I was confused. I thought they were from different runs.

@willaguiar
Owner

willaguiar commented Apr 30, 2024

OK, I reran Nov/Dec in the model and saved the brine outputs with double precision. Then I recalculated the plots above with the double precision data. The difference stays mostly the same, so perhaps it is not a precision problem? (single: dotted, double: solid line)

[Figure double_precision_plot: single vs double precision sums]

Do you think that is due to the density weighting?
Double precision output in /scratch/v45/wf4500/simulations/RYF_1mtop_DECFWFdist/wf4500/access-om2/archive/ryf_DECFWFdist/output010/ocean/

@pedrocol

pedrocol commented Apr 30, 2024

Hi Wil, I'm not sure I'm following: the model runs with double precision by default and the outputs were saved with double precision, so there is no need to re-run the simulation. I feel confident the split basal2d/basal3d is done correctly in the model because the single time step diagnostics show this. In the previous comment:

I just tested the new module and is fine now, there was still a problem in reading the namelist in get_ocean_sbc

Mass of brine is -1.14689565618032349e+12 (brine2d), and Mass from sources is -1.14689565618032349e+12 (brine3d). This means that the precision used by the model (double) is enough, and the split is done correctly for that single time step diagnostic.

The issue you are facing now is probably that you still import the data in single precision. A calculation performed in double precision just adds 8 more zeros to data imported in single precision, and therefore makes no difference.
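
(A plain numpy illustration of that point, using the diagnostic value above: once a number has been truncated to float32, casting it back to float64 cannot recover the lost digits.)

import numpy as np

x = -1.14689565618032349e+12   # the double-precision diagnostic value above
x32 = np.float32(x)            # what a single-precision output would store
back = np.float64(x32)         # casting back adds zeros, not information
print(f"{x:.17e}")
print(f"{back:.17e}")          # only ~7 significant digits survive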

@dkhutch
Collaborator

dkhutch commented Apr 30, 2024

Hi @pedrocol,
I looked at the double vs single precision issue with Wilton. We checked the output folder. The default output diagnostics were in single precision, which you can see when you type:
ncdump -h <file.nc>
You get something like:
float brine_2d(...)

If the output is in double precision, then this will show up in ncdump as:
double brine_2d(...)

When you load a single precision (float) array into Python using the netCDF4 package, it automatically converts the numbers into double precision. If you load single precision data using xarray, it will keep it in single precision.
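
A quick way to see what each package gives you (an illustrative sketch; "file.nc" and "brine_2d" are placeholders, and the netCDF4 behaviour can also depend on its auto-mask/scale settings):

import netCDF4
import xarray as xr

nc = netCDF4.Dataset("file.nc")
print(nc.variables["brine_2d"][:].dtype)  # per the comment above: float64

ds = xr.open_dataset("file.nc")
print(ds["brine_2d"].dtype)               # float32: xarray keeps the stored dtype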

@willaguiar has now done an explicit test where he changed the output flag to double precision in diag_table. To do that you have to change the final number of the diagnostic request from 2 to 1. The "template" for doing this is:

"module_name", "field_name", "output_name", "file_name" "time_sampling", time_avg, "other_opts", packing

where the following denotes the packing index:

packing  = 1  double precision
         = 2  float
         = 4  packed 16-bit integers
         = 8  packed 1-byte (not tested?)
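
So, for example, the brine_fwflx2d request from earlier would become the following for double-precision output (only the final packing index changes from 2 to 1):

"ocean_model","brine_fwflx2d","brine_fwflx2d", "ocean_monthly_2d_basal","all",.true.,"none",1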

@pedrocol

pedrocol commented May 1, 2024

I see, thanks for the clarification; the outputs were in single precision. Now we need to verify that we import the data keeping the double precision, and that the computation is then performed at that precision.

@pedrocol

pedrocol commented May 2, 2024

By the way, here is the piece of the code that splits brine2d into brine3d. Then brine3d is added to mass_source.

[Screenshot: Fortran code that splits brine2d into brine3d]

If, in the single time step diagnostics, mass_source equals brine2d, then the split is done correctly. That is what I mentioned before, and it's the reason why I think this is a precision issue. Let's discuss more tomorrow.

@adele-morrison
Collaborator Author

What's thkocean equal to? Maybe it would also be useful to see the code that writes the brine2d and brine3d diagnostics?

@willaguiar
Owner

willaguiar commented May 2, 2024

Here is the code I used for the calculations. The first plot, in cell 16, uses only single precision data, and the second plot, on line 23, has single and double data overlapping. Let me know if there is an error in the way I added the fluxes.

@adele-morrison
Collaborator Author

Oh, sorry, I meant the Fortran code where the diagnostics are defined.

The differences seem too large to me (and have a weird seasonal pattern) to be caused by single/double differences.

@pedrocol

pedrocol commented May 2, 2024

thkocean is the total thickness:

[Screenshot: Fortran definition of thkocean]

The code to diagnose brine and brine3d:

[Screenshot: Fortran diagnostic code for brine and brine3d]

@pedrocol

pedrocol commented May 4, 2024

I confirm that there is an issue with the saved data. This issue is not present for the basal and icb fields in my simulations; it is only related to the brine field. I should also add that there is no issue with brine3d in the simulation itself; the single time step diagnostic is OK. The issue seems to be with the saved data only.
In one of my tests brine3d[:,342,434].sum() = -0.02977112 and brine[342,434] = -3.0017259e-06, completely unrelated values.

@willaguiar
Owner

@pedrocol Thanks for figuring out what the issue was! Any idea why the saved data differs from the applied forcing?

@pedrocol

pedrocol commented May 4, 2024

Not really, I'll run a few more tests and let you know if I find something else.

@pedrocol

pedrocol commented May 7, 2024

Hi Wil, I've set the single time step diagnostic to appear every 10 time steps:

&ocean_tracer_diag_nml
diag_step = 10
/

"Mass from sources" always equals "Mass of brine". I don't see any problem with the code. However, I can't find where the problem with the output is. If you want to test the code as it is, you can deduce brine3d from the brine array in the outputs.

@willaguiar
Owner

I reran the FWFdist run with the new fix in the code by Pedro.


Validation
With the new fix, sum(brine3d) = sum(brine2d) = sum(wfiform). The local difference between brine3d.sum('st_ocean') and brine2d is negligible (map on the bottom). The freshwater flux is properly distributed over the top 5 m too.

[Screenshots, 2024-06-11: global sums and difference map]

So it seems that the diags are correctly output now (thanks @pedrocol).


DSW response
Currently I have run 2 years of that simulation, so the plots below are for the last (2nd) year. By the second year, the SWMT and DSW overflow across the 1 km isobath in FWFdist are very similar to 1mtop, which suggests that it is not the spreading of the FW/salt fluxes from sea ice formation over greater depth that changes the DSW.

[Screenshots, 2024-06-11: SWMT and overflow comparisons]

Another possible explanation is that perhaps we are missing the effect of sea ice melting, since we only spread the FW fluxes from sea ice formation in depth (sea ice still melts over the top 1 m). For example, if the surface is fresher during the summer in 1mtop than in 5mtop, we could have stronger sea ice formation, increasing the seasonal amplitude of the FWFs?

What do you all think?

@willaguiar
Owner

It would be interesting to discuss these ideas further in a DSW meeting this week...

@adele-morrison
Collaborator Author

Meeting sounds good. I am out of ideas! 10am Friday again? You'll probably need to send a meeting invite for Andy and others who may no longer have that spot free in their calendars.

@PaulSpence
Collaborator

PaulSpence commented Jun 11, 2024 via email

@willaguiar
Owner

willaguiar commented Jul 17, 2024

I was plotting the target manuscript figures, especially for the section where we discuss the effect of spreading the FWF in depth. In doing so I realized that even though the SWMT and overflow curves seem to show almost no response to spreading the FWF [b,c], when we actually calculate the mean overflow and SWMT time series for the target density layers, we see a substantial decrease in DSW in FWFdist compared with 1mtop [d], of about 22% in overflow.

[Figure V1_Figure_4-4]

The problem is that the DSW has still not stabilized in FWFdist, so perhaps we need to run this simulation a little longer (perhaps 3 more years). What do we think?

Clarification: I still think the salt concentration will not fully explain the DSW shutdown in 5mtop, but perhaps it can explain 30% to 40% of the reduction. Besides, I think a reviewer might want to see this time series extended.

@adele-morrison
Collaborator Author

Yes, let's run longer! I don't understand how the overflow time series can decrease when the curve seems to show no change. What do the curves look like if you plot just the last year (1993)?

@willaguiar
Owner

willaguiar commented Jul 17, 2024

This is the case for the last year (we can see a little more separation).

[Screenshot, 2024-07-17: last-year curves]

But I think the issue is something else... I was looking through the selection of the density interval used in the transport time series and noticed that for some reason they are different: when I use .where to select between MOM5_sigma90 = 27.83 and MOM5_sigma65 = 27.9, for example, it gives different intervals for different runs, so the averaging artificially lowers FWFdist.

[Screenshots, 2024-07-17: density interval selection]

It doesn't show up in the Python display, but it seems the sigmas defined in the RYF_FWFdist case have some extra 9's at the end (visible in ncdump), i.e., 27.9 = 27.899999 in that case, which shifts the sigma range used for the ref_FWFdist case. I must have changed that in the code at some point along the way, my bad. That is why the transport time series do not seem to match the curves (sorry for the false gleam of hope!).
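
One way to make the selection robust to that kind of float32 rounding is to pad the bounds with a small tolerance instead of using the bin edges exactly (a sketch; the density coordinate name is a placeholder):

sigma90, sigma65 = 27.83, 27.9
tol = 1e-4  # larger than float32 rounding, smaller than the bin spacing

def dsw_layer(da, sigma_dim="potrho"):
    # 27.899999 (float32-rounded 27.9) still falls inside the padded range.
    return da.where((da[sigma_dim] >= sigma90 - tol) &
                    (da[sigma_dim] <= sigma65 + tol), drop=True)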

@willaguiar
Owner

PS: I checked all the other runs; the sigma levels only differ for FWFdist.

@dkhutch
Collaborator

dkhutch commented Jul 17, 2024

That is why the transport time series do not seem to match the curve (Sorry for the false gleam of hope!)

Thanks Wilton - do you mean that the transport time series for FWFdist should in fact match more closely with the standard 1m case if the histogram bins were calculated the same way?

@willaguiar
Copy link
Owner

Yes @dkhutch... or if they were selected for the DSW range in the same way. For example, in the case below I fixed just the selection range by adding 0.005 (MOM5_sigma90 = 27.835 and MOM5_sigma65 = 27.855).

[Screenshot, 2024-07-17: corrected transport time series]
