Sensitivity wishlist #43
I think a combination of both is fine, where sensitivities without a larger current discussion can go here, but we can also tag other issues with one of the appropriate "sensitivity" tags. |
TOR says we must estimate a single M as a sensitivity. |
Kicking off a bunch of sensitivities to run while we add more text to the document seems like a good idea. Here's a proposal for a cleaned-up set of sensitivities which we can automate. I think grouping them by number (e.g., 101, 102, etc. for biology sensitivities) will make it easy to keep them organized:
biology and recruitment (100-series)
composition data (likely requires extra tuning step) (200-series)
indices (300-series)
selectivity (400-series)
|
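The series-based numbering above can be sketched as a small helper that builds folder names like the `models/2021.n.017.102_h0.7` mentioned later in the thread. This is a Python illustration only (the project's actual tooling is in R), and the group labels and base model name are assumptions for the example.

```python
# Hypothetical sketch of the numbering scheme: map each sensitivity group to
# a series (100s = biology/recruitment, 200s = comps, 300s = indices,
# 400s = selectivity) and build folder names like "2021.n.017.102_h0.7".
SERIES = {"biology": 100, "comps": 200, "indices": 300, "selectivity": 400}

def sens_folder(base, group, index, label):
    """Build a sensitivity folder name from a base model name,
    a series group, an index within that group, and a short label."""
    num = SERIES[group] + index
    return f"{base}.{num}_{label}"

print(sens_folder("2021.n.017", "biology", 2, "h0.7"))
# -> 2021.n.017.102_h0.7
```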
@iantaylor-NOAA where are you at with this? I'm asking because I have a lot of code in a hake script to do many of these. |
run_sensitivities() doesn't yet work for lambda changes or multiple parameter lines.
run_sensitivities() has been expanded in 68fe18e and ad1f957, so now it's creating all the folders and modifying the parameter lines for those sensitivities where a single parameter line needs to change.
Next steps would be
But is it worth investing the time now, if it takes a model 30 minutes to run and only 30 seconds to manually edit the control file? Maybe what we've got now is good enough. |
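The "modify a single parameter line" step described above can be sketched roughly as below. This is a hypothetical Python illustration, not the actual run_sensitivities() code (which is in R); the column layout (INIT in the third whitespace-separated column) and the parameter label in a trailing comment are assumptions about a typical SS control file.

```python
# Sketch: replace the INIT value on the control-file line whose trailing
# comment contains the given parameter label. Column position is assumed.
def set_init(ctl_lines, label, new_init, init_col=2):
    out = []
    for line in ctl_lines:
        if label in line:
            cols = line.split()
            cols[init_col] = str(new_init)  # swap in the new INIT value
            line = " ".join(cols)
        out.append(line)
    return out

lines = ["0.01 0.8 0.2 ... # NatM_p_1_Fem_GP_1"]
print(set_init(lines, "NatM_p_1_Fem_GP_1", 0.3))
# -> ['0.01 0.8 0.3 ... # NatM_p_1_Fem_GP_1']
```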
In the interest of time, it's probably better to just run all the sensitivities with -nohess and then return to get the Hessian (potentially starting from the .par file) if there's particular interest in any of them. We can still use maximum gradient component as a diagnostic of convergence even without the check of a positive definite Hessian. |
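Checking the maximum gradient component without a Hessian can be done from the first line of the ADMB .par file, which reports it alongside the objective function value. A minimal Python sketch (the header string and the 1e-3 threshold below are illustrative, not project conventions):

```python
import re

# Sketch: pull the maximum gradient component from an ADMB .par header line
# so convergence can be checked even when -nohess skips the Hessian.
def max_gradient(par_header):
    m = re.search(r"Maximum gradient component\s*=\s*([-\d.eE+]+)", par_header)
    return float(m.group(1)) if m else None

header = ("# Number of parameters = 230  Objective function value = 1234.56  "
          "Maximum gradient component = 1.2e-04")
grad = max_gradient(header)
print(grad, "looks converged" if grad < 1e-3 else "check model")
```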
creating models/2021.n.017.102_h0.7 is already done in the profile on h |
Good point @kellijohnson-NOAA. I think a few of the age inclusion/exclusions are already in place as well. North model takes about 20 minutes with -nohess -cbs 1500000000 and we have 26 sensitivities on the list, so by my math, that's < 9 hours to run them all and some have been run already. The south model will be faster and I can run at least 3 groups at once, so should have all results by the end of today. I think that makes more sense than trying to divide up among different people. Let me know if this doesn't make sense. I can work on creating the summary figures and tables for groups of them after they finish and don't need to wait for the full set. |
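The back-of-envelope timing in the comment above checks out; a quick arithmetic confirmation (the 3-way parallelism factor is the "at least 3 groups at once" mentioned for the south model):

```python
# 26 sensitivities at ~20 minutes each, serial vs. ~3 groups in parallel.
n_runs, minutes_each = 26, 20
serial_hours = n_runs * minutes_each / 60
parallel_hours = serial_hours / 3
print(round(serial_hours, 1), round(parallel_hours, 1))
# -> 8.7 2.9
```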
Do you still need me to parse the ones with code in the column? |
Sounds good. Don't worry about parsing the extra column. For this year, I'll just take the 30 seconds to edit the files by hand. |
Too late, I was already down the rabbit hole ...
|
Took too long, but the automated sensitivity runs are finally going for the north and south models. I forgot to skip the redundant h0.7 case, but I can kill it when it gets there and continue with the rest using type = "sens_run" only.
|
It didn't take too long, you were faster than all of us who were doing nothing on it. Our future selves will ❤️ you for this. 👏 |
Yes, for sure there is investment in our future selves. Lots of cleanup of the messy things, but easier to clean up than start from scratch. Now running leave-one-out index sensitivities for the south model (#86).
|
Sensitivity-related functions are added in 2599f1f and e4ed549. The first table of results is here: It would be easy to change the units, labels, add or subtract rows, etc. within the function so if any of that seems useful, let me know so it applies to future tables, although we can obviously apply additional processing outside the new functions. |
can you save the table in tables rather than doc? |
I just added a table of south index sensitivities. I think the likelihood may have gone up rather than down for the removal of the rec indices because I forgot to turn off the extraSD parameter. I'll look closer later. Also, the final table would theoretically include 2 more columns for index sensitivities on the list that I haven't run yet (301 and 302). |
Looks perfect to me (except for rounding). The M male value under the "share M" case should be the same as M female = 0.278481. Even though we don't use parameter offsets, I just learned from Rick that fixing a non-offset male growth parameter at 0 results in a match with the female value, and I confirmed this in the Report file for that model. I could fix that in the table-making code or you could do it at the document end, whichever you wish. |
I think it would be better if we fix numbers in the csv code and just fix formatting in the kable code |
will do, should I also round everything to 2 digits? |
I already figured out the rounding thing in kable. |
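The division of labor agreed on above (fix the numbers when building the csv, leave rounding to the kable step) can be sketched in Python for illustration; the actual code is R, and the dict keys and the 2-digit choice here are hypothetical:

```python
# Sketch: correct the *value* at the csv-building step (copy M female into
# M male for the "share M" case, where the male parameter is fixed at 0),
# then apply *formatting* (rounding) only at the table-rendering step.
row = {"M_female": 0.278481, "M_male": 0.0}
row["M_male"] = row["M_female"]              # value fix belongs in the csv code

formatted = {k: f"{v:.2f}" for k, v in row.items()}  # rounding belongs in kable
print(formatted)
# -> {'M_female': '0.28', 'M_male': '0.28'}
```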
The two sensitivity tables are updated for the "share M" case, and new plots associated with these sensitivities have been added in 37fc8da, along with the associated csv files. The plots below show the influence of the indices that cover the earlier time period for the south model. |
Third (and last for this draft) set of south model sensitivities added in 856ff19. |
Owen just commented that the sensitivity with M = 0.3 and h = 0.7 doesn't have M at 0.3 ... @iantaylor-NOAA can you check this in the files? |
Fix for the M = 0.3 and h = 0.7 sensitivity added in d02c0d2. Models will run while we sleep (including others for the north) and I'll post fixed results in the morning. |
Sensitivity results are updated to fix an issue with the DM and add the remaining ones for the north (low-tech numbering scheme now puts it at the end of the list). |
Sensitivities are complete. |
I'm not sure if it makes sense to create separate issues to track high and low priority sensitivity ideas or pile them all in one.
Here's a low-priority one from #27 (comment).