Conditioning Operating Models
The tuna Regional Fisheries Management Organisations (tRFMOs) are increasingly using Management Strategy Evaluation (MSE) to develop robust management advice. This requires conditioning Operating Models (OMs) that simulate resource dynamics and monitoring data in order to test the strategies used to set management regulations. Candidate Management Procedures (MPs) are simulated as feedback controllers, and the MP that best meets the management objectives is then chosen. An MP is the combination of pre-defined data and an algorithm to which those data are input to provide a value for a control measure, such as a Total Allowable Catch (TAC). Once the OMs and management objectives have been chosen, the "best" MP is an emergent property; the task is to find it. The procedures for proposing, validating, accepting and weighting OM scenarios are therefore fundamental to the MSE process.
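The feedback-controller idea can be sketched as a simple index-based MP: pre-defined data (a relative abundance index) are fed to an algorithm that returns a TAC. The function, its gain parameter `k` and the example values below are illustrative assumptions, not any tRFMO's adopted rule.

```python
# A minimal sketch of a model-free Management Procedure (MP): a feedback
# controller that adjusts the Total Allowable Catch (TAC) in proportion to
# the recent trend in a relative abundance index. All names and the gain
# parameter k are hypothetical, for illustration only.

def trend_mp(index, current_tac, k=1.5, years=5):
    """Return the next TAC given an abundance-index series.

    index: list of index values, most recent last.
    current_tac: TAC currently in force.
    k: feedback gain; larger values react more strongly to the trend.
    years: number of recent index points used to estimate the slope.
    """
    recent = index[-years:]
    n = len(recent)
    t_mean = (n - 1) / 2
    i_mean = sum(recent) / n
    # Least-squares slope of the index on time, scaled to a proportional rate
    slope = sum((t - t_mean) * (i - i_mean) for t, i in enumerate(recent)) \
        / sum((t - t_mean) ** 2 for t in range(n))
    rate = slope / i_mean
    return current_tac * (1 + k * rate)

# A declining index leads to a TAC reduction; an increasing one to a rise
tac = trend_mp([100, 95, 90, 88, 85], current_tac=1000)
```

Because the MP is a fixed rule applied to simulated data, its closed-loop behaviour against each OM scenario is exactly what an MSE evaluates.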
There are many alternative ways to condition the OMs used to represent resource dynamics. One is to base them on a stock assessment, but this implies that the assessment describes nature almost perfectly; if that were true, why bother with MSE? In this project we therefore review current practice for conditioning OMs in the tRFMOs, examine objective ways to choose and validate OM scenarios, and provide a worked example based on North Atlantic albacore.
The range of OMs examined across the tRFMOs was primarily based on the assessment models. In some cases these were developed for peculiarities of the species (IO SKJ/AO BFT) and may have explicit spatial structure. Procedures for eliminating unrealistic scenarios need to be standardised, and should be clearly documented so that one tRFMO can learn from another. Agreement on the scenarios to be examined, and on the robustness trials, should be sought from the outset. Grid-based OMs addressing structural uncertainty have been the basis of most development work so far, though processes dealing with sampling, and time-series approaches that account for the non-stationarity of ecological processes, should also be examined in most cases. Data weighting, and the question of which models are more plausible, is an area for further work; some management bodies (e.g. the IWC) may be considerably further along than others. Spatial structure and multi-stock structure in OMs are additional areas for further work, and observation-error models accounting for sampling biases are important to consider. The current approach of using assessment models as the basis for OM design is a good starting point, though further processes (observation error and time-series dynamics) should be added to OM designs.
Optimal Grid Design in Operating Models: Work Smarter, Not Harder
MSEs often use assessment models with complex structural-uncertainty designs examining multi-way interactions. This paper examines different approaches to evaluating multi-way interactions and optimising grid design to capture the uncertainty in the system being modelled. The objective is not to build a perfect assessment model, but to capture the true dynamics of the system and the underlying uncertainty, so that a reference set and a robustness set of models can be used to optimise a control rule, whether empirical or model-based in nature.
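A structural-uncertainty grid is typically the cross of a few factors, each with a handful of levels. The sketch below builds a full-factorial grid and a cheap fractional subset; the factor names and levels are illustrative placeholders, not values from any actual assessment.

```python
# A minimal sketch of a full-factorial OM grid over hypothetical
# structural-uncertainty axes, plus a simple fractional subset.
from itertools import product

factors = {
    "steepness": [0.7, 0.8, 0.9],        # stock-recruit steepness
    "natural_mortality": [0.2, 0.3],     # M (per year), illustrative
    "sigma_r": [0.4, 0.6],               # recruitment variability
}

# Full factorial: every combination of levels (3 * 2 * 2 = 12 OM scenarios)
full_grid = [dict(zip(factors, levels)) for levels in product(*factors.values())]

# A crude fraction: keep every other run to halve the simulation cost.
# Real fractional designs use orthogonal subsets so main effects and
# low-order interactions remain estimable -- the point of "working smarter".
fractional_grid = full_grid[::2]
```

The grid size grows multiplicatively with each factor, which is why optimised (fractional) designs matter once several multi-way interactions are on the table.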
Cross-validation I: An Objective Procedure for the Validation of Operating Models Conditioned on Stock Assessment Datasets
The provision of fisheries management advice requires fitting a model to data to assess stock status, predicting the response of the stock to management, and checking that predictions are consistent with reality. The accuracy and precision of the predictions depend on the quality of the model, the information in the data and the prediction horizon (i.e. how far ahead we wish to predict). Small tweaks to the input data or assumptions can often result in substantial differences in advice. The aim of this study is therefore to develop a procedure for the validation of stock assessment model scenarios comprising alternative model structures and datasets. Validation examines whether a model family should be modified or extended, and is complementary to model selection and hypothesis testing: model selection searches for the most suitable model within a family, while hypothesis testing examines whether the model structure can be reduced. We do this for a case study based on the 2017 East Atlantic and Mediterranean bluefin stock assessment data (ICCAT, 2018), with scenarios spanning a range of datasets, model structures and values of fixed parameters. Prediction errors are then calculated using leave-one-out, one-step-ahead and hindcast procedures and compared to model errors.
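The hindcast idea can be sketched in a few lines: peel recent observations off the series, forecast each withheld point from the remaining data, and summarise the one-step-ahead errors, scaled against a naive forecast. The stand-in random-walk "model" and the scaling are illustrative; in the study itself the peeled assessment models supply the forecasts.

```python
# A minimal sketch of hindcast cross-validation on an abundance index.
# The "model" here is a naive random-walk forecast (last observed value);
# real applications would refit the assessment model to each peeled dataset.

def hindcast_errors(series, predict, n_peels=3):
    """Absolute one-step-ahead prediction errors over retrospective peels."""
    errors = []
    for peel in range(1, n_peels + 1):
        train = series[:-peel]      # data with the last `peel` years removed
        observed = series[-peel]    # the withheld observation
        errors.append(abs(predict(train) - observed))
    return errors

def naive(train):
    """Random-walk forecast: next year's index equals the last observed value."""
    return train[-1]

def scaled_error(series, predict, n_peels=3):
    """Prediction error relative to the naive forecast; values < 1 beat naive."""
    model_err = hindcast_errors(series, predict, n_peels)
    naive_err = hindcast_errors(series, naive, n_peels)
    return sum(model_err) / sum(naive_err)
```

Comparing such prediction errors with within-sample model errors is what distinguishes validation (does the model predict data it has not seen?) from goodness of fit.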
Cross-validation II: Simulation
This paper aims to evaluate, through simulation, the performance of cross-validation (Xval) approaches, e.g. jackknife (JK) and hindcast (HC), for selecting among stock assessment models of differing data requirements and complexity. The rationale is that these approaches can be used in situations where standard model selection criteria cannot be applied. Age-structured models conditioned on XXXX data were used to generate simulated data for fitting several stock assessment models, such as surplus production models, delay-difference models and statistical catch-at-age models (including Stock Synthesis). The models were then compared by jackknifed or hindcast metrics against observed abundance indices, mean weight of catch, etc. Simulation results showed that the Xval approaches work well for enhancing contrast in fit and predictability among stock assessment models. The results can be used for conditioning and screening the operating models used in management strategy evaluation.
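The jackknife comparison can be sketched as leave-one-out prediction: each index point is dropped in turn, candidate "models" are refit to the rest, and models are ranked by error on the omitted points. The two stand-in models below (a constant mean and a linear trend) and the data are illustrative placeholders for real assessment fits.

```python
# A minimal sketch of jackknife (leave-one-out) model comparison on an
# abundance index. Hypothetical stand-in models: a constant mean and a
# least-squares linear trend refit to each reduced dataset.

def loo_mae(ys, fit_predict):
    """Mean absolute leave-one-out prediction error for one model."""
    errs = []
    for i in range(len(ys)):
        train = [(t, y) for t, y in enumerate(ys) if t != i]
        errs.append(abs(fit_predict(train, i) - ys[i]))
    return sum(errs) / len(errs)

def mean_model(train, t_new):
    """Predict the omitted point with the mean of the remaining points."""
    return sum(y for _, y in train) / len(train)

def linear_model(train, t_new):
    """Refit a straight line to the remaining points and extrapolate."""
    n = len(train)
    tbar = sum(t for t, _ in train) / n
    ybar = sum(y for _, y in train) / n
    b = sum((t - tbar) * (y - ybar) for t, y in train) \
        / sum((t - tbar) ** 2 for t, _ in train)
    return ybar + b * (t_new - tbar)

index = [10.2, 9.6, 9.1, 8.4, 7.9, 7.5]   # illustrative declining index
best = min(["mean", "linear"],
           key=lambda m: loo_mae(index, mean_model if m == "mean" else linear_model))
```

For a near-linear decline the trend model predicts the withheld points better, so the comparison favours it; the same logic, with assessment models in place of these toys, screens candidate OMs by predictive skill rather than fit alone.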