_your Zenodo badge here_

# Srikrishnan-etal-2024_TBD

**Sources of Overconfidence for Agent-Based Models Parameterized by Point Estimates**

V. Srikrishnan¹\*, P. Bhaduri¹, J. Yoon², B. Daniel², and D. Judi²

¹ Department of Biological & Environmental Engineering, Cornell University, Ithaca, NY, USA
² Pacific Northwest National Laboratory, Richland, WA, USA

\* corresponding author: viveks@cornell.edu

## Abstract

Insert your paper abstract.

## Journal reference

Insert your paper reference. This can be a link to a preprint prior to publication. While in preparation, use a tentative author line and title.

## Code reference

References for each minted software release for all code involved.

If Zenodo has been linked to your GitHub repository, these references are generated automatically by Zenodo when you release code on GitHub. Zenodo orders the authors by their contribution to the code, using each author's GitHub user name. This citation can, and likely should, be edited without altering the DOI.

If you have modified a codebase outside of a formal release, and the modifications are not planned to be merged back into a released version, fork the parent repository and construct your own version name by appending `.<shortname>` to the parent's version number. For example, `v1.2.5.hydro`.

Example: Human, I.M. (2021, April 14). Project/repo:v0.1.0 (Version v0.1.0). Zenodo. http://doi.org/some-doi-number/zenodo.7777777

## Data reference

References for all data used in your analysis. The specific version of the data should be archived, in Zenodo or elsewhere. The Cornell library has additional information on data archiving resources.

### Input data

Reference for each minted data source for your input data. For example:

Example: Human, I.M. (2021). My input dataset name [Data set]. DataHub. https://doi.org/some-doi-number

### Output data

Reference for each minted data source for your output data. For example:

Example: Human, I.M. (2021). My output dataset name [Data set]. DataHub. https://doi.org/some-doi-number

## Dependencies

List the packages (with versions) that you installed and explicitly relied on for your code; any "behind the scenes" packages installed automatically by the package manager don't need to be mentioned. Pinning versions will help ensure compatibility down the road. Depending on the package manager used, these may be easy to read from a `Project.toml`, `environment.yml`, or analogous file. If you installed a package from its code repository, link to the repository and to its released version (ideally a DOI, if available).

| Package      | Version | Repository Link         | DOI                    |
|--------------|---------|-------------------------|------------------------|
| dependency 1 | version | --                      | --                     |
| dependency 2 | version | link to code repository | link to DOI of release |
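
If this repository pins its dependencies in a `Project.toml`/`Manifest.toml` pair (an assumption; adjust to however the environment is actually tracked), the exact versions can be restored from the Julia REPL. A minimal sketch:

```julia
# Minimal sketch, assuming a Project.toml/Manifest.toml pair at the repository root.
using Pkg

Pkg.activate(".")    # activate this repository's environment
Pkg.instantiate()    # install the exact versions recorded in Manifest.toml
Pkg.status()         # list the resolved packages and versions for the table above
```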

## Contributing modeling software

If your experiment used models from another repository, link to those repositories here.

| Model    | Version | Repository Link                          | DOI                    |
|----------|---------|------------------------------------------|------------------------|
| CHANCE-C | 1.0     | https://github.com/DOE-ICoM/chance-c-v1  | link to DOI of release |
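
If a contributed model is distributed as a Julia package, it can be installed directly from its repository at a tagged release. The snippet below is a sketch only, assuming CHANCE-C ships as a Julia package with a `v1.0` tag; if the model uses a different toolchain, follow its repository's own installation instructions:

```julia
# Sketch: install an unregistered model package from its repository at a tagged release.
# Assumes CHANCE-C is a Julia package with a v1.0 tag; adjust to the model's actual toolchain.
using Pkg

Pkg.add(url = "https://github.com/DOE-ICoM/chance-c-v1", rev = "v1.0")
```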

## How to reproduce the experiment

This section should consist of a walkthrough of how to reproduce your experiment. This should be a complete set of steps, starting from installing any necessary models and downloading data, and including which scripts to run for each piece of the experiment. If your code was written to work on a particular HPC environment (including the queue manager), document that here. If you had to manually make any adjustments that aren't captured by your code, document them here as well.

1. Install the software components required to conduct the experiment from Contributing modeling software.
2. Download and install the supporting input data required to conduct the experiment from Input data.
3. Run the following scripts in the `workflow` directory to run my experiment (a driver sketch follows this list):

   | Script Name | Description                                   | How to Run          |
   |-------------|-----------------------------------------------|---------------------|
   | step_one.jl | Script to run the first part of my experiment | `julia step_one.jl` |
   | step_two.jl | Script to run the last part of my experiment  | `julia step_two.jl` |

4. Download and unzip the output data from my experiment from Output data.
5. Run the following scripts in the `workflow` directory to compare my outputs to those from the publication:

   | Script Name        | Description                                  | How to Run                 |
   |--------------------|----------------------------------------------|----------------------------|
   | compare_outputs.jl | Script to compare my outputs to the original | `julia compare_outputs.jl` |
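
The workflow above can also be chained from a single driver. The script below is a hypothetical convenience sketch (the script names come from the tables above; the `workflow` directory layout and a root-level `Project.toml` are assumptions), and it presumes the output data from step 4 has already been downloaded before the comparison step runs:

```julia
# Hypothetical driver: runs the workflow scripts above in order.
# Assumes it is launched from the repository root and that the scripts live
# in the workflow/ directory; each script runs in its own process so state
# does not leak between steps.
workflow_dir = joinpath(@__DIR__, "workflow")

for script in ["step_one.jl", "step_two.jl", "compare_outputs.jl"]
    @info "Running $script"
    run(`julia --project=. $(joinpath(workflow_dir, script))`)
end
```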

## Reproduce paper figures

Use the scripts found in the `figures` directory to reproduce the figures used in this publication. As a best practice, separate these scripts by figure (or batch of figures) to improve readability and facilitate debugging.

| Script Name          | Description                   | How to Run                   |
|----------------------|-------------------------------|------------------------------|
| generate_hindcast.jl | Script to generate a hindcast | `julia generate_hindcast.jl` |
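
To regenerate every figure in one pass, a loop like the following can be used. This is a convenience sketch, assuming each figure script in `figures/` is self-contained and takes no command-line arguments:

```julia
# Convenience sketch: run every .jl script in the figures/ directory.
# Assumes each script is self-contained and takes no command-line arguments.
figures_dir = joinpath(@__DIR__, "figures")

for script in filter(endswith(".jl"), readdir(figures_dir))
    @info "Generating figures from $script"
    run(`julia --project=. $(joinpath(figures_dir, script))`)
end
```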
