_your Zenodo badge here_
# Sources of Overconfidence for Agent-Based Models Parameterized by Point Estimates
V. Srikrishnan¹\*, P. Bhaduri¹, J. Yoon², B. Daniel², and D. Judi²

¹ Department of Biological & Environmental Engineering, Cornell University, Ithaca, NY, USA
² Pacific Northwest National Laboratory, Richland, WA, USA

\* corresponding author: viveks@cornell.edu
## Abstract

Insert your paper abstract here.
## Journal reference

Insert your paper reference here. This can be a link to a preprint prior to publication. While the paper is in preparation, use a tentative author list and title.
## Code reference

References for each minted software release for all code involved.

If Zenodo has been linked to your GitHub repository, these references are generated automatically when you release code on GitHub. Zenodo sets the author order by contribution to the code, using each author's GitHub username; this citation can, and likely should, be edited without altering the DOI.

If you have modified a codebase outside of a formal release, and the modifications are not planned to be merged back into a released version, fork the parent repository and append a `.<shortname>` to the parent's version number to construct your own version, e.g. `v1.2.5.hydro`.

Example: Human, I.M. (2021, April 14). Project/repo: v0.1.0 (Version v0.1.0). Zenodo. http://doi.org/some-doi-number/zenodo.7777777
## Data reference

References for all data used in your analysis. The specific version of the data should be archived, in Zenodo or elsewhere. The Cornell library has additional information on data archiving resources.

### Input data

Reference for each minted data source for your input data. For example:

Example: Human, I.M. (2021). My input dataset name [Data set]. DataHub. https://doi.org/some-doi-number

### Output data

Reference for each minted data source for your output data. For example:

Example: Human, I.M. (2021). My output dataset name [Data set]. DataHub. https://doi.org/some-doi-number
## Dependencies

List the packages (with versions) that you explicitly installed and relied on for your code; any "behind the scenes" packages pulled in automatically by the package manager don't need to be mentioned. Pinning versions will help ensure compatibility down the road. Depending on the package manager used, these may be easy to extract from a `Project.toml`, `environment.yml`, or analogous file. If you installed a package from its code repository, link to the repository and to its released version (ideally a DOI, if available).
| Package | Version | Repository Link | DOI |
|---|---|---|---|
| dependency 1 | version | -- | -- |
| dependency 2 | version | link to code repository | link to DOI of release |
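As an illustration, a minimal `environment.yml` for a conda-managed project might look like the following sketch. The environment name, package names, and version numbers here are hypothetical placeholders, not this project's actual dependencies:

```yaml
name: my-experiment-env     # hypothetical environment name
channels:
  - conda-forge
dependencies:
  - python=3.11             # pin the language version as well
  - numpy=1.26.4            # hypothetical direct dependency
  - pandas=2.2.1            # hypothetical direct dependency
```

Only direct dependencies need to be listed; the solver records the full transitive closure when the environment is exported.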
## Contributing modeling software

If your experiment used models from another repository, link to those repositories here.
| Model | Version | Repository Link | DOI |
|---|---|---|---|
| CHANCE-C | 1.0 | https://github.com/DOE-ICoM/chance-c-v1 | link to DOI of release |
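One way to pin a contributing model to the released version is to clone its repository at the release tag. A minimal sketch follows; the tag name `v1.0` is an assumption inferred from the version column above, so confirm it against the repository's releases page. The sketch only prints the command, keeping it side-effect free:

```shell
# Fetch the CHANCE-C model source at the released version used in this
# experiment. The tag name "v1.0" is an assumption based on the table above;
# confirm it against the repository's releases page before cloning.
REPO_URL="https://github.com/DOE-ICoM/chance-c-v1"
MODEL_TAG="v1.0"

# Print the clone command; uncomment the second line to actually fetch.
# --depth 1 keeps the download small; --branch also accepts tag names.
echo "git clone --depth 1 --branch $MODEL_TAG $REPO_URL"
# git clone --depth 1 --branch "$MODEL_TAG" "$REPO_URL"
```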
## Reproduce my experiment

This section should walk through how to reproduce your experiment. Provide a complete set of steps, starting from installing any necessary models and downloading data, and including which scripts to run for each piece of the experiment. If your code was written for a particular HPC environment (including the queue manager), document that here. If you had to make any manual adjustments that aren't captured by your code, document those as well.
- Install the software components required to conduct the experiment from [Contributing modeling software](#contributing-modeling-software)
- Download and install the supporting input data required to conduct the experiment from [Input data](#input-data)
- Run the following scripts in the `workflow` directory to run my experiment:
| Script Name | Description | How to Run |
|---|---|---|
| `step_one.jl` | Script to run the first part of my experiment | `julia step_one.jl` |
| `step_two.jl` | Script to run the last part of my experiment | `julia step_two.jl` |
- Download and unzip the output data from my experiment from [Output data](#output-data)
- Run the following scripts in the `workflow` directory to compare my outputs to those from the publication:
| Script Name | Description | How to Run |
|---|---|---|
| `compare_outputs.jl` | Script to compare my outputs to the original | `julia compare_outputs.jl` |
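The steps above can be sketched as a single driver script. This is a minimal sketch, assuming the repository root is the working directory and that Julia is on `PATH`; adjust paths to your checkout:

```shell
#!/bin/sh
# Sketch of the reproduction steps above: run the experiment scripts in
# order, then compare the results against the published outputs.

run_step() {
    # Print each step before running it so a failed step is easy to locate.
    echo "running: julia workflow/$1"
    julia "workflow/$1"
}

if command -v julia >/dev/null 2>&1; then
    run_step step_one.jl        # first part of the experiment
    run_step step_two.jl        # last part of the experiment
    run_step compare_outputs.jl # compare against the published outputs
else
    echo "julia not found on PATH; install Julia before reproducing"
fi
```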
## Reproduce my figures

Use the scripts found in the `figures` directory to reproduce the figures used in this publication. It is good practice to separate these scripts by figure (or batch of figures) to improve readability and facilitate debugging.
| Script Name | Description | How to Run |
|---|---|---|
| `generate_hindcast.jl` | Script to generate a hindcast | `julia generate_hindcast.jl` |