
[REVIEW]: Abmarl: Connecting Agent-Based Simulations with Multi-Agent Reinforcement Learning #3424

Closed · 40 tasks done
whedon opened this issue Jun 28, 2021 · 84 comments
Labels: accepted, published, Python, recommend-accept, review, TeX

Comments

@whedon commented Jun 28, 2021

Submitting author: @rusu24edward (Edward Rusu)
Repository: https://github.com/LLNL/Abmarl/
Version: 0.1.4
Editor: @drvinceknight
Reviewer: @seba-1511, @abhiramm7
Archive: 10.5281/zenodo.5196791

⚠️ JOSS reduced service mode ⚠️

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/449c5d79d407dc2e0dbbbb0dae55e122"><img src="https://joss.theoj.org/papers/449c5d79d407dc2e0dbbbb0dae55e122/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/449c5d79d407dc2e0dbbbb0dae55e122/status.svg)](https://joss.theoj.org/papers/449c5d79d407dc2e0dbbbb0dae55e122)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@seba-1511 & @abhiramm7, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @drvinceknight know.

Please start on your review when you are able, and be sure to complete your review within the next six weeks, at the very latest.

Review checklist for @seba-1511

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@rusu24edward) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of Need' that clearly states what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Review checklist for @abhiramm7

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@rusu24edward) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of Need' that clearly states what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?
@whedon commented Jun 28, 2021

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @seba-1511, @abhiramm7 it looks like you're currently assigned to review this paper 🎉.

⚠️ JOSS reduced service mode ⚠️

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

⭐ Important ⭐

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that with GitHub's default behaviour you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' https://github.com/openjournals/joss-reviews

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@whedon generate pdf

@whedon commented Jun 28, 2021

Software report (experimental):

github.com/AlDanial/cloc v 1.88  T=0.25 s (461.0 files/s, 68876.1 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          94           2095           1891          10787
reStructuredText                 9            321            649            522
Markdown                         2             35              0            131
YAML                             5             21             19            111
TeX                              1              8              0             72
Bourne Shell                     1             31            242             41
DOS Batch                        1              8              1             26
make                             1              4              7              9
-------------------------------------------------------------------------------
SUM:                           114           2523           2809          11699
-------------------------------------------------------------------------------


Statistical information for the repository '06e711bc81620ad59e0ec5aa' was
gathered on 2021/06/28.
The following historical commit information, by author, was found:

Author                     Commits    Insertions      Deletions    % of changes
Eddie Rusu                     421         31301          17013           98.20
Edward Rusu                      2           646            181            1.68
glatt1                           1            39             21            0.12

Below are the number of rows from each author that have survived and are still
intact in the current revision:

Author                     Rows      Stability          Age       % in comments
Eddie Rusu                14757           47.1          1.5                6.27
glatt1                       16           41.0          4.0                0.00

@whedon commented Jun 28, 2021

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- None

MISSING DOIs

- 10.1613/jair.3912 may be a valid DOI for title: The Arcade Learning Environment: An Evaluation Platform for General Agents

INVALID DOIs

- None

@whedon commented Jun 28, 2021

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@drvinceknight

MISSING DOIs

  • 10.1613/jair.3912 may be a valid DOI for title: The Arcade Learning Environment: An Evaluation Platform for General Agents

@rusu24edward if you could take a look at the above missing DOI please.

@rusu24edward

Reviewers, please note that the release branch is abmarl-87-interface-release. The main branch has some additional development work that is not part of this release or of this review. A minimal way to check out that branch is shown below.
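For convenience (plain git; only the repository URL and the branch name come from this thread):

git clone https://github.com/LLNL/Abmarl/
cd Abmarl
git checkout abmarl-87-interface-release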

@rusu24edward

@whedon commands

@whedon commented Jun 28, 2021

Here are some things you can ask me to do:

# List Whedon's capabilities
@whedon commands

# List of editor GitHub usernames
@whedon list editors

# List of reviewers together with programming language preferences and domain expertise
@whedon list reviewers

EDITORIAL TASKS

# Compile the paper
@whedon generate pdf

# Compile the paper from alternative branch
@whedon generate pdf from branch custom-branch-name

# Ask Whedon to check the references for missing DOIs
@whedon check references

# Ask Whedon to check repository statistics for the submitted software
@whedon check repository

@rusu24edward

@whedon check references

@whedon commented Jun 28, 2021

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1613/jair.3912 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@rusu24edward

@drvinceknight I believe I have fixed the DOI issue.
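For reference, the fix amounts to adding a doi field to the corresponding entry in paper.bib. A minimal sketch of such an entry (the entry key and the fields other than the title and DOI are reconstructed here for illustration, not copied from the repository):

@article{bellemare2013arcade,
  title   = {The Arcade Learning Environment: An Evaluation Platform for General Agents},
  author  = {Bellemare, Marc G. and Naddaf, Yavar and Veness, Joel and Bowling, Michael},
  journal = {Journal of Artificial Intelligence Research},
  volume  = {47},
  pages   = {253--279},
  year    = {2013},
  doi     = {10.1613/jair.3912}
}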

@whedon commented Jul 12, 2021

👋 @seba-1511, please update us on how your review is going (this is an automated reminder).

@whedon commented Jul 12, 2021

👋 @abhiramm7, please update us on how your review is going (this is an automated reminder).

@seba-1511

I plan to complete my review by Sunday, July 18.

@abhiramm7

I plan to wrap up the review by early next week, July 19.

@rusu24edward

Hi @seba-1511 and @abhiramm7. Thanks for the review work you have done so far. I just wanted to check in, since we are now at the dates by which we anticipated the review would be done. Is there anything you need from me to help move this along? Any feedback on the software or paper?

@seba-1511

@rusu24edward Apologies for the delay -- so far the submission looks good. I think ray in requirements.txt should be updated to ray[default] (a warning pops up otherwise), but even with that, I get the following stack trace when following the quickstart instructions:

Traceback (most recent call last):
  File "/Users/sebarnol/anaconda3/envs/abmarl36/lib/python3.6/site-packages/redis/connection.py", line 559, in connect
    sock = self._connect()
  File "/Users/sebarnol/anaconda3/envs/abmarl36/lib/python3.6/site-packages/redis/connection.py", line 615, in _connect
    raise err
  File "/Users/sebarnol/anaconda3/envs/abmarl36/lib/python3.6/site-packages/redis/connection.py", line 603, in _connect
    sock.connect(socket_address)
TimeoutError: [Errno 60] Operation timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/sebarnol/anaconda3/envs/abmarl36/bin/abmarl", line 33, in <module>
    sys.exit(load_entry_point('abmarl', 'console_scripts', 'abmarl')())
  File "/Users/sebarnol/Desktop/Abmarl/abmarl/scripts/scripts.py", line 43, in cli
    train.run(path_config)
  File "/Users/sebarnol/Desktop/Abmarl/abmarl/scripts/train_script.py", line 17, in run
    train.run(full_config_path)
  File "/Users/sebarnol/Desktop/Abmarl/abmarl/train.py", line 29, in run
    ray.init()
  File "/Users/sebarnol/anaconda3/envs/abmarl36/lib/python3.6/site-packages/ray/_private/client_mode_hook.py", line 62, in wrapper
    return func(*args, **kwargs)
  File "/Users/sebarnol/anaconda3/envs/abmarl36/lib/python3.6/site-packages/ray/worker.py", line 797, in init
    ray_params=ray_params)
  File "/Users/sebarnol/anaconda3/envs/abmarl36/lib/python3.6/site-packages/ray/node.py", line 230, in __init__
    self.start_head_processes()
  File "/Users/sebarnol/anaconda3/envs/abmarl36/lib/python3.6/site-packages/ray/node.py", line 861, in start_head_processes
    self.start_redis()
  File "/Users/sebarnol/anaconda3/envs/abmarl36/lib/python3.6/site-packages/ray/node.py", line 686, in start_redis
    port_denylist=self._ray_params.reserved_ports)
  File "/Users/sebarnol/anaconda3/envs/abmarl36/lib/python3.6/site-packages/ray/_private/services.py", line 891, in start_redis
    primary_redis_client.set("NumRedisShards", str(num_redis_shards))
  File "/Users/sebarnol/anaconda3/envs/abmarl36/lib/python3.6/site-packages/redis/client.py", line 1801, in set
    return self.execute_command('SET', *pieces)
  File "/Users/sebarnol/anaconda3/envs/abmarl36/lib/python3.6/site-packages/redis/client.py", line 898, in execute_command
    conn = self.connection or pool.get_connection(command_name, **options)
  File "/Users/sebarnol/anaconda3/envs/abmarl36/lib/python3.6/site-packages/redis/connection.py", line 1192, in get_connection
    connection.connect()
  File "/Users/sebarnol/anaconda3/envs/abmarl36/lib/python3.6/site-packages/redis/connection.py", line 563, in connect
    raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error 60 connecting to 10.142.140.7:6379. Operation timed out.
[1]    87008 terminated  abmarl train examples/multi_corridor_example.py

Specifically, I ran the following:

conda create -n abmarl36 python=3.6
conda activate abmarl36
git clone https://github.com/LLNL/Abmarl/
cd Abmarl
pip install -r requirements.txt
pip install -e . --no-deps
abmarl train examples/multi_corridor_example.py

@seba-1511

Other than that (and once I can confirm functionality), my only minor comment is on the statement of need / state of the field. The discussion of existing multi-agent RL software could be more thorough; for example, here is some popular software that could be discussed: MADDPG, pymarl, MAAC, SMARTS, and MARLO.

@abhiramm7

@rusu24edward I've created two issues, 168 and 167, regarding installation and documentation. The examples and documentation were really thorough; I would just recommend adding a note on dependencies in the installation instructions and also providing links to the Python files (e.g., multi_corridor_example.py) in the tutorials. Beyond that, everything else looks great.

@rusu24edward

Thanks for the reviews! I will try to address each of these points by the end of the week.

@rusu24edward

@seba-1511 Regarding your comment on other popular software that could be addressed: I'd be happy to include more discussion comparing against popular software, but I think it's important to clarify the suggestions you made, as that sheds light on what Abmarl targets. Reinforcement learning has two components: simulation and algorithm (also commonly referred to as environment and agent). Our software targets the simulation component, so I restricted my discussion to other software that specifically targets the simulation component.
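To make the distinction concrete, here is a rough, hypothetical sketch of the kind of interface a "simulation component" exposes. The class and method names below are made up for illustration and are not Abmarl's actual API:

class CorridorSim:
    """Two agents walk right along a 1-D corridor until they reach the end."""

    def __init__(self, length=10):
        self.length = length
        self.positions = {"agent0": 0, "agent1": 0}

    def reset(self):
        # Put every agent back at the start and return each agent's observation.
        self.positions = {agent: 0 for agent in self.positions}
        return dict(self.positions)

    def step(self, actions):
        # actions maps agent id -> 1 (move right) or 0 (stay put).
        obs, rewards, dones = {}, {}, {}
        for agent, move in actions.items():
            self.positions[agent] = min(self.positions[agent] + move, self.length)
            obs[agent] = self.positions[agent]
            dones[agent] = self.positions[agent] == self.length
            rewards[agent] = 1.0 if dones[agent] else -0.1
        return obs, rewards, dones

The algorithm component (e.g., one of RLlib's trainers) then drives reset() and step() in a loop; the simulation interface Abmarl provides plays this simulation-side role, with RLlib supplying the algorithms.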

Regarding your suggestions:

MADDPG
This paper actually has both simulation and algorithm. The authors create a particle simulation (MPE) and a multi-agent algorithm that they benchmark against their simulation. While this is a popular paper, it is no longer maintained by the originators. The algorithm has since been folded into RLlib, to which Abmarl connects, and the simulation has been folded into PettingZoo, which we do directly mention in our paper.
MADDPG actually highlights the importance of Abmarl. The creators of MADDPG chose to create a simulation in addition to the algorithm in order to benchmark their work. They built it from scratch, and I argue that the simulation interface that Abmarl provides will greatly ease the burden on algorithm developers who want simple test beds for their multi-agent algorithms. (Maybe this is a good sentence to add to the paper?)

pymarl
PyMARL is a collection of multi-agent algorithms, and each of the algorithms it supports is also supported in RLlib, to which Abmarl connects. PyMARL does not have its own simulation benchmark suite; it uses StarCraft, which we reference in our paper.

MAAC
Similar to PyMARL, MAAC targets the algorithm component of RL, and the algorithm is supported in RLlib, to which Abmarl connects. MAAC does not really create its own simulation suite; instead, it defines a specific scenario within MPE (the simulation from MADDPG), which has been folded into PettingZoo and which we reference in our paper.

SMARTS and MARLO
Both of these packages do indeed target the simulation component of RL, and I would be happy to include them in my comparison if you think that would be beneficial. Like MAgent, StarCraft, and Neural MMO (already addressed in our paper), both of these also couple the interface with a simulation. I can add these two to the list I mention in paragraph 2 of the statement of need, or I can expound on each of the packages I listed (including SMARTS and MARLO). I tried to keep it short and sweet in the flavor of many JOSS papers I see, but I'm happy to add more. Suggestions?

@rusu24edward

Specifically, I ran the following:

conda create -n abmarl36 python=3.6
conda activate abmarl36
git clone https://github.com/LLNL/Abmarl/
cd Abmarl
pip install -r requirements.txt
pip install -e . --no-deps
abmarl train examples/multi_corridor_example.py

@seba-1511 Thanks for bringing this to my attention. I just ran this in a fresh virtual environment and did not get any error. We technically specify that Python 3.7+ is required; can you try this again with Python 3.7? A sketch of the adjusted steps is below.
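Concretely, the same steps with a Python 3.7 environment would look like this (the environment name abmarl37 is just a placeholder; everything else matches your commands):

conda create -n abmarl37 python=3.7
conda activate abmarl37
git clone https://github.com/LLNL/Abmarl/
cd Abmarl
pip install -r requirements.txt
pip install -e . --no-deps
abmarl train examples/multi_corridor_example.py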

@rusu24edward

I would just recommend adding a note on dependencies in the installation instructions and also providing links to the Python files (e.g., multi_corridor_example.py) in the tutorials.

@abhiramm7 I've added links to the tutorials as you suggested.
I wasn't sure exactly what you meant by the dependency note in the installation instructions. Please take a look at this PR and let me know if this is what you had in mind.

@drvinceknight

@whedon set 0.1.4 as version

@whedon commented Aug 23, 2021

OK. 0.1.4 is the version.

@drvinceknight

@whedon recommend-accept

whedon added the recommend-accept label on Aug 23, 2021
@whedon commented Aug 23, 2021

Attempting dry run of processing paper acceptance...

@whedon commented Aug 23, 2021

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1613/jair.3912 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@whedon commented Aug 23, 2021

👋 @openjournals/joss-eics, this paper is ready to be accepted and published.

Check final proof 👉 openjournals/joss-papers#2523

If the paper PDF and Crossref deposit XML look good in openjournals/joss-papers#2523, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.

@whedon accept deposit=true

@drvinceknight

@openjournals/joss-eics note that the branch from which the paper is to be built is abmarl-87-interface-release

@danielskatz

@drvinceknight - I'm confused - you didn't do @whedon recommend-accept from branch abmarl-87-interface-release. Why not?

@danielskatz

It appears the paper is the same in the branch and main, so I don't think it matters, but perhaps @drvinceknight or @rusu24edward could explain before we proceed? In the meantime, I'll proofread the paper (and if changes are needed, suggest them in main)

@danielskatz

👋 @rusu24edward - in any case (ignoring the branch vs main issue), there are a couple of small changes needed that I've indicated in LLNL/Abmarl#195

If the paper in main is ok, then these can just be merged and we can proceed to publication with that branch of the paper. If the paper in the other branch is different, these changes can be made there and we can publish using the paper in that branch. But again, I really don't understand what's going on with these two branches, as the paper appears the same in both, and that's the only thing that JOSS uses the branch for.

@drvinceknight

@drvinceknight - I'm confused - you didn't do @whedon recommend-accept from branch abmarl-87-interface-release. Why not?

Apologies, I didn't realise that's what I should have done. Thanks for pointing it out.

@rusu24edward commented Aug 23, 2021

It appears the paper is the same in the branch and main, so I don't think it matters, but perhaps drvinceknight or rusu24edward could explain before we proceed? In the meantime, I'll proofread the paper (and if changes are needed, suggest them in main)

@danielskatz I'm sorry for the confusion; I'll do my best to explain. Our software uses a release-branch strategy: our main branch contains active development that is not yet ready for release, and when we make a release we create a release branch off main. This review has been of our first release, which is on abmarl-87-interface-release, not the main branch. The paper is the same on both branches.
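Roughly, the workflow looks like this (a sketch of the strategy described above, not a transcript of the commands we actually ran):

# development happens on main
git checkout main
# when a release is ready, a release branch is cut from main
git checkout -b abmarl-87-interface-release
# main then continues to receive work that is not part of the release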

Submitting the changes to main is fine; I have duplicated them on the release branch.

@danielskatz

The JOSS publication is of the paper, and it includes a pointer to the full repository, so there's no need to worry about the branch in this case, I don't think. Does this make sense?

@rusu24edward

@danielskatz Yes, that makes sense.

@danielskatz

@whedon recommend-accept

@whedon commented Aug 23, 2021

Attempting dry run of processing paper acceptance...

@whedon commented Aug 23, 2021

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1613/jair.3912 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@whedon commented Aug 23, 2021

👋 @openjournals/joss-eics, this paper is ready to be accepted and published.

Check final proof 👉 openjournals/joss-papers#2525

If the paper PDF and Crossref deposit XML look good in openjournals/joss-papers#2525, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.

@whedon accept deposit=true

@danielskatz

@whedon accept deposit=true

whedon added the accepted and published labels on Aug 23, 2021
@whedon commented Aug 23, 2021

Doing it live! Attempting automated processing of paper acceptance...

@whedon commented Aug 23, 2021

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦

@whedon commented Aug 23, 2021

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.03424 joss-papers#2526
  2. Wait a couple of minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.03424
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@danielskatz

Congratulations to @rusu24edward (Edward Rusu) and co-author!!

And thanks to @drvinceknight for editing, and @seba-1511 and @abhiramm7 for reviewing!

@whedon commented Aug 23, 2021

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.03424/status.svg)](https://doi.org/10.21105/joss.03424)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.03424">
  <img src="https://joss.theoj.org/papers/10.21105/joss.03424/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.03424/status.svg
   :target: https://doi.org/10.21105/joss.03424

This is how it will look in your documentation:

[DOI badge]

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:

@rusu24edward

Thank you everyone very much for this awesome review process and publication. Looking forward to more!
