I am raising this issue as part of the JOSS review over at openjournals/joss-reviews#6711, and will use it to collect any comments that pop up as I try out the code. This is from my first, non-exhaustive pass of reviewing.
The README has a hardcoded version (0.0.2) for PyPI that does not match the latest release (0.0.3).
The package does not have any continuous integration set up (the stub .travis.yml does nothing). This is not a requirement for JOSS, but it moves the package down from "Good" to "OK" on this scale: https://joss.readthedocs.io/en/latest/review_criteria.html#tests. I would strongly recommend adding a simple GitHub Actions CI if you want others to use and contribute to this code.
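For illustration only, a minimal GitHub Actions workflow along these lines would run the test suite on every push and pull request (the file path, extras name, and Python versions here are assumptions, not taken from the repository):

```yaml
# .github/workflows/ci.yml -- minimal sketch, not the project's actual workflow
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.10", "3.11"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install -e .   # install the package being tested
      - run: pip install pytest
      - run: pytest             # run the full test suite from the repo root
```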
After following the procedure in the README verbatim, 214 tests fail and 159 pass due to hardcoded paths. The package needs to be installed locally with `pip install -e .`, and the tests must be run from inside the tests directory only. Even after remedying these issues, there are still 5 test failures on my machine (details below) which seem to arise from naive floating-point comparisons:
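For reference, the usual fix for failures like these is a tolerance-based comparison rather than exact equality, e.g. via `pytest.approx` (the values below are illustrative, not taken from the failing tests):

```python
import pytest

def test_energy_close():
    computed = 0.1 + 0.2  # accumulates floating-point rounding error
    expected = 0.3
    # exact equality (computed == expected) fails here;
    # approx compares within a relative tolerance instead
    assert computed == pytest.approx(expected, rel=1e-9)
```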
There is limited documentation outside of the examples, which makes it hard to get an overview of the package and where to find certain functionality. From an initial cursory look it seems that most classes/methods have docstrings of varying usefulness, but ideally these would be combined into a searchable docs page using e.g., Sphinx or mkdocs. It is not a JOSS requirement that this is hosted online (though this is very useful) so a simple local docs build would suffice.
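As a sketch of what a local Sphinx setup might involve (the project name, paths, and theme here are assumptions), a minimal `docs/conf.py` enabling docstring extraction could look like:

```python
# docs/conf.py -- minimal Sphinx configuration sketch (paths and names assumed)
import os
import sys

# make the package importable for autodoc; layout is an assumption
sys.path.insert(0, os.path.abspath(".."))

project = "topsearch"
extensions = [
    "sphinx.ext.autodoc",   # pull API docs from docstrings
    "sphinx.ext.napoleon",  # support NumPy/Google-style docstrings
]
html_theme = "alabaster"
```

Running `sphinx-build docs docs/_build/html` against such a configuration then produces a searchable local HTML docs page.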
I was unable to run bayesian_optimisation.ipynb, dataset_roughness.ipynb, or example_function.ipynb due to an import error: `AttributeError: module 'topsearch' has no attribute 'potential'`. It looks like this module name was changed at some point. Following point 2) above, these examples should also be checked in the CI for each version.
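When a submodule has been renamed like this, a quick way to see what an installed package actually exposes is to enumerate its submodules (generic Python shown here with the stdlib `json` package as a stand-in; substitute `topsearch`):

```python
import pkgutil
import json  # stand-in package; replace with `import topsearch`

# list the submodules the installed package actually ships
submodules = sorted(m.name for m in pkgutil.iter_modules(json.__path__))
print(submodules)
```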
I tried the scripts/atomic/generate_landscape/run.py and my resulting DisconnectivityGraph.png does not match the one in expected_output at all (by my eye, at least, see below). Which should be trusted?
My disconnectivity graph output (image attached)
@ml-evs thank you very much for the detailed feedback and recommendations, it’s much appreciated. We’ve made changes and added code, and I’ve listed what we changed below, lining up with each of your previous points.
Changed to pip install topsearch to ensure the most up-to-date version is installed.
As recommended, we have added GitHub Actions CI. We now run the entire test framework for Python versions 3.10 and 3.11 on all pushes.
We have modified the hard-coded paths to be relative, allowing pytest to be called from the root directory too without failures. We have also updated the tests to avoid failures due to naive floating point comparison, and these test successes are validated in the CI. Finally, we updated the installation instructions.
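A common pattern for making test data paths independent of the working directory is to resolve them relative to the test file itself; a sketch (the `data` directory and helper name here are hypothetical, not from the repository):

```python
from pathlib import Path

# directory containing this test file, regardless of where pytest is invoked from
DATA_DIR = Path(__file__).parent / "data"

def load_expected(name: str) -> str:
    # hypothetical helper: read an expected-output fixture by file name
    return (DATA_DIR / name).read_text()
```

With paths anchored to `__file__` like this, `pytest` passes whether it is run from the repository root or from inside the tests directory.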
We have generated documentation with Sphinx, which we have used to populate GitHub pages associated with the project. These pages include an index of (and search function for) all modules, along with their attributes and methods. These GH pages are now linked in the README.
These notebooks were not correctly migrated to the open-source project, resulting in the many failures. We have now updated them so they work as intended. We have added a notebook for atomic clusters.
These disconnectivity graphs are not too dissimilar in content despite their appearance. The results in expected_output are correct and we have updated parts of the script to ensure replication of these results.
I'll keep this issue open so you can continue to collect comments in one place.