Modernize DevOps Tooling #310
I've generally considered distributing tests to be a good thing. Especially as software becomes more modular, it can be helpful to test packages in situ. And I suspect packagers want to run tests as part of build checks. Other views? Also, I noticed a lot of new
Good catch on the Re: tests. Testing is a development concern, not an end-user concern. End users should never need to run tests on a distribution. The presumption is that a distribution has already been tested before the maintainers (us) distributed it. We aren't shipping built packages that weren't tested, are we? ;P So test code is extra cruft in a distribution that adds weight and provides no value (the exact code they are running was already tested!). In a world where people are compiling complicated code and bugs may be strange library compatibility issues (like old scientific C libraries), it makes sense to distribute tests for sanity. But this is because end users are also building. With Python, there is no build step. The code is good-to-go as long as we tested it first. Developers can run tests just like always with
As examples, take a look at FastAPI, pydantic, flake8, poetry, basically any standard Python package.
I can see the user-concern/developer-concern distinction -- certainly I wouldn't run MS Word tests, and I've maybe run Python tests once. But those aren't so close to research and probably get much broader testing at release time than qca. Sure, qcel is tested before pkg building :-) For my part, it's likely even been tested a couple layers downstream through qcng, psi4, qcdb (nwc, gms, cfour). But that's still not great environmental coverage. For example, devs may test on Linux, but maybe we don't generalize the paths right, so something breaks for the user on Windows. Or devs test with MKL, but users run with OpenBLAS, or worse half-MKL, half-OpenBLAS (thanks, pypi). Or devs test with latest and min-version dependencies, and an unintentional breaking change is released for a dep the next day. Or devs test with released versions and users have development versions of several packages. Users are great at installing software combos that faintly horrify developers. Most of these troubles do involve complications from the compiled code and library compatibility tangles you mentioned, not qcel itself. And some of these arguments apply less strongly to qcel than to qcengine, which is literally tying together these unwieldy projects that never expected to bump shoulders. But tests are fairly light and a great way to diagnose package vs. environment problems. Since qcarchive packages are closer to research than broader tools like fastapi, numpy is probably a better model, and it looks like it packages its tests.
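As an aside on the numpy model above: when a package does bundle its tests, they can be run straight out of site-packages with `pytest --pyargs <package>` to distinguish a broken environment from a broken package. A minimal sketch for checking whether an installed package ships a `tests` subpackage (the `ships_tests` helper is hypothetical, not part of qcel):

```python
import importlib.util


def ships_tests(package: str) -> bool:
    """Return True if the installed package bundles a `<package>.tests` subpackage."""
    try:
        # find_spec imports the parent package, then looks for the submodule spec.
        return importlib.util.find_spec(f"{package}.tests") is not None
    except ModuleNotFoundError:
        # The parent package itself is not installed.
        return False
```

If this returns True, `python -m pytest --pyargs <package>` will run the distributed tests in the user's actual environment.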
I hear you :) A few additional thoughts:
That said, you are the boss on this! If you want tests in the distributions, I'll add them back. I would propose the following solution, as I think it provides the best end-user experience for what you have in mind. Though again, if their package is broken the tests can't run, so I don't think we're really offering anything ;)
What would you like done?
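For reference, the knob being debated is roughly this (a sketch of a poetry `pyproject.toml` section; the `include`/`exclude` values are illustrative, not qcel's actual config):

```toml
[tool.poetry]
name = "qcelemental"
# Ship the runtime package in wheels/sdists...
packages = [{ include = "qcelemental" }]
# ...and either leave the test suite out (developer-concern view)
# or delete this line to ship tests with the distribution (numpy model).
exclude = ["qcelemental/tests"]
```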
Thanks for all the modernizations. This is a partial review with a few questions.
lgtm
assert "v" + qcel.util.safe_version(qcel.__version__) == qcel.__version__

def test_parse_version():
these perhaps stay, too, since the functions do.
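For context, a rough stand-in for what a `safe_version`-style normalizer does (an approximation in the spirit of setuptools' version sanitizing, not qcel's exact implementation):

```python
import re


def safe_version(version: str) -> str:
    """Normalize a version string so it is safe to embed in filenames/tags."""
    # Spaces become dots, then any run of characters outside
    # [A-Za-z0-9.] collapses to a single dash.
    version = version.strip().replace(" ", ".")
    return re.sub(r"[^A-Za-z0-9.]+", "-", version)
```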
…efined package dependencies. Removed --validate tests. Added scripts/build_docs.sh and removed Makefile for /docs. Flattened docs directory. Removed setup.py in favor of pyproject.toml and poetry build system. Removed LGTM and travis-ci code.
@loriab updates made. Tests passing! Review pls?
Woot woot! <3
Description
Modernize DevOps tooling. No changes to code functionality in qcel, only devops tooling.
Changelog description
- Added `/scripts` directory to root of project that contains scripts for testing, formatting code, and building docs.
- Moved from `setuptools` to modern `pyproject.toml` specification using `poetry` for the build backend.
- Defined package dependencies in `pyproject.toml`. Using standard library `importlib` for looking up package version in `__init__.py` file.
- Added `build_docs.sh` script to `/scripts` and removed `Makefile` from `/docs`. Flattened `/docs` file structure.
- Removed `travis-ci` code from `devtools`.
TODO
- Figure out what the `pytest --validate` thing was about. @loriab said this was OK to drop.

Did Not Do
- `qcelemental/tests/qcshema*/*.json`: The tests rely upon `/qcschema/{NAME}/` already being created, so these currently junk up the repo with a `dummy` file in them so git will keep the empty folders in the repo. Modify tests to remove these unnecessary directories to clean up the repo. Tests should never write data to the repo. Junk data should be written to `/tmp` or some other temporary in-memory buffer. If we want this output data for something (like QCSchema examples) we should have a `scripts/build_examples.sh` script, not couple the task to our tests. Separation of concerns :)

Notes
Status