
Test packages #1037

Merged: 9 commits into jupyter:master from test_packages on Mar 14, 2020

Conversation


@romainx romainx commented Mar 8, 2020

This test module checks that the R and Python packages installed in the images can be imported.
It's a basic test aiming to prove that each package is working properly.

The goal is to detect import errors that can be caused, for example, by incompatibilities between packages.

This module dynamically checks, through the CondaPackageHelper, only the specified packages, i.e. the packages requested by conda install in the Dockerfiles.
This means that it does not check dependencies. This choice is a trade-off: it covers the main requirements while keeping the test duration reasonable.
However, it could easily be changed (or extended) to also cover dependencies by using package_helper.installed_packages() instead of package_helper.specified_packages().
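
For illustration, here is a minimal, self-contained sketch of the import-check idea. It is not the PR's actual test code: the real test drives imports inside the running Docker container through the CondaPackageHelper and pytest fixtures, whereas this sketch uses a plain subprocess and a hypothetical name mapping.

# Minimal sketch, not the PR's implementation: the real test runs the imports
# inside the Docker container via CondaPackageHelper. The mapping below is
# hypothetical and only illustrates why one is needed (some conda package
# names differ from their Python import names).
import logging
import subprocess
import sys

LOGGER = logging.getLogger(__name__)

PACKAGE_MAPPING = {"scikit-learn": "sklearn", "pyyaml": "yaml"}  # illustrative

def check_import(package: str) -> bool:
    """Return True if the package can be imported by this interpreter."""
    module = PACKAGE_MAPPING.get(package, package)
    LOGGER.info("Trying to import %s", module)
    result = subprocess.run([sys.executable, "-c", f"import {module}"])
    return result.returncode == 0

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    specified = ["numpy", "pandas", "scikit-learn"]  # stand-in for specified_packages()
    failures = [pkg for pkg in specified if not check_import(pkg)]
    assert not failures, f"Import failure(s): {failures}"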

Example:

$ make test/datascience-notebook

# [...]
# test/test_packages.py::test_python_packages 
# --------------------------------------------------------------------------------------------- live log setup ----------------------------------------------------------------------------------------------
# 2020-03-08 09:56:04 [    INFO] Starting container jupyter/datascience-notebook ... (helpers.py:51)
# 2020-03-08 09:56:04 [    INFO] Running jupyter/datascience-notebook with args {'detach': True, 'ports': {'8888/tcp': 8888}, 'tty': True, 'command': ['start.sh', 'bash', '-c', 'sleep infinity']} ... (conftest.py:78)
# 2020-03-08 09:56:04 [    INFO] Grabing the list of specifications ... (helpers.py:76)
# ---------------------------------------------------------------------------------------------- live log call ----------------------------------------------------------------------------------------------
# 2020-03-08 09:56:07 [    INFO] Testing the import of packages ... (test_packages.py:125)
# 2020-03-08 09:56:07 [    INFO] Trying to import conda (test_packages.py:127)
# 2020-03-08 09:56:07 [    INFO] Trying to import notebook (test_packages.py:127)
# 2020-03-08 09:56:08 [    INFO] Trying to import jupyterhub (test_packages.py:127)
# [...]


@parente parente left a comment


Love it! ❤️

I left a couple of docstring change suggestions and one comment about what to test inline.

I wonder what your thoughts are on the case where a test fails because of a problem with a single upstream package, but the rest of the packages across the image(s) all work correctly. Do we hold up releasing any further changes in the images here until we resolve the problem upstream? Or do we ignore the failing test and continue releasing unrelated changes here while building images with clearly broken dependencies?

(Six inline review threads on test/test_packages.py, five of them marked outdated, all resolved.)
@romainx romainx commented Mar 9, 2020

@parente Thanks for the positive feedback and for the relevant review. 😄

To answer your question, my opinion is that the test should fail if an import error is raised -- otherwise users may hit the problem and open an issue. However, in some cases it could be useful to ignore some errors. To allow for that, I have modified the code to introduce a max_failures parameter.
It defaults to zero and can be tuned; if errors are raised but ignored, a warning is printed.

if len(failures) > max_failures:
    raise AssertionError(failures)
elif len(failures) > 0:
    LOGGER.warning(f"Some import(s) has(have) failed: {failures}")

I have committed all the changes except the fix for the package parsing bug ... so I'm flagging it as WIP.

See you.

@romainx romainx changed the title Test packages [WIP] Test packages Mar 9, 2020
@romainx romainx commented Mar 12, 2020

@parente I'm waiting for the CI to finish, but I think it's OK now.
If you agree, we are ready to merge 🎉

@romainx romainx added the status:Ready to Merge PR reviewed, up-to-date, and ready to merge label Mar 13, 2020
@parente parente commented Mar 13, 2020

Let's give it a shot! Thanks for putting this testing in place.

@romainx romainx changed the title [WIP] Test packages Test packages Mar 14, 2020
@parente parente merged commit b782013 into jupyter:master Mar 14, 2020
@romainx romainx deleted the test_packages branch March 14, 2020 19:28