
Build and test the documentation in Azure Pipelines #3854

Closed

wants to merge 4 commits

Conversation

soupault
Member

@soupault soupault commented Apr 21, 2019

Description

Implement documentation building and testing in Azure Pipelines, similarly to https://github.com/scikit-image/scikit-image/blob/master/tools/travis/script.sh.

For reviewers

  • Check that the PR title is short, concise, and will make sense 1 year
    later.
  • Check that new functions are imported in corresponding __init__.py.
  • Check that new features, API changes, and deprecations are mentioned in
    doc/release/release_dev.rst.
  • Consider backporting the PR with @meeseeksdev backport to v0.14.x

@soupault soupault added the 🤖 type: Infrastructure CI, packaging, tools and automation label Apr 21, 2019
@soupault soupault added this to the 0.16 milestone Apr 21, 2019
@soupault soupault self-assigned this Apr 21, 2019
@soupault
Member Author

The applications/plot_haar_extraction_selection_classification.py example requires the `if __name__ == '__main__'` guard on Windows, otherwise the testing falls into an infinite loop. See the Python 3.7 build here - https://dev.azure.com/scikit-image/scikit-image/_build/results?buildId=322, or this StackOverflow post -
https://stackoverflow.com/questions/20222534/python-multiprocessing-on-windows-if-name-main.
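A minimal sketch of the guard in question (not the actual example script, which is longer): on Windows, `multiprocessing` spawns workers by re-importing the module, so any module-level code that starts processes must be behind the guard.

```python
# Sketch of the if-__name__-main guard needed on Windows, where
# multiprocessing spawns workers by re-importing this module.
from multiprocessing import Pool


def square(x):
    return x * x


def main():
    # Pool.map distributes the work across worker processes.
    with Pool(2) as pool:
        print(pool.map(square, [1, 2, 3]))  # [1, 4, 9]


if __name__ == '__main__':
    # Without this guard, each spawned worker re-runs the module-level
    # code on Windows and tries to start its own pool, recursing forever.
    main()
```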

@scikit-image scikit-image deleted a comment from pep8speaks Apr 21, 2019
@soupault soupault force-pushed the azure_docs branch 2 times, most recently from dba768c to d673e8b Compare April 21, 2019 11:23
@scikit-image scikit-image deleted a comment from pep8speaks Apr 21, 2019
@soupault
Member Author

Alright! The documentation testing seems to work! I had to implement several workarounds:

  1. See the comment above in Build and test the documentation in Azure Pipelines #3854 (comment). Unfortunately, the fix breaks the rendering of the example, so I'm open to any suggestions here.
  2. doc/source/random_gallery.py used a platform-dependent path delimiter.
  3. Many tests in the example sections of the docstrings failed due to mismatched np.printoptions: for some dtypes the dtype field is present in the output, for others it is not. Maybe conftest.py is skipped on Azure for some reason; I'm not yet sure.
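A sketch of the fix for item 2 (the random_gallery.py internals are assumed here, not taken from the diff): build paths with `os.path.join` or `pathlib` instead of hard-coding `'/'`, so the code works under both POSIX and Windows separators.

```python
import os
from pathlib import Path

# Fragile: assumes '/' is the separator, which breaks string
# manipulation of paths on Windows (where os.sep is '\\').
hardcoded = "doc/source/auto_examples" + "/" + "plot_example.py"

# Portable alternatives: let the stdlib pick the separator.
joined = os.path.join("doc", "source", "auto_examples", "plot_example.py")
as_path = Path("doc") / "source" / "auto_examples" / "plot_example.py"

print(joined)
print(as_path)
```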

Otherwise, the PR is ready for review.

@soupault soupault changed the title [WIP] Build and test the documentation in Azure Pipelines Build and test the documentation in Azure Pipelines Apr 21, 2019
@@ -93,17 +93,17 @@ class BRIEF(DescriptorExtractor):
>>> extractor.extract(square2, keypoints2)
>>> descriptors2 = extractor.descriptors
>>> matches = match_descriptors(descriptors1, descriptors2)
>>> matches
>>> matches # doctest: +SKIP
Contributor


Why are these tests skipped?

# Extract all possible features to be able to select the most salient.
feature_coord, feature_type = \

if __name__ == '__main__':
Member


I'm a big -1 on this. It really confuses new users, who don't know what `__main__` means. Why do we need to do this?

Member Author


@hmaarrfk I'm still trying to figure out a better solution. The issue is described here - #3854 (comment), specifically in the StackOverflow post.
In short, on Windows multiprocessing imports the full contents of the file when spawning extra processes, which leads to starting an infinite number of processes.
So, basically, the multiprocessing code has to be wrapped somehow.

Member


sigh, ok.

@@ -64,18 +64,18 @@ class BRIEF(DescriptorExtractor):
>>> import numpy as np
>>> square1 = np.zeros((8, 8), dtype=np.int32)
>>> square1[2:6, 2:6] = 1
>>> square1
>>> square1 # doctest: +SKIP
Member


I think you should get a Linux setup and run these doctests on Linux. Linux will be able to handle the doctests and all.

Member Author


@Borda I think the conftest.py file is not being discovered by pytest on Windows. In that file we force numpy to use legacy printoptions; without it, the builds produce the newer notation (which is numerically equivalent, but visually different).
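A sketch of the kind of conftest.py setup being described (the exact scikit-image conftest may differ): forcing numpy's pre-1.14 "legacy" repr so that doctest output stays stable across numpy versions.

```python
# conftest.py sketch: pin numpy's array repr to the pre-1.14 style so
# doctests compare against one consistent output format.
import numpy as np

try:
    # 'legacy' was added in numpy 1.14; older versions raise TypeError.
    np.set_printoptions(legacy='1.13')
except TypeError:
    pass  # older numpy already prints in the 1.13 style
```

If this file is not collected by pytest (as suspected on Windows here), arrays render in the newer style, e.g. `array([0.1])` instead of the legacy `array([ 0.1])`, and the doctests fail on the textual difference.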

Member Author


@hmaarrfk on Azure, you mean? I'm not sure anyone from the core team is willing to switch from Travis to Azure at the moment. Of course, it shouldn't be an issue to have a Linux instance running, but I really want to have a Windows build with full coverage.
Unfortunately, googling "pytest, windows, conftest" doesn't yield any info. I'm going to check whether forcing conftest could be done with environment variables...

Member


We could also just test the doctests on numpy 1.14 and higher,

or just bump the minimum requirements all the way up to 1.14 ;)

Member


@soupault, it might be worthwhile to post an issue on pytest's GitHub page. I couldn't figure this out either; it could very well be a bug on Windows.

I understand that you want full test coverage on Windows, but I would argue that having the doctests be readable is more important. I've often seen my comments copied verbatim in others' code. I wouldn't want to pollute their code with # doctest: +SKIP ;)

@soupault
Member Author

The work is continued in #3873.
