Resolve AppVeyor xfails #393
About 2/3 of the xfails and/or Windows checks can be removed completely, without any win_tol

After removing the x and y labels and legends in #823, I analyzed the number of tests on AppVeyor with failing image comparisons. Just for analysis purposes, I modified the Windows checks so that they did not lead to xfails (replacing …). Based on the passing and failing image comparisons, 55 of the checks for … can be removed completely.

Remaining 1/3 currently xfail on both Windows and Linux conda

The 31 other cases are xfailing because they contain text outside of the types of text already removed. They are xfailing with the reason IS_WINDOWS_OR_CONDA in issue #892, as they fail in both Windows AND conda environments. Freetype can address …

Linux conda and Windows conda

The matplotlib testing package has been validated on AppVeyor Miniconda and Travis Miniconda CI builds. There is currently only a conda package. Any test marked "P" below is passing, so its check for Windows and any xfail can be removed. Of the two Python distributions, conda tends to have a slightly higher RMSE, so it is shown first. Shown below are P or F (pass or fail), the test name or image, and the RMSEs if failing.

Builds for the analysis:
PyPI/pip Python: https://ci.appveyor.com/project/nickpowersys/yellowbrick/builds/25047957

P(ass)/F(ail), Test, conda RMSE, PyPI Python RMSE

tests/test_base.py
test_classifier/test_classification_report.py
test_classifier/test_confusion_matrix.py
test_classifier/test_prcurve.py
test_classifier/test_threshold.py
test_cluster/test_elbow.py
test_cluster/test_icdm.py
test_cluster/test_silhouette.py
test_contrib/test_classifier/test_boundaries.py
tests/test_features/test_jointplot.py
test_features/test_pca.py
test_features/test_radviz.py
test_features/test_rankd.py
test_features/test_rfecv.py
test_model_selection/test_learning_curve.py
test_model_selection/test_validation_curve.py
test_regressor/test_alphas.py
test_regressor/test_residuals.py
test_target/test_feature_correlation.py
test_text/test_freqdist.py
test_text/test_umap.py
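For context, the kind of platform/conda check and conditional xfail being counted here looks roughly like the sketch below. This is an illustrative reconstruction, not the exact code in the yellowbrick test suite: the flag names mirror the IS_WINDOWS_OR_CONDA reason mentioned above, and the conda detection heuristic and test name are assumptions.

```python
import os
import sys

import pytest

# Illustrative flags mirroring the checks described above; detecting conda by
# looking for a conda-meta directory under sys.prefix is an assumption.
IS_WINDOWS = sys.platform == "win32"
IS_CONDA = os.path.exists(os.path.join(sys.prefix, "conda-meta"))
IS_WINDOWS_OR_CONDA = IS_WINDOWS or IS_CONDA


@pytest.mark.xfail(
    IS_WINDOWS_OR_CONDA,
    reason="image comparison fails on Windows and conda (font rendering differences)",
)
def test_some_visualizer_image():
    # Hypothetical test body: draw the visualizer and compare against a baseline image.
    ...
```

Removing a check of this kind simply means dropping the xfail marker (or its condition) so the image comparison runs and is asserted on every platform.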
We've just settled into a mode of
In #386 we added an AppVeyor configuration, but we mostly resolved image comparison failures by marking them as xfail (thinking they were the product of different operating systems producing different types of images). In reality, a number of images have some variability and have been given increased tolerances that do not actually exercise the tests (see any tolerance of >= 10 in the code). Once we get #379 working we can start to diagnose these issues in detail, and hopefully start resolving the xfail markers to ensure our tests run on all platforms.
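One way to diagnose these failures in detail is to compare a freshly rendered figure against the stored baseline and inspect the reported RMS difference, rather than immediately raising the tolerance. A minimal sketch using matplotlib's comparison helper is below; the file paths are hypothetical and stand in for whatever baseline/actual images a given test produces.

```python
from matplotlib.testing.compare import compare_images

# Hypothetical paths: the stored baseline and the image produced by the test run.
baseline = "tests/baseline_images/test_classifier/classification_report.png"
actual = "tests/actual_images/test_classifier/classification_report.png"

# compare_images returns None when the RMS difference is within tol,
# otherwise a message describing the failure (including the RMS value).
result = compare_images(baseline, actual, tol=0.01)
print(result or "images match within tolerance")
```

Running this per failing test makes it possible to tell whether a comparison fails because of genuine cross-platform rendering differences or because the image itself varies between runs.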