Pytest with 89% coverage #19
Conversation
POT now builds with a certain number of tests. I will merge this PR today unless somebody objects.
before_script: # configure a headless display to test plot generation
  - "export DISPLAY=:99.0"
  - "sh -e /etc/init.d/xvfb start"
  - sleep 3 # give xvfb some time to start
do you use only matplotlib? if so just use the Agg backend
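For reference, a minimal sketch of the Agg approach (the test name and body below are only illustrative, not the actual test_plot.py):

# select the backend before any pyplot import; Agg renders off-screen
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import numpy as np


def test_plot_runs_headless():
    # hypothetical smoke test: just check that plotting does not raise
    fig, ax = plt.subplots()
    ax.plot(np.arange(10), np.arange(10) ** 2)
    plt.close(fig)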
I tried to use the Agg backend here
https://github.com/rflamary/POT/blob/pytest/test/test_plot.py
but travis still failed with DISPLAY error
https://travis-ci.org/rflamary/POT/builds/256924206
maybe test_plot.py is not right, but I don't see the problem
that's weird. You seem to have done it right. Are you sure matplotlib is not imported anywhere before?
you should also nest the matplotlib imports inside the ot source tree, so that matplotlib is not imported when you do import ot
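A sketch of the nested-import pattern being suggested (the function name mirrors the plotting helper in ot.plot, but the signature and body here are placeholders):

# ot/plot.py (illustrative)
def plot1D_mat(a, b, M, title=''):
    # import matplotlib lazily, so "import ot" alone never pulls it in
    import matplotlib.pyplot as plt

    plt.figure()
    plt.imshow(M, interpolation='nearest')
    plt.title(title)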
pytest : FORCE
	python -m py.test -v test/
	python -m py.test -v test/ --cov=ot
you should have a native pytest command:
pytest -v test/ --cov=ot
Under Debian/Ubuntu, logilab-common installs a useless executable named pytest. It's a well-known bug, but it takes time to get corrected. This line ensures that the proper py.test is executed.
test/test_bregman.py
reg = 1e-3
bary_wass = ot.bregman.barycenter(A, M, reg, weights)

assert np.allclose(1, np.sum(bary_wass))
you have an assert_allclose function in numpy
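A minimal sketch of the suggested pattern (the array below is a placeholder, not the barycenter computed in the test):

import numpy as np

# assert_allclose reports both values and the tolerance on failure,
# which is much more informative than a bare assert np.allclose(...)
bary_wass = np.array([0.2, 0.3, 0.5])  # placeholder result
np.testing.assert_allclose(1.0, np.sum(bary_wass))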
@rflamary please wait. I'll do a proper review in the next 2 days.
ok you win
sklearn is not as careful
https://github.com/scikit-learn/scikit-learn/blob/master/build_tools/travis/test_script.sh#L25
test/test_bregman.py
def test_sinkhorn_empty():
    # test sinkhorn
    n = 100
    np.random.seed(0)
use a random state
rng = np.random.RandomState(42)
x = rng.randn(n, 2)
etc.
i.e. don't change the global seed.
test/test_bregman.py
G, log = ot.sinkhorn([], [], M, 1, stopThr=1e-10, verbose=True, log=True)
# check constraints
assert np.allclose(u, G.sum(1), atol=1e-05)  # cf convergence sinkhorn
assert np.allclose(u, G.sum(0), atol=1e-05)  # cf convergence sinkhorn
use np.testing.assert_allclose
it makes errors clearer than just an assert
test/test_bregman.py
# Gaussian distributions
a1 = ot.datasets.get_1D_gauss(n, m=30, s=10)  # m= mean, s= std
a2 = ot.datasets.get_1D_gauss(n, m=40, s=10)
as I was saying, in the future it should be named make_1d_gauss
test/test_bregman.py
def test_unmix():

    n = 50  # nb bins
n -> n_bins
as n can mean n_samples etc.
if you call it n_bins there is no need to write "nb bins" :)
test/test_da.py
import ot
import numpy as np
import numpy before ot
as ot depends on numpy
it's the usual convention
test/test_da.py
# import pytest


def test_OTDA():
test_OTDA -> test_otda
no caps in function names
test/test_da.py
n = 150  # nb bins

xs, ys = ot.datasets.get_data_classif('3gauss', n)
xt, yt = ot.datasets.get_data_classif('3gauss2', n)
get_data_classif -> make_classification
would be sklearn consistent.
OK, I think we should open an issue and handle that in a separate PR; this one is mainly about testing.
test/test_dr.py
import pytest

try:  # test if cudamat installed
    import ot.dr
test for what you really need to test, i.e. whether cudamat is available:

try:
    import cudamat
    has_cudamat = True
except ImportError:
    has_cudamat = False
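For illustration, a hedged sketch of how such a flag can then gate the cudamat-dependent tests (the test name and body are hypothetical, not part of this PR):

import pytest

try:
    import cudamat  # noqa: F401
    has_cudamat = True
except ImportError:
    has_cudamat = False


@pytest.mark.skipif(not has_cudamat, reason="cudamat is not installed")
def test_needs_cudamat():
    # placeholder body: the real test would exercise the cudamat-backed code
    assert has_cudamat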
OK @agramfort, I took care of most of your reviews. What remains will be opened as issues:
If the Travis build works I will merge the PR, since I introduced no new features in the toolbox, only tests.
def test_otda():

    n_samples = 150  # nb samples
    np.random.seed(0)
RandomState
the get_data_classif function should take the rng as a parameter and use it instead of np.random.randn
see the check_random_state function in sklearn
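A rough sketch of the suggested signature (the parameters and return values of get_data_classif are assumptions here; only the random_state handling is the point):

from sklearn.utils import check_random_state


def get_data_classif(dataset, n, random_state=None):
    # check_random_state accepts None, an int seed, or a RandomState instance
    rng = check_random_state(random_state)
    x = rng.randn(n, 2)        # draw from the local generator, not np.random
    y = rng.randint(0, 3, n)   # placeholder labels for the sketch
    return x, y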
def test_conditional_gradient():

    n_bins = 100  # nb bins
    np.random.seed(0)
RandomState
test/test_utils.py
l2 = ot.utils.parmap(f, a)
l2 = list(ot.utils.parmap(f, a))

assert np.allclose(l1, l2)
use np.testing.assert_allclose
test/test_utils.py
# dist should return squared euclidean
assert np.allclose(D, D2)
assert np.allclose(D, D3)
idem
test/test_utils.py
M = ot.utils.dist0(n, method='lin_square')

# dist0 defaults to linear sampling with quadratic loss
assert np.allclose(M[0, -1], (n - 1) * (n - 1))
idem and below too
ok no more nitpicks after these
OK great, thank you again. I won't handle the rng stuff in this PR; I will add it to the issue about the make_datasets function.
k
+1 for merge
OK, let's merge this PR. We now have 89% coverage of the code when all libraries are installed (cudamat, pymanopt, autograd). Also, the Makefile includes a test target that checks for PEP8 violations and runs the tests. I have created Issue #20 for the dataset function names and random state problems.
Will merge soon, since currently POT does not build.