Appveyor: enable builds #7616

Merged
merged 10 commits into scipy:master from the appveyor branch on Aug 12, 2017

Conversation

8 participants
@ghost

ghost commented Jul 17, 2017

No description provided.

@ghost

ghost commented Jul 17, 2017

Does anyone know why the test output is only on stderr?

@ghost

ghost commented Jul 17, 2017

Also it seems quite verbose!

@ghost

ghost commented Jul 17, 2017

related: #7597 and numpy/numpy#9429

@ghost

ghost commented Jul 17, 2017

3 failures:

[00:11:25] ======================================================================
[00:11:25] ERROR: test_bsplines.TestBSpline.test_ctor
[00:11:25] ----------------------------------------------------------------------
[00:11:25] Traceback (most recent call last):
[00:11:25]   File "C:\Python36-x64\lib\site-packages\nose\case.py", line 198, in runTest
[00:11:25]     self.test(*self.arg)
[00:11:25]   File "C:\Python36-x64\lib\site-packages\scipy\interpolate\tests\test_bsplines.py", line 24, in test_ctor
[00:11:25]     assert_raises(ValueError, BSpline, **dict(t=[1, np.nan], c=[1.], k=0))
[00:11:25]   File "C:\Python36-x64\lib\site-packages\numpy\testing\utils.py", line 1190, in assert_raises
[00:11:25]     return nose.tools.assert_raises(*args,**kwargs)
[00:11:25]   File "C:\Python36-x64\lib\unittest\case.py", line 728, in assertRaises
[00:11:25]     return context.handle('assertRaises', args, kwargs)
[00:11:25]   File "C:\Python36-x64\lib\unittest\case.py", line 177, in handle
[00:11:25]     callable_obj(*args, **kwargs)
[00:11:25]   File "C:\Python36-x64\lib\site-packages\scipy\interpolate\_bsplines.py", line 210, in __init__
[00:11:25]     if (np.diff(self.t) < 0).any():
[00:11:25] RuntimeWarning: invalid value encountered in less
[00:11:25] 
[00:11:25] ======================================================================
[00:11:25] ERROR: test_slsqp.TestSLSQP.test_bounds_clipping
[00:11:25] ----------------------------------------------------------------------
[00:11:25] Traceback (most recent call last):
[00:11:25]   File "C:\Python36-x64\lib\site-packages\nose\case.py", line 198, in runTest
[00:11:25]     self.test(*self.arg)
[00:11:25]   File "C:\Python36-x64\lib\site-packages\scipy\optimize\tests\test_slsqp.py", line 398, in test_bounds_clipping
[00:11:25]     sol = minimize(f, [10], method='slsqp', bounds=[(None, 0)])
[00:11:25]   File "C:\Python36-x64\lib\site-packages\scipy\optimize\_minimize.py", line 477, in minimize
[00:11:25]     constraints, callback=callback, **options)
[00:11:25]   File "C:\Python36-x64\lib\site-packages\scipy\optimize\slsqp.py", line 341, in _minimize_slsqp
[00:11:25]     bnderr = bnds[:, 0] > bnds[:, 1]
[00:11:25] RuntimeWarning: invalid value encountered in greater
[00:11:25] 
[00:11:25] ======================================================================
[00:11:25] ERROR: test_slsqp.TestSLSQP.test_inconsistent_linearization
[00:11:25] ----------------------------------------------------------------------
[00:11:25] Traceback (most recent call last):
[00:11:25]   File "C:\Python36-x64\lib\site-packages\nose\case.py", line 198, in runTest
[00:11:25]     self.test(*self.arg)
[00:11:25]   File "C:\Python36-x64\lib\site-packages\scipy\optimize\tests\test_slsqp.py", line 349, in test_inconsistent_linearization
[00:11:25]     method='SLSQP')
[00:11:25]   File "C:\Python36-x64\lib\site-packages\scipy\optimize\_minimize.py", line 477, in minimize
[00:11:25]     constraints, callback=callback, **options)
[00:11:25]   File "C:\Python36-x64\lib\site-packages\scipy\optimize\slsqp.py", line 341, in _minimize_slsqp
[00:11:25]     bnderr = bnds[:, 0] > bnds[:, 1]
[00:11:25] RuntimeWarning: invalid value encountered in greater
[00:11:25] 
[00:11:25] ----------------------------------------------------------------------
[00:11:25] Ran 21723 tests in 253.107s
[00:11:25] 
[00:11:25] FAILED (KNOWNFAIL=90, SKIP=1426, errors=3)
[00:11:25] 
[00:11:25] Running unit tests for scipy
[00:11:25] NumPy version 1.13.1
[00:11:25] NumPy relaxed strides checking option: True
[00:11:25] NumPy is installed in C:\Python36-x64\lib\site-packages\numpy
[00:11:25] SciPy version 1.0.0.dev0+1017c8f
[00:11:25] SciPy is installed in C:\Python36-x64\lib\site-packages\scipy
[00:11:25] Python version 3.6.1 (v3.6.1:69c0db5, Mar 21 2017, 18:41:36) [MSC v.1900 64 bit (AMD64)]
[00:11:25] nose version 1.3.7
[00:11:25] too many axes: 2 (effrank=2), expected rank=1
[00:11:25] too many axes: 2 (effrank=2), expected rank=1
[00:11:25] 0-th dimension must be fixed to 3 but got 15
[00:11:25] 0-th dimension must be fixed to 3 but got 5
[00:11:25] 0-th dimension must be fixed to 4 but got 2
[00:11:25] 0-th dimension must be fixed to 4 but got 2
[00:11:25] 0-th dimension must be fixed to 4 but got 2
[00:11:25] 0-th dimension must be fixed to 4 but got 2
[00:11:25] 0-th dimension must be fixed to 4 but got 2
[00:11:25] 0-th dimension must be fixed to 4 but got 2
[00:11:25] 0-th dimension must be fixed to 4 but got 2
[00:11:25] 0-th dimension must be fixed to 4 but got 2
[00:11:25] 0-th dimension must be fixed to 4 but got 2
[00:11:25] 0-th dimension must be fixed to 4 but got 2
[00:11:25] 
[00:11:25] 
[00:11:25] 
[00:11:25] Command executed with exception: too many axes: 2 (effrank=2), expected rank=1
[00:11:25] too many axes: 2 (effrank=2), expected rank=1
[00:11:25] 0-th dimension must be fixed to 3 but got 15
[00:11:25] 0-th dimension must be fixed to 3 but got 5
[00:11:25] 0-th dimension must be fixed to 4 but got 2
[00:11:25] 0-th dimension must be fixed to 4 but got 2
[00:11:25] 0-th dimension must be fixed to 4 but got 2
[00:11:25] 0-th dimension must be fixed to 4 but got 2
[00:11:25] 0-th dimension must be fixed to 4 but got 2
[00:11:25] 0-th dimension must be fixed to 4 but got 2
[00:11:25] 0-th dimension must be fixed to 4 but got 2
[00:11:25] 0-th dimension must be fixed to 4 but got 2
[00:11:25] 0-th dimension must be fixed to 4 but got 2
[00:11:25] 0-th dimension must be fixed to 4 but got 2
[00:11:25] 
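All three errors above trace back to the same pattern: a NaN reaching an array comparison (`np.diff(self.t) < 0`, `bnds[:, 0] > bnds[:, 1]`) while the test harness escalates floating-point warnings to errors. A minimal sketch of that pattern, with the warning silenced explicitly via `np.errstate` (the knot vector is invented for illustration, echoing `test_ctor`):

```python
import numpy as np

# A NaN in a knot vector, as in test_ctor above.
t = np.array([1.0, np.nan])

# On some NumPy builds, comparing NaN emits "invalid value encountered
# in less" under the default error state; silencing it makes the check
# behave the same everywhere.
with np.errstate(invalid="ignore"):
    decreasing = bool((np.diff(t) < 0).any())

print(decreasing)  # NaN compares as False, so no decrease is detected
```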
+ 'superlu_src',
+ 'sc_c_misc',
+ 'sc_cephes',
+ ]

@ghost

ghost Jul 17, 2017

The build_clib libraries are reordered so that Fortran libs can be linked against their dependencies.
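A hypothetical sketch of the reordering idea (the dependency map is invented for illustration; only the library names echo real scipy libs): emit each library only after everything it links against, via a depth-first walk.

```python
def order_libs(libs, deps):
    """Return libs reordered so each library follows its dependencies."""
    ordered, seen = [], set()

    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in deps.get(name, []):  # link dependencies first
            visit(dep)
        ordered.append(name)

    for name in libs:
        visit(name)
    return ordered

print(order_libs(["odepack", "quadpack", "mach"],
                 {"odepack": ["mach"], "quadpack": ["mach"]}))
# → ['mach', 'odepack', 'quadpack']
```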

+ '-Wl,--enable-auto-import',
+ ],
+ debug=self.debug)
+

@ghost

ghost Jul 17, 2017

Here we may be building C, Fortran, or both. If we need to build Fortran, then we need to generate a Fortran DLL and a shared lib (see numpy/numpy#9427).

+ libs += [lib_name + '_gfortran.lib']
+ self.dlls.append(os.path.join(
+ self.build_clib, lib_name + '_gfortran.dll'))
+

@ghost

ghost Jul 17, 2017

Each time we build a DLL, we need to keep track of it so that dependent Fortran code can link against the DLL. However, Fortran code cannot call C code.

build_clib = self.get_finalized_command('build_clib')
self.library_dirs.append(build_clib.build_clib)
+ self.dlls.extend(build_clib.dlls)
+ self.build_clib = build_clib.build_clib

@ghost

ghost Jul 17, 2017

We need to be able to link fortran code built here to potentially any DLL built in build_clib.
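The hunk above has build_ext pull state out of the already-finalized build_clib command. A toy sketch of that hand-off (these classes are stand-ins for illustration, not the real distutils commands):

```python
class BuildClib:
    """Stand-in for the finalized build_clib command."""
    def __init__(self):
        self.build_clib = "build/temp/clib"          # where libs were built
        self.dlls = ["odepack_gfortran.dll"]         # DLLs produced so far


class BuildExt:
    """Stand-in for build_ext, consuming build_clib's state."""
    def __init__(self, clib):
        self.library_dirs = [clib.build_clib]  # search the built libs here
        self.dlls = list(clib.dlls)            # remember DLLs to ship later


ext = BuildExt(BuildClib())
print(ext.library_dirs, ext.dlls)
```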

+ '-Wl,--enable-auto-import',
+ ],
+ debug=self.debug)
+

@ghost

ghost Jul 17, 2017

Again, if there is Fortran code, then we need to build it into a separate DLL and then generate a lib for MSVC.

+ os.makedirs(dll_folder)
+ for dll in dlls:
+ shutil.copy(dll, dll_folder)
+

@ghost

ghost Jul 17, 2017

We need to copy all of the DLLs that we have built, plus the mingw DLLs, plus the OpenBLAS DLL, so that our extensions can access them.
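The copy step in the hunk above can be sketched as a small self-contained helper (paths and DLL names here are made up for the demo; empty files stand in for real DLLs):

```python
import os
import shutil
import tempfile


def collect_dlls(dlls, dll_folder):
    """Copy every built DLL into one folder shipped with the package."""
    os.makedirs(dll_folder, exist_ok=True)
    for dll in dlls:
        shutil.copy(dll, dll_folder)
    return sorted(os.listdir(dll_folder))


# Demo: fake DLLs in a scratch directory.
src = tempfile.mkdtemp()
names = ["odepack_gfortran.dll", "openblas.dll"]
for name in names:
    open(os.path.join(src, name), "w").close()

dll_folder = os.path.join(tempfile.mkdtemp(), ".libs")
shipped = collect_dlls([os.path.join(src, n) for n in names], dll_folder)
print(shipped)  # → ['odepack_gfortran.dll', 'openblas.dll']
```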

@ghost

ghost commented Jul 17, 2017

@rgommers It should be okay to merge this after #7613 and #7614 because it only makes changes to "appveyor" which wasn't working anyway. I would appreciate someone setting up an account.

@pv

Member

pv commented Jul 17, 2017

@larsoner

Member

larsoner commented Jul 17, 2017

@xoviat in the meantime, do you have AppVeyor set up to run on your SciPy fork? If not, you can set that up and test there first.

@ghost

ghost commented Jul 17, 2017

@Eric89GXL Yeah I posted the link above. It works for the most part; now I'm just trying to clean up numpy.distutils.

@pv

Member

pv commented Jul 17, 2017

@ghost

ghost commented Jul 17, 2017

The distributor_init stuff also shouldn't do anything by default --- or at least it should be guarded by "if sys.platform == 'win32'" etc.

Do not discuss any changes that are not in the "appveyor" or "ci" folders here. Discuss these changes on #7613 and #7614.
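A minimal sketch of such a guard (the function name and the `.libs` layout are assumptions for illustration, not the actual `_distributor_init`):

```python
import os
import sys


def init_dll_path(pkg_dir, platform=sys.platform):
    """Prepend the package's bundled-DLL folder to PATH, Windows only."""
    if platform != "win32":
        return False  # guarded out everywhere else, as suggested above
    dll_dir = os.path.join(pkg_dir, ".libs")
    os.environ["PATH"] = dll_dir + os.pathsep + os.environ.get("PATH", "")
    return True


print(init_dll_path("/tmp/pkg", platform="linux"))  # False: guard skips it
print(init_dll_path("/tmp/pkg", platform="win32"))  # True: PATH updated
```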

@@ -0,0 +1,3 @@
numpy

@pv

pv Jul 17, 2017

Member

This probably should pin the version number, so that the patch can always be applied.
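For example, the requirements entry could pin the exact version the build log reports (1.13.1 is taken from the log above; treat the pin as illustrative):

```
numpy==1.13.1
```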

@matthew-brett

Contributor

matthew-brett commented Jul 17, 2017

@xoviat - now #7613 and #7614 are merged, can you rebase these on top? Please let me know if you need help with the rebase (sorry to ask, but I know these can be ugly).

@matthew-brett

Contributor

matthew-brett commented Jul 17, 2017

Sorry - I mean when #7613 is merged.

+        if f_objects:
+            if lib_name == 'odepack':
+                f_objects = [
+                    obj for obj in f_objects
+                    if 'xsetf' not in obj and 'xsetun' not in obj]

@matthew-brett

matthew-brett Jul 17, 2017

Contributor

Sorry for my ignorance - but does it make sense to delete these objects from odepack? Are they not necessary? Or are they duplicates?

@ghost

ghost commented Jul 18, 2017

I wasn't able to get the 32-bit builds working (someone needs to look at the symbols coming out of the gfortran DLLs on 32-bit) or Python 2 (I really don't spend my time there), but that's all I have time for as of now. This can be merged with appveyor set up, and then the 32-bit issues can be fixed later.

@ghost

ghost commented Jul 18, 2017

The 32 bit builds appear to have been fixed.

@rgommers

Member

rgommers commented Jul 18, 2017

Does anyone know why the test output is only on stderr?

No idea, nose is documented to write to stdout by default. Only debug messages should be going to stderr.

@ghost

ghost commented Jul 18, 2017

@matthew-brett @pv @carlkl The DLL conflict issues have been resolved. Here are the DLLs that could be loaded by scipy:

[screenshot: list of DLL names that could be loaded by scipy]

As you can see, a name conflict would be exceedingly unlikely.

@ghost

ghost commented Jul 18, 2017

In addition, there is no longer any dependency on mingw DLLs. Only the mingwpy_v0.19 DLL, but the mingwpy people control that, so there shouldn't be issues there.

@matthew-brett

Contributor

matthew-brett commented Jul 18, 2017

The problem with the mingw DLL is that it is reasonably likely that someone will also be linking to a DLL with the same name, for the same reason. Therefore we have to hope that their copy of the DLL is the same as ours.

How about using Nathaniel's tool as a post-processing step, to rename this DLL also?

@pv

Member

pv commented Aug 12, 2017

Possibly, but I would continue this discussion separately.

I will now squash the commits in this PR, update the urls to point to rackcdn, and force-push to this contributor branch. If it passes on 64-bit, I'll merge and we continue the 32-bit discussion.

@ghost

ghost commented Aug 12, 2017

Looks ok

@pv pv merged commit df8e1cd into scipy:master Aug 12, 2017

4 checks passed

ci/circleci Your tests passed on CircleCI!
codecov/project 87.19% (target 1%)
continuous-integration/appveyor/pr AppVeyor build succeeded
continuous-integration/travis-ci/pr The Travis CI build passed
@pv

Member

pv commented Aug 12, 2017

Thanks, merged! Thanks @xoviat, @matthew-brett and everyone involved.

@pv

Member

pv commented Aug 12, 2017

@matthew-brett: we can try a new PR to swap the 32-bit BLAS library used.

I suspect the 32-bit hang at least is an issue in CDFLIB, which, at least locally according to gdb, enters an infinite loop, as the code is ancient and not written with NaNs in mind. Moreover, it has persistent state that it apparently doesn't reset properly, giving a plausible mechanism for spurious failures, e.g. due to test ordering. Not going to look at this today though.

@ghost ghost deleted the appveyor branch Aug 12, 2017

matthew-brett added a commit to matthew-brett/scipy that referenced this pull request Aug 13, 2017

matthew-brett added a commit to matthew-brett/scipy that referenced this pull request Aug 13, 2017

@carlkl

carlkl commented Aug 14, 2017

Concerning the OpenBLAS 32-bit errors:

It turns out that OpenBLAS uses some C code instead of vanilla LAPACK Fortran code for some routines (called out-of-source). This allows for parallel execution of some routines, e.g. dpotrf. Using vanilla LAPACK worked with all mingw-w64 releases I tried out; however, this is not the case for at least some of the out-of-source routines.

Compiling OpenBLAS with mingw-w64 with a recent gcc and testing with the scipy test suite seems appropriate to me in this case.

@matthew-brett

Contributor

matthew-brett commented Aug 14, 2017

Hi Carl - thanks for tracking that down further.

@matthew-brett

Contributor

matthew-brett commented Aug 14, 2017

Seconding Pauli - a big thank you to @xoviat for all your work here, and thank you too, Pauli, Carl, Eric and Ralf, for keeping this moving and getting it in. It's great to see the light at the end of the Scipy wheels tunnel.

@ghost

ghost commented Aug 14, 2017

Seconding Pauli - a big thank you to @xoviat for all your work here, and thank you too, Pauli, Carl, Eric and Ralf, for keeping this moving and getting it in. It's great to see the light at the end of the Scipy wheels tunnel.

I remember the days when you would type pip install requirements.txt and just know that something would interrupt the process (vcvarsall not found, ugh). My, things have improved since then. Pretty much the last eyesore was scipy.

On another note, I wonder whether it was really necessary to put everyone through what amounts to 17 years of packaging misery, which was directly due to the fact that the PSF could not come up with a binary standard (wheel) until over a decade after the first Python release. Oh well, I guess you can only look forward at this point.

@ghost

ghost commented Aug 14, 2017

But definitely, thanks so much for the help on this.

@carlkl

carlkl commented Aug 14, 2017

@xoviat, kudos for your innovative method to combine the MS compiler with gfortran (mingw-w64)

@matthew-brett

Contributor

matthew-brett commented Aug 14, 2017

My hope is that, once we get these wheels up, that will be a stimulus for other scientific Python packages to provide wheels. Scipy has been a big blocker for a while now.

@andyfaff

Contributor

andyfaff commented Aug 14, 2017

@andyfaff

Contributor

andyfaff commented Aug 14, 2017

My hope is, that once we get these wheels up, that will be a stimulus for other scientific Python packages to provide wheels

Enabling potential users of packages to use them without going through a whole build process (for those that need compiled extensions) is always good. The roadblocks that package authors face in making packages available are:

  1. access to build machines of the required architecture / OS
  2. a straightforward automated way of creating packages on those different architectures

At least the modern CIs make it easier to solve (1). (2) is harder to achieve and is one reason why conda-forge is growing in popularity.

@matthew-brett

Contributor

matthew-brett commented Aug 14, 2017

Sure - for conda-forge - but there is automation machinery for building OSX and manylinux wheels which many projects are using.

Up until now, there hasn't been much motivation to improve Windows wheel-building tools, because many packages need Scipy and so cannot be installed with pip. I'd imagine there will now be a lot more interest in that problem - I'm looking forward to seeing what happens next.

@matthew-brett matthew-brett referenced this pull request Aug 14, 2017

Merged

Add scipy #648

@Juanlu001

Contributor

Juanlu001 commented Aug 18, 2017

On another note, I wonder whether it was really necessary to put everyone through what amounts to 17 years of packaging misery

I'm printing this and hanging it on the wall.

@Juanlu001

Contributor

Juanlu001 commented Aug 18, 2017

Seriously, this is the most exciting infrastructure change I have seen in SciPy since I started using the project five years ago... I am so thankful to all the people who were involved in this, and also to those who wasted long hours without success. Congratulations to all, you deserve all our admiration.
