Numpy116 migration #138

Merged: 37 commits from numpy116_migration into conda-forge:master on May 24, 2019

Conversation

hmaarrfk
Contributor

@conda-forge/arm-arch I don't have rights on the bot's branch, so here goes.
Checklist

  • Used a fork of the feedstock to propose changes
  • Bumped the build number (if the version is unchanged)
  • Reset the build number to 0 (if the version changed)
  • Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
  • Ensured the license file is being packaged.

@conda-forge-linter

Hi! This is the friendly automated conda-forge-linting service.

I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.

@hmaarrfk force-pushed the numpy116_migration branch 2 times, most recently from 3c03129 to 05508b7 on March 4, 2019 00:55
@hmaarrfk
Contributor Author

hmaarrfk commented Mar 9, 2019

@conda-forge/arm-arch do any of you have time to look at these failures? Feel free to push to my branch, these are kinda beyond my skill level.

@hmaarrfk
Contributor Author

hmaarrfk commented Mar 9, 2019

@conda-forge-admin please rerender

@conda-forge-linter

Hi! This is the friendly automated conda-forge-webservice.

I tried to re-render for you, but it looks like there was nothing to do.

@conda-forge-linter

Hi! This is the friendly automated conda-forge-linting service.

I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.

@hmaarrfk
Contributor Author

hmaarrfk commented Apr 7, 2019

@tylerjereddy might you know what would have changed between numpy 1.15 and 1.16 that could cause that obscure error?

We can get numpy 1.15 to compile on our systems.

@mariusvniekerk
Member

This may require using shippable (or other native alternatives) for aarch64

@hmaarrfk
Contributor Author

hmaarrfk commented Apr 7, 2019

did you have shippable rerendering working?

# tag_name: 7

image_name: arm64v8/ubuntu
image_tag: 18.04
@isuruf (Member) commented Apr 8, 2019

Building this using ubuntu 18.04 would make the package built here incompatible with centos7, and all the arm64v8 builds using numpy would then have to be compiled with ubuntu 18.04 or newer.

Any reason not to use the run_docker_build.sh script?

Member

Mostly just testing for now, to see if the non-native build is the cause of the compilation issue. It appears that it isn't.

Contributor Author

The exact container here is beside the point. I'm just hacking here on some old build scripts I have.

Shippable needs things like sudo, and very particular containers. The original error is still there, so I've ruled out the hypothesis that it was due to qemu:

    numpy/core/src/multiarray/descriptor.c:2053:9: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
             Py_RETURN_FALSE;
             ^~~~~~~~~~~~~~~
    In file included from numpy/core/include/numpy/npy_math.h:548:0,
                     from numpy/core/src/npymath/npy_math_common.h:9,
                     from numpy/core/src/npymath/npy_math_complex.c.src:34:
    numpy/core/src/npymath/npy_math_internal.h.src: In function 'npy_cacoshf':
    numpy/core/src/npymath/npy_math_internal.h.src:482:12: internal compiler error: Segmentation fault
         return @kind@@c@(x, y);
                ^~~~~~~~~~~~~~~
    Please submit a full bug report,
    with preprocessed source if appropriate.

@hmaarrfk
Contributor Author

hmaarrfk commented Apr 8, 2019

@mariusvniekerk the build error is still there, even in shippable

    (same internal compiler error in npy_cacoshf as in the log above)

@isuruf
Member

isuruf commented Apr 8, 2019

@hmaarrfk, try unsetting CFLAGS to see if one of the default CFLAGS is causing the ICE.

@hmaarrfk
Contributor Author

hmaarrfk commented Apr 8, 2019

@isuruf do you think it is `-ftree-vectorize`?

CFLAGS=-ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O3 -pipe -I$PREFIX/include -fdebug-prefix-map=${SRC_DIR}=/usr/local/src/conda/${PKG_NAME}-${PKG_VERSION} -fdebug-prefix-map=${PREFIX}=/usr/local/src/conda-prefix
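One way to test that suggestion is to scrub the suspect flags from the inherited CFLAGS before invoking the build. This is only a sketch: the flag values are copied from the log above (env-specific `-I`/`-fdebug-prefix-map` entries omitted), and whether `-ftree-vectorize` actually triggers the ICE is the hypothesis under test, not a confirmed fact.

```shell
#!/bin/bash
# Flags as reported in the comment above, trimmed for brevity.
CFLAGS="-ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O3 -pipe"

# Drop the suspect flag and lower the optimization level before building.
CFLAGS=$(printf '%s' "$CFLAGS" | sed -e 's/-ftree-vectorize //' -e 's/-O3/-O2/')

echo "CFLAGS=$CFLAGS"
# prints: CFLAGS=-fPIC -fstack-protector-strong -fno-plt -O2 -pipe
```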

shippable.yml Outdated
env:
global:
# The BINSTAR_TOKEN secure variable. This is defined canonically in conda-forge.yml.
- secure: bkTdATvev7sVFsP62xFV2ck215nXEtH7eWXdhzRRtbzeKquSkNhTGTCoa5FcLDvAVe36w+Sv59/3/oWNyMood8pIWjHLMC5CqqLdc4NRmyyaCKWys4CLhTTurIBPFSWUilxZW1KCKv/WHOe+zQDi2o9R9lf5/MizuwThHSQOIcqeTIn4wtPzbne5MeKSW+mRCsb+l4E/Q1oY2w/mTJ+izDWkxefstZ2t8RqOxH6H20wwNOOj/1WdeztdCOtCAl99r8Aj58odGyfUMAEyw89c5HglAEPurBQs21DZbHp10NmgSLyIbukplulRUm+cQ37loT/hFfTjPUCqLEC3lu6SPw==
Contributor Author

this binstar token is wrong too.

recipe/build.sh Outdated
cat > site.cfg <<EOF
[DEFAULT]
libraries = blas,cblas,lapack
library_dirs = $PREFIX/lib
include_dirs = $PREFIX/include
EOF

# CFLAGS=-ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O3 -pipe -I$PREFIX/include -fdebug-prefix-map=${SRC_DIR}=/usr/local/src/conda/${PKG_NAME}-${PKG_VERSION} -fdebug-prefix-map=${PREFIX}=/usr/local/src/conda-prefix
CFLAGS="$CFLAGS -O2"
CPPFLAGS="$CPPFLAGS -O2"
Member

Can you do this only for aarch64? And also try going back to azure.

Contributor Author

mostly done. gotta see what MACHINE is for aarch64

@hmaarrfk
Contributor Author

hmaarrfk commented Apr 8, 2019

@conda-forge-admin, please rerender

@hmaarrfk
Contributor Author

hmaarrfk commented Apr 8, 2019

Better, thanks! Why do you like to remove `set -ex`? It is so useful for sh noobs like me.

@isuruf
Member

isuruf commented Apr 8, 2019

`set -e` is the default when conda-build invokes the script. `set -x` doesn't do anything other than echo the commands.
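The difference is easy to see in a toy script (a sketch, not from the feedstock): `-e` aborts on the first failing command, while `-x` only prints each command, with its quoting resolved, before running it.

```shell
#!/bin/bash
set -ex                # -e: exit on the first error; -x: trace each command

greeting="hello world"
echo "$greeting"       # the -x trace shows the quoting: + echo 'hello world'

# Under `set -e` a bare `false` would abort the script; `|| true` masks it.
false || true
echo "still running"
```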

@hmaarrfk
Contributor Author

hmaarrfk commented Apr 8, 2019

I like to echo ;). It teaches me when I need to add quotes.

@hmaarrfk
Contributor Author

hmaarrfk commented Apr 8, 2019

I'll delete shippable stuff when the build passes.

@isuruf
Member

isuruf commented Apr 8, 2019

I like to echo ;). It teaches me when I need to add quotes.

Sure. Please add it for the other branches as well; I'd like to keep the build script the same across branches.

@hmaarrfk
Contributor Author

hmaarrfk commented Apr 8, 2019

@isuruf seems like it could be an issue with char *

numpy/numpy#12862

I mean, for now, you already reverted the `set -x`, so I won't take the effort for something that works.

@isuruf
Member

isuruf commented Apr 8, 2019

@conda-forge-admin, rerender

@hmaarrfk
Contributor Author

hmaarrfk commented Apr 8, 2019

what changed?

@isuruf
Member

isuruf commented Apr 8, 2019

what changed?

target_platform was used and needs a rerender for that variable to be registered as used. I'm not sure why conda-build's default value was not used
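With `target_platform` registered, the build script can guard the workaround so only aarch64 is affected. A sketch (the default value here exists only so the snippet runs standalone; the real variable is exported by conda-build):

```shell
#!/bin/bash
# `target_platform` is exported by conda-build; default it only so this
# sketch runs outside a conda-build environment.
target_platform="${target_platform:-linux-64}"

if [ "$target_platform" = "linux-aarch64" ]; then
    # Work around the gcc ICE seen above: drop -ftree-vectorize, use -O2.
    CFLAGS=$(printf '%s' "$CFLAGS" | sed -e 's/-ftree-vectorize //' -e 's/-O3/-O2/')
fi

echo "building for $target_platform"
```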

recipe/build.sh Outdated
@@ -10,7 +10,7 @@ include_dirs = $PREFIX/include
EOF

# CFLAGS=-ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O3 -pipe -I$PREFIX/include -fdebug-prefix-map=${SRC_DIR}=/usr/local/src/conda/${PKG_NAME}-${PKG_VERSION} -fdebug-prefix-map=${PREFIX}=/usr/local/src/conda-prefix
CFLAGS="-I$PREFIX/include -fdebug-prefix-map=${SRC_DIR}=/usr/local/src/conda/${PKG_NAME}-${PKG_VERSION} -fdebug-prefix-map=${PREFIX}=/usr/local/src/conda-prefix -O2"
CFLAGS="$CFLAGS -O2"
Member

-ftree-vectorize might still be an issue

@tylerjereddy

@hmaarrfk Not sure what's going on here from a very quick look. Is there something specific you want from the NumPy side of things? We have Shippable ARMv8 runs in our CI, and I've even built OpenBLAS on ARMv8 to hopefully better match wheels on other platforms in the future.

Just scanning through: are there specific flags we should test for / constrain on, that are needed for ARM sometimes but not currently used in our CI?

@hmaarrfk
Contributor Author

Can somebody with knowledge of PPC64LE comment on whether or not this is a real error, or a bug with the tests?

___________ TestComplexFunctions.test_loss_of_precision[complex256] ____________

self = <numpy.core.tests.test_umath.TestComplexFunctions object at 0x3fff78526310>
dtype = <type 'numpy.complex256'>

    @pytest.mark.parametrize('dtype', [np.complex64, np.complex_, np.longcomplex])
    def test_loss_of_precision(self, dtype):
        """Check loss of precision in complex arc* functions"""
    
        # Check against known-good functions
    
        info = np.finfo(dtype)
        real_dtype = dtype(0.).real.dtype
        eps = info.eps
    
        def check(x, rtol):
            x = x.astype(real_dtype)
    
            z = x.astype(dtype)
            d = np.absolute(np.arcsinh(x)/np.arcsinh(z).real - 1)
            assert_(np.all(d < rtol), (np.argmax(d), x[np.argmax(d)], d.max(),
                                      'arcsinh'))
    
            z = (1j*x).astype(dtype)
            d = np.absolute(np.arcsinh(x)/np.arcsin(z).imag - 1)
            assert_(np.all(d < rtol), (np.argmax(d), x[np.argmax(d)], d.max(),
                                      'arcsin'))
    
            z = x.astype(dtype)
            d = np.absolute(np.arctanh(x)/np.arctanh(z).real - 1)
            assert_(np.all(d < rtol), (np.argmax(d), x[np.argmax(d)], d.max(),
                                      'arctanh'))
    
            z = (1j*x).astype(dtype)
            d = np.absolute(np.arctanh(x)/np.arctan(z).imag - 1)
            assert_(np.all(d < rtol), (np.argmax(d), x[np.argmax(d)], d.max(),
                                      'arctan'))
    
        # The switchover was chosen as 1e-3; hence there can be up to
        # ~eps/1e-3 of relative cancellation error before it
    
        x_series = np.logspace(-20, -3.001, 200)
        x_basic = np.logspace(-2.999, 0, 10, endpoint=False)
    
        if dtype is np.longcomplex:
            # It's not guaranteed that the system-provided arc functions
            # are accurate down to a few epsilons. (Eg. on Linux 64-bit)
            # So, give more leeway for long complex tests here:
            # Can use 2.1 for > Ubuntu LTS Trusty (2014), glibc = 2.19.
>           check(x_series, 50.0*eps)

../_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pl/lib/python2.7/site-packages/numpy/core/tests/test_umath.py:2596: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

x = array([1.00000000e-20, 1.21736864e-20, 1.48198641e-20, 1.80412378e-20,
       ...871e-04, 6.73218092e-04, 8.19554595e-04, 9.97700064e-04],
      dtype=float128)
rtol = 1.23259516440783094595582588325435e-30

    def check(x, rtol):
        x = x.astype(real_dtype)
    
        z = x.astype(dtype)
        d = np.absolute(np.arcsinh(x)/np.arcsinh(z).real - 1)
        assert_(np.all(d < rtol), (np.argmax(d), x[np.argmax(d)], d.max(),
>                                 'arcsinh'))
E       AssertionError: (131, 1.54987431981169249551435343964035e-09, 4.0035173453529620614007210953362e-19, 'arcsinh')

@hmaarrfk
Contributor Author

Well, if somebody can confirm that the ppc64le failures are OK ones: I think I just included the patch in the recipe, and the selector would just have to be changed.

@tylerjereddy

@hmaarrfk Are you using genuine ppc64le architecture? I don't see shippable in the CI matrix for this PR, and we don't see the failure on NumPy master with ppc64le / shippable.

That said, I do see the exact failure you mention when I run the NumPy test suite on WSL (Windows Subsystem for Linux), so I wouldn't be surprised if some other kind of "emulation" experiences a similar failure.

I'm sure floating point calculation experts know far more about this stuff.

@hmaarrfk
Contributor Author

@tylerjereddy I think marius got native PPC64LE running on travis
https://travis-ci.org/conda-forge/numpy-feedstock/builds/525758989

Any idea why windows is randomly failing?

@tylerjereddy

Oh, right, I'm confusing ppc64le and ARM. I don't think we see that on master either, though, since we do the Travis ppc64le testing too.

@hmaarrfk
Contributor Author

Yeah, that's why I'm half worried. I'll try to see if any PRs were made affecting that in master.

Do you mind backporting
numpy/numpy#12709
just to see the results on 1.16.3?

@hmaarrfk
Contributor Author

This seems to be the most recent backport that touches that part of the code.
numpy/numpy#8566
numpy/numpy#12378

@hmaarrfk
Contributor Author

hmaarrfk commented May 6, 2019

@conda-forge-admin please rerender

@conda-forge-linter

Hi! This is the friendly automated conda-forge-webservice.

I tried to re-render for you but ran into some issues, please ping conda-forge/core for further assistance.

@hmaarrfk
Contributor Author

Is this OK?
self = <numpy.core.tests.test_einsum.TestEinsum object at 0x3fff8be3c8d0>

    def test_einsum_sums_cfloat64(self):
>       self.check_einsum_sums('c8')

../_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pl/lib/python3.6/site-packages/numpy/core/tests/test_einsum.py:561: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <numpy.core.tests.test_einsum.TestEinsum object at 0x3fff8be3c8d0>
dtype = 'c8', do_opt = False

    def check_einsum_sums(self, dtype, do_opt=False):
        # Check various sums.  Does many sizes to exercise unrolled loops.
    
        # sum(a, axis=-1)
        for n in range(1, 17):
            a = np.arange(n, dtype=dtype)
            assert_equal(np.einsum("i->", a, optimize=do_opt),
                         np.sum(a, axis=-1).astype(dtype))
            assert_equal(np.einsum(a, [0], [], optimize=do_opt),
                         np.sum(a, axis=-1).astype(dtype))
    
        for n in range(1, 17):
            a = np.arange(2*3*n, dtype=dtype).reshape(2, 3, n)
            assert_equal(np.einsum("...i->...", a, optimize=do_opt),
                         np.sum(a, axis=-1).astype(dtype))
            assert_equal(np.einsum(a, [Ellipsis, 0], [Ellipsis], optimize=do_opt),
                         np.sum(a, axis=-1).astype(dtype))
    
        # sum(a, axis=0)
        for n in range(1, 17):
            a = np.arange(2*n, dtype=dtype).reshape(2, n)
            assert_equal(np.einsum("i...->...", a, optimize=do_opt),
                         np.sum(a, axis=0).astype(dtype))
            assert_equal(np.einsum(a, [0, Ellipsis], [Ellipsis], optimize=do_opt),
                         np.sum(a, axis=0).astype(dtype))
    
        for n in range(1, 17):
            a = np.arange(2*3*n, dtype=dtype).reshape(2, 3, n)
            assert_equal(np.einsum("i...->...", a, optimize=do_opt),
                         np.sum(a, axis=0).astype(dtype))
            assert_equal(np.einsum(a, [0, Ellipsis], [Ellipsis], optimize=do_opt),
                         np.sum(a, axis=0).astype(dtype))
    
        # trace(a)
        for n in range(1, 17):
            a = np.arange(n*n, dtype=dtype).reshape(n, n)
            assert_equal(np.einsum("ii", a, optimize=do_opt),
                         np.trace(a).astype(dtype))
            assert_equal(np.einsum(a, [0, 0], optimize=do_opt),
                         np.trace(a).astype(dtype))
    
        # multiply(a, b)
        assert_equal(np.einsum("..., ...", 3, 4), 12)  # scalar case
        for n in range(1, 17):
            a = np.arange(3 * n, dtype=dtype).reshape(3, n)
            b = np.arange(2 * 3 * n, dtype=dtype).reshape(2, 3, n)
            assert_equal(np.einsum("..., ...", a, b, optimize=do_opt),
                         np.multiply(a, b))
            assert_equal(np.einsum(a, [Ellipsis], b, [Ellipsis], optimize=do_opt),
                         np.multiply(a, b))
    
        # inner(a,b)
        for n in range(1, 17):
            a = np.arange(2 * 3 * n, dtype=dtype).reshape(2, 3, n)
            b = np.arange(n, dtype=dtype)
>           assert_equal(np.einsum("...i, ...i", a, b, optimize=do_opt), np.inner(a, b))
E           AssertionError: 
E           Arrays are not equal
E           
E           Mismatch: 50%
E           Max absolute difference: 28.
E           Max relative difference: 0.056
E            x: array([[ 204.+0.j,  528.+0.j,  852.+0.j],
E                  [1176.+0.j, 1500.+0.j, 1824.+0.j]], dtype=complex64)
E            y: array([[ 204.+0.j,  500.+0.j,  852.+0.j],
E                  [1148.+0.j, 1500.+0.j, 1796.+0.j]], dtype=complex64)
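For reference, the failing case boils down to this standalone check (a sketch using n = 4; on a healthy build the `einsum` result matches `np.inner` and the assertion passes, while on the affected build above it would not):

```shell
python3 - <<'EOF'
import numpy as np

# Same shapes and dtype as the failing inner(a, b) loop in check_einsum_sums.
a = np.arange(2 * 3 * 4, dtype='c8').reshape(2, 3, 4)
b = np.arange(4, dtype='c8')

result = np.einsum("...i, ...i", a, b)
expected = np.inner(a, b)
assert np.allclose(result, expected), (result, expected)
print("einsum inner-product check passed")
EOF
```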

@hmaarrfk
Contributor Author

@conda-forge-admin, please rerender

@conda-forge-linter

Hi! This is the friendly automated conda-forge-webservice.

I tried to re-render for you, but it looks like there was nothing to do.

@hmaarrfk
Contributor Author

Sigh, the aarch64 build got stuck, but I'll save CI resources and just leave it at that.

@hmaarrfk
Contributor Author

@conda-forge/arm-arch does anybody want to give this a last review and a good push?

@mariusvniekerk merged commit cb7e8ac into conda-forge:master on May 24, 2019
@hmaarrfk
Contributor Author

I guess we just update the pinning in recipes to numpy 1.16 and call it a day?
