numpy.ufunc size changed, may indicate binary incompatibility. Expected 216 from C header, got 192 from PyObject. #3655
The new NumPy release is tripping a lot of people up. Please downgrade your NumPy to a previous version, or use conda-forge for now. We are planning a new release for later today. See also #3649
The new release of skimage fixed it, thanks. Not sure of the cause, but I was not using the latest release of numpy.
Interesting. Maybe you can tell us more about how you installed numpy, scikit-image, and pytorch if it happens again. Things like OS, Python version, and pip vs. conda are important.
FYI: Ubuntu 18.04, Python 3.7 via the latest Miniconda.
pip install numpy
conda install pytorch-cpu -c pytorch
Then compile skimage from source according to the instructions.
Pytorch is not available on pip. If I took out the pytorch install, then skimage worked. The version of pytorch is the same one that previously worked, but it installs dependencies that have changed, including numpy (though not the latest version).
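A quick way to narrow down which install step broke things is to probe each compiled package's import. This is a sketch, not part of the thread; the module names (`numpy`, `skimage`, `torch`) are taken from the report above, and the `probe` helper is mine:

```python
# Hedged sketch: try importing each compiled package and report whether it
# hits the "numpy.ufunc size changed" ABI error, so you can see which
# install step in a mixed pip/conda environment introduced the mismatch.
import importlib

def probe(module_name):
    """Return a short status string for importing `module_name`."""
    try:
        importlib.import_module(module_name)
        return "ok"
    except ValueError as exc:  # Cython raises ValueError on an ABI size mismatch
        return f"ABI error: {exc}"
    except ImportError:
        return "not installed"

for name in ("numpy", "skimage", "torch"):
    print(f"{name}: {probe(name)}")
```

Running this after each install command shows exactly when the environment goes from "ok" to "ABI error".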
Is there a particular reason why you are compiling skimage from source? The issue is likely (though not 100% certain) that you are compiling skimage against a newer version of numpy than pytorch was compiled with.
Oh yeah, also: don't install one package with pip and another with conda. That is a recipe for disaster.
Mostly, you really shouldn't install numpy via pip. If a package exists on conda, you should use that; if it doesn't, you can probably request it on conda-forge.
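One way to audit whether an environment is already mixed is to check the `INSTALLER` metadata file that pip writes for each distribution. This is a sketch (the `installers` helper is mine, and it requires Python 3.8+ for `importlib.metadata`); conda-installed packages typically record something other than "pip", or nothing at all, so the mix shows up directly:

```python
# Sketch: map each installed distribution to the tool that installed it.
# pip writes "pip" into the INSTALLER metadata file; anything else (or a
# missing file) usually means the package came from conda or elsewhere.
from importlib import metadata

def installers():
    """Map each installed distribution name to its recorded installer."""
    result = {}
    for dist in metadata.distributions():
        name = dist.metadata["Name"] or "UNKNOWN"
        installer = (dist.read_text("INSTALLER") or "unknown").strip()
        result[name] = installer or "unknown"
    return result

for name, tool in sorted(installers().items()):
    print(f"{name:30} via {tool}")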
I required the source because there was a bug in houghlines that was fixed in source but not in the release at the time (it is now!). I was trying to install everything with pip so it would all be in one install, and more things are on pip than on conda. The only ones not installed from pip were pytorch and skimage, which was from source.
Oh man, it was a bug in the pyx file, so you couldn't have just fixed it yourself. Yeah... there was some discussion about having more frequent releases; unfortunately, it doesn't seem to have gone anywhere, likely due to lack of bandwidth. I think it is reasonable to try to find that issue, or to open a new one asking for more frequent releases. Explaining your story (wanting to install pytorch and scikit-image together) is likely good enough. Pytorch now exists on pip, no?
Many of us use pip exclusively, actually; so use whatever the default installer of your environment is. Try to stick to "all conda" or "all pip".
Consistency is more what I was looking for. Thanks for making that clear, @stefanv.
I just tried doing it all in conda. The first difference is that it takes longer, or seems to, because nothing happens for ages. The second is that it downgrades Python from 3.7 to 2.7, and since I am doing this in one line for a Docker build, it is unclear which of the 30+ packages is causing that. And the latest skimage release isn't available on conda-forge yet!
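One way to stop the solver from silently downgrading the interpreter is to pin `python` explicitly in the same transaction. This is only a sketch: the environment name is made up, and the package names are the ones from this thread.

```shell
# Pin the interpreter so the solver cannot downgrade it, and request
# everything in one transaction so conflicts surface as an error
# instead of a silent downgrade.
conda create -n imgproc -c conda-forge python=3.7 numpy scikit-image pytorch-cpu
```

If the solve fails, conda reports which packages conflict with `python=3.7`, which answers the "which of the 30+ packages is doing that" question directly.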
It might be because you have a dependency that only exists on Python 2.7; enum34, for example, is one of these. conda knows that it is being slow these days; they are trying to fix it.
As of now: I tried rolling numpy back to 1.14.5 and several other versions, but none worked. Upgrading to 1.16.1 finally fixed it: pip install numpy==1.16.1
This works.
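The fix reported above amounts to requiring numpy 1.16.1 or newer. A minimal sketch of a version guard (the function names are mine, not a real skimage API):

```python
# Minimal sketch: fail fast with a clear message when the installed NumPy
# predates the ABI fix reported in this thread.
MINIMUM = (1, 16, 1)

def version_tuple(version_string):
    """'1.16.1' -> (1, 16, 1); ignores any segments past the third."""
    return tuple(int(part) for part in version_string.split(".")[:3])

def require_numpy(installed_version):
    """Raise ImportError if `installed_version` is older than MINIMUM."""
    if version_tuple(installed_version) < MINIMUM:
        raise ImportError(
            "numpy>=1.16.1 required, found " + installed_version
        )

require_numpy("1.16.1")  # passes silently
# require_numpy("1.14.5") would raise ImportError
```

Note that plain tuple comparison handles shorter versions too: `(1, 16) < (1, 16, 1)` is true, so a bare "1.16" is correctly rejected.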
Thank you! It helped.
NumPy changed the ``numpy.ufunc`` size, which could lead to inconsistencies with other (compiled) packages. The issue is supposed to be fixed for versions 1.16.1 and later. scikit-image/scikit-image#3655 (comment)
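For the curious, the two numbers in the error message can be traced from Python. The "got ... from PyObject" value is the runtime `numpy.ufunc` type's `tp_basicsize`, visible as `__basicsize__`; the "expected ... from C header" value was baked into the extension when it was compiled against a different NumPy. A small sketch (guarded in case numpy is absent):

```python
# Sketch: inspect the runtime size that Cython-compiled extensions compare
# against the size recorded at their own compile time. A mismatch between
# the two produces the "numpy.ufunc size changed" ValueError.
try:
    import numpy
    runtime_size = type(numpy.add).__basicsize__  # type(numpy.add) is numpy.ufunc
    print("numpy.ufunc tp_basicsize at runtime:", runtime_size)
except ImportError:
    print("numpy is not installed in this environment")
```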
Hi everyone, I'm getting the error below and need your help:
File "init.pxd", line 918, in init sklearn.utils.murmurhash
ValueError: numpy.ufunc size changed, may indicate binary incompatibility. Expected 216 from C header, got 192 from PyObject
I'm using absl-py 0.8.1, and it happens when running from sklearn.model_selection import train_test_split.
Any success in making this work?
@mdtahsinasif @juniorojha This is the scikit-image issue tracker, not scikit-learn. Having said that, I expect that you need to either upgrade your scikit-image/learn or downgrade your NumPy.
If that is the case, then what should we do when a package is not available in conda but exists in pip?
This worked.
As a last resort, you install it with pip. @hmaarrfk did not mean that as a hard rule but as a guideline. Ultimately, mixing pip and conda can lead to incompatible compiled binary code (code like NumPy that is not written in Python but rather in C with Python "bindings", meaning ways to access the C code), at which point you usually have to start over with a fresh environment.
Thanks for better explaining my thoughts, jni. We really want to do our best to help people still experiencing a similar issue. Opening a new issue on GitHub is the best way to do that. Feel free to open them!
It works.
Hi all, I just encountered the opposite problem. For the record, I am running macOS Catalina (v10.15.7) with Python 3.8.3, numpy 1.21.2, and sklearn 0.23.1.
Help! I am installing from source because I need a bug fix for houghlines that is not in the latest release.
I get the message "Expected 216 from C header, got 192 from PyObject."
Possibly it is a numpy incompatibility. However, I was also getting the reverse message, "Expected 192 but got 216," for bcolz until I moved it later in the install.
How can I install bcolz, numpy, pytorch, and skimage at the same time?