
numpy.ufunc size changed, may indicate binary incompatibility. Expected 216 from C header, got 192 from PyObject. #3655

Closed
simonm3 opened this issue Jan 17, 2019 · 25 comments


@simonm3

simonm3 commented Jan 17, 2019

Help! I am installing from source because I need a bug fix for houghlines that is not in the latest release.

I get the message "Expected 216 from C header, got 192 from PyObject."

Possibly it is a numpy incompatibility. However, I was also getting the reverse message ("Expected 192, got 216") for bcolz until I moved it later in the install order.

How can I install bcolz, numpy, pytorch, skimage at the same time?

@stefanv
Member

stefanv commented Jan 17, 2019

The new NumPy release is tripping a lot of people up. Please downgrade your NumPy to a previous version, or use conda-forge for now. We are planning a new release for later today. See also #3649
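
For example, one way to pin NumPy back below the 1.16 series with pip (the exact pin here is only an illustration) is:

pip install "numpy<1.16"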

@stefanv stefanv closed this as completed Jan 17, 2019
@simonm3
Author

simonm3 commented Jan 18, 2019

The new release of skimage fixed it, thanks. I'm not sure of the cause, but I was not using the latest release of numpy.

@hmaarrfk
Member

Interesting. Maybe you can tell us more about how you installed numpy, scikit-image, and pytorch if it happens again.

Things like the OS, Python version, and pip vs. conda are important.
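
For example (only a sketch; the filter pattern is just an illustration), something like:

python --version
pip list | grep -iE "numpy|scikit-image|torch"

would give us the basics.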

@simonm3
Author

simonm3 commented Jan 18, 2019 via email

@hmaarrfk
Member

Is there a particular reason why you are compiling skimage from source?
Do the packages on conda not work?
Are you using conda-forge?

The issue is likely (though I'm not 100% sure) that you are compiling skimage with a newer version of NumPy than the one pytorch was compiled against.
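
As a quick sanity check (a sketch, not a definitive diagnosis), you can print the NumPy version in the environment where you compile skimage:

python -c "import numpy; print(numpy.__version__)"

and compare it with the NumPy version your pytorch build was compiled against.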

@hmaarrfk
Member

Oh yeah, also: don't install one package with pip and another with conda. That is a recipe for disaster.

@hmaarrfk
Member

Mostly, you really shouldn't install numpy via pip. If a package exists on conda, you should use that; if it doesn't, you can probably request it to be added to conda-forge:
https://github.com/conda-forge/staged-recipes
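
For example (a sketch; the package selection is only an illustration), an all-conda-forge install could look like:

conda install -c conda-forge numpy scikit-image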

@simonm3
Author

simonm3 commented Jan 18, 2019 via email

@hmaarrfk
Member

Oh man, it was a bug in the pyx file, so you couldn't have just fixed it yourself. Yeah...

There was some discussion about having more frequent releases. Unfortunately, it doesn't seem to have gone anywhere, likely due to lack of bandwidth.

I think it is reasonable to try and find that issue, or to open a new one asking for more frequent releases. Explaining your story (wanting to install pytorch and scikit-image) is likely good enough.

pytorch now exists on pip, no?

https://pytorch.org/

@stefanv
Member

stefanv commented Jan 18, 2019

Mostly, you really shouldn't install numpy via pip. If a package exists on conda, you should use that; if it doesn't, you can probably request it to be added to conda-forge:
https://github.com/conda-forge/staged-recipes

Many of us use pip exclusively, actually; so, use whatever the default installer of your environment is. Try to stick to "all conda" or "all pip".
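
For instance (a sketch; the environment name is arbitrary), an all-pip setup in a fresh virtual environment could look like:

python -m venv skimage-env
source skimage-env/bin/activate
pip install numpy scikit-image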

@hmaarrfk
Member

Many of us use pip exclusively, actually; so, use whatever the default installer of your environment is. Try to stick to "all conda" or "all pip".

Consistency is more what I was looking for. Thanks for making that clear @stefanv

@simonm3
Author

simonm3 commented Jan 18, 2019 via email

@hmaarrfk
Member

It might be because you have something like a dependency that only exists on Python 2.7; for example, enum34 is one of these.

The conda team knows that conda is slow these days. They are trying to fix it...

@odb2

odb2 commented Apr 18, 2019

I tried rolling back numpy to 1.14.5 and various other versions, but none worked; upgrading to 1.16.1 finally fixed it!

pip install numpy==1.16.1
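
To confirm the fix afterwards, you can check that the import now succeeds without the warning, e.g.:

python -c "import skimage; print(skimage.__version__)"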

@patsotoe

This works.

@yustiks

yustiks commented May 24, 2019

I tried rolling back numpy to 1.14.5 and various other versions, but none worked; upgrading to 1.16.1 finally fixed it!

pip install numpy==1.16.1

Thank you! It helped.

@scikit-image scikit-image locked as too heated and limited conversation to collaborators May 24, 2019
@scikit-image scikit-image unlocked this conversation May 24, 2019
lkluft added a commit to lkluft/konrad that referenced this issue Jul 30, 2019
NumPy changed the ``numpy.ufunc`` size, which could lead to inconsistencies with other (compiled) packages. The issue is supposed to be fixed for versions >=1.16.1

scikit-image/scikit-image#3655 (comment)
@mdtahsinasif

mdtahsinasif commented Dec 6, 2019

Hi everyone, I'm getting the error below and need your help:

from .murmurhash import murmurhash3_32

File "__init__.pxd", line 918, in init sklearn.utils.murmurhash

ValueError: numpy.ufunc size changed, may indicate binary incompatibility. Expected 216 from C header, got 192 from PyObject

I'm using these versions:
Package Version


absl-py 0.8.1
alabaster 0.7.12
asn1crypto 0.24.0
astor 0.8.0
astroid 1.6.5
Babel 2.7.0
backcall 0.1.0
bleach 1.5.0
certifi 2018.8.24
cffi 1.11.5
chardet 3.0.4
cloudpickle 1.2.2
colorama 0.4.1
cryptography 2.3.1
cycler 0.10.0
decorator 4.4.1
defusedxml 0.6.0
docutils 0.14
entrypoints 0.2.3
gast 0.3.2
grpcio 1.25.0
h5py 2.8.0
html5lib 0.9999999
idna 2.7
imagesize 1.1.0
ipykernel 4.10.0
ipython 6.5.0
ipython-genutils 0.2.0
isort 4.3.4
jedi 0.12.1
Jinja2 2.10.3
joblib 0.14.0
jsonschema 2.6.0
jupyter-client 5.3.3
jupyter-core 4.5.0
Keras 2.2.2
Keras-Applications 1.0.8
Keras-Preprocessing 1.1.0
keyring 13.2.1
kiwisolver 1.0.1
lazy-object-proxy 1.3.1
List 1.3.0
Markdown 3.1.1
MarkupSafe 1.0
matplotlib 3.0.0
mccabe 0.6.1
mistune 0.8.3
nbconvert 5.5.0
nbformat 4.4.0
numpy 1.17.4
numpydoc 0.9.1
packaging 19.2
pandas 0.23.4
pandocfilters 1.4.2
parso 0.5.1
pickleshare 0.7.4
pip 10.0.1
prompt-toolkit 1.0.15
protobuf 3.11.1
psutil 5.4.7
pycodestyle 2.4.0
pycparser 2.19
pyflakes 2.0.0
Pygments 2.5.2
pylint 1.9.2
pyOpenSSL 18.0.0
pyparsing 2.4.5
PySocks 1.6.8
python-dateutil 2.8.1
pytz 2019.3
pywin32 223
PyYAML 3.13
pyzmq 17.1.2
QtAwesome 0.6.0
qtconsole 4.6.0
QtPy 1.9.0
requests 2.19.1
rope 0.14.0
scikit-learn 0.18
scipy 1.1.0
setuptools 39.1.0
simplegeneric 0.8.1
six 1.13.0
sklearn 0.0
snowballstemmer 2.0.0
Sphinx 2.2.2
sphinxcontrib-applehelp 1.0.1
sphinxcontrib-devhelp 1.0.1
sphinxcontrib-htmlhelp 1.0.2
sphinxcontrib-jsmath 1.0.1
sphinxcontrib-qthelp 1.0.2
sphinxcontrib-serializinghtml 1.1.3
spyder 3.3.1
spyder-kernels 0.2.6
tensorboard 1.11.0
tensorflow 1.11.0
termcolor 1.1.0
testpath 0.4.4
tornado 5.1.1
traitlets 4.3.2
urllib3 1.23
wcwidth 0.1.7
webencodings 0.5.1
Werkzeug 0.16.0
wheel 0.33.6
win-inet-pton 1.0.1
win-unicode-console 0.5
wincertstore 0.2
wrapt 1.10.11

while using

from sklearn.model_selection import train_test_split

@juniorojha

(quoting @mdtahsinasif's full comment above, including the error traceback and package list)

Any success in making this work?

@jni
Member

jni commented Dec 17, 2019

@mdtahsinasif @juniorojha This is the scikit-image issue tracker, not scikit-learn. Having said that, I expect that you need to either upgrade your scikit-image/learn or downgrade your NumPy.
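
For example (version bounds here are illustrative, and which option applies depends on which direction the mismatch goes in your environment), either:

pip install --upgrade scikit-image scikit-learn

or pin NumPy back to the series your compiled packages were built against, e.g.:

pip install "numpy<1.16"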

@manish-tiwari

Oh yeah, also: don't install one package with pip and another with conda. That is a recipe for disaster.

If that is the case, then what should we do when a package is not available on conda but exists on pip?

@vamsiinmicron

I tried rolling back numpy to 1.14.5 and various other versions, but none worked; upgrading to 1.16.1 finally fixed it!
pip install numpy==1.16.1

This worked.

@jni
Member

jni commented Mar 4, 2020

@manish-tiwari

If that is the case, then what should we do when a package is not available on conda but exists on pip?

As a last resort, you install it with pip. @hmaarrfk did not mean that as a hard rule but as a guideline. Ultimately, mixing pip and conda can lead to incompatible compiled binary code (code like NumPy that is not written in Python but rather C with Python "bindings", meaning ways to access the C code), at which point you usually have to start with a fresh environment.
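
For example (names and versions here are only illustrative), starting over could look like:

conda create -n fresh-env python=3.7
conda activate fresh-env
conda install -c conda-forge numpy scikit-image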

@hmaarrfk
Member

hmaarrfk commented Mar 5, 2020

Thanks for explaining my thoughts better, jni.

We really want to do our best to help people still experiencing a similar issue.

Opening a new issue on GitHub is the best way to do that. Feel free to open one!

mschrimpf added a commit to brain-score/vision that referenced this issue Mar 22, 2020
mschrimpf added a commit to mschrimpf/brain-score that referenced this issue Mar 24, 2020
mschrimpf added commits to brain-score/vision and brain-score/brainio_collection that referenced this issue Mar 24, 2020, requiring numpy>=1.16 following scikit-image/scikit-image#3655
@didpurwanto

I tried rolling back numpy to 1.14.5 and various other versions, but none worked; upgrading to 1.16.1 finally fixed it!

pip install numpy==1.16.1

It works.

@iprafols

iprafols commented Aug 26, 2021

Hi all,

I just encountered the opposite problem:
<frozen importlib._bootstrap>:219: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192 from C header, got 216 from PyObject

For the record, I am running on macOS Catalina (v10.15.7) and using Python 3.8.3 with numpy 1.21.2 and sklearn 0.23.1.
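
My guess (not verified) is that sklearn's compiled extensions were built against an older NumPy than the one now installed; if so, upgrading scikit-learn so its binaries match the newer NumPy might help:

pip install --upgrade scikit-learn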
