
How should Python packages depending on TensorFlow structure their requirements? #7166

Closed

dustinvtran opened this issue Jan 31, 2017 · 10 comments

dustinvtran (Member) commented Jan 31, 2017

Many packages build on TensorFlow. For example, our work in Edward uses tensorflow>=1.0.0a0 as an install requirement.

However, this conflicts with tensorflow-gpu: because the requirement names tensorflow specifically, tensorflow-gpu no longer satisfies it. What do you suggest as the best way to handle this?

One option suggested by @gokceneraslan (blei-lab/edward#428 (comment)) is to hack in the dependency according to whether the user has a GPU. Another option, which PrettyTensor and Keras employ, is to not require TensorFlow at all. (Neither option sounds good.)

Also see blei-lab/edward#428. Also looping in the GPflow devs (@jameshensman, @alexggmatthews) in case they have the same problem. (Note that I'm raising this as an issue instead of asking on a mailing list, in case this is something that should be changed on TensorFlow's end rather than ours.)
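
For context, a minimal sketch of the kind of setup.py that creates the conflict (the package name and version pin below are illustrative, not Edward's actual configuration):

from setuptools import setup, find_packages

setup(
    name="my_package",  # illustrative; any package that depends on TensorFlow
    version="0.1.0",
    packages=find_packages(),
    # Requiring "tensorflow" makes pip pull in the CPU wheel even when the user
    # already has tensorflow-gpu, because pip treats the two as unrelated
    # distributions on PyPI.
    install_requires=["tensorflow>=1.0.0a0"],
)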

shoyer (Member) commented Jan 31, 2017

I'm pretty sure the fundamental issue here is that PyPI doesn't support distinguishing wheels with and without GPU support (see PEP 425 for the list of supported tags). Hence the separate "tensorflow-gpu" distribution on PyPI.

Both of your suggested work-arounds (hacks in setup.py, or simply removing the problematic dependency altogether) are common among Python packages for scientific computing.

dustinvtran (Member, Author) commented Feb 1, 2017

Do you have some examples? They would be great references for deciding among the work-arounds. (I'm leaning towards removing the dependency.)

shoyer (Member) commented Feb 1, 2017

@dustinvtran Here's a discussion about this for patsy: pydata/patsy#5

yaroslavvb (Contributor) commented Feb 1, 2017

@shoyer Interesting example; I wonder if that explains why `pip install --upgrade $TF_BINARY_URL` replaces MKL numpy on our machines with OpenBLAS numpy (there's `REQUIRED_PACKAGES = ['numpy >= 1.11.0', ...]` inside of TF's setup.py).

shoyer (Member) commented Feb 2, 2017

@yaroslavvb yes, that's likely the case. `pip install` only recently got the option `--upgrade-strategy=only-if-needed`, which is probably what you want to use here. Eventually this will become the default behavior (pypa/pip#3871).
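
For example, something along the lines of `pip install --upgrade --upgrade-strategy only-if-needed $TF_BINARY_URL` should upgrade TensorFlow itself while leaving an already-satisfactory numpy alone (assuming a pip version new enough to support the flag).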

sjperkins (Contributor) commented Feb 7, 2017

@dustinvtran See here for an example that detects a CUDA installation and then selects either tensorflow or tensorflow-gpu depending on CUDA availability.
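
The linked example isn't reproduced here, but a minimal sketch of that general approach, using nvcc-on-PATH or a CUDA_HOME variable as the detection heuristic (an assumption for illustration, not the linked code), might look like:

import os
import shutil

from setuptools import setup, find_packages

def cuda_available():
    # Best-effort check for a local CUDA toolkit at install time.
    return shutil.which("nvcc") is not None or "CUDA_HOME" in os.environ

tf_requirement = "tensorflow-gpu>=1.0.0" if cuda_available() else "tensorflow>=1.0.0"

setup(
    name="my_package",  # hypothetical package name
    version="0.1.0",
    packages=find_packages(),
    install_requires=[tf_requirement],
)

One caveat, raised below, is that the machine where the package is installed may not be the machine where it runs.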

t0mk commented Feb 9, 2017

@sjperkins Detecting whether an NVIDIA GPU is available will not work when installing from a Dockerfile into a Docker image, and in general whenever you install the package somewhere you don't intend to run it.

I was checking if it could be done with setup.py extras:
https://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-extras-optional-features-with-their-own-dependencies

but I don't think it's possible.

IMO the most straightforward approach would be to drop tensorflow from the install_requires list, try to import tensorflow in setup.py, catch the ImportError, and raise an error telling the user that either tensorflow or tensorflow-gpu must be installed.
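
A minimal sketch of that approach, using a plain RuntimeError in place of a pip-specific exception (the message wording is illustrative):

from setuptools import setup, find_packages

try:
    import tensorflow  # noqa: F401  # satisfied by either tensorflow or tensorflow-gpu
except ImportError:
    raise RuntimeError(
        "This package requires TensorFlow. Please install either the "
        "'tensorflow' or the 'tensorflow-gpu' distribution first."
    )

setup(
    name="my_package",  # hypothetical package name
    version="0.1.0",
    packages=find_packages(),
    install_requires=[],  # TensorFlow intentionally omitted
)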

sjperkins (Contributor) commented Feb 9, 2017

Detecting whether an NVIDIA GPU is available will not work when installing from a Dockerfile into a Docker image, and in general whenever you install the package somewhere you don't intend to run it.

@t0mk One still needs to install CUDA in the Docker container. The example I provided will still compile CUDA code if GPUs aren't available, but it won't be able to target specific architectures and will default to sm_30.

standy66 (Contributor) commented Feb 18, 2017

Currently I am using extras_require in setup.py:

from setuptools import setup

setup(
    name="my_package",
    # ... other setup() arguments ...
    install_requires=[],  # dependencies EXCLUDING tensorflow go here
    extras_require={
        "tf": ["tensorflow>=1.0.0"],
        "tf_gpu": ["tensorflow-gpu>=1.0.0"],
    },
)

The problem here is that if the user does not specify which extra they want, e.g. my_package[tf_gpu], TensorFlow won't be required at all. But I think it is at least better than not declaring the TensorFlow dependency anywhere.
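
For reference, a user would then install the package with something like `pip install "my_package[tf]"` or `pip install "my_package[tf_gpu]"` (the quotes guard against shell globbing), while a bare `pip install my_package` would pull in no TensorFlow at all.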

dustinvtran (Member, Author) commented Feb 27, 2017

For Edward I decided to remove the explicit dependency on TensorFlow and make it part of extras_require. Not ideal, but I think it's the best solution at present. Feel free to close this issue; it would be nice, though, to have this kind of recommendation in the docs.

prb12 closed this Mar 9, 2017

krfricke added a commit to tensorforce/tensorforce that referenced this issue Jul 15, 2017

ragulpr added a commit to ragulpr/wtte-rnn that referenced this issue Jul 18, 2017

refs #19 Add `tensorflow` as optional installation
- Necessary to not override an existing tensorflow-gpu installation; see tensorflow/tensorflow#7166

pawni added a commit to DLTK/DLTK that referenced this issue Nov 16, 2017

caseyfitz referenced this issue Apr 4, 2018

Merged: Gpu test #52

dimidd added a commit to dimidd/finetune that referenced this issue Oct 25, 2018

setup: add TF as extra-requirement
This would allow the user to specify explicitly which variant of TF is needed, e.g. `finetune[tf_gpu]`.

See this discussion for details:
tensorflow/tensorflow#7166

Signed-off-by: Dimid Duchovny <dimidd@gmail.com>