
Install TensorFlow from a prebuilt binary when possible #65

Merged
3 commits merged into tensorflow:master on Mar 11, 2017

Conversation

adamcrume (Contributor)

No description provided.

@adamcrume adamcrume force-pushed the binary branch 6 times, most recently from 17361fa to c7c35d7 Compare March 7, 2017 05:49
adamcrume (Contributor, Author)

This massively speeds up our build time, so our Travis builds don't time out any more. 😃 Not to mention being less painful when developing locally.

@adamcrume adamcrume requested a review from jhseu March 8, 2017 04:42
jhseu commented Mar 8, 2017

@asimshankar fyi

daschl (Contributor) commented Mar 8, 2017

@adamcrume should we still have a switch to force a build from source? On the other hand, you can always build on your own and then use that one at link time...

adamcrume (Contributor, Author)

@daschl @jhseu I thought about using a feature, but it doesn't seem like the right level of abstraction. Plus, if we start configuring build details with features, we can only turn things on or off, not pass in values. We could use an environment variable, but I'm not crazy about that, either. What do you think?

daschl (Contributor) commented Mar 9, 2017

@adamcrume I agree, a feature is probably not the right abstraction level. For example, the openssl crate also exposes environment variables (https://github.com/sfackler/rust-openssl#manual-configuration), which I think is also the most portable approach.

The only thing I potentially worry about, but let me know if I'm mistaken, is that the --copt=-march=native compile option will be gone, right? Or are the prebuilt binaries compiled with such optimizations? On the other hand, no one is prevented from compiling manually with those flags and then using the result at link time.

So the bottom line is that I think this is pretty great, since it speeds up build time a lot. We should just see how we can keep the flexibility/optimization possibilities accessible.
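The "switch to force a source build" idea could be served by an environment variable read in build.rs. A minimal sketch, assuming a hypothetical variable name `TF_RUST_BUILD_FROM_SRC` (not necessarily what this PR uses); the decision logic is kept as a pure function so it can be tested without mutating the process environment:

```rust
// Sketch only: an opt-out for the prebuilt-binary download, controlled by
// an environment variable checked in a Cargo build script. The variable
// name TF_RUST_BUILD_FROM_SRC is hypothetical.
use std::env;

/// Interpret the raw variable value: any non-empty value other than "0"
/// forces a build from source.
fn forced_from_source(raw: Option<&str>) -> bool {
    matches!(raw, Some(v) if !v.is_empty() && v != "0")
}

fn main() {
    let var = env::var("TF_RUST_BUILD_FROM_SRC").ok();
    if forced_from_source(var.as_deref()) {
        println!("building TensorFlow from source");
    } else {
        println!("using prebuilt TensorFlow binary");
    }
}
```

Unlike a Cargo feature, a variable like this can also carry a value (a version, a path) rather than just an on/off flag, which matches the concern about features above.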

daschl (Contributor) commented Mar 9, 2017

@adamcrume the other thing that came to mind is that right now the code downloads the tarball into the target directory, so on a clean it has to be re-downloaded. I wonder if we should support setting an environment variable for a different location, so that even after a cargo clean it just picks up the tarball from that folder if it exists?
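One way to implement this cache-directory idea is an override variable consulted before downloading. A sketch, with the hypothetical name `TF_RUST_DOWNLOAD_DIR`; the path resolution is a pure function so it is testable without touching the real environment:

```rust
// Sketch only: let an environment variable override where the prebuilt
// tarball is cached, so `cargo clean` does not force a re-download.
// TF_RUST_DOWNLOAD_DIR is a hypothetical name.
use std::env;
use std::path::PathBuf;

/// Resolve the tarball location: the override directory wins if set,
/// otherwise fall back to the build's default (e.g. the target dir).
fn tarball_path(override_dir: Option<&str>, default_dir: &str, file: &str) -> PathBuf {
    PathBuf::from(override_dir.unwrap_or(default_dir)).join(file)
}

fn main() {
    let override_dir = env::var("TF_RUST_DOWNLOAD_DIR").ok();
    let path = tarball_path(override_dir.as_deref(), "target", "libtensorflow.tar.gz");
    if path.exists() {
        println!("reusing cached tarball at {}", path.display());
    } else {
        println!("would download tarball to {}", path.display());
    }
}
```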

jhseu commented Mar 9, 2017

@daschl Yeah, the prebuilt binaries are general and so don't include the benefits of SIMD. The performance difference can be notable (~2x). A reasonable compromise here might be to use a system-installed libtensorflow.so if it's available, or add it as an option if there's an idiomatic way to do that in Rust.

I'm not too worried about this in the long-term. When XLA is default enabled, we can JIT compile with SIMD instructions.

Related: ideally TensorFlow would get statically linked in for both Rust and Go. Bazel should be getting support for building static libraries soon.
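The "use a system-installed libtensorflow.so if it's available" compromise could look roughly like this in a Cargo build script. The search paths are assumptions for illustration; the `cargo:` lines are the standard Cargo build-script link directives:

```rust
// Sketch only: prefer a system-installed libtensorflow if one is found in
// a few conventional locations, and fall back to the prebuilt download
// otherwise. The candidate paths are illustrative assumptions.
use std::path::Path;

/// Return the first candidate directory containing libtensorflow.so.
fn find_system_tensorflow() -> Option<&'static str> {
    const CANDIDATES: &[&str] = &["/usr/lib", "/usr/local/lib"];
    CANDIDATES
        .iter()
        .find(|dir| Path::new(dir).join("libtensorflow.so").exists())
        .copied()
}

fn main() {
    match find_system_tensorflow() {
        Some(dir) => {
            // Standard Cargo directives telling rustc where and what to link.
            println!("cargo:rustc-link-search=native={}", dir);
            println!("cargo:rustc-link-lib=dylib=tensorflow");
        }
        None => println!("no system libtensorflow found; would fetch prebuilt binary"),
    }
}
```

A user who built TensorFlow locally with `--copt=-march=native` would then get the SIMD-enabled library picked up automatically.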

This controls the location for the downloaded prebuilt binary tarball.
adamcrume (Contributor, Author)

I added a couple of environment variables. What do you think?

daschl (Contributor) commented Mar 10, 2017

@adamcrume 👍

nbigaouette-eai (Contributor) left a comment

I've tried this PR.

As expected, --copt=-march=native is not present, and tf will report it:

```
(~/tensorflow_rust.git/tensorflow-sys)
 -> cargo run --example multiplication
    Finished debug [unoptimized + debuginfo] target(s) in 0.0 secs
     Running `/home/nbigaouette/tensorflow_rust.git/target/debug/examples/multiplication`
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
```

The binary is downloaded into tensorflow.git/tensorflow-sys/target, not tensorflow.git/target. Could that be a bug in cargo's handling of CARGO_MANIFEST_DIR and workspaces? In any case, this prevents cargo clean from deleting the downloaded file! The file is ~17 MB, so it's not that large either.

I'd also like to see the CI running cargo test -vv -j 1 (it's disabled for now) since it is still failing on my machine (see #66). I really don't know why it's failing; I was hoping that a pre-built version would fix this but it doesn't look like it.

```yaml
- cargo test -vv -j 2 --features tensorflow_unstable
- cargo run --example regression
- cargo run --features tensorflow_unstable --example expressions
- cargo doc -vv --features tensorflow_unstable
- (cd tensorflow-sys && cargo test -vv -j 1)
- # TODO(#66): Re-enable: (cd tensorflow-sys && cargo test -vv -j 1)
```
Contributor commented:

Is that required in the CI? I couldn't find anybody that could reproduce #66...

adamcrume (Contributor, Author)

Yes; this test fails on my local machine and on Travis: https://travis-ci.org/tensorflow/rust/builds/208479366

@adamcrume adamcrume merged commit bc5b8e0 into tensorflow:master Mar 11, 2017
@adamcrume adamcrume deleted the binary branch March 11, 2017 04:39
@nbigaouette-eai nbigaouette-eai mentioned this pull request Aug 24, 2017
ramon-garcia pushed a commit to ramon-garcia/tensorflow-rust that referenced this pull request May 20, 2023