Merge changes from the last several days #2230
Conversation
Change: 121043384
Change: 121060255
add_check_numerics_ops() works with fp16 nodes. Change: 121060838
Change: 121062660
Change: 121062829
Change: 121073548
Change: 121075252
…truction and graph execution separate. Change: 121078652
Change: 121078769
Change: 121086262
Change: 121089711
…mpeg binary and does not support google3 sandboxing. That will happen in a later CL. This is a second attempt at the same functionality. Since the first attempt, I've pulled the auto-initialization code out of the contrib/__init__.py directory, so nothing will be loaded by default. This may be appropriate for ops that don't work unless you've installed another app. Change: 121126709
Change: 121126929
stripped_op_list_for_graph() except it returns string names instead of OpDefs. Change: 121128648
…man. tensorflow#2159 Change: 121133289
Also fix a bunch of files that ended up without future imports while it was off. Change: 121161570
Change: 121171885
Change: 121172029
resource_mgr fails). Change: 121227875
Change: 121238286
Change: 121238628
…ops, i.e. batch_norm. Change: 121238955
Change: 121239402
Change: 121265877
…pport is only for the DataFeeder class, and only works if the make_epoch_variable method is called. Change: 121268249
Change: 121275794
Change: 121276216
Change: 121492869
(MakeShape now takes an int64 instead of an int, avoiding some of the casting ugliness and reducing the need for callers to do their own, redundant checks). Fixing additional int32->64 warnings Change: 121498517
Change: 121507010
This facilitates shape induction when shape is produced via tf.pack() and is generally more efficient. Change: 121508615
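A minimal pure-Python sketch of why this helps shape induction (this is an illustration, not TensorFlow's actual shape-inference code): when every component of a packed shape tensor has a known static shape, the shape of the packed result is fully determined.

```python
# Hypothetical sketch of shape induction for a pack-style op (not the real
# TensorFlow implementation): stacking n tensors that share one static
# shape s yields a tensor of static shape [n] + s, so nothing is unknown.
def inferred_pack_shape(component_shapes):
    if not component_shapes:
        raise ValueError("pack requires at least one component")
    first = component_shapes[0]
    if any(s != first for s in component_shapes):
        raise ValueError("all packed components must share one shape")
    return [len(component_shapes)] + list(first)

print(inferred_pack_shape([(2, 3), (2, 3), (2, 3)]))  # [3, 2, 3]
```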
Change: 121510530
Change: 121511327
… Note that this doesn't use FFmpeg. .wav is simple enough that we can make it work without adding the dependency. Change: 121517681
Change: 121518240
Change: 121522621
…aration. 'npm run prepare' runs 'npm install && bower install && typings install' Change: 121524007
Change: 121524348
Change: 121525882
Use std::move in allocate_tensor. Change: 121528791
Change: 121533831
When the gradient TensorArray is created, its elements are now soft-initialized with the shapes of the elements of the corresponding forward TensorArray. Reads from these soft-initialized elements create a zero tensor of the correct shape. Writes to these soft-initialized elements as part of a gradient accumulation write simply assign a proper tensor, ignoring the soft initialization (though the shapes being sent into the gradients are now checked). This means TensorArray users can now write:

ta = TensorArray(size=2)
x = constant([x0, x1])
w = ta.unpack(x)
r0 = w.read(0)

and calculate gradients of r0 with respect to x. In the example above, the gradients are 1.0 with respect to x0 and 0.0 with respect to x1. In the past, this would have led to an error because no value was read from w at index 1 (and thus the gradient TensorArray had no write at the corresponding index). Now the default value at a gradient TensorArray index is assumed to be zero with the correct shape. Change: 121536091
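The soft-initialization idea above can be modeled in a few lines of plain Python (a hypothetical illustration of the behavior, not TensorFlow's implementation): a gradient slot that was never written reads back as zeros of the known forward-element shape instead of raising an error.

```python
# Hypothetical pure-Python model of soft-initialized gradient slots -- not
# TensorFlow's actual TensorArray code. Shapes here are 1-D for simplicity.
class GradTensorArray:
    def __init__(self, element_shapes):
        # Shapes of the corresponding forward TensorArray's elements.
        self.element_shapes = element_shapes
        self.slots = [None] * len(element_shapes)  # None = soft-initialized

    def write(self, index, value):
        # A gradient accumulation write replaces the soft initialization.
        self.slots[index] = value

    def read(self, index):
        # Reading a soft-initialized slot yields zeros of the correct shape.
        if self.slots[index] is None:
            return [0.0] * self.element_shapes[index][0]
        return self.slots[index]

grads = GradTensorArray(element_shapes=[(3,), (3,)])
grads.write(0, [1.0, 1.0, 1.0])      # gradient flowed only to index 0
print(grads.read(0))  # [1.0, 1.0, 1.0]
print(grads.read(1))  # [0.0, 0.0, 0.0] -- zeros instead of an error
```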
Also improved the performance of the BiasAdd operation.

New:
BM_cpu_BiasAdd_R512_C2048/51202048   1705279  408  2459.6MB/s  614.9M items/s
BM_cpu_BiasAdd_R512_C4096/51204096   2603394  254  3222.2MB/s  805.5M items/s
BM_cpu_BiasAdd_R2048_C512/204800512  1666322  369  2517.1MB/s  629.3M items/s
BM_cpu_BiasAdd_R4096_C512/409600512  2603498  249  3222.1MB/s  805.5M items/s

Old:
BM_cpu_BiasAdd_R512_C2048/51202048   1703119  303  2462.7MB/s  615.7M items/s
BM_cpu_BiasAdd_R512_C4096/51204096   3393146  213  2472.2MB/s  618.1M items/s
BM_cpu_BiasAdd_R2048_C512/204800512  2184495  293  1920.0MB/s  480.0M items/s
BM_cpu_BiasAdd_R4096_C512/409600512  3796247  190  2209.7MB/s  552.4M items/s

Change: 121537749
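As a sanity check on the benchmark units (assuming float32 elements, i.e. 4 bytes each, which is an assumption not stated in the commit message), the items/s column is the MB/s column divided by 4:

```python
# Assumption: float32 elements (4 bytes each), so items/s = MB/s / 4.
# Figures copied from the "New" rows of the benchmark above.
new_mb_per_s = [2459.6, 3222.2, 2517.1, 3222.1]
new_items_per_s = [614.9, 805.5, 629.3, 805.5]  # as reported
for mb, items in zip(new_mb_per_s, new_items_per_s):
    # Allow slack for the one-decimal rounding in the benchmark output.
    assert abs(mb / 4.0 - items) < 0.06, (mb, items)
print("units consistent")
```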
… each function and class (in addition to the usual files that aggregate under broad topics). Change: 121542946
Change: 121544766
TF_REGISTER_ALL_TYPES does not register complex types on Android builds, but matmul etc. require them and use the SetZero functor. Reverted that change, and cleaned up the TensorArray LockedRead to handle only numeric types with SetZero, and to call it only when the size of the Tensor is > 0. Change: 121546296
…Random. Change: 121549048
We found a Contributor License Agreement for you (the sender of this pull request), but were unable to find agreements for the commit author(s). If you authored these, maybe you used a different email address in the git commits than was used to sign the CLA (login here to double check)? If these were authored by someone else, then they will need to sign a CLA as well, and confirm that they're okay with these being contributed to Google.
Okay, all tests passed, so merging (looks like my conflict resolution worked, as far as the tests could determine).
Remove the duplicate code and add cmake package for QA
This was a pretty hard conflict to resolve from internal to external, so we'll see if I did it right.