
Merge changes from the last several days #2230

Merged
merged 109 commits into tensorflow:master May 5, 2016

Conversation


vrv commented May 5, 2016

This was a pretty hard conflict to resolve from internal to external, so we'll see if I did it right.

A. Unique TensorFlower and others added 30 commits April 28, 2016 12:11
add_check_numerics_ops() works with fp16 nodes.
Change: 121060838
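A minimal sketch of what this enables, assuming the TF 1.x-era graph API (tf.add_check_numerics_ops() attaches CheckNumerics ops to every floating-point tensor in the graph; the fp16 graph below is illustrative, not from the commit):

import tensorflow as tf

x = tf.placeholder(tf.float16, shape=[None], name="x")
y = tf.exp(x) / tf.reduce_sum(tf.exp(x))  # prone to overflow/underflow in fp16

# After this change, fp16 tensors also get CheckNumerics ops attached.
check_op = tf.add_check_numerics_ops()

with tf.Session() as sess:
    _, result = sess.run([check_op, y], feed_dict={x: [1.0, 2.0, 3.0]})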
…truction and graph execution separate.

Change: 121078652
…mpeg binary and does not support google3 sandboxing. That will happen in a later CL.

This is a second attempt at the same functionality. Since the first attempt, I've pulled the auto-initialization code out of the contrib/__init__.py directory, so nothing will be loaded by default. This may be appropriate for ops that don't work unless you've installed another app.
Change: 121126709
stripped_op_list_for_graph() except it returns string names instead
of OpDefs.
Change: 121128648
Also fix a bunch of files that ended up without future imports while
it was off.
Change: 121161570
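The future imports in question are presumably the standard trio used at the top of TensorFlow's Python files (shown here as an illustrative sketch, not the exact diff):

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function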
Change: 121265877
…pport is only for the DataFeeder class, and only works if the make_epoch_variable method is called.

Change: 121268249
Change: 121275794
A. Unique TensorFlower and others added 24 commits May 4, 2016 10:12
(MakeShape now takes an int64 instead of an int, avoiding
some of the casting ugliness and reducing the need for callers
to do their own redundant checks).
Fixing additional int32->64 warnings
Change: 121498517
This facilitates shape induction when shape is produced via tf.pack() and is
generally more efficient.
Change: 121508615
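A hedged illustration of the pattern this helps (tf.pack() was the pre-1.0 name for tf.stack(); the reshape below is an assumed example, since the commit message does not name the op involved):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 4])
batch = tf.shape(x)[0]
# Shape tensor assembled from a dynamic dimension via tf.pack();
# shape inference can now see the constant dimensions directly.
new_shape = tf.pack([batch, 2, 2])
y = tf.reshape(x, new_shape)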
… Note that this doesn't use FFmpeg. .wav is simple enough that we can make it work without adding the dependency.
Change: 121517681
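A small sketch of why .wav is easy to handle without FFmpeg: the format is a short RIFF header followed by raw PCM samples. This uses Python's standard-library wave module rather than the TensorFlow op added here:

import wave

with wave.open("audio.wav", "rb") as f:
    channels = f.getnchannels()
    sample_rate = f.getframerate()
    pcm_bytes = f.readframes(f.getnframes())  # raw little-endian PCM
print(channels, sample_rate, len(pcm_bytes))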
…aration.

'npm run prepare' runs 'npm install && bower install && typings install'
Change: 121524007
Use std::move in allocate_tensor.
Change: 121528791
Change: 121533831
When the gradient TensorArray is created, its elements are now soft-initialized
with the shapes of the elements of the corresponding forward TensorArray.

Reads from these soft-initialized elements create a zero tensor of the
correct shape.

Writes to these soft-initialized elements as part of a gradient accumulation
write just assign a proper tensor, thus ignoring the soft initialization
(though now the shapes being sent into the gradients are checked).

This means TensorArray users can now write:

ta = TensorArray(size=2)
x = constant([x0, x1])
w = ta.unpack(x)
r0 = w.read(0)

and calculate gradients of r0 with respect to x.  In the example
above, the gradients are 1.0 with respect to x0 and 0.0 with respect to x1.

In the past, this would have led to an error because no value was being
read from w for index 1 (and thus the gradient TensorArray did not have
any writes at the corresponding index).  Now the assumption is made that the
default value in the gradient TensorArray index is zero with the correct shape.
Change: 121536091
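A runnable version of the snippet above, as a hedged sketch against the 2016-era graph API (tf.TensorArray.unpack() was later renamed unstack()):

import tensorflow as tf

x = tf.constant([3.0, 5.0])                     # [x0, x1]
ta = tf.TensorArray(dtype=tf.float32, size=2)
w = ta.unpack(x)
r0 = w.read(0)                                  # only index 0 is ever read

# Index 1 of the gradient TensorArray is never written; soft initialization
# lets it default to a correctly shaped zero instead of raising an error.
grad = tf.gradients(r0, x)[0]                   # expected: [1.0, 0.0]

with tf.Session() as sess:
    print(sess.run(grad))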
Also improved the performance of the BiasAdd operation

New:
Benchmark                              Time(ns)  Iterations  Throughput
BM_cpu_BiasAdd_R512_C2048/51202048     1705279        408     2459.6MB/s 614.9M items/s
BM_cpu_BiasAdd_R512_C4096/51204096     2603394        254     3222.2MB/s 805.5M items/s
BM_cpu_BiasAdd_R2048_C512/204800512    1666322        369     2517.1MB/s 629.3M items/s
BM_cpu_BiasAdd_R4096_C512/409600512    2603498        249     3222.1MB/s 805.5M items/s

Old:
Benchmark                              Time(ns)  Iterations  Throughput
BM_cpu_BiasAdd_R512_C2048/51202048     1703119        303     2462.7MB/s 615.7M items/s
BM_cpu_BiasAdd_R512_C4096/51204096     3393146        213     2472.2MB/s 618.1M items/s
BM_cpu_BiasAdd_R2048_C512/204800512    2184495        293     1920.0MB/s 480.0M items/s
BM_cpu_BiasAdd_R4096_C512/409600512    3796247        190     2209.7MB/s 552.4M items/s
Change: 121537749
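For context, a minimal sketch of the op being benchmarked, not the benchmark harness itself (tf.nn.bias_add adds a 1-D bias along the last dimension of its input; the shapes mirror BM_cpu_BiasAdd_R512_C2048):

import tensorflow as tf

value = tf.random_normal([512, 2048])   # rows x columns
bias = tf.random_normal([2048])
out = tf.nn.bias_add(value, bias)

with tf.Session() as sess:
    sess.run(out)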
… each function and class (in addition to the usual files that aggregate under broad topics).

Change: 121542946
TF_REGISTER_ALL_TYPES does not register complex types on Android builds, but matmul etc. require it and use the SetZero functor. Reverted that change, cleaned up the TensorArray LockedRead to handle only numeric types with SetZero, and only call it when the size of the Tensor is > 0.
Change: 121546296
@googlebot

We found a Contributor License Agreement for you (the sender of this pull request), but were unable to find agreements for the commit author(s). If you authored these, maybe you used a different email address in the git commits than was used to sign the CLA (login here to double check)? If these were authored by someone else, then they will need to sign a CLA as well, and confirm that they're okay with these being contributed to Google.


vrv commented May 5, 2016

Okay, all tests passed, so merging (looks like my conflict resolution worked, as far as tests could determine).

vrv merged commit 7b4d733 into tensorflow:master May 5, 2016
bhack mentioned this pull request May 5, 2016
fsx950223 pushed a commit to fsx950223/tensorflow that referenced this pull request Dec 15, 2023
Remove the duplicate code and add cmake package for QA