Branch 194768567 #18983
Merged
Conversation
PiperOrigin-RevId: 194571125
PiperOrigin-RevId: 194579253
PiperOrigin-RevId: 194580654
PiperOrigin-RevId: 194580957
PiperOrigin-RevId: 194588403
This currently causes a tag-set mismatch, because leading whitespace is added within saved_model_cli when doing ', '.join(tag_set). PiperOrigin-RevId: 194590154
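A minimal sketch of the mismatch described above (illustrative only, not the actual saved_model_cli code): joining tags with `", "` and later splitting on a bare comma leaves leading whitespace in the parsed tags, so they no longer compare equal.

```python
# Illustrative sketch of the tag-set mismatch; the tag names are placeholders.
tag_set = ["serve", "gpu"]

# Serializing with a comma-plus-space separator...
serialized = ", ".join(tag_set)     # "serve, gpu"

# ...and later splitting on a bare comma leaves leading whitespace:
parsed = serialized.split(",")      # ["serve", " gpu"]
print(parsed[1] == "gpu")           # False: " gpu" != "gpu"

# Stripping each tag (or joining with "," in the first place) avoids the mismatch:
cleaned = [t.strip() for t in parsed]
print(cleaned == tag_set)           # True
```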
they should be named differently. Otherwise, tf.gradients gets confused. PiperOrigin-RevId: 194593519
PiperOrigin-RevId: 194594759
PiperOrigin-RevId: 194596337
PiperOrigin-RevId: 194602336
PiperOrigin-RevId: 194608854
PiperOrigin-RevId: 194609850
PiperOrigin-RevId: 194614877
PiperOrigin-RevId: 194621163
…ns; NFC PiperOrigin-RevId: 194622198
PiperOrigin-RevId: 194625155
This CL extends the --xla_hlo_profile knob to tfcompile. tf_library rules can now set enable_xla_hlo_profiling to True to:
- Have the generated code update per-HLO profile counters as it executes.
- Have tfcompile generate and serialize an instance of HloProfilePrinterData with the compiled model that can be used to pretty-print the collected profile counters.

PiperOrigin-RevId: 194627272
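A hedged sketch of what enabling this might look like in a BUILD file. Everything except the `enable_xla_hlo_profiling` attribute (the target name, graph, config, and class name) is a hypothetical placeholder, not taken from this change:

```python
# Hypothetical tf_library target; only enable_xla_hlo_profiling comes from this CL.
tf_library(
    name = "my_model",
    graph = "my_model.pbtxt",
    config = "my_model.config.pbtxt",
    cpp_class = "MyModel",
    # New in this CL: update per-HLO profile counters at runtime and
    # serialize an HloProfilePrinterData instance with the compiled model.
    enable_xla_hlo_profiling = True,
)
```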
The following symbols are available:
- tf.contrib.recurrent.bidirectional_functional_rnn
- tf.contrib.recurrent.functional_rnn
- tf.contrib.recurrent.Recurrent

PiperOrigin-RevId: 194632138
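A functional RNN expresses the recurrence as a scan of a step function over the time axis. The toy, pure-Python sketch below illustrates only that scan pattern; it is not the tf.contrib.recurrent API, which operates on TensorFlow tensors:

```python
# Toy analogue of the functional-RNN idea: a step function applied with a
# scan over time steps, collecting the state at each step.
def scan_rnn(step, inputs, init_state):
    """Apply step(state, x) -> new_state over inputs, collecting states."""
    state = init_state
    outputs = []
    for x in inputs:
        state = step(state, x)
        outputs.append(state)
    return outputs, state

# A trivial "cell": running sum of the inputs.
outputs, final = scan_rnn(lambda s, x: s + x, [1, 2, 3, 4], 0)
print(outputs)  # [1, 3, 6, 10]
print(final)    # 10
```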
PiperOrigin-RevId: 194634563
PiperOrigin-RevId: 194636076
PiperOrigin-RevId: 194637892
…m TF to Swift via direct session. The changes are:

1. Added an experimental TF C API TF_DequeueNamedTensor() to consume the queued tensors from a dequeue op. One use case is for the Swift host program to consume tensors sent by TF, where the queue is a FIFO queue managed by TF. Enqueuing tensors is done by running an enqueue op in a graph. The queued tensors are not persisted and will be lost if the process/machine dies. The queue has a bounded capacity, to prevent the producer from getting unboundedly ahead of the consumer. While the caller of TF_DequeueNamedTensor() could have run the FIFO dequeue op directly, the extra level of indirection provided by this API allows us to more easily switch the queuing implementation to another mechanism. If and once we stabilize on the FIFO-queue-based implementation, we can remove this API.

2. Added a new S4TF runtime API _TFCReceiveTensorHandle() that receives a tensor via TF_DequeueNamedTensor().

3. To support tensor receives in the host program, taught PartitionCloner in TFPartition to insert SIL code that calls _TFCReceiveTensorHandle().

4. To support tensor sends in the accelerator program, taught TFGraphLowering to generate QueueEnqueueV2 nodes in the TF graphs, with appropriate control dependence to make sure these nodes get executed.
   a) The enqueue produces no output tensor and is executed only for its side effect. To ensure it is executed properly, control dependence is wired up. The general design is: before a TF_Function (which can be a top-level function or the body function of a while op) produces an output tensor OT, make OT control-dependent on the enqueue op, so that the enqueue runs before the function returns.
   b) If a tensor send occurs in a while loop body, the body logic currently gets lowered in 3 places: the while op's cond function, the while op's body function, and the ops at the same level as the while op itself (for running the last loop iteration). In this case, the correct TFGraph lowering is to run the enqueue in the last 2 of the 3 places above.

After this CL, the dual versions of the above (dequeuing via an op, and enqueuing via the C API) will be added. PiperOrigin-RevId: 194658511
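The bounded-capacity FIFO behavior described in point 1 can be sketched with Python's standard-library `queue` module (purely illustrative; the actual queue here is a TF-managed FIFO queue driven by enqueue/dequeue ops, not this class):

```python
import queue

# A bounded FIFO: once maxsize items are enqueued, further puts block
# (or raise, with block=False), keeping the producer from running
# unboundedly ahead of the consumer.
q = queue.Queue(maxsize=2)
q.put("t0")
q.put("t1")

try:
    q.put("t2", block=False)  # queue is full: producer is throttled
except queue.Full:
    print("producer blocked: queue at capacity")

print(q.get())  # "t0" -- FIFO order: first enqueued, first dequeued
```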
PiperOrigin-RevId: 194661814
If, for whatever reason, iterator_resource->set_iterator did not return Status::OK(), we would leak a reference on the iterator_resource. With this change, we no longer leak the resource. PiperOrigin-RevId: 194662412
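The fix follows the usual pattern for refcounted resources: release the acquired reference on every path, including the error path. A minimal sketch of that pattern, using a hypothetical Resource class rather than the actual TensorFlow C++ code:

```python
# Hypothetical refcounted resource, illustrating the leak and the fix.
class Resource:
    def __init__(self):
        self.refcount = 1  # caller starts holding one reference

    def unref(self):
        self.refcount -= 1

def set_iterator_fixed(resource, ok):
    """Release our reference on both the success and the error path."""
    try:
        if not ok:
            return "error"  # early return no longer leaks the reference
        return "ok"
    finally:
        resource.unref()    # runs on every exit path

r = Resource()
print(set_iterator_fixed(r, ok=False))  # "error"
print(r.refcount)                        # 0 -- the reference was released
```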
PiperOrigin-RevId: 194663800
PiperOrigin-RevId: 194679657
PiperOrigin-RevId: 194687897
PiperOrigin-RevId: 194711291
…ination notice. PiperOrigin-RevId: 194722985
PiperOrigin-RevId: 194723199
PiperOrigin-RevId: 194768567
av8ramit approved these changes on Apr 30, 2018