Small grammar fixes in the programmers guide FAQ. #19170

Merged
merged 2 commits into from
May 9, 2018
17 changes: 8 additions & 9 deletions tensorflow/docs_src/programmers_guide/faq.md
@@ -72,7 +72,7 @@ tensors in the execution of a step.

If `t` is a @{tf.Tensor} object,
@{tf.Tensor.eval} is shorthand for
-@{tf.Session.run} (where `sess` is the
+@{tf.Session.run}, where `sess` is the
current @{tf.get_default_session}. The
two following snippets of code are equivalent:

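The two equivalent snippets the context line refers to are collapsed out of this hunk. A minimal reconstruction of the equivalence, written against the TF 1.x graph API (using `tf.compat.v1` so it also runs under TF 2.x; the constant value is illustrative, not from the FAQ):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

c = tf.constant(42.0)

with tf.Session() as sess:
    # Inside this block, `sess` is the default session, so these are equivalent:
    via_eval = c.eval()     # shorthand for tf.get_default_session().run(c)
    via_run = sess.run(c)

print(via_eval == via_run)  # True
```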
@@ -101,9 +101,8 @@ sessions, it may be more straightforward to make explicit calls to
Sessions can own resources, such as
@{tf.Variable},
@{tf.QueueBase}, and
-@{tf.ReaderBase}; and these resources can use
-a significant amount of memory. These resources (and the associated memory) are
-released when the session is closed, by calling
+@{tf.ReaderBase}. These resources can sometimes use
+a significant amount of memory, and can be released when the session is closed by calling
@{tf.Session.close}.
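A short sketch of the resource lifetime described in this hunk, assuming the TF 1.x graph API (via `tf.compat.v1`); the variable name and shape are illustrative:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

v = tf.get_variable("big_buffer", shape=[1000, 1000])  # ~4 MB of float32 state

sess = tf.Session()
sess.run(v.initializer)  # the session now owns the variable's memory
# ... run training steps ...
sess.close()             # releases the session's resources, including v's buffer
```

Using `with tf.Session() as sess:` calls `close()` automatically on exiting the block.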

The intermediate tensors that are created as part of a call to
@@ -137,7 +136,7 @@ TensorFlow also has a
to help build support for more client languages. We invite contributions of new
language bindings.

-Bindings for various other languages (such as [C#](https://github.com/migueldeicaza/TensorFlowSharp), [Julia](https://github.com/malmaud/TensorFlow.jl), [Ruby](https://github.com/somaticio/tensorflow.rb) and [Scala](https://github.com/eaplatanios/tensorflow_scala)) created and supported by the opensource community build on top of the C API supported by the TensorFlow maintainers.
+Bindings for various other languages (such as [C#](https://github.com/migueldeicaza/TensorFlowSharp), [Julia](https://github.com/malmaud/TensorFlow.jl), [Ruby](https://github.com/somaticio/tensorflow.rb) and [Scala](https://github.com/eaplatanios/tensorflow_scala)) created and supported by the open source community build on top of the C API supported by the TensorFlow maintainers.

#### Does TensorFlow make use of all the devices (GPUs and CPUs) available on my machine?

@@ -210,8 +209,8 @@ a new tensor with a different dynamic shape.

#### How do I build a graph that works with variable batch sizes?

-It is often useful to build a graph that works with variable batch sizes, for
-example so that the same code can be used for (mini-)batch training, and
+It is often useful to build a graph that works with variable batch sizes
+so that the same code can be used for (mini-)batch training, and
single-instance inference. The resulting graph can be
@{tf.Graph.as_graph_def$saved as a protocol buffer}
and
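The variable-batch-size pattern in this hunk can be sketched as follows, assuming the TF 1.x graph API (via `tf.compat.v1`); the feature width and input values are illustrative:

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Leave the batch dimension as None so one graph handles any batch size.
x = tf.placeholder(tf.float32, shape=[None, 4])
batch_size = tf.shape(x)[0]            # recover the dynamic batch size if needed
mean_per_example = tf.reduce_mean(x, axis=1)

with tf.Session() as sess:
    # A mini-batch of 3 examples and a single example run through the same graph:
    batch = sess.run(mean_per_example, {x: np.ones((3, 4), np.float32)})
    single = sess.run(mean_per_example, {x: np.ones((1, 4), np.float32)})

print(batch.shape, single.shape)  # (3,) (1,)
```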
@@ -260,7 +259,7 @@ See the how-to documentation for
There are three main options for dealing with data in a custom format.

The easiest option is to write parsing code in Python that transforms the data
-into a numpy array. Then use @{tf.data.Dataset.from_tensor_slices} to
+into a numpy array. Then, use @{tf.data.Dataset.from_tensor_slices} to
create an input pipeline from the in-memory data.

If your data doesn't fit in memory, try doing the parsing in the Dataset
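The "easiest option" in this hunk can be sketched without TensorFlow at all: parse the custom format into plain in-memory arrays first, then hand them to `tf.data.Dataset.from_tensor_slices`. The toy `label:f1,f2,f3` line format below is an assumption; only the parse-then-feed pattern is the point:

```python
# Toy parser for a hypothetical line-based format "label:f1,f2,f3".
def parse_lines(lines):
    labels, features = [], []
    for line in lines:
        label, raw = line.strip().split(":")
        labels.append(int(label))
        features.append([float(x) for x in raw.split(",")])
    return labels, features

labels, features = parse_lines(["1:0.5,0.25,0.125", "0:1.0,2.0,3.0"])
print(labels)    # [1, 0]
print(features)  # [[0.5, 0.25, 0.125], [1.0, 2.0, 3.0]]

# These lists can then feed tf.data.Dataset.from_tensor_slices((features, labels)).
```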
@@ -274,7 +273,7 @@ If your data is not easily parsable with the built-in TensorFlow operations,
consider converting it, offline, to a format that is easily parsable, such
as @{tf.python_io.TFRecordWriter$`TFRecord`} format.

-The more efficient method to customize the parsing behavior is to
+The most efficient method to customize the parsing behavior is to
@{$adding_an_op$add a new op written in C++} that parses your
data format. The @{$new_data_formats$guide to handling new data formats} has
more information about the steps for doing this.
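The offline conversion to `TFRecord` recommended above can be sketched as a small write-then-count round trip, using the TF 1.x API via `tf.compat.v1`; the feature name and record values are illustrative:

```python
import os
import tempfile
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

path = os.path.join(tempfile.mkdtemp(), "data.tfrecord")

# Offline conversion step: serialize each record as a tf.train.Example.
with tf.python_io.TFRecordWriter(path) as writer:
    for value in [1, 2, 3]:
        example = tf.train.Example(features=tf.train.Features(
            feature={"value": tf.train.Feature(
                int64_list=tf.train.Int64List(value=[value]))}))
        writer.write(example.SerializeToString())

# The resulting file is easily parsable by TensorFlow's input pipeline.
count = sum(1 for _ in tf.python_io.tf_record_iterator(path))
print(count)  # 3
```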