Autograph guide #4733

Conversation
From tensorflow/tensorflow/blob/8a6ef2cb4f98bacc1f821f60c21914b4bd5faaef
This is meant to be the guide page, yes? (Not a tutorial)
},
"cell_type": "markdown",
"source": [
"Into graph-compatible functions like this:"
or graph-building, if you prefer (it feels difficult to explain what is a graph-compatible function)
Agree with Dan for using "graph-building".
Also, should avoid using "eager mode" and "graph mode" in this doc. Prefer "eager execution" and "graph execution".
Done.
Very nice! A few cosmetic details to double check.
},
"cell_type": "markdown",
"source": [
"You can take code written for eager execution and run it in graph mode. You get the same results, but with all the benefits of graphs:"
This block might be confusing, because we're not in eager mode in this cell. The code does run with the input 9.0, but it doesn't run any ops - it just does computations in Python. It might be simpler to just omit the cell and only leave the text note.
Good point.
Given that we're using `Graph` and `Session` contexts everywhere... it looks like we can just switch on `eager_execution`, and wrap these in `tf.constant`s.
Is that a reasonable approach?
SGTM
" # You can inspect the graph using tf.get_default_graph().as_graph_def()\n", | ||
" g_ops = tf_g(tf.constant(9.0))\n", | ||
" with tf.Session() as sess:\n", | ||
" print('Autograph value: %2.2f\\n' % sess.run(g_ops)) " |
Alternatively, could be also "Tensor value", if we want to avoid overbranding (at any rate, should be consistent with the subsequent cells).
Switched these to "Eager result" and "Graph result".
"with tf.Graph().as_default(): \n", | ||
" # The result works like a regular op: takes tensors in, returns tensors.\n", | ||
" # You can inspect the graph using tf.get_default_graph().as_graph_def()\n", | ||
" input = tf.placeholder(tf.int32)\n", |
Nit: "input" is a python keyword, perhaps name the symbol "input_"?
Switched to "num" to match the argument name.
"source": [ | ||
"### Lists\n", | ||
"\n", | ||
"Appending to lists in loops also works (we create a `TensorArray` for you behind the scenes)" |
This is no longer always the case. Rephrasing to this would be ok: "(we create tensor list ops for you behind the scenes)"
Done
"def f(n):\n", | ||
" z = []\n", | ||
" # We ask you to tell us the element dtype of the list\n", | ||
" z = autograph.utils.set_element_type(z, tf.int32)\n", |
Bit of a last-minute update: we no longer require assigning to z. So this would be better:
autograph.utils.set_element_type(z, tf.int32)
Side note: we're rolling out a change to allow the shorter form:
autograph.set_element_type(z, tf.int32)
With the current tf-nightly this works:
z = []
autograph.utils.set_element_type(z, tf.int32)
but this fails:
z=[]
autograph.set_element_type(z, tf.int32)
b/111372661
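For readers wondering what declaring an element dtype buys: a rough pure-Python analogy (the `TypedList` class below is invented for illustration and is not an AutoGraph API) is a list that knows its element type up front, so appends can be checked and the result staged uniformly:

```python
# Hypothetical pure-Python analogue of declaring a list's element dtype
# up front, as set_element_type does for staged tensor lists.

class TypedList:
    def __init__(self, dtype):
        self.dtype = dtype
        self._items = []

    def append(self, item):
        # Reject elements that don't match the declared type.
        if not isinstance(item, self.dtype):
            raise TypeError('expected %s, got %s'
                            % (self.dtype.__name__, type(item).__name__))
        self._items.append(item)

    def stack(self):
        # Analogous to stacking a tensor list into one tensor.
        return list(self._items)

z = TypedList(int)  # declare the element type before the loop
for i in range(3):
    z.append(i * i)
print(z.stack())  # [0, 1, 4]
```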
"\n", | ||
"with tf.Graph().as_default(): \n", | ||
" with tf.Session():\n", | ||
" print(tf_f(tf.constant(3)).eval())" |
Any thought about using eval() / sess.run() consistently? I'm ok if you feel both should be exercised.
> Any thought about using eval() / sess.run() consistently? I'm ok if you feel both should be exercised.
Consistency, sure! Do you have a preferred style? I have no strong feelings on this.
I generally use sess.run(), because eval() is not as idiomatic.
},
"cell_type": "markdown",
"source": [
"### Nested If statement"
"Nested if statements"? (plural, not capitalized)
done
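For context on why nested if statements are worth calling out: AutoGraph rewrites them into functional conditionals in the style of `tf.cond`, which takes callables for each branch. A pure-Python sketch of that shape (the `cond` helper below is a stand-in, not the real `tf.cond`):

```python
# Sketch of how nested if statements map to the functional, tf.cond-style
# form that AutoGraph targets. `cond` is a plain-Python stand-in.

def cond(pred, true_fn, false_fn):
    # Like tf.cond: both branches are callables; only one is "taken".
    return true_fn() if pred else false_fn()

def sign(x):
    # Original, imperative form with a nested if.
    if x > 0:
        return 1
    else:
        if x < 0:
            return -1
        else:
            return 0

def sign_functional(x):
    # Converted form: nesting becomes nested cond calls.
    return cond(x > 0,
                lambda: 1,
                lambda: cond(x < 0, lambda: -1, lambda: 0))

for v in (-5, 0, 3):
    assert sign(v) == sign_functional(v)
print('nested conditionals match')
```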
},
"cell_type": "markdown",
"source": [
"## Advanced example: A training, loop in-graph\n",
Unnecessary comma?
done
" while i < hp.max_steps:\n", | ||
" train_x, train_y = get_next_batch(train_ds)\n", | ||
" test_x, test_y = get_next_batch(test_ds)\n", | ||
" # add get next\n", |
This comment feels unnecessary.
done
"\n", | ||
"Writing control flow in AutoGraph is easy, so running a training loop in a TensorFlow graph should be easy as well! \n", | ||
"\n", | ||
"Here, we show an example of training a simple Keras model on MNIST, where the entire training process -- loading batches, calculating gradients, updating parameters, calculating validation accuracy, and repeating until convergence -- is done in-graph." |
WDYT about a sentence somewhere ahead of this that mentions something in the lines of "oh, and you can use Keras libraries in AutoGraph as well"? Not sure if it's something the user would expect or not.
Done. I'll link to the other notebook here when it's ready.
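As a rough, TensorFlow-free sketch of the loop shape being discussed — fetch a batch, compute a gradient, update parameters, repeat to a step budget — here is a toy 1-D least-squares loop (all names below, including `get_next_batch`, are invented stand-ins for the notebook's versions):

```python
# Toy pure-Python training loop with the same shape as the in-graph one:
# batch, gradient, parameter update, repeat until max_steps.

def get_next_batch(step):
    # Hypothetical stand-in for the notebook's dataset iterator.
    x = float(step % 10)
    return x, 3.0 * x  # true slope is 3.0

w = 0.0
learning_rate = 0.01
max_steps = 500

for step in range(max_steps):
    x, y = get_next_batch(step)
    grad = 2.0 * (w * x - y) * x  # d/dw of the squared error (w*x - y)^2
    w -= learning_rate * grad

assert abs(w - 3.0) < 1e-3  # the loop recovers the true slope
print('learned slope: %.3f' % w)
```

AutoGraph's pitch is that this whole loop body, control flow included, can be staged into the graph rather than driven step-by-step from Python.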
Add note about keras layers and models.
I'm open to suggestions. I was thinking this notebook could go in the "Low Level APIs" section of the guide, since that's the only place where we expect users to understand the details of graph mode. I'm (still) working on a keras-model example that could go in tutorials.
done.
Done.
Fixed.
Latest draft looks good. A note about the import lines - the reader might be confused into thinking AutoGraph requires eager mode to be enabled. Could we add a note that we only enable it for comparison purposes?
I was kinda wondering about that ...
Looks great! I'll be able to assist with debugging the convergence TODO later today if you wish.
samples/core/guide/autograph.ipynb
Outdated
"source": [ | ||
"We'll enable [eager execution](https://www.tensorflow.org/guide/eager) for demonstration purposes, but AutoGraph works in both eager and [graph execution](https://www.tensorflow.org/guide/graphs) environments:", | ||
"", | ||
"Important: The converted code will _work_ directly in eager mode. But it runs as an eager-python function. To run in-graph use explicit graphs, as in this doc, or use `tf.contrib.eager.defun`." |
I think we added similar notes, can you remove this one?
Updated note in PR: MarkDaoust#8
Fix note about eager
Cool, I linked you to an internal bug for it.
Pull AutoGraph Workshop notebook into tensorflow.org/guide
Staging link:
https://colab.research.google.com/github/markdaoust/models/blob/autopgraph-guide/samples/core/guide/autograph_control_flow.ipynb