
AutoGraph reference

Index

Debugging AutoGraph code

The recommended way to debug AutoGraph code is to run it eagerly (see below).

AutoGraph generates a new function, rather than directly executing the input function. Non-code elements, such as breakpoints, do not transfer to the generated code.

You can step through the generated code and set breakpoints while debugging. The converted function is cached, and breakpoints should persist for the lifetime of the Python runtime.

Note: The code generated by AutoGraph is more complex than the input code, and is interspersed with AutoGraph boilerplate.
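
If you want to see what AutoGraph actually generated, tf.autograph.to_code returns the transformed source. A quick sketch (the function body here is just for illustration):

import tensorflow as tf

def f(a):
  if a > 0:
    tf.print(a, 'is positive')

# Show the source of the function AutoGraph generates for f.
print(tf.autograph.to_code(f))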

Note: Python debugging can only be used to step through the code during graph construction time (or tracing time in the case of tf.function). To debug TensorFlow execution, use Eager execution.

Debugging tf.function: tf.config.experimental_run_functions_eagerly

When using @tf.function, you can temporarily toggle graph execution by using tf.config.experimental_run_functions_eagerly. This will effectively run the annotated code eagerly, without transformation. Since AutoGraph has semantics consistent with Eager, it's an effective way to debug the code step-by-step.

Note: AutoGraph is compatible with Eager, but the converse is not always true, so exercise care when making modifications to the code while debugging.
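
For instance, code that calls .numpy() on its argument works while functions run eagerly, but fails once graph execution is restored, because symbolic tensors carry no concrete value at trace time. A minimal sketch (the function g is hypothetical):

@tf.function
def g(a):
  # Works eagerly; fails at trace time because `a` is symbolic there.
  return a.numpy() + 1

tf.config.experimental_run_functions_eagerly(True)
g(tf.constant(1))   # returns 2
tf.config.experimental_run_functions_eagerly(False)
g(tf.constant(1))   # raises an error during tracing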

Consider the following code:

import pdb
import tensorflow as tf

@tf.function
def f(a):
  pdb.set_trace()
  if a > 0:
    tf.print(a, 'is positive')

Executing the line below will land the debugger in the generated code when the function is traced:

f(1)
(Pdb) l
     10     def tf__f(a):
     11       pdb.set_trace()
---> 12       ag__.converted_call('print', tf, ag__.STD, (a,), None)
     13
     14       ...

Calling tf.config.experimental_run_functions_eagerly(True) before executing the function will land the debugger in the original code instead:

tf.config.experimental_run_functions_eagerly(True)
f(1)
(Pdb) l
      8 def f(a):
      9   pdb.set_trace()
---> 10   if a > 0:
     11     tf.print(a, 'is positive')
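
Once you're done stepping through, remember to switch graph execution back on; otherwise all subsequent tf.function calls keep running eagerly:

tf.config.experimental_run_functions_eagerly(False)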

Using print and tf.print

The print function is not converted by AutoGraph, and can be used to inspect the values of variables at graph construction time.
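
For example, print can reveal static information, such as an argument's inferred shape, while the graph is being built; a minimal sketch:

@tf.function
def f(a):
  # Runs once, at trace time; `a` is symbolic, but its static shape is known.
  print('static shape:', a.shape)

f(tf.constant([1, 2, 3]))  # prints: static shape: (3,)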

Mixing print with tf.print can be confusing at first because they run at different stages. In general:

  • all print calls run when the TensorFlow graph is constructed
  • all tf.print calls run when the TensorFlow graph is executed

Example: print

First, consider a function that uses only print:

@tf.function
def f(a):
  print(a)
  if a > 0:
    a = -a

When a is a tf.Tensor object, it is printed without an actual value:

f(tf.constant(1))
Tensor("a:0", shape=(), dtype=int32)

In contrast, when a is a plain Python value, it is printed directly:

f(1)
1

Example: print followed by tf.print

To see the difference between print and tf.print, let's run them together:

@tf.function
def f(a):
  print(a)
  tf.print(a)

For non-Tensor values, they produce similar results:

f(1)
1
1

For Tensor values, only tf.print outputs the actual value:

f(tf.constant(1))
Tensor("a:0", shape=(), dtype=int32)
1

Example: tf.print followed by print

Remember that, in general, all print calls run before all tf.print calls. What's more, since graphs are usually built once and executed multiple times, print usually runs just once, when the function is first called.

So in the example below, even though tf.print appears above print, it will run after it, because the graph is executed after it is built:

@tf.function
def f(a):
  tf.print('At graph execution:', a)
  print('At graph construction:', a)
f(tf.constant(1))
At graph construction: Tensor("a:0", shape=(), dtype=int32)
At graph execution: 1

Calling the function again will re-use the graph in this case:

f(tf.constant(1))
At graph execution: 1
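
This only holds while the traced graph can be reused. An argument of a different dtype (or a different Python value) forces a new trace, so print runs again; a sketch, with the output approximate:

f(tf.constant(1.0))
At graph construction: Tensor("a:0", shape=(), dtype=float32)
At graph execution: 1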