@tf.function
`@tf.function` makes up for the efficiency cost of eager execution:

- Debug in eager mode, then decorate with `@tf.function`.
- Don't rely on Python side effects like object mutation or list appends.
- `tf.function` works best with TensorFlow ops; NumPy and Python calls are converted to constants.
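The second point is easy to trip over. Here is a minimal sketch (the `add_and_log` function and the `trace_log` list are illustrative, not from the original) showing that a Python list append runs only while the function is being traced, while `tf.print` runs on every call:

```python
import tensorflow as tf

trace_log = []  # a plain Python list, mutated as a side effect

@tf.function
def add_and_log(x):
  trace_log.append(x)     # Python side effect: runs only during tracing
  tf.print("executing")   # TensorFlow op: runs on every call
  return x + 1

add_and_log(tf.constant(1))
add_and_log(tf.constant(2))   # same signature -> trace is reused, no new append
print(len(trace_log))         # 1, not 2: the append only happened while tracing
```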
1. When the constructed graph contains only a few (expensive) ops, the timing difference between eager execution and `tf.function` is small:

```python
import timeit
```
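Only the `import timeit` line of the original benchmark survived extraction; the following is a self-contained sketch of the kind of comparison meant here, with an assumed convolution layer and input shape, where one heavyweight op dominates the cost:

```python
import timeit
import tensorflow as tf

# A graph dominated by a single expensive op: one Conv2D layer.
conv_layer = tf.keras.layers.Conv2D(100, 3)

@tf.function
def conv_fn(image):
  return conv_layer(image)

image = tf.zeros([1, 200, 200, 100])

# Warm up: create the layer variables and trace the tf.function once.
conv_layer(image)
conv_fn(image)

print("Eager conv:   ", timeit.timeit(lambda: conv_layer(image), number=10))
print("Function conv:", timeit.timeit(lambda: conv_fn(image), number=10))
# The two timings are close: the convolution itself dominates, so wrapping
# it in a graph brings little extra speedup.
```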
2. Dynamic binding: Python is dynamically typed, so passing arguments of different types to a function can produce different behavior. `tf.function` supports this as well, and it can reuse computation graphs it has already traced:
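The original code block did not survive extraction; a minimal sketch consistent with the `double` function referenced below, which traces a new graph for each new argument dtype:

```python
import tensorflow as tf

@tf.function
def double(a):
  print("Tracing with", a)   # printed only when a new trace is created
  return a + a

print(double(tf.constant(1)))     # traces for int32
print(double(tf.constant(1.1)))   # new dtype -> traces again
print(double(tf.constant("a")))   # new dtype -> traces again
print(double(tf.constant(2)))     # int32 again -> reuses the existing graph
```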
You can use `print(double.pretty_printed_concrete_signatures())` to inspect the existing traces; it prints one signature (with argument and return dtypes) per concrete function, each entry beginning with:

```
double(a)
```
Four points about TensorFlow's computation graphs:

- A `tf.Graph` is the raw, language-agnostic, portable representation of your computation.
- A `ConcreteFunction` is an eagerly-executing wrapper around a `tf.Graph`.
- A `Function` manages a cache of `ConcreteFunction`s and picks the right one for your inputs.
- `tf.function` wraps a Python function, returning a `Function` object.
Every time a function is traced, a new concrete function is created.
You can obtain a concrete function through the `get_concrete_function` interface:
```python
print("Obtaining concrete trace")
double_strings = double.get_concrete_function(tf.constant("a"))

# You can also call get_concrete_function on an InputSpec
double_strings_from_inputspec = double.get_concrete_function(tf.TensorSpec(shape=[], dtype=tf.string))
```
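To make the Graph / ConcreteFunction / Function layering concrete, a short sketch continuing from the snippet above (it assumes `double` and `double_strings` as defined there):

```python
# A ConcreteFunction executes eagerly but runs the already-traced graph.
print(double_strings(tf.constant("a")))            # tf.Tensor(b'aa', shape=(), dtype=string)

# It wraps a tf.Graph directly:
print(isinstance(double_strings.graph, tf.Graph))  # True

# The Function `double` keeps this trace in its cache, so another string
# input does not trigger a new trace ("Tracing with" is not printed again).
double(tf.constant("b"))
```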
Starting with TensorFlow 2.3, Python arguments remain in the signature, but are constrained to take the value set during tracing. In other words, a plain Python (non-Tensor) argument becomes part of the trace's signature, and calling with a different Python value triggers a new trace:
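The original example did not survive extraction; a minimal sketch of the behavior described above (the `train` function and its `num_steps` argument are illustrative):

```python
import tensorflow as tf

@tf.function
def train(num_steps):
  print("Tracing with num_steps =", num_steps)       # Python print: only when (re)tracing
  tf.print("Executing with num_steps =", num_steps)  # tf.print: on every call

train(num_steps=10)   # traces, with the Python value 10 baked into the graph
train(num_steps=10)   # same Python value -> the trace is reused
train(num_steps=20)   # different Python value -> retraces
```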
When tracking down issues that only appear within `tf.function`, here are some tips:

- Plain old Python `print` calls only execute during tracing, helping you track down when your function gets (re)traced.
- `tf.print` calls will execute every time, and can help you track down intermediate values during execution (see the sketch after this list).
- `tf.debugging.enable_check_numerics` is an easy way to track down where NaNs and Inf are created.
- `pdb` can help you understand what's going on during tracing. (Caveat: PDB will drop you into AutoGraph-transformed source code.)
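A compact sketch of the first three tips (the function `f` and its inputs are illustrative):

```python
import tensorflow as tf

# Uncomment to have TensorFlow raise as soon as a NaN or Inf is produced:
# tf.debugging.enable_check_numerics()

@tf.function
def f(x):
  print("Traced f with", x)       # plain print: fires only when (re)tracing
  tf.print("Running f with", x)   # tf.print: fires on every execution
  return 1.0 / x

f(tf.constant(2.0))   # first call: traces, so both lines print
f(tf.constant(4.0))   # trace reused: only the tf.print line appears
f(tf.constant(0.0))   # produces Inf; enable_check_numerics would flag it here
```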