Remove --fold_const parameter (#1861)
* remove fold_const param

Signed-off-by: hwangdeyu <dejack953@outlook.com>

* remove input_signature in from_keras_tf1()

Signed-off-by: hwangdeyu <dejack953@outlook.com>
hwangdeyu committed Feb 27, 2022
1 parent 55d001a commit ab6584c
Showing 12 changed files with 25 additions and 33 deletions.
4 changes: 0 additions & 4 deletions README.md
@@ -140,7 +140,6 @@ python -m tf2onnx.convert
[--concrete_function CONCRETE_FUNCTION]
[--target TARGET]
[--custom-ops list-of-custom-ops]
[--fold_const]
[--large_model]
[--continue_on_error]
[--verbose]
@@ -230,9 +229,6 @@ will be used.

Some models require special handling to run on some runtimes. In particular, the model may use unsupported data types. Workarounds are activated with ```--target TARGET```. Currently supported values are listed on this [wiki](https://github.com/onnx/tensorflow-onnx/wiki/target). If your model will be run on Windows ML, you should specify the appropriate target value.
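For example, a hypothetical invocation for a model that will run on Windows ML (the file and tensor names here are placeholders; pick the target value from the wiki list):

```
python -m tf2onnx.convert --input frozen_model.pb --inputs input:0 --outputs output:0 --target rs6 --output model.onnx
```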

#### --fold_const

Deprecated.

### <a name="summarize_graph"></a>Tool to get Graph Inputs & Outputs

2 changes: 1 addition & 1 deletion Troubleshooting.md
@@ -33,6 +33,6 @@ The reason for this is that there is a dynamic input of a tensorflow op but the

An example of this is the [ONNX Slice operator before opset-10](https://github.com/onnx/onnx/blob/master/docs/Changelog.md#Slice-1) - the start and end of the slice are static attributes that need to be known at graph creation. In tensorflow the [strided slice op](https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/strided-slice) allows dynamic inputs. tf2onnx will try to find the real values of begin and end of the slice and can find them in most cases. But if those are truly dynamic values calculated at runtime, it will result in the message ```get tensor value: ... must be Const```.
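As an illustration, a minimal TF1-style sketch (tensor names and shapes are made up) contrasting a slice tf2onnx can fold with one it cannot:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()
x = tf.placeholder(tf.float32, [None, 10], name="input")

# begin/end are Python constants: tf2onnx can fold them into the static
# attributes that ONNX Slice requires before opset-10.
static = tf.strided_slice(x, [0, 0], [1, 5])

# end depends on the runtime batch size: there is no constant to fold, so
# converting at opset < 10 fails with "get tensor value: ... must be Const".
n = tf.shape(x)[0]
dynamic = tf.strided_slice(x, [0, 0], tf.stack([n, 5]))
```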

You can pass the options ```--fold_const``` in the tf2onnx command line that allows tf2onnx to apply more aggressive constant folding which will increase chances to find a constant.
You can pass the option ```--fold_const``` (removed after tf2onnx-1.9.3) on the tf2onnx command line to let tf2onnx apply more aggressive constant folding, which will increase the chances of finding a constant.

If this doesn't work, the model most likely cannot be converted to ONNX. We used to see this issue a lot with the ONNX Slice op, which was updated in opset-10 for exactly this reason.
4 changes: 2 additions & 2 deletions examples/rnn_tips.md
@@ -16,7 +16,7 @@ For other advanced RNN cells, it is supposed to be good to convert as well, but the
Use the following command to have a quick trial on your model:

```
python -m tf2onnx.convert --input frozen_rnn_model.pb --inputs input1:0,input2:0 --outputs output1:0,output2:0 --fold_const --opset 8 --output target.onnx --continue_on_error
python -m tf2onnx.convert --input frozen_rnn_model.pb --inputs input1:0,input2:0 --outputs output1:0,output2:0 --opset 8 --output target.onnx --continue_on_error
```

## Limitation
@@ -36,7 +36,7 @@ Use [onnxruntime](https://github.com/Microsoft/onnxruntime) or [caffe2](https://
There is a simpler way to run your models and test their correctness (compared with the TensorFlow run) using the following command.

```
python tests\run_pretrained_models.py --backend onnxruntime --config rnn.yaml --tests model_name --fold_const --onnx-file ".\tmp" --opset 8
python tests\run_pretrained_models.py --backend onnxruntime --config rnn.yaml --tests model_name --onnx-file ".\tmp" --opset 8
```

The content of rnn.yaml looks as below. For inputs, an explicit numpy expression or a shape can be used. If a shape is specified, the value will be randomly generated.
10 changes: 5 additions & 5 deletions tests/backend_test_base.py
@@ -133,7 +133,7 @@ def assert_results_equal(self, expected, actual, rtol, atol, mtol=None,
if check_shape:
self.assertEqual(expected_val.shape, actual_val.shape)

def freeze_and_run_tf(self, func, feed_dict, outputs, as_session, premade_placeholders, large_model, constant_fold):
def freeze_and_run_tf(self, func, feed_dict, outputs, as_session, premade_placeholders, large_model):
np.random.seed(1) # Make it reproducible.
clean_feed_dict = {utils.node_name(k): v for k, v in feed_dict.items()}
if is_tf2() and not as_session:
@@ -195,7 +195,7 @@ def freeze_and_run_tf(self, func, feed_dict, outputs, as_session, premade_placeh
tf_reset_default_graph()
with tf_session() as sess:
tf.import_graph_def(graph_def, name='')
graph_def = tf_optimize(list(feed_dict.keys()), outputs, graph_def, fold_constant=constant_fold)
graph_def = tf_optimize(list(feed_dict.keys()), outputs, graph_def)

return result, graph_def, initialized_tables

@@ -331,8 +331,8 @@ def get_dtype(info):
self.assertEqual(get_dtype(info), graph.get_dtype(info.name))

def run_test_case(self, func, feed_dict, input_names_with_port, output_names_with_port,
rtol=1e-07, atol=1e-5, mtol=None, convert_var_to_const=True, constant_fold=True,
check_value=True, check_shape=True, check_dtype=True, process_args=None, onnx_feed_dict=None,
rtol=1e-07, atol=1e-5, mtol=None, convert_var_to_const=True, check_value=True,
check_shape=True, check_dtype=True, process_args=None, onnx_feed_dict=None,
graph_validator=None, as_session=False, large_model=False, premade_placeholders=False,
use_custom_ops=False, optimize=True):
"""
@@ -361,7 +361,7 @@ def run_test_case(self, func, feed_dict, input_names_with_port, output_names_wit

expected, graph_def, initialized_tables = \
self.freeze_and_run_tf(func, feed_dict, output_names_with_port, as_session,
premade_placeholders, large_model, constant_fold)
premade_placeholders, large_model)

graph_def_path = os.path.join(self.test_data_directory, self._testMethodName + "_after_tf_optimize.pb")
utils.save_protobuf(graph_def_path, graph_def)
5 changes: 2 additions & 3 deletions tests/test_backend.py
@@ -174,7 +174,6 @@ def get_maxpoolwithargmax_getdata():
class BackendTests(Tf2OnnxBackendTestBase):
def _run_test_case(self, func, output_names_with_port, feed_dict, **kwargs):
kwargs["convert_var_to_const"] = False
kwargs["constant_fold"] = False
return self.run_test_case(func, feed_dict, [], output_names_with_port, **kwargs)

def _test_expand_dims_known_rank(self, idx):
@@ -709,7 +708,7 @@ def func(x):
feed_dict = {"input_1:0": x_val}
input_names_with_port = ["input_1:0"]
output_names_with_port = ["output:0"]
self.run_test_case(func, feed_dict, input_names_with_port, output_names_with_port, constant_fold=False,
self.run_test_case(func, feed_dict, input_names_with_port, output_names_with_port,
graph_validator=lambda g: (check_op_count(g, "RandomUniform", 0) and
check_op_count(g, "RandomUniformLike", 0)))

@@ -5229,7 +5228,7 @@ def func(query_holder):
lookup_results = hash_table.lookup(query_holder)
ret = tf.add(lookup_results, 0, name=_TFOUTPUT)
return ret
self._run_test_case(func, [_OUTPUT], {_INPUT: query}, constant_fold=False, as_session=True)
self._run_test_case(func, [_OUTPUT], {_INPUT: query}, as_session=True)
os.remove(filnm)

@check_opset_min_version(8, "CategoryMapper")
1 change: 0 additions & 1 deletion tests/test_const_fold.py
@@ -16,7 +16,6 @@
class ConstantFoldingTests(Tf2OnnxBackendTestBase):
def _run_test_case(self, func, output_names_with_port, feed_dict, **kwargs):
kwargs["convert_var_to_const"] = False
kwargs["constant_fold"] = False
return self.run_test_case(func, feed_dict, [], output_names_with_port, **kwargs)

def test_concat(self):
2 changes: 1 addition & 1 deletion tests/test_string_ops.py
@@ -167,7 +167,7 @@ def func(text):
return tokens_, begin_, end_, rows_
# Fails due to Attempting to capture an EagerTensor without building a function.
self._run_test_case(func, [_OUTPUT, _OUTPUT1, _OUTPUT2, _OUTPUT3],
{_INPUT: text_val}, constant_fold=False, as_session=True)
{_INPUT: text_val}, as_session=True)


if __name__ == "__main__":
2 changes: 1 addition & 1 deletion tests/test_tf_shape_inference.py
@@ -41,7 +41,7 @@ def _run_test_case(self, input_names_with_port, output_names_with_port):
tf.import_graph_def(graph_def, name='')

# optimize graph
graph_def = tf_optimize(input_names_with_port, output_names_with_port, sess.graph_def, True)
graph_def = tf_optimize(input_names_with_port, output_names_with_port, sess.graph_def)

with tf_session() as sess:
if self.config.is_debug_mode:
15 changes: 7 additions & 8 deletions tf2onnx/convert.py
@@ -83,8 +83,7 @@ def get_args():
parser.add_argument("--verbose", "-v", help="verbose output, option is additive", action="count")
parser.add_argument("--debug", help="debug mode", action="store_true")
parser.add_argument("--output_frozen_graph", help="output frozen tf graph to file")
parser.add_argument("--fold_const", help="Deprecated. Constant folding is always enabled.",
action="store_true")

# experimental
parser.add_argument("--inputs-as-nchw", help="transpose inputs as from nhwc to nchw")
args = parser.parse_args()
@@ -353,9 +352,9 @@ def _is_legacy_keras_model(model):
return False


def _from_keras_tf1(model, input_signature=None, opset=None, custom_ops=None, custom_op_handlers=None,
custom_rewriter=None, inputs_as_nchw=None, extra_opset=None, shape_override=None,
target=None, large_model=False, output_path=None):
def _from_keras_tf1(model, opset=None, custom_ops=None, custom_op_handlers=None, custom_rewriter=None,
inputs_as_nchw=None, extra_opset=None, shape_override=None, target=None,
large_model=False, output_path=None):
"""from_keras for tf 1.15"""
input_names = [t.name for t in model.inputs]
output_names = [t.name for t in model.outputs]
@@ -375,7 +374,7 @@ def _from_keras_tf1(model, input_signature=None, opset=None, custom_ops=None, cu
frozen_graph, initialized_tables = tf_loader.freeze_session(sess, input_names, output_names, get_tables=True)
with tf.Graph().as_default():
tf.import_graph_def(frozen_graph, name="")
frozen_graph = tf_loader.tf_optimize(input_names, output_names, frozen_graph, False)
frozen_graph = tf_loader.tf_optimize(input_names, output_names, frozen_graph)
model_proto, external_tensor_storage = _convert_common(
frozen_graph,
name=model.name,
@@ -423,8 +422,8 @@ def from_keras(model, input_signature=None, opset=None, custom_ops=None, custom_
An ONNX model_proto and an external_tensor_storage dict.
"""
if LooseVersion(tf.__version__) < "2.0":
return _from_keras_tf1(model, input_signature, opset, custom_ops, custom_op_handlers, custom_rewriter,
inputs_as_nchw, extra_opset, shape_override, target, large_model, output_path)
return _from_keras_tf1(model, opset, custom_ops, custom_op_handlers, custom_rewriter, inputs_as_nchw,
extra_opset, shape_override, target, large_model, output_path)

old_out_names = _rename_duplicate_keras_model_names(model)
from tensorflow.python.keras.saving import saving_utils as _saving_utils # pylint: disable=import-outside-toplevel
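For reference, a minimal sketch of the public API after this change (the toy model, opset, and output path are illustrative); note that no caller passes a fold_const flag any more, since constant folding always runs:

```python
import tensorflow as tf
import tf2onnx

model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])
spec = (tf.TensorSpec((None, 8), tf.float32, name="input"),)

# Returns the ONNX ModelProto plus an external-tensor storage dict
# (the dict is only populated when large_model=True).
model_proto, external_tensor_storage = tf2onnx.convert.from_keras(
    model, input_signature=spec, opset=13, output_path="model.onnx")
```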
1 change: 0 additions & 1 deletion tf2onnx/rewriter/random_uniform.py
@@ -39,7 +39,6 @@ def rewrite_random_uniform(g, ops):
return ops


# rewriter function when fold_const is enabled
def rewrite_random_uniform_fold_const(g, ops):
pattern = \
OpTypePattern('Add', name='output', inputs=[
8 changes: 4 additions & 4 deletions tf2onnx/tf_loader.py
@@ -673,15 +673,15 @@ def from_keras(model_path, input_names, output_names):
return frozen_graph, input_names, output_names


def tf_optimize_grappler(input_names, output_names, graph_def, fold_constant=None):
def tf_optimize_grappler(input_names, output_names, graph_def):
from tensorflow.core.protobuf import meta_graph_pb2 as meta_graph_pb2, config_pb2, rewriter_config_pb2
from tensorflow.python.grappler import tf_optimizer as tf_opt

config = config_pb2.ConfigProto()
rewrite_options = config.graph_options.rewrite_options
config.graph_options.infer_shapes = True
# TODO: if we turn on pruning, grappler removes some identities that the tf-1.x lstm rewriter
# depends on so for now don't turn this on, fold_constant is always enabled now.
# depends on so for now don't turn this on, constfold is always enabled now.
rewrite_options.optimizers[:] = [
# 'pruning', 'constfold', 'arithmetic', 'dependency', 'function',
'constfold', 'function'
@@ -700,7 +700,7 @@ def tf_optimize_grappler(input_names, output_names, graph_def, fold_constant=Non
return graph_def


def tf_optimize(input_names, output_names, graph_def, fold_constant=True):
def tf_optimize(input_names, output_names, graph_def):
"""Extract inference subgraph and optimize graph."""
assert isinstance(input_names, list)
assert isinstance(output_names, list)
@@ -712,7 +712,7 @@

want_grappler = is_tf2() or LooseVersion(tf.__version__) >= "1.15"
if want_grappler:
graph_def = tf_optimize_grappler(input_names, output_names, graph_def, fold_constant)
graph_def = tf_optimize_grappler(input_names, output_names, graph_def)
else:
# the older transform path
from tensorflow.tools.graph_transforms import TransformGraph # pylint: disable=redefined-outer-name
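For callers that invoke the loader directly, a small usage sketch of the updated signature (file and tensor names are placeholders):

```python
import tensorflow as tf
from tf2onnx import tf_loader

# Load a frozen GraphDef and optimize it. Constant folding is no longer
# optional: grappler's 'constfold' pass always runs.
graph_def = tf.compat.v1.GraphDef()
with open("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
graph_def = tf_loader.tf_optimize(["input:0"], ["output:0"], graph_def)
```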
4 changes: 2 additions & 2 deletions tf2onnx/tfonnx.py
@@ -625,7 +625,7 @@ def compat_handler(ctx, node, **kwargs):
return g


def tf_optimize(input_names, output_names, graph_def, fold_constant=True):
def tf_optimize(input_names, output_names, graph_def):
"""optimize tensorflow graph. This is in tf_loader but some apps call this
so we proxy into tf_loader to keep them working."""
return tf2onnx.tf_loader.tf_optimize(input_names, output_names, graph_def, fold_constant)
return tf2onnx.tf_loader.tf_optimize(input_names, output_names, graph_def)
