
[Tensorflow] ops not quantized #25

Closed · peiwenhuang27 opened this issue Aug 27, 2021 · 17 comments

@peiwenhuang27 commented Aug 27, 2021

Framework: Tensorflow 2.6.0
LPOT: 1.6.0

When I printed out the tune_cfg in strategy.py, this is what I got:

### op_cfgs ###
('model/dense_5/Tensordot/MatMul', 'matmul')
{'weight': {'dtype': 'int8', 'scheme': 'sym', 'granularity': 'per_tensor', 'algorithm': 'minmax', 'bit': 7.0}, 'activation': {'dtype': 'uint8', 'scheme': 'asym', 'granularity': 'per_tensor', 'algorithm': 'minmax'}, 'pattern': {'sequence': 'MatMul,BiasAdd', 'precision': 'int8'}}
('model/dense_5/Tensordot/concat_1', 'concat')
{'activation': {'dtype': 'uint8', 'algorithm': 'minmax', 'scheme': 'sym', 'granularity': 'per_tensor'}}
('model/dense_4/Tensordot/MatMul', 'matmul')
{'weight': {'dtype': 'int8', 'scheme': 'sym', 'granularity': 'per_tensor', 'algorithm': 'minmax', 'bit': 7.0}, 'activation': {'dtype': 'uint8', 'scheme': 'asym', 'granularity': 'per_tensor', 'algorithm': 'minmax'}, 'pattern': {'sequence': 'MatMul,BiasAdd', 'precision': 'int8'}}
('model/dense_4/Tensordot/concat_1', 'concat')
{'activation': {'dtype': 'uint8', 'algorithm': 'minmax', 'scheme': 'sym', 'granularity': 'per_tensor'}}
('model/dense_3/Tensordot/MatMul', 'matmul')
{'weight': {'dtype': 'int8', 'scheme': 'sym', 'granularity': 'per_tensor', 'algorithm': 'minmax', 'bit': 7.0}, 'activation': {'dtype': 'uint8', 'scheme': 'asym', 'granularity': 'per_tensor', 'algorithm': 'minmax'}, 'pattern': {'sequence': 'MatMul,BiasAdd', 'precision': 'int8'}}
('model/dense_3/Tensordot/concat_1', 'concat')
{'activation': {'dtype': 'uint8', 'algorithm': 'minmax', 'scheme': 'sym', 'granularity': 'per_tensor'}}
('model/dense_1/Tensordot/MatMul', 'matmul')
{'weight': {'dtype': 'int8', 'scheme': 'sym', 'granularity': 'per_tensor', 'algorithm': 'minmax', 'bit': 7.0}, 'activation': {'dtype': 'uint8', 'scheme': 'asym', 'granularity': 'per_tensor', 'algorithm': 'minmax'}, 'pattern': {'sequence': 'MatMul,BiasAdd', 'precision': 'int8'}}
('model/dense_1/Tensordot/concat_1', 'concat')
{'activation': {'dtype': 'uint8', 'algorithm': 'minmax', 'scheme': 'sym', 'granularity': 'per_tensor'}}
('model/dense_2/Tensordot/MatMul', 'matmul')
{'weight': {'dtype': 'int8', 'scheme': 'sym', 'granularity': 'per_tensor', 'algorithm': 'minmax', 'bit': 7.0}, 'activation': {'dtype': 'uint8', 'scheme': 'asym', 'granularity': 'per_tensor', 'algorithm': 'minmax'}, 'pattern': {'sequence': 'MatMul,BiasAdd', 'precision': 'int8'}}
('model/dense_2/Tensordot/concat_1', 'concat')
{'activation': {'dtype': 'uint8', 'algorithm': 'minmax', 'scheme': 'sym', 'granularity': 'per_tensor'}}
('model/dense/Tensordot/MatMul', 'matmul')
{'weight': {'dtype': 'int8', 'scheme': 'sym', 'granularity': 'per_tensor', 'algorithm': 'minmax', 'bit': 7.0}, 'activation': {'dtype': 'uint8', 'scheme': 'asym', 'granularity': 'per_tensor', 'algorithm': 'minmax'}, 'pattern': {'sequence': 'MatMul,BiasAdd', 'precision': 'int8'}}
('model/LSTM_2/PartitionedCall/while/body/_23/while/MatMul', 'matmul')
{'weight': {'dtype': 'int8', 'scheme': 'sym', 'granularity': 'per_tensor', 'algorithm': 'minmax', 'bit': 7.0}, 'activation': {'dtype': 'uint8', 'scheme': 'asym', 'granularity': 'per_tensor', 'algorithm': 'minmax'}, 'pattern': {'sequence': 'MatMul', 'precision': 'int8'}}
('model/dense/Tensordot/concat_1', 'concat')
{'activation': {'dtype': 'uint8', 'algorithm': 'minmax', 'scheme': 'sym', 'granularity': 'per_tensor'}}
('model/LSTM_2/PartitionedCall/while/body/_23/while/MatMul_1', 'matmul')
{'weight': {'dtype': 'int8', 'scheme': 'sym', 'granularity': 'per_tensor', 'algorithm': 'minmax', 'bit': 7.0}, 'activation': {'dtype': 'uint8', 'scheme': 'asym', 'granularity': 'per_tensor', 'algorithm': 'minmax'}, 'pattern': {'sequence': 'MatMul', 'precision': 'int8'}}
('model/LSTM_1/PartitionedCall/while/body/_83/while/MatMul', 'matmul')
{'weight': {'dtype': 'int8', 'scheme': 'sym', 'granularity': 'per_tensor', 'algorithm': 'minmax', 'bit': 7.0}, 'activation': {'dtype': 'uint8', 'scheme': 'asym', 'granularity': 'per_tensor', 'algorithm': 'minmax'}, 'pattern': {'sequence': 'MatMul', 'precision': 'int8'}}
('model/LSTM_1/PartitionedCall/while/body/_83/while/MatMul_1', 'matmul')
{'weight': {'dtype': 'int8', 'scheme': 'sym', 'granularity': 'per_tensor', 'algorithm': 'minmax', 'bit': 7.0}, 'activation': {'dtype': 'uint8', 'scheme': 'asym', 'granularity': 'per_tensor', 'algorithm': 'minmax'}, 'pattern': {'sequence': 'MatMul', 'precision': 'int8'}}
('model/52/Conv2D', 'conv2d')
{'weight': {'dtype': 'int8', 'scheme': 'sym', 'granularity': 'per_channel', 'algorithm': 'minmax', 'bit': 7.0}, 'activation': {'dtype': 'uint8', 'scheme': 'sym', 'granularity': 'per_tensor', 'algorithm': 'minmax'}, 'pattern': {'sequence': 'Conv2D', 'precision': 'int8'}}
('model/51/Conv2D', 'conv2d')
{'weight': {'dtype': 'int8', 'scheme': 'sym', 'granularity': 'per_channel', 'algorithm': 'minmax', 'bit': 7.0}, 'activation': {'dtype': 'uint8', 'scheme': 'sym', 'granularity': 'per_tensor', 'algorithm': 'minmax'}, 'pattern': {'sequence': 'Conv2D', 'precision': 'int8'}}
('model/42/Conv2D', 'conv2d')
{'weight': {'dtype': 'int8', 'scheme': 'sym', 'granularity': 'per_channel', 'algorithm': 'minmax', 'bit': 7.0}, 'activation': {'dtype': 'uint8', 'scheme': 'sym', 'granularity': 'per_tensor', 'algorithm': 'minmax'}, 'pattern': {'sequence': 'Conv2D', 'precision': 'int8'}}
('model/41/Conv2D', 'conv2d')
{'weight': {'dtype': 'int8', 'scheme': 'sym', 'granularity': 'per_channel', 'algorithm': 'minmax', 'bit': 7.0}, 'activation': {'dtype': 'uint8', 'scheme': 'sym', 'granularity': 'per_tensor', 'algorithm': 'minmax'}, 'pattern': {'sequence': 'Conv2D', 'precision': 'int8'}}
('model/32/Conv2D', 'conv2d')
{'weight': {'dtype': 'int8', 'scheme': 'sym', 'granularity': 'per_channel', 'algorithm': 'minmax', 'bit': 7.0}, 'activation': {'dtype': 'uint8', 'scheme': 'sym', 'granularity': 'per_tensor', 'algorithm': 'minmax'}, 'pattern': {'sequence': 'Conv2D', 'precision': 'int8'}}
('model/31/Conv2D', 'conv2d')
{'weight': {'dtype': 'int8', 'scheme': 'sym', 'granularity': 'per_channel', 'algorithm': 'minmax', 'bit': 7.0}, 'activation': {'dtype': 'uint8', 'scheme': 'sym', 'granularity': 'per_tensor', 'algorithm': 'minmax'}, 'pattern': {'sequence': 'Conv2D', 'precision': 'int8'}}
('model/2/Conv2D', 'conv2d')
{'weight': {'dtype': 'int8', 'scheme': 'sym', 'granularity': 'per_channel', 'algorithm': 'minmax', 'bit': 7.0}, 'activation': {'dtype': 'uint8', 'scheme': 'sym', 'granularity': 'per_tensor', 'algorithm': 'minmax'}, 'pattern': {'sequence': 'Conv2D', 'precision': 'int8'}}
('model/1/Conv2D', 'conv2d')
{'weight': {'dtype': 'int8', 'scheme': 'sym', 'granularity': 'per_channel', 'algorithm': 'minmax', 'bit': 7.0}, 'activation': {'dtype': 'uint8', 'scheme': 'sym', 'granularity': 'per_tensor', 'algorithm': 'minmax'}, 'pattern': {'sequence': 'Conv2D', 'precision': 'int8'}}


### dispatched_op_names ###
['model/dense_5/Tensordot/MatMul', 'model/dense_5/Tensordot/concat_1', 'model/dense_4/Tensordot/MatMul', 'model/dense_4/Tensordot/concat_1', 'model/dense_3/Tensordot/MatMul', 'model/dense_3/Tensordot/concat_1', 'model/dense_1/Tensordot/MatMul', 'model/dense_1/Tensordot/concat_1', 'model/dense_2/Tensordot/MatMul', 'model/dense_2/Tensordot/concat_1', 'model/dense/Tensordot/MatMul', 'model/LSTM_2/PartitionedCall/while/body/_23/while/MatMul', 'model/dense/Tensordot/concat_1', 'model/LSTM_2/PartitionedCall/while/body/_23/while/MatMul_1', 'model/LSTM_1/PartitionedCall/while/body/_83/while/MatMul', 'model/LSTM_1/PartitionedCall/while/body/_83/while/MatMul_1', 'model/52/Conv2D', 'model/51/Conv2D', 'model/42/Conv2D', 'model/41/Conv2D', 'model/32/Conv2D', 'model/31/Conv2D', 'model/2/Conv2D', 'model/1/Conv2D']
### invalid_op_names ###
[]



2021-08-27 08:43:41 [WARNING] Found possible input node names: ['input_noisy', 'input_noisy_norm'], output node names: ['outputMask'].
2021-08-27 08:43:53 [WARNING] Found possible input node names: ['input_noisy', 'input_noisy_norm'], output node names: ['outputMask'].
2021-08-27 08:44:01.108141: I tensorflow/core/grappler/devices.cc:66] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2021-08-27 08:44:01.108428: I tensorflow/core/grappler/clusters/single_machine.cc:357] Starting new session
2021-08-27 08:44:01.156499: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:1137] Optimization results for grappler item: graph_to_optimize
  function_optimizer: Graph size after: 974 nodes (370), 1237 edges (530), time = 17.966ms.
  function_optimizer: function_optimizer did nothing. time = 0.805ms.

2021-08-27 08:44:02.385658: I tensorflow/core/grappler/devices.cc:66] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2021-08-27 08:44:02.385886: I tensorflow/core/grappler/clusters/single_machine.cc:357] Starting new session
2021-08-27 08:44:02.657347: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:1137] Optimization results for grappler item: tf_graph
  constant_folding: Graph size after: 782 nodes (-96), 919 edges (-118), time = 150.555ms.
  constant_folding: Graph size after: 782 nodes (0), 919 edges (0), time = 37.266ms.

2021-08-27 08:44:05 [INFO] Pass Quantization elapsed time: 2325.7 ms
2021-08-27 08:44:38 [INFO] Pass QuantizedRNNConverter elapsed time: 57.53 ms
2021-08-27 08:44:39 [INFO] Pass StripUnusedNodesOptimizer elapsed time: 168.84 ms
2021-08-27 08:44:39 [INFO] Pass RemoveTrainingNodesOptimizer elapsed time: 57.83 ms
2021-08-27 08:44:39 [INFO] Pass FoldBatchNormNodesOptimizer elapsed time: 57.06 ms
2021-08-27 08:44:39 [INFO] Pass MetaOpOptimizer elapsed time: 54.55 ms
2021-08-27 08:44:39 [WARNING] Node name unused_control_flow_input_20 specified in yaml doesn't exist in the model.
2021-08-27 08:44:39 [WARNING] Found possible input node names: ['input_noisy', 'input_noisy_norm'], output node names: ['outputMask'].
2021-08-27 08:44:41 [INFO] Pass PostCseOptimizer elapsed time: 1593.45 ms
2021-08-27 08:44:41 [INFO] |********Mixed Precision Statistics*******|
2021-08-27 08:44:41 [INFO] +---------------+---------+-------+-------+
2021-08-27 08:44:41 [INFO] |    Op Type    |  Total  |  INT8 |  FP32 |
2021-08-27 08:44:41 [INFO] +---------------+---------+-------+-------+
2021-08-27 08:44:41 [INFO] |     Conv2D    |    8    |   0   |   8   |
2021-08-27 08:44:41 [INFO] |     MatMul    |    10   |   6   |   4   |
2021-08-27 08:44:41 [INFO] |    ConcatV2   |    6    |   0   |   6   |
2021-08-27 08:44:41 [INFO] |   QuantizeV2  |    6    |   6   |   0   |
2021-08-27 08:44:41 [INFO] |   Dequantize  |    1    |   1   |   0   |
2021-08-27 08:44:41 [INFO] +---------------+---------+-------+-------+
2021-08-27 08:44:41 [INFO] Pass quantize model elapsed time: 73892.89 ms
2021-08-27 08:44:41 [INFO] Start to evaluate the TensorFlow model.
2021-08-27 08:46:07 [INFO] Tune 1 result is: [accuracy: 0.3118, duration (seconds): 86.5451], Best tune result is: [accuracy: 0.3118, duration (seconds): 86.5451]

At first, Conv2D and MatMul both seem to be configured to quantize to int8, but in the mixed precision statistics Conv2D is still in fp32 format. My main focus is to speed up the Conv2D computation, but I cannot find the reason why it stays unquantized.
Is this because the pattern is unmatched?
Originally, my convolutional layers were paired with a leaky ReLU; I also tried using ReLU, or no activation at all, but Conv2D just won't quantize.

Please find my model link here

@guomingz (Contributor)

Conv2D paired with LeakyRelu is not supported. Conv2D + Relu and a single Conv2D are only supported when the Conv2D has positive input.

@peiwenhuang27 (Author)

Does it mean that if my input calibration data has any negative values, I won't be able to quantize the Conv2D layer?

@peiwenhuang27 (Author)

Is there any pattern that supports Conv2D with negative inputs?

@peiwenhuang27 (Author)

But I experimented with another model that also has 5 Conv2D layers, with no activation, and it takes the same inputs as the model mentioned here (which contain negative values); the two models' Conv2D layers have different weights.
However, all 5 Conv2D layers of that other model were quantized successfully, so I am confused about what the difference is here.

@guomingz (Contributor)

It has nothing to do with the calibration data.

The so-called negative input means the input of the Conv2D doesn't come from a Relu/Relu6 op.

E.g. op a ---> op b ---> op c, where op c is a Conv2D op. If op b is Relu or Relu6, then the single Conv2D op c is quantizable. If op b is another Conv2D, its output may contain negative values, and thus op c is not quantizable.
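
For illustration, a minimal Keras sketch of the two cases (layer shapes are arbitrary, not from the model in question):

import tensorflow as tf

inputs = tf.keras.Input(shape=(32, 32, 3))

# Case 1: op b is a Relu, so the following Conv2D only ever sees
# non-negative activations and is quantizable on its own.
b = tf.keras.layers.ReLU()(tf.keras.layers.Conv2D(16, 3)(inputs))
c_quantizable = tf.keras.layers.Conv2D(16, 3)(b)

# Case 2: op b is another Conv2D with no activation, so its output
# may contain negative values and the following Conv2D is not quantizable.
b2 = tf.keras.layers.Conv2D(16, 3)(inputs)
c_not_quantizable = tf.keras.layers.Conv2D(16, 3)(b2)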

@peiwenhuang27 (Author)

I see! Thank you for explaining it in great detail, I will have another try on this later.

@peiwenhuang27 (Author)

Hi @guomingz, just wanted to ask another question, thank you so much!
After understanding the quantization rules, I modified my convolution layers to each contain a BiasAdd and a ReLU. The model structure looks roughly like this:

            x
            |
          Conv_1
            |
          Conv_2
         /      \
     Conv_3_1  Conv_4_1
       |         |
      Conv_3_2  Conv_4_2
        |         |
      Conv_3_3  Conv_4_3
       |         |
      ...       ...

I also printed out the matched fusion_name from fusion_mapping for each Conv2D, as below:

### Conv fusion_name in fusion_mapping ###
Conv2DBiasAddRelu

### Conv fusion_name in fusion_mapping ###
Conv2DBiasAddRelu

### Conv fusion_name in fusion_mapping ###
Conv2D

### Conv fusion_name in fusion_mapping ###
Conv2D

### Conv fusion_name in fusion_mapping ###
Conv2DBiasAddRelu

### Conv fusion_name in fusion_mapping ###
Conv2D

### Conv fusion_name in fusion_mapping ###
Conv2D

### Conv fusion_name in fusion_mapping ###
Conv2DBiasAddRelu
Pass PostCseOptimizer elapsed time: 1508.76 ms
2021-08-30 03:19:42 [INFO] |********Mixed Precision Statistics*******|
2021-08-30 03:19:42 [INFO] +---------------+---------+-------+-------+
2021-08-30 03:19:42 [INFO] |    Op Type    |  Total  |  INT8 |  FP32 |
2021-08-30 03:19:42 [INFO] +---------------+---------+-------+-------+
2021-08-30 03:19:42 [INFO] |     Conv2D    |    8    |   4   |   4   |
2021-08-30 03:19:42 [INFO] |     MatMul    |    10   |   6   |   4   |
2021-08-30 03:19:42 [INFO] |    ConcatV2   |    6    |   0   |   6   |
2021-08-30 03:19:42 [INFO] |   QuantizeV2  |    9    |   9   |   0   |
2021-08-30 03:19:42 [INFO] |   Dequantize  |    4    |   4   |   0   |
2021-08-30 03:19:42 [INFO] +---------------+---------+-------+-------+

It seems that Conv_3_1, Conv_4_1, Conv_3_2, and Conv_4_2 are not detected as Conv2DBiasAddRelu but rather as single Conv2D ops, and these four end up not being quantized. May I ask the reason for this behavior? And how should I modify the model to successfully quantize all 8 Conv2D ops?

@guomingz (Contributor)

Are you sure all 8 Conv2D ops have their own BiasAdd + Relu as successors?

@peiwenhuang27 (Author) commented Aug 31, 2021

In a sense, yes and no.
My model has a dynamic input shape, i.e. some dimensions are set to None and determined at runtime. To handle some unknown-shape operations, I believe TensorFlow adds extra nodes in between the operations. In fact, even when I set the dimensions to be fixed, though fewer nodes are added, SpaceToBatch and BatchToSpace are still inserted automatically.

Here's an example of the printed node info for Conv_2 mentioned above:

name: "model/2/BiasAdd"
op: "BiasAdd"
input: "model/2/Conv2D"
input: "model/2/BiasAdd/ReadVariableOp"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "data_format"
  value {
    s: "NHWC"
  }
}

which, of course, takes the Conv2D directly as input, so LPOT is able to identify the pattern.
But for Conv_3_1:

name: "model/31/BiasAdd"
op: "BiasAdd"
input: "model/31/Conv2D/BatchToSpaceND"
input: "model/31/BiasAdd/ReadVariableOp"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "data_format"
  value {
    s: "NHWC"
  }
}

The input becomes model/31/Conv2D/BatchToSpaceND, because many inserted shape-related nodes sit between the Conv2D and the BiasAdd.

I have attached the node names of the model here for your reference. I was wondering whether LPOT could find a way to identify these patterns?

model_node_names.txt
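
A minimal sketch of how the node info above can be dumped from the frozen GraphDef, in case it helps reproduce the check (the model.pb path is a placeholder):

import tensorflow as tf

# Load a frozen GraphDef (the path is a placeholder).
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# For each BiasAdd, report whether its first input comes straight from
# a Conv2D or from an inserted BatchToSpaceND wrapper.
op_of = {node.name: node.op for node in graph_def.node}
for node in graph_def.node:
    if node.op == "BiasAdd":
        producer = node.input[0].split(":")[0]
        print(node.name, "<-", producer, "(" + str(op_of.get(producer)) + ")")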

@guomingz (Contributor)

You're describing a new pattern, conv2d + BatchToSpaceND + biasadd + relu, rather than conv2d + biasadd + relu, so that is also the root cause of your earlier concern about why not all Conv2D ops got quantized.

@peiwenhuang27 (Author)

May I ask if there is a chance that this new pattern, conv2d + BatchToSpaceND + biasadd + relu, will be added to the LPOT fusion list?

@guomingz (Contributor)

If the pattern is SpaceToBatchND + Conv2D + BatchToSpaceND, it may be possible to enable the fusion by converting this pattern to a convolution with dilation.
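
A minimal sketch (not LPOT code) of the equivalence such a conversion would rely on; tf.nn.atrous_conv2d is the form TensorFlow historically lowered to SpaceToBatchND + Conv2D + BatchToSpaceND, while passing dilations to tf.nn.conv2d keeps a single dilated Conv2D:

import tensorflow as tf

x = tf.random.normal([1, 16, 16, 3])
w = tf.random.normal([3, 3, 3, 8])

# Both compute the same dilated convolution, so a fusion pass can
# rewrite the decomposed form into the single dilated op.
y_decomposed = tf.nn.atrous_conv2d(x, w, rate=2, padding='SAME')
y_dilated = tf.nn.conv2d(x, w, strides=1, padding='SAME', dilations=2)

# Expect ~0 up to float tolerance.
print(float(tf.reduce_max(tf.abs(y_decomposed - y_dilated))))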

@peiwenhuang27 (Author)

I am not sure I know how to convert the pattern to a conv with dilation, as I already set the Conv2D with a dilation rate, which I think is why TensorFlow converts it to SpaceToBatchND + Conv2D + BatchToSpaceND. Could you provide some pointers on this? Thanks.

@guomingz (Contributor) commented Sep 1, 2021

which I think is why Tensorflow converts it to SpaceToBatchND + Conv2d + BatchToSpaceND.
I'm not very clear on this sentence. Would you please explain it more?

@peiwenhuang27 (Author)

Originally I built my model with the following code:

import tensorflow as tf

# x_noisy_norm is the normalized model input defined earlier.

with tf.compat.v1.variable_scope('feature_extractor'):
    layer_1 = tf.keras.layers.Conv2D(64, (3, 3), padding='SAME', name='1',
                                     activation='relu',
                                     kernel_initializer=tf.constant_initializer(1.))
    layer_1_out = layer_1(x_noisy_norm)

    layer_2 = tf.keras.layers.Conv2D(64, (3, 3), padding='SAME', name='2',
                                     activation='relu',
                                     kernel_initializer=tf.constant_initializer(1.))
    layer_2_out = layer_2(layer_1_out)

with tf.compat.v1.variable_scope('module1'):
    layer_31 = tf.keras.layers.Conv2D(64, (5, 1), padding='SAME', name='31',
                                      activation='relu',  # originally leaky_relu
                                      dilation_rate=(2, 1),
                                      kernel_initializer=tf.constant_initializer(1.))
    layer_31_out = layer_31(layer_2_out)

    layer_41 = tf.keras.layers.Conv2D(64, (5, 1), padding='SAME', name='41',
                                      activation='relu',  # originally leaky_relu
                                      dilation_rate=(4, 1),
                                      kernel_initializer=tf.constant_initializer(1.))
    layer_41_out = layer_41(layer_31_out)

    layer_51 = tf.keras.layers.Conv2D(8, (1, 1), padding='SAME', name='51',
                                      activation='relu',  # originally leaky_relu
                                      dilation_rate=(1, 1),
                                      kernel_initializer=tf.constant_initializer(1.))
    layer_51_out = layer_51(layer_41_out)

    # ...
    # other layers following Conv2D

with tf.compat.v1.variable_scope('module2'):
    layer_32 = tf.keras.layers.Conv2D(64, (5, 1), padding='SAME', name='32',
                                      activation='relu',
                                      dilation_rate=(2, 1),
                                      kernel_initializer=tf.constant_initializer(1.))
    layer_32_out = layer_32(layer_2_out)

    layer_42 = tf.keras.layers.Conv2D(64, (5, 1), padding='SAME', name='42',
                                      activation='relu',
                                      dilation_rate=(4, 1),
                                      kernel_initializer=tf.constant_initializer(1.))
    layer_42_out = layer_42(layer_32_out)

    layer_52 = tf.keras.layers.Conv2D(8, (1, 1), padding='SAME', name='52',
                                      activation='relu',
                                      dilation_rate=(1, 1),
                                      kernel_initializer=tf.constant_initializer(1.))
    layer_52_out = layer_52(layer_42_out)

    # ...
    # other layers following Conv2D

I think the reason TensorFlow adds the SpaceToBatchND + Conv2D + BatchToSpaceND pattern for layer_31, layer_41, layer_32, and layer_42 is that I set their dilation rates larger than 1.
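
One way to check this hypothesis is to trace a single dilated layer in isolation and list the op types in the resulting graph (a standalone sketch, not taken from my model):

import tensorflow as tf

# A lone Conv2D with dilation_rate > 1, similar to layer_31 above.
layer = tf.keras.layers.Conv2D(64, (5, 1), padding='SAME',
                               activation='relu', dilation_rate=(2, 1))

@tf.function
def forward(x):
    return layer(x)

# Trace with dynamic spatial dims and inspect which ops were emitted;
# SpaceToBatchND / BatchToSpaceND appearing here confirms the lowering.
graph = forward.get_concrete_function(
    tf.TensorSpec([None, None, None, 64], tf.float32)).graph
print(sorted({op.type for op in graph.get_operations()}))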

@guomingz (Contributor) commented Sep 2, 2021

So the pattern you mentioned earlier ("May I ask if there is a chance that this new pattern conv2d + BatchToSpaceND + biasadd + relu will be added to the LPOT fusion list?") is not correct; it should be SpaceToBatchND + Conv2D + BatchToSpaceND, right?

If so, LPOT couldn't support it yet, because there's a TF dependency on that. You may wait for the TF 2.7 release.

@ftian1 (Contributor) commented Oct 27, 2021

The local patch is ready; waiting for the formal TF 2.7 release with the kernel merged.

ftian1 closed this as completed Oct 27, 2021
deb-intel pushed a commit to deb-intel/lp-opt-tool that referenced this issue Nov 4, 2021
…rt pass for tensorflow backend. (intel#25)

Signed-off-by: Zhang, Guoming <guoming.zhang@intel.com>