Support more ops in .tflite format #25296
Comments
So what do you want?
@CNOCycle, let's start with tests. Please provide generation steps similar to https://github.com/opencv/opencv_extra/blob/4.x/testdata/dnn/tflite/generate.py. Please format the issue with check boxes for each case so we can estimate the progress: TFLite support is relatively new, so any contributions are welcome! I see that you've started #25297, so let's go further by reviewing it.
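For reference, a minimal generator in the spirit of that generate.py script might look like the following. This is only a sketch, not the actual opencv_extra script: the model, file names, and saved reference tensors are illustrative, and it assumes TensorFlow's `tf.lite.TFLiteConverter` API.

```python
# Sketch of a test-data generator in the style of opencv_extra's
# testdata/dnn/tflite/generate.py (hypothetical names and model).
import numpy as np
import tensorflow as tf

def save_tflite(model, name, input_shape):
    # Convert the Keras model to a .tflite flatbuffer.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_model = converter.convert()
    with open(f"{name}.tflite", "wb") as f:
        f.write(tflite_model)
    # Save a reference input/output pair so a C++ test can compare results.
    inp = np.random.standard_normal(input_shape).astype(np.float32)
    out = model(inp).numpy()
    np.save(f"{name}_inp.npy", inp)
    np.save(f"{name}_out.npy", out)

# Example case: a FullyConnected op with bias and a fused ReLU activation.
fc = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu", use_bias=True),
])
save_tflite(fc, "fully_connected", (1, 8))
```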
Hi @fengyuentau, I would like to highlight a couple of points regarding this issue:
If the OpenCV team is satisfied with the current implementation, no further action is needed, and I will proceed to close this issue.
Our plan is to replace FullyConnected with Gemm in all importers for better performance. The Gemm layer already supports bias, and fusion of activations can also be added to it. So your feature checklist is:
Am I correct? I also noticed from @dkurt's comment that you have started contributing Transpose operator support. We welcome contributions, and you are welcome to contribute the two listed items above.
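To illustrate the distinction discussed in this thread, here is a hypothetical NumPy sketch of how FullyConnected extends a plain Gemm with a bias term and a fused activation. The function names, shapes, and the `[out, in]` weight layout are illustrative assumptions, not OpenCV's actual implementation.

```python
# Hypothetical sketch: Gemm vs. TFLite-style FullyConnected in NumPy.
import numpy as np

def gemm(x, w):
    # Plain matrix multiply: y = x @ w^T, assuming weights stored as [out, in].
    return x @ w.T

def fully_connected(x, w, b=None, activation=None):
    # FullyConnected = Gemm plus an optional bias and a fused activation.
    y = gemm(x, w)
    if b is not None:
        y = y + b                     # bias term
    if activation == "RELU":          # fused activation, TFLite-style
        y = np.maximum(y, 0.0)
    return y

x = np.array([[1.0, -2.0, 3.0]])
w = np.array([[0.5, 0.5, 0.5], [-1.0, 0.0, 1.0]])  # shape [out=2, in=3]
b = np.array([0.1, -0.1])

print(gemm(x, w))                        # [[1.0, 2.0]]
print(fully_connected(x, w, b, "RELU"))  # [[1.1, 1.9]]
```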
Describe the feature and motivation
Hi,
I would like to bring to your attention some recent developments regarding the implementation of more ops in .tflite format. Specifically, I've been working on incorporating global pooling, transpose, softmax, and FullyConnected ops. I noticed that support for the last two ops has been merged in PR #25273.
However, I have a couple of concerns regarding this PR. Firstly, in `parseFullyConnected`, FC ops are implemented using `Gemm`. In my view, `Gemm` and `Fully_connected` represent distinct concepts: the latter allows for more options, such as activation and bias terms. It appears that some functionalities are missing in the current implementation.

Secondly, the support for `Softmax` seems to be ambiguous across different formats. Specifically, while the `Softmax` layer in Keras and ONNX can accept `axis` as an argument, this option is absent in the .tflite format. The ambiguity arises from the default value of `axis`, as described in PR #24613. Moreover, as shown below, the default values differ between the FP version and the INT8 version:

opencv/modules/dnn/src/layers/softmax_layer.cpp, line 79 in 9716bf9
opencv/modules/dnn/src/int8layers/softmax_layer.cpp, line 28 in 9716bf9
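As a quick illustration of why the default axis matters, here is a NumPy sketch (not OpenCV code) showing that softmax over `axis=1` and `axis=-1` disagree whenever the input has more than two dimensions:

```python
# Toy demonstration: softmax normalized over different axes gives
# different results on a 3-D tensor.
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for numerical stability
    return e / e.sum(axis=axis, keepdims=True)

x = np.arange(24, dtype=np.float32).reshape(2, 3, 4)  # NCHW-like toy tensor

s_axis1 = softmax(x, axis=1)    # normalizes over the channel dimension
s_last = softmax(x, axis=-1)    # TF/TFLite-style: last dimension

print(np.allclose(s_axis1, s_last))  # → False
# Each variant sums to 1 only along its own axis:
print(np.allclose(s_axis1.sum(axis=1), 1.0))   # → True
print(np.allclose(s_last.sum(axis=-1), 1.0))   # → True
```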
From my perspective, I believe setting the default value to -1 would align better with TF and ensure consistency. However, it might be beneficial to include comments regarding the default values somewhere to assist users in quickly diagnosing unexpected behavior, especially if the model originates from ONNX or other converters.
Additional context
No response