I am using NASNetLarge/NASNetMobile and MobileNetV2 on the latest Keras version (GitHub build). I am testing with all three backends and I see differences between TensorFlow and Theano/CNTK.
By plotting the normalized mean squared error, the differences seem to start at layer separable_conv_1_reduction_left1_reduce_6 (NASNetLarge), separable_conv_1_reduction_left1_stem_2 (NASNetMobile), and expanded_conv_depthwise (MobileNetV2).
I looked at the source code and saw that neither Theano nor CNTK has native support for the Separable Convolution or Depthwise Convolution layers; only TensorFlow does. In recent Keras versions (2.2.0 for Separable and 2.1.5 for Depthwise), support was added to the Theano and CNTK backends by emulating these layers with normal conv2d.
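To make the emulation concrete: a depthwise convolution applies one 2D filter per input channel, so a backend without a native kernel can express it as a loop of ordinary single-channel convolutions. Below is a minimal NumPy sketch of that idea (illustrative only, not the actual backend code; stride 1, valid padding, depth multiplier 1 assumed):

```python
import numpy as np

def depthwise_conv2d(x, kernels):
    """Naive depthwise convolution.

    x:       input of shape (H, W, C)
    kernels: one 2D filter per channel, shape (kH, kW, C)

    Each input channel is convolved independently with its own
    filter (valid padding, stride 1) -- equivalently, C separate
    "normal" single-channel conv2d calls.
    """
    h, w, c = x.shape
    kh, kw, kc = kernels.shape
    assert c == kc, "one kernel per input channel"
    out = np.zeros((h - kh + 1, w - kw + 1, c))
    for ch in range(c):                      # one plain conv2d per channel
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                patch = x[i:i + kh, j:j + kw, ch]
                out[i, j, ch] = np.sum(patch * kernels[:, :, ch])
    return out
```

A separable convolution is then this depthwise step followed by a 1x1 pointwise convolution that mixes channels, which is why a discrepancy in the emulated depthwise step would propagate through every separable layer after it.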
I am just not sure whether this means the implementations in cntk_backend.py and theano_backend.py are correct (they are consistent with each other) or whether TensorFlow's native implementation is correct.
Attached are the plots I mentioned (for NASNetLarge, NASNetMobile, and MobileNetV2, respectively). These show the normalized mean squared differences of the outputs at each layer between TensorFlow and CNTK (the TensorFlow-vs-Theano graph is identical, since Theano agrees with CNTK). As you can see toward the end, the predictions diverge sharply between the two backends (I also included the prediction result for each combination to show the severity of the inconsistency).
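For anyone who wants to reproduce the comparison, here is a minimal sketch of one way to compute such a per-layer normalized mean squared difference between two backends' activations (my exact normalization may differ slightly):

```python
import numpy as np

def normalized_mse(a, b, eps=1e-12):
    """Mean squared difference between two activation tensors,
    normalized by the mean squared magnitude of the reference
    tensor so layers with different scales are comparable."""
    a = np.asarray(a, dtype=np.float64).ravel()
    b = np.asarray(b, dtype=np.float64).ravel()
    return np.mean((a - b) ** 2) / (np.mean(a ** 2) + eps)
```

In practice one would collect each layer's output under both backends (e.g. with a Keras function built per layer) on the same input and plot `normalized_mse` across layer index to locate where the backends first diverge.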