tflite conv bug #21817

Closed
zgxnet opened this issue Aug 23, 2018 · 6 comments
Labels: comp:lite TF Lite related issues

zgxnet commented Aug 23, 2018

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): N/A
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A
  • TensorFlow installed from (source or binary): source
  • TensorFlow version (use command below): 1.10.0
  • Python version: 3.6.0
  • Bazel version (if compiling from source): 0.16.1
  • GCC/Compiler version (if compiling from source): 7.3.0
  • CUDA/cuDNN version: N/A
  • GPU model and memory: N/A
  • Exact command to reproduce: N/A

There is a bug in the multithreaded implementation of conv2d: in certain cases when batch_size > 1, only the first image in the batch is processed and the rest are ignored. The cause is in contrib/lite/kernels/internal/optimized/multithreaded_conv.h:

    } else if (filter_height == input_height && filter_width == input_width &&
               pad_width == 0 && pad_height == 0) {
      // If the input data and filter have the same height/width,
      // the 2D convolution is reduced to matrix multiplication.
      const int k =  // Length of reduction dimension.
          filter_width * filter_height * input_depth;
      Eigen::array<Eigen::IndexPair<Eigen::DenseIndex>, 1> dim_pair;
      dim_pair[0] = Eigen::IndexPair<Eigen::DenseIndex>(1, 0);
      EigenMatrix output(output_data, 1, filter_count);
      ConstEigenMatrix input(input_data, 1, k);
      ConstEigenMatrix filter(filter_data, k, filter_count);
      MatMulConvFunctor<Eigen::ThreadPoolDevice, T>()(device, output, input,
                                                      filter, dim_pair);
    } else {

In the code above, the input and output are mapped as matrices with a single row, so input_batches is ignored and only the first image in the batch is read.
I have also verified a quick fix:

    } else if (filter_height == input_height && filter_width == input_width &&
               pad_width == 0 && pad_height == 0) {
      // If the input data and filter have the same height/width,
      // the 2D convolution is reduced to matrix multiplication.
      const int k =  // Length of reduction dimension.
          filter_width * filter_height * input_depth;
      Eigen::array<Eigen::IndexPair<Eigen::DenseIndex>, 1> dim_pair;
      dim_pair[0] = Eigen::IndexPair<Eigen::DenseIndex>(1, 0);
      EigenMatrix output(output_data, input_batches, filter_count);
      ConstEigenMatrix input(input_data, input_batches, k);
      ConstEigenMatrix filter(filter_data, k, filter_count);
      MatMulConvFunctor<Eigen::ThreadPoolDevice, T>()(device, output, input,
                                                      filter, dim_pair);
    } else {

The only change is replacing 1 with input_batches in the input and output mappings.
The fix works well in my experiments.
It is a simple and quick fix; please merge it into master if possible.
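
To illustrate why the 1 x k mapping drops everything after the first batch, here is a minimal standalone sketch. It is not the TF Lite code itself: it uses plain row-major Eigen matrices in place of the EigenMatrix/ConstEigenMatrix tensor maps and MatMulConvFunctor, and the sizes (input_batches = 2, k = 3, filter_count = 2) are made up for illustration.

    // Minimal sketch: plain row-major Eigen matrices stand in for the
    // EigenMatrix/ConstEigenMatrix tensor maps used by multithreaded_conv.h.
    #include <Eigen/Dense>
    #include <iostream>

    using RowMajorMatrix =
        Eigen::Matrix<float, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>;

    int main() {
      const int input_batches = 2;  // hypothetical sizes for illustration
      const int k = 3;              // filter_height * filter_width * input_depth
      const int filter_count = 2;

      // Input buffer laid out as [batches, k]: batch 0 is all 1s, batch 1 all 2s.
      RowMajorMatrix input(input_batches, k);
      input.row(0).setConstant(1.0f);
      input.row(1).setConstant(2.0f);

      // Filter laid out as [k, filter_count], all ones.
      RowMajorMatrix filter = RowMajorMatrix::Ones(k, filter_count);

      // Buggy view: mapping the buffer as 1 x k reads only the first k floats,
      // i.e. only batch 0, so the result has a single row.
      Eigen::Map<const RowMajorMatrix> buggy_input(input.data(), 1, k);
      std::cout << "1 x k view:\n" << buggy_input * filter << "\n";

      // Fixed view: mapping as input_batches x k multiplies every batch.
      Eigen::Map<const RowMajorMatrix> fixed_input(input.data(), input_batches, k);
      std::cout << "input_batches x k view:\n" << fixed_input * filter << "\n";
    }

The 1 x k view prints a single output row (the first batch only), while the input_batches x k view prints one row per batch, which is what changing 1 to input_batches achieves in the snippet above.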

@tensorflowbutler tensorflowbutler added the stat:awaiting response Status - Awaiting response from author label Aug 23, 2018
@tensorflowbutler (Member)

Thank you for your post. We noticed you have not filled out the following fields in the issue template. Could you update them if they are relevant in your case, or leave them as N/A? Thanks.
Have I written custom code
OS Platform and Distribution
TensorFlow installed from
TensorFlow version
Bazel version
CUDA/cuDNN version
GPU model and memory
Exact command to reproduce
Mobile device

@poxvoculi poxvoculi assigned andrehentz and unassigned poxvoculi Aug 23, 2018
@poxvoculi (Contributor)

@andrehentz Can you verify the proposed fix generates intended behavior?

zgxnet (Author) commented Aug 27, 2018

@andrehentz Can you help to verify the proposed fix? Thanks.

@tensorflowbutler tensorflowbutler removed the stat:awaiting response Status - Awaiting response from author label Aug 31, 2018
@jdduke jdduke added the comp:lite TF Lite related issues label Sep 6, 2018
jdduke (Member) commented Sep 6, 2018

Thanks @zgxnet, you're right that's a bug, and your fix looks good. I'll prepare a change now (with a test) unless you prefer to submit a PR.

@jdduke jdduke assigned jdduke and unassigned andrehentz Sep 6, 2018
zgxnet (Author) commented Sep 11, 2018

@jdduke Have you prepared the change? I cannot see it on master yet.

jdduke (Member) commented Sep 11, 2018

The CL is still in review internally, but should land in the next day or two. Thanks for your patience!
