
optimize_for_mobile() assert fail: missing op for prepacked::conv2d_clamp_prepack #65490

Open
BryceEakin opened this issue Sep 22, 2021 · 3 comments
Labels
mobile_perf mobile performance oncall: mobile Related to mobile support, including iOS and Android

Comments


BryceEakin commented Sep 22, 2021

🐛 Bug

When calling torch.utils.mobile_optimizer.optimize_for_mobile, an internal assert fails:

0INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/alias_analysis.cpp":532, please report a bug to PyTorch. We don't have an op for prepacked::conv2d_clamp_prepack but it isn't a special case.  Argument types: Tensor, Tensor, int[], str, int[], int, NoneType, NoneType,

To Reproduce

Steps to reproduce the behavior:

Create a convolutional model (gapnet plus a linear -> sigmoid head). The resulting structure, as printed by PyTorch:

FullGuidanceModel(
  (plan_model): GuidancePlanModel(
    (backbone): GapSmall(
      (model): Sequential(
        (0): TensorflowSamePadForConv2D()
        (1): Conv2d(1, 16, kernel_size=(3, 3), stride=(2, 2), padding=valid)
        (2): ELU(alpha=1.0)
        (3): TensorflowSamePadForConv2D()
        (4): Conv2d(16, 32, kernel_size=(3, 3), stride=(2, 2), padding=valid)
        (5): ELU(alpha=1.0)
        (6): TensorflowSamePadForConv2D()
        (7): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2), padding=valid)
        (8): ELU(alpha=1.0)
        (9): TensorflowSamePadForConv2D()
        (10): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=valid)
        (11): ELU(alpha=1.0)
        (12): TensorflowSamePadForConv2D()
        (13): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=valid)
        (14): ELU(alpha=1.0)
      )
    )
    (logits_dense): Linear(in_features=128, out_features=1, bias=True)
  )
  (moving_avg_layer): MovingAverageAcrossBatchWithMemory()
)

Note: the TensorflowSamePadForConv2D modules simply apply a pre-calculated F.pad() operation to their input and return the result. GapSmall's output is averaged over the spatial dimensions.
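A minimal sketch of the repro (the TensorflowSamePadForConv2D reconstruction and its padding values are assumptions based on the description above, not the original model code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.mobile_optimizer import optimize_for_mobile

class TensorflowSamePadForConv2D(nn.Module):
    """Hypothetical reconstruction: applies a pre-calculated F.pad() so that a
    'valid' 3x3/stride-2 conv behaves like TensorFlow's 'same' padding."""
    def __init__(self):
        super().__init__()
        self.pad = [1, 1, 1, 1]  # (left, right, top, bottom) -- assumed values

    def forward(self, x):
        return F.pad(x, self.pad)

# Smallest slice of the reported model: one pad/conv/ELU block.
model = nn.Sequential(
    TensorflowSamePadForConv2D(),
    nn.Conv2d(1, 16, kernel_size=3, stride=2),
    nn.ELU(alpha=1.0),
)

scripted = torch.jit.script(model)
try:
    optimized = optimize_for_mobile(scripted)
except RuntimeError as err:
    # On affected builds, this is where the missing-op assert for
    # prepacked::conv2d_clamp_prepack fires.
    print(err)
```

Note that the padding is stored as a List[int] attribute because TorchScript's F.pad signature expects a list, not a tuple.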

Expected behavior

The scripted module should pass through optimize_for_mobile without error.

Environment

  • PyTorch Version (e.g., 1.0): 1.9.1+cu102
  • OS (e.g., Linux): Ubuntu 20.04 (WSL)
  • How you installed PyTorch (conda, pip, source): pip
  • Build command you used (if compiling from source):
  • Python version: 3.8.7
  • CUDA/cuDNN version: n/a
  • GPU models and configuration: n/a
  • Any other relevant information:

Additional context

@H-Huang H-Huang added the oncall: mobile Related to mobile support, including iOS and Android label Sep 23, 2021
@xta0 xta0 added module: ios Related to iOS support - build, API, Continuous Integration, document module: android Related to Android support labels Oct 25, 2021
@linbinyu linbinyu added mobile_perf mobile performance and removed module: android Related to Android support module: ios Related to iOS support - build, API, Continuous Integration, document labels Oct 28, 2021
@linbinyu (Contributor)

@kimishpatel any idea?

@kimishpatel (Contributor)

@BryceEakin did you build from source and if so did you set USE_XNNPACK=0?

@zrfisher

I'm also receiving this error on macOS 12.1 (Intel Core i7) and when running in Google Colab.


6 participants