
how to implement this: torchvision.transforms.functional.perspective #114

Closed
nh9k opened this issue Aug 23, 2022 · 25 comments
Labels
🤖 android Issue related to Android ✨ enhancement New feature or request 🆘 help wanted Extra attention is needed 🍏 ios Issue related to iOS 😇 wontfix This will not be worked on

Comments


nh9k commented Aug 23, 2022

Area Select

react-native-pytorch-core (core package)

Description

Hello! Thanks for your contributions!

I have a problem while developing my project: I need a function like torchvision.transforms.functional.perspective.

Could you add an implementation of torchvision.transforms.functional.perspective, or could I implement this function myself? There is no implementation of a perspective function in the PlayTorch docs.
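For reference, here is a minimal sketch in desktop Python of the call I need; the image size and corner points are made up for illustration:

import torch
import torchvision.transforms.functional as F

# A dummy CHW image and four corner points in the order
# top-left, top-right, bottom-right, bottom-left.
img = torch.rand(3, 240, 320)
startpoints = [[10, 5], [300, 20], [310, 230], [5, 210]]
endpoints = [[0, 0], [319, 0], [319, 239], [0, 239]]

out = F.perspective(img, startpoints, endpoints)
print(out.shape)  # torch.Size([3, 240, 320])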

Another solution I have tried is building a PyTorch Mobile model for this function. The idea came from @raedle in this issue, but it produces an error in the React Native app like this:

{"message": "Calling torch.linalg.lstsq on a CPU tensor requires compiling PyTorch with LAPACK. Please use PyTorch built with LAPACK support.

  Debug info for handle(s): debug_handles:{-1}, was not found.

Exception raised from apply_lstsq at ../aten/src/ATen/native/BatchLinearAlgebraKernel.cpp:559 (most recent call first):
(no backtrace available)"}

Should I ask about this error on the PyTorch GitHub?

My perspective model looks like this (it runs successfully in Python code):

import torch
from typing import List
import torchvision.transforms.functional as F

class WrapPerspectiveCrop(torch.nn.Module):
    def forward(self, inputs: torch.Tensor, points: List[List[int]]):
        # Map the given corner points onto the full image corners
        # (top-left, top-right, bottom-right, bottom-left).
        size_points = [
            [0, 0],
            [inputs.shape[2], 0],
            [inputs.shape[2], inputs.shape[1]],
            [0, inputs.shape[1]],
        ]
        return F.perspective(inputs, points, size_points)

crop = WrapPerspectiveCrop()
scripted_model = torch.jit.script(crop)
scripted_model.save("wrap_perspective.pt")


import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

optimized_scripted_module = optimize_for_mobile(scripted_model)
optimized_scripted_module._save_for_lite_interpreter("wrap_perspective.ptl")
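Since the model works from Python, this is how I sanity-check the exported lite-interpreter file on desktop (where LAPACK is available); a minimal sketch with made-up inputs:

import torch
from torch.jit.mobile import _load_for_lite_interpreter

# Reload the lite-interpreter export and run it once with dummy inputs.
lite_model = _load_for_lite_interpreter("wrap_perspective.ptl")
img = torch.rand(3, 240, 320)  # CHW; the size is arbitrary for this check
points = [[10, 5], [300, 20], [310, 230], [5, 210]]
out = lite_model(img, points)
print(out.shape)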

How can I solve this problem?
Many thanks for any help!


raedle commented Aug 26, 2022

Hi @nh9k, unfortunately there is no quick workaround. The PyTorch Mobile shared object library is not linked against LAPACK. It would require the PyTorch team to add an option to compile the PyTorch Mobile libraries with LAPACK for Android/iOS.

As suggested, you could post in the PyTorch repo and ask for support. If it's added to the PyTorch Mobile shared object libraries, we can pull the new libraries into PlayTorch!

Closing the issue as it can't be resolved without PyTorch Mobile being compiled with LAPACK support. Feel free to reopen if this changes.
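As an aside, a Python build can be probed for LAPACK support like this; note that torch._C.has_lapack is an internal flag, so this is a rough check rather than a public API:

import torch

# True on typical desktop builds; the mobile libraries discussed here are
# built without LAPACK, which is what triggers the lstsq error above.
print(torch._C.has_lapack)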

@raedle raedle closed this as completed Aug 26, 2022
@raedle raedle self-assigned this Aug 26, 2022
@raedle raedle added ✨ enhancement New feature or request 🆘 help wanted Extra attention is needed 😇 wontfix This will not be worked on 🤖 android Issue related to Android 🍏 ios Issue related to iOS labels Aug 26, 2022

nh9k commented Aug 26, 2022

Thanks @raedle! If I solve this problem by asking the PyTorch team for LAPACK support, I'll report back!

@kimishpatel

@raedle, the posted issue also talks about XNNPACK, but that should not be the problem. Are you not building PyTorch with USE_XNNPACK=1?


raedle commented Aug 30, 2022

@kimishpatel, the PlayTorch API doesn't build PyTorch Mobile from source but uses the following PyTorch Mobile build artifacts

Are the released PyTorch Mobile build artifacts built with USE_XNNPACK=1?
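For what it's worth, a given PyTorch build can report whether it was compiled with XNNPACK; this doesn't answer the question for the prebuilt mobile .so, but it shows the flag in question:

import torch

# True if this particular PyTorch build was compiled with XNNPACK support.
print(torch.backends.xnnpack.enabled)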

@kimishpatel

I am not entirely sure. Let me get back to you on this. Also, is this on iOS or Android?


raedle commented Aug 30, 2022

Thanks @kimishpatel!

@nh9k, is the issue with XNNPACK on both platforms Android and iOS or just on one of the two platforms?


nh9k commented Aug 30, 2022

Thanks @kimishpatel @raedle!
The issue occurs on Android. I haven't tested it on iOS.

@kimishpatel

Yeah, for Android it should be on by default.

@digantdesai

Yeah, as @kimishpatel said, XNNPACK should be there.

FWIW, just to validate, I looked at the build artifact pytorch_android_lite-1.12.2.aar from maven.org and checked for XNNPACK symbols in libpytorch_jni_lite.so; I did see them.
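Roughly the same check can be scripted in Python; this sketch assumes the .aar was downloaded from maven.org next to the script, and that the archive follows the standard AAR layout (adjust the ABI directory if needed):

import zipfile

# Pull one native library out of the AAR and scan it for XNNPACK-related
# strings (symbol names, error messages, etc.).
with zipfile.ZipFile("pytorch_android_lite-1.12.2.aar") as aar:
    so_bytes = aar.read("jni/arm64-v8a/libpytorch_jni_lite.so")

print(b"xnnpack" in so_bytes.lower())  # True if XNNPACK is baked in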


nh9k commented Sep 5, 2022

@raedle, so, can I solve this problem?
How can I build a React Native app with XNNPACK?
Would adding the dependency implementation "org.pytorch:pytorch_android_lite:1.12.2" to the build.gradle file solve it?


raedle commented Sep 7, 2022

@nh9k, if symbols are part of org.pytorch:pytorch_android_lite:1.12.2, then you shouldn't need to do anything. The latest react-native-pytorch-core release v0.2.1 uses libpytorch_jni_lite.so from org.pytorch:pytorch_android_lite:1.12.2.

What is the react-native-pytorch-core version that you tested?

Can you share a simple model + python export code for us to test as well?


nh9k commented Sep 7, 2022

@raedle, thank you so much!
My react-native-pytorch-core version is v0.0.0-08082022-2231-889b3951d.
I have another problem with inference on a quantized model, which is probably the same issue.
Thank you so much again; I will change the PyTorch core version to v0.2.1 and report back!


raedle commented Sep 8, 2022

@nh9k, are you using torch.utils.mobile_optimizer.optimize_for_mobile on the model that throws the XNNPACK error (see below)?

XNNPACK Convolution not usable! Reason: The provided input tensor is either invalid or unsupported by XNNPACK

If that's the case, can you try adding an optimization_blocklist with INSERT_FOLD_PREPACK_OPS to the optimize_for_mobile call?

from torch._C import MobileOptimizerType

# Skip the pass that rewrites conv2d/linear into XNNPACK prepacked ops.
optimization_blocklist = {
    MobileOptimizerType.INSERT_FOLD_PREPACK_OPS,
}

optimized_model = torch.utils.mobile_optimizer.optimize_for_mobile(model, optimization_blocklist)

Insert and Fold prepacked ops (blocklisting option MobileOptimizerType::INSERT_FOLD_PREPACK_OPS): This optimization pass rewrites the graph to replace 2D convolutions and linear ops with their prepacked counterparts. Prepacked ops are stateful ops in that, they require some state to be created, such as weight prepacking and use this state, i.e. prepacked weights, during op execution. XNNPACK is one such backend that provides prepacked ops, with kernels optimized for mobile platforms (such as ARM CPUs). Prepacking of weight enables efficient memory access and thus faster kernel execution. At the moment optimize_for_mobile pass rewrites the graph to replace Conv2D/Linear with 1) op that pre-packs weight for XNNPACK conv2d/linear ops and 2) op that takes pre-packed weight and activation as input and generates output activations. Since 1 needs to be done only once, we fold the weight pre-packing such that it is done only once at model load time. This pass of the optimize_for_mobile does 1 and 2 and then folds, i.e. removes, weight pre-packing ops.

More details: https://pytorch.org/docs/stable/mobile_optimizer.html


nh9k commented Sep 8, 2022

@raedle, thank you so much!!
I get the same error (XNNPACK) after changing the react-native-pytorch-core version to v0.2.1.
The code with from torch._C import MobileOptimizerType that you recommended produced a new error:

{"message": "expected scalar type Byte but found Float

  Debug info for handle(s): debug_handles:{-1}, was not found.

Exception raised from data_ptr at aten/src/ATen/core/TensorMethods.cpp:20 (most recent call first):
(no backtrace available)"}

I don't know yet why it appears; it is probably my fault.
I won't be able to test for a while, about a week. I would like to share my project, but I don't have enough time to organize my code. Sorry about this; if you would like to see the model code, my model is related to this repository.
Thank you so much again, I will be back next week.

@kimishpatel

@raedle, can you try this: https://pytorch.org/mobile/android/? But with the pytorch_lite:1.12.2 that you are using. I want to see if we get the same issue in that app as well.

Another question for @nh9k: have you tried running the same model using a PyTorch release, say via pip install?


nh9k commented Sep 15, 2022

@kimishpatel, yes, the converted .ptl model works fine from Python code!


raedle commented Sep 15, 2022

@nh9k, can you share the model and the code used to export the model for the lite interpreter runtime?


nh9k commented Sep 15, 2022

@raedle, sorry I am late.
Can I share my project with you via my private repository?
My final aim is exporting a quantized model for the React Native app, but it also has a problem, so can you test with the quantized model?
If possible, I will share the quantized model's .pt file (before making the .ptl file) from my Google Drive.
The quantized model works well from Python code.


raedle commented Sep 15, 2022

@nh9k, instead of sharing the private repo, can you please create a reproducible example publicly? This way, the community can also benefit if anyone has a similar issue :)


nh9k commented Sep 15, 2022

Alright! The code used to export the model is here.

CODE:

import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

model = torch.jit.load('model_scripted.pt')
model.eval()

device = torch.device('cpu')
x = torch.randn(1, 3, 768, 768).to(device)

# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, x)

from torch._C import MobileOptimizerType

optimization_blocklist = {
    MobileOptimizerType.INSERT_FOLD_PREPACK_OPS,
}

optimized_scripted_module = optimize_for_mobile(traced_script_module, optimization_blocklist)

optimized_scripted_module._save_for_lite_interpreter("model.ptl")

The model's .pt file is here (my Google Drive).

Thank you so much, raedle.


raedle commented Sep 15, 2022

@nh9k, I was somewhat successful. The exported model loads on Android in the PlayTorch app, and it can run inference with a random tensor as input. There is an issue on iOS that I need to look into (i.e., iOS crashes with this model).

Model export

I used an export similar to the one you provided. The only change is that the module is already a ScriptModule, so it doesn't need to be traced and can be used directly. I exported the model with and without optimization_blocklist.

import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

model = torch.jit.load('model_dl.pt')
model.eval()

device = torch.device('cpu')
x = torch.randn(1, 3, 768, 768).to(device)

from torch._C import MobileOptimizerType

optimization_blocklist = {
    MobileOptimizerType.INSERT_FOLD_PREPACK_OPS,
}

optimized_scripted_module = optimize_for_mobile(model)
optimized_scripted_module_with_blocklist = optimize_for_mobile(model, optimization_blocklist)

optimized_scripted_module._save_for_lite_interpreter("optimized_scripted_module.ptl")
optimized_scripted_module_with_blocklist._save_for_lite_interpreter("optimized_scripted_module_with_blocklist.ptl")

In Python, I then reloaded the lite interpreter model and ran inference with a random tensor. It outputs a tuple with two tensors (assuming the tensors have the correct shape).

from torch.jit.mobile import _load_for_lite_interpreter

model2 = _load_for_lite_interpreter("optimized_scripted_module_with_blocklist.ptl")

with torch.no_grad():
  a, b = model2(torch.randn(1, 3, 768, 768))

print("a.shape", a.shape)
print("b.shape", b.shape)

I also logged the export_opnames for the input model, the optimized model, and the optimized model with the blocklist to show the ops.

Example:

torch.jit.export_opnames(optimized_scripted_module_with_blocklist)

Output:

['aten::cat',
 'aten::conv2d',
 'aten::max_pool2d',
 'aten::permute',
 'aten::relu_',
 'aten::size.int',
 'aten::upsample_bilinear2d']

Colab notebook with code from above: https://colab.research.google.com/drive/1JzjL7RZd4_ldgoc-7cJ02RUl53sIMhjr

Use model with PlayTorch

async function main() {
  try {
    console.log('loading model');
    const filePath = await MobileModel.download(
      'https://example.com/path/to/optimized_scripted_module_with_blocklist.ptl'
    );
    // or loading the model as project asset
    //const filePath = await MobileModel.download(
    //  require('./path/to/optimized_scripted_module_with_blocklist.ptl')
    //);
    const model = await torch.jit._loadForMobile(filePath);
    const output = await model.forward(torch.randn([1, 3, 768, 768]));
    console.log('output value', output);
  } catch (error) {
    console.error(error);
  }
}
main();
loading model
output value [{"dtype":"float32","shape":[1,384,384,2]},{"dtype":"float32","shape":[1,32,384,384]}]


nh9k commented Sep 15, 2022

@raedle, thank you so much!! I am successful too.
I'm not sure why the previous model didn't work.
Thank you so much!!


nh9k commented Sep 15, 2022

@raedle, I haven't tested it on iOS yet. What issue was there?


nh9k commented Sep 16, 2022

Hi @raedle,
I have a problem; would you help me?
I expected a float32 tensor output (e.g., 0. 0.01859372 0. ... 0.), but I got integer output data of 0 or 1 when I printed outputTensor.data() using the console.log function. It is very strange.
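One way to narrow this down is to check the dtypes the exported model itself returns in desktop Python; if they are float32 there, the integer values must appear somewhere on the app side. A minimal sketch, reusing the file name from the export above:

import torch
from torch.jit.mobile import _load_for_lite_interpreter

model = _load_for_lite_interpreter("model.ptl")
with torch.no_grad():
    a, b = model(torch.randn(1, 3, 768, 768))

print(a.dtype, b.dtype)  # expect torch.float32 for both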


nh9k commented Sep 19, 2022

@raedle, I figured out the blocklist was the problem. When I removed the optimization_blocklist argument, the model output works fine as float output in the React Native app. Thanks a lot for your help! I need to study this blocklist more.
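For completeness, the export that ended up working is the plain optimize_for_mobile call without the blocklist; a minimal sketch, reusing the file names from my earlier comment:

import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

model = torch.jit.load('model_scripted.pt')
model.eval()

# No optimization_blocklist here: let INSERT_FOLD_PREPACK_OPS run.
optimized = optimize_for_mobile(model)
optimized._save_for_lite_interpreter("model.ptl")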
