how to implement this: torchvision.transforms.functional.perspective #114
Hi @nh9k, unfortunately there is no quick workaround. The PyTorch Mobile shared object library is not linked against LAPACK. It would require the PyTorch team to add an option to compile the PyTorch Mobile libraries with LAPACK for Android/iOS. As suggested, you could post in the PyTorch repo and ask for support. If it's added to the PyTorch Mobile shared object libraries, we can pull the new libraries into PlayTorch! Closing the issue as it can't be resolved without PyTorch Mobile being compiled with LAPACK support. Feel free to reopen if this changes.
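As an aside (an illustration added here, not part of the original thread), a minimal sketch of where the LAPACK dependency comes from: wrapping `torchvision.transforms.functional.perspective` in a module and scripting it lists the ops the model needs at runtime. The module and the example points below are made up; on recent torchvision versions the perspective coefficients are computed by solving a least-squares system, which is the LAPACK-backed step that fails on mobile.

```python
import torch
import torchvision.transforms.functional as F

class PerspectiveModule(torch.nn.Module):
    def forward(self, img: torch.Tensor) -> torch.Tensor:
        # Arbitrary example points; perspective() derives the transform
        # coefficients from these by solving a linear least-squares problem.
        startpoints = [[0, 0], [767, 0], [767, 767], [0, 767]]
        endpoints = [[50, 50], [700, 30], [760, 760], [10, 700]]
        return F.perspective(img, startpoints, endpoints)

scripted = torch.jit.script(PerspectiveModule())
# Lists the operators the scripted module needs; a LAPACK-backed op in this
# list will fail on a mobile build that is not linked against LAPACK.
print(torch.jit.export_opnames(scripted))
```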
Thanks @raedle! If I manage to get LAPACK support by asking the PyTorch team, I'll report back!
@raedle the posted issue also talks about XNNPACK, but that should not be the issue. Are you not building PyTorch with `USE_XNNPACK=1`?
@kimishpatel, the PlayTorch API doesn't build PyTorch Mobile from source but uses the published PyTorch Mobile build artifacts. Are the released PyTorch Mobile build artifacts built with `USE_XNNPACK=1`?
I am not entirely sure. Let me get back to you on this. Also, is this on iOS or Android?
Thanks @kimishpatel! @nh9k, is the issue with XNNPACK on both Android and iOS, or just on one of the two platforms?
Thanks @kimishpatel @raedle!
Yeah, for Android it should be on by default.
Yeah, as @kimishpatel said, XNNPACK should be there. FWIW, just to validate, I looked at the build artifact.
@raedle, so, can I solve this problem?
@nh9k, if the symbols are part of the build artifact, XNNPACK should not be the problem. What is the exact error you are seeing? Can you share a simple model + Python export code for us to test as well?
@raedle, thank you so much!
@nh9k, are you using `optimize_for_mobile`? If that's the case, can you try adding an optimization blocklist:

```python
import torch
import torch.utils.mobile_optimizer
from torch._C import MobileOptimizerType

# `model` is the scripted/traced module to be exported
optimization_blocklist = {
    MobileOptimizerType.INSERT_FOLD_PREPACK_OPS,
}
optimized_model = torch.utils.mobile_optimizer.optimize_for_mobile(model, optimization_blocklist)
```

More details: https://pytorch.org/docs/stable/mobile_optimizer.html
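As a quick sanity check (an addition here, not from the thread), continuing from the snippet above, you can compare the operator lists of the two exports; with `INSERT_FOLD_PREPACK_OPS` blocked, the XNNPACK `prepacked::*` ops (e.g. `prepacked::conv2d_clamp_run`) should be replaced by plain `aten::` ops:

```python
# Continues from the snippet above: compare op lists with and without the blocklist.
default_optimized = torch.utils.mobile_optimizer.optimize_for_mobile(model)
print(torch.jit.export_opnames(default_optimized))  # may contain prepacked::* ops
print(torch.jit.export_opnames(optimized_model))    # should contain only aten::* ops
```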
@raedle, thank you so much!!
I don't know yet why this error appears; it is probably my fault.
@raedle, can you try this: https://pytorch.org/mobile/android/? But with the `pytorch_lite:1.12.2` that you are using. I want to see if we get the same issue in that app as well. Another question for @nh9k: have you tried running the same model using a PyTorch release, say installed via pip?
@kimishpatel yes, the converted model runs fine in Python.
@nh9k, can you share the model and the code used to export the model for the lite interpreter runtime?
@raedle, sorry I am late.
@nh9k, instead of sharing the private repo, can you please create a reproducible example publicly? This way, the community can also benefit if anyone has a similar issue :)
Alright! The code used to export the model is here.

CODE:

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

model = torch.jit.load('model_scripted.pt')
model.eval()

device = torch.device('cpu')
x = torch.randn(1, 3, 768, 768).to(device)

# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, x)

from torch._C import MobileOptimizerType
optimization_blocklist = {
    MobileOptimizerType.INSERT_FOLD_PREPACK_OPS,
}

optimized_scripted_module = optimize_for_mobile(traced_script_module, optimization_blocklist)
optimized_scripted_module._save_for_lite_interpreter("model.ptl")
```

The model .pt file is here (my Google Drive). Thank you so much, raedle.
@nh9k, I was somewhat successful. The exported model loads on Android in the PlayTorch app, and it can run inference with a random tensor as input. There is an issue on iOS that I need to look into (i.e., iOS crashes with this model).

Model export

I used an export similar to what you provided. The only change is that the loaded module is already a `ScriptModule`, so it doesn't need to be traced again:

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

model = torch.jit.load('model_dl.pt')
model.eval()

device = torch.device('cpu')
x = torch.randn(1, 3, 768, 768).to(device)

from torch._C import MobileOptimizerType
optimization_blocklist = {
    MobileOptimizerType.INSERT_FOLD_PREPACK_OPS,
}

optimized_scripted_module = optimize_for_mobile(model)
optimized_scripted_module_with_blocklist = optimize_for_mobile(model, optimization_blocklist)

optimized_scripted_module._save_for_lite_interpreter("optimized_scripted_module.ptl")
optimized_scripted_module_with_blocklist._save_for_lite_interpreter("optimized_scripted_module_with_blocklist.ptl")
```

In Python, I then reloaded the lite interpreter model and ran inference with a random tensor. It outputs a tuple with two tensors (assuming the tensors are in the correct shape):

```python
from torch.jit.mobile import _load_for_lite_interpreter

model2 = _load_for_lite_interpreter("optimized_scripted_module_with_blocklist.ptl")
with torch.no_grad():
    a, b = model2(torch.randn(1, 3, 768, 768))

print("a.shape", a.shape)
print("b.shape", b.shape)
```

I also logged the op names used by the model.

Example:

```python
torch.jit.export_opnames(optimized_scripted_module_with_blocklist)
```

Output:

```
['aten::cat',
 'aten::conv2d',
 'aten::max_pool2d',
 'aten::permute',
 'aten::relu_',
 'aten::size.int',
 'aten::upsample_bilinear2d']
```

Colab notebook with the code from above: https://colab.research.google.com/drive/1JzjL7RZd4_ldgoc-7cJ02RUl53sIMhjr

Use model with PlayTorch

```javascript
async function main() {
  try {
    console.log('loading model');
    const filePath = await MobileModel.download(
      'https://example.com/path/to/optimized_scripted_module_with_blocklist.ptl'
    );
    // or load the model as a project asset
    //const filePath = await MobileModel.download(
    //  require('./path/to/optimized_scripted_module_with_blocklist.ptl')
    //);
    const model = await torch.jit._loadForMobile(filePath);
    const output = await model.forward(torch.randn([1, 3, 768, 768]));
    console.log('output value', output);
  } catch (error) {
    console.error(error);
  }
}

main();
```
@raedle, thank you so much!! I am successful too.
@raedle, I haven't tested it on iOS yet. What issues were there?
Hi, @raedle,
@raedle, I figured out the blocklist was the problem; the error disappeared when I removed the blocklist argument.
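For reference, a minimal sketch of what removing the blocklist argument looks like, reusing `traced_script_module` and `optimize_for_mobile` from the export code earlier in the thread:

```python
# Same export as before, but without passing the optimization blocklist.
optimized_scripted_module = optimize_for_mobile(traced_script_module)
optimized_scripted_module._save_for_lite_interpreter("model.ptl")
```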
Area Select
react-native-pytorch-core (core package)
Description
Hello! Thanks for your contributions!
I have a problem while developing my project.
I need a function like torchvision.transforms.functional.perspective.
Could you add an implementation of torchvision.transforms.functional.perspective? Or can I implement this function myself?
There is no implementation of a perspective function in the PlayTorch docs.
Another solution I have been pursuing is building a PyTorch Mobile model for this function.
The idea came from @raedle in this issue.
But it raises an error in the React Native app like this:
Should I ask about this error on the PyTorch GitHub?
My perspective model is like this:
This model works correctly in Python code.
How can I solve this problem?
Many thanks to anyone who can help!