Does `predictImage` support other models besides image classification models? #18

Comments
I'm also hoping to use an object detection model that I've developed with PyTorch Mobile, and would love to see an example of it being used if it's supported!
I will take up the challenge. If you only need either Android or iOS, you can check out the official PyTorch android-demo-app or ios-demo-app for object detection. As far as I have tested, they work like a charm. I should have something usable for both iOS and Android by the end of the year. Will update here once I'm there.
Mmmm, that's what I was thinking! I've been looking at the Flutter Android/iOS interfaces in this package and was hoping to compare them to pytorch_mobile. "End of the year" would be a target for me too, since I'm using it for a side project. Happy to trade notes, @cyrillkuettel!
So I can't speak to the part where you draw the boxes back onto the original image, but as far as actually running the inference goes, I have a long, rambling stream of consciousness from diving into this plugin, android-demo-app and ios-demo-app. My tl;dr is that there's absolutely nothing different about running an object detection model compared to running an image classifier; it's just the structure of the output tensor that will be different.
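To illustrate the point about output structure (a hedged sketch, not this plugin's actual API): suppose a detection model's output comes back as a flat buffer of rows shaped `[x1, y1, x2, y2, score, class_id]`. The row layout here is an assumption for illustration; your model's real output format may differ. Post-processing is then just reshaping and filtering that buffer, which is exactly the kind of step a classifier doesn't need:

```python
# Sketch: decoding a hypothetical flat detection output.
# The [x1, y1, x2, y2, score, class_id] row layout is an assumption;
# check your own model's actual output format.

ROW = 6  # values per detection in this hypothetical layout

def decode_detections(flat, score_threshold=0.5):
    """Turn a flat list of floats into a list of detection dicts."""
    detections = []
    for i in range(0, len(flat), ROW):
        x1, y1, x2, y2, score, cls = flat[i:i + ROW]
        if score >= score_threshold:
            detections.append({
                "box": (x1, y1, x2, y2),
                "score": score,
                "class_id": int(cls),
            })
    return detections

if __name__ == "__main__":
    # Two raw detections; only the first clears the threshold.
    output = [10.0, 20.0, 50.0, 80.0, 0.9, 1.0,
              15.0, 25.0, 40.0, 60.0, 0.2, 3.0]
    print(decode_detections(output))
```

A classifier would instead just take an argmax over one row of class scores, which is why the inference call itself can stay the same.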
So, I'm able to sub in my own model, but I hit an error loading it.

Not sure if I'm just doing something wrong with the model I've trained and exported, or if there's some adjustment required to load the model in.
@jimjam-slam Over the past week, I've read the source code of flutter_pytorch_mobile as well as the android-demo-app and ios-demo-app. The latter two are almost identical copies, simply implemented in different programming languages (well, almost ;)). I have come to the same conclusion as you: it's important to know the structure of the output tensor. For example, in DeepLab segmentation, the output tensor has a different shape than a classifier's.

As for the errors when loading models: I have discovered that for this particular case, a PyTorch 1.11 environment is required, else loading models will produce weird behavior. Here is the complete example of exporting the model using PyTorch 1.11. I have used pretrained models for demonstration purposes, but it should work with any model that can be scripted:

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

model = torch.hub.load('pytorch/vision:v0.11.0', 'deeplabv3_resnet50', pretrained=True)
model.eval()
scripted_module = torch.jit.script(model)
optimized_model = optimize_for_mobile(scripted_module)
optimized_model._save_for_lite_interpreter("deeplabv3_scripted.ptl")
```

I exported them exactly like shown above.
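Since the version mismatch above was so easy to hit, a small sanity check at the top of an export script can catch it early. This is a sketch, not part of the plugin; the expected `"1.11"` is the working combination from this thread, and in a real script you would pass `torch.__version__` instead of a literal:

```python
# Sketch: fail fast if the export environment's PyTorch version
# doesn't match what the mobile runtime expects.
# "1.11" is this thread's working combination; adjust to your setup.

def version_tuple(version):
    """'1.11.0+cu113' -> (1, 11): keep only major.minor."""
    core = version.split("+")[0]   # drop local build tags like +cu113
    parts = core.split(".")
    return int(parts[0]), int(parts[1])

def check_torch_version(actual, expected="1.11"):
    if version_tuple(actual) != version_tuple(expected + ".0"):
        raise RuntimeError(
            f"PyTorch {actual} found, but the mobile runtime expects "
            f"{expected}.x; exported models may fail to load."
        )

# In a real export script: check_torch_version(torch.__version__)
check_torch_version("1.11.0+cu113")  # passes silently
```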
This is great to see, @cyrillkuettel! I'd noted that the Android and iOS demo apps use LibTorch Lite whereas this plugin uses LibTorch (or is it the other way around?), so I'm glad you caught the thread and unravelled it. I'll double-check my Python and package versions too. I'm away for the weekend, but I'm looking forward to applying your work later next week! Great work! 🥳
@jimjam-slam I'm happy to see this is useful to anyone; I spent a lot of time tracking down this issue. Yes, the plugin uses LibTorch as far as I can see.
@cyrillkuettel Thanks again for your guidance on this! I'm having a look, and it seems like I trained my model with newer versions than the ones that work here.

The notebook I had based my work on, one of Google's, actually downloads the pytorch/vision repo and checks v0.8.2 out, but it looks like it only uses this to grab some of the helper scripts.

I'll see if I can get the older versions running (I'm not quite as proficient with Python, so I struggle a bit with its package management!) and retry!
I managed to regress to the older PyTorch and Torchvision versions, but still hit errors loading models:
I think this is probably because I've been training models with CUDA. I'm having a crack at training one on just the CPU (just for one epoch, so that I'm not sitting around forever). If that works, I'll revert to using a GPU for training and see if I can convert the model back to a CPU backend after training by saving, switching devices, and re-exporting.
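A sketch of that "switch to CPU before exporting" idea, under the assumption that the model can simply be moved and re-scripted. `TinyNet` is a hypothetical stand-in for the real trained model, so the scripting step runs quickly offline:

```python
import torch
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile

# Hypothetical stand-in for a model trained on the GPU.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = TinyNet()
# For a checkpoint saved on the GPU, load it with:
#   state = torch.load("checkpoint.pt", map_location="cpu")
model = model.to("cpu")
model.eval()

# Script and optimize on the CPU so the exported file carries no CUDA state.
scripted = torch.jit.script(model)
optimized = optimize_for_mobile(scripted)
optimized._save_for_lite_interpreter("tinynet_scripted.ptl")
```

Moving the module to the CPU before `torch.jit.script` is the key step; scripting a module whose parameters still live on a CUDA device bakes that device into the export.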
That's a very peculiar error. I always assumed that training with the CUDA backend does not change the actual structure of the exported model. What I do know is that there do exist some models that simply cannot be scripted in a straightforward way.
Yeah, I'll have to do some more testing here. I tried training on the GPU and then switching to the CPU and saving again, but I still got the error, so I'm not sure if I did it properly. Will have to have another crack at this when I can 😮💨
@cyrillkuettel So it looks like I didn't regress far enough: I didn't clock that this Flutter plugin uses PyTorch 1.8.0, and I was training on PyTorch 1.11. Unfortunately, I start getting other errors when I try to train all the way back on PyTorch 1.8.0 and Torchvision 0.9.0. I'm wondering whether it'd be hard to do the opposite and bump this plugin's dependencies up a version or two. I can see references to the PyTorch version in two places: the Android Gradle file and the iOS podspec. Would updating the plugin be as simple as forking, updating those two references, and adding the forked package as a dependency with a local path?
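As a sketch of what that fork might change on the Android side, the dependency bump could look like the following; the artifact coordinates are the standard PyTorch mobile Maven packages, and the exact lines in this plugin's own `build.gradle` may differ:

```groovy
// android/build.gradle (hypothetical bump from 1.8.0 to 1.11.0)
dependencies {
    implementation 'org.pytorch:pytorch_android:1.11.0'
    implementation 'org.pytorch:pytorch_android_torchvision:1.11.0'
}
```

On the iOS side, the matching change would be the `LibTorch` version pinned in the plugin's podspec, followed by `pod update LibTorch` in the consuming app.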
@jimjam-slam Actually, that was my mistake: I thought the PyTorch version was higher than that. I'm very sorry.

Yes, it's really that simple. You can even add a git URL as a plugin dependency:

```yaml
flutter_pytorch_mobile:
  git: git@github.com:name/path-to-forked-repo.git
```

One minor thing: on iOS, after updating the LibTorch version, I had to run

```shell
pod update LibTorch
```

for the changes to take effect.
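For reference, pub also accepts the longer form of a git dependency, which lets you pin the fork to a specific branch or tag; the `ref` value below is a placeholder:

```yaml
dependencies:
  flutter_pytorch_mobile:
    git:
      url: git@github.com:name/path-to-forked-repo.git
      ref: my-libtorch-bump   # hypothetical branch name
```

Pinning a `ref` avoids silently picking up later commits from the fork's default branch.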
Not at all! Thanks very much for your guidance on this—I'll have a crack at upgrading the plugin tonight! 🥳
Okay, I've made some progress after spending a lot of time trying to get the versions of everything lined up. (This plugin also includes Torchvision in the Android build but not in the iOS build, which was also causing problems.) The changes I've made so far are at jimjam-slam#1.

For folks playing along at home, in my training notebook I'm using:

```shell
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 --extra-index-url https://download.pytorch.org/whl
```

And when getting the training scripts I'm using:

```shell
git clone https://github.com/pytorch/vision.git
cd vision
git checkout v0.12.0
cp references/detection/utils.py ../
cp references/detection/transforms.py ../
cp references/detection/coco_eval.py ../
cp references/detection/engine.py ../
cp references/detection/coco_utils.py ../
```

I'm now able to load my custom model, but when I run the inference I get an error. I'll have a bit of a closer look at this to make sure I'm providing the plugin with the right type!
I was just wondering what type of image models `predictImage` actually supports. I'm assuming image classification models are supported, but what about other types of models, for example object detection? Is the implementation different for these, or does that work as well?

Thank you for this plugin by the way, very nice work.