Upgrade pytorch version #1
base: master
Conversation
Because Torchvision 0.12.0 isn't available
@jimjam-slam Looks promising, I'd like to try it out myself. If you can, provide a download link to the model and its properties (input size, expected output, etc.).
Thanks! I'll try and get it to you tonight :)
(I might also try to move the plugin back to PyTorch 1.10.0 and re-train my model in the next few days to see if that ameliorates the inference-time issues I'm having. As you can see in commits 3a8f309..c9bcb1a, I experimented with following the instructions in the warning, but I'm beginning to get out of my depth modifying the Objective-C.)
You're using an object detection model, right? So the structure of the output tensor is slightly different. I think the root cause of the issue lies here:

```objc
- (NSArray<NSNumber*>*)detectImage:(void*)imageBuffer {
    try {
        at::Tensor tensor = torch::from_blob(imageBuffer, { 1, 3, input_height, input_width }, at::kFloat);
        c10::InferenceMode guard;
        auto outputTuple = _impl.forward({ tensor }).toTuple();
        auto outputTensor = outputTuple->elements()[0].toTensor();
        float* floatBuffer = outputTensor.data_ptr<float>();
```
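The tuple-then-tensor unpacking in that Objective-C can be mirrored in plain Python to sanity-check what the exported model actually returns. This is a minimal sketch with mock data (the nested-list `fake_output` is a stand-in for a real forward result, not the plugin's actual output):

```python
# Mock of a scripted model whose forward() returns a tuple,
# with the first element being the output tensor (here a nested list).
fake_output = ([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]],)  # tuple, like .toTuple()

# Mirror outputTuple->elements()[0].toTensor()
output_tensor = fake_output[0]

# Mirror data_ptr<float>(): flatten row-major into a single float buffer
float_buffer = [v for row in output_tensor for v in row]
print(float_buffer)  # [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
```

If the real model doesn't return a tuple with a tensor in slot 0, the `toTensor()` call above is exactly where things would go wrong.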
I'm also looking forward to trying it out myself.
@cyrillkuettel Thanks! I was so close! 😆 Here's a link to the model: https://drive.google.com/file/d/1q1Kd-tWAtO24um-fMPeLyjY4oVDxPK8f/view?usp=sharing It is indeed an object detection model: it detects four-sided (tetrahedral) dice from roughly overhead images and classifies them according to the vertex facing up (toward the camera). It has four classes corresponding to the faces of the die. I'm training it on images that are 1440x1080, although I don't recall the training notebook actually asking for image dimensions (perhaps they're being inferred, or perhaps the PyTorch training scripts I'm using have them set somewhere and I'm accidentally overriding them). Here's a sample prediction from one of my images in the notebook:
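For what it's worth, torchvision's detection models resize inputs internally (via `GeneralizedRCNNTransform`, whose defaults are commonly `min_size=800`, `max_size=1333`), which may be why the notebook never asked for image dimensions. A sketch of that scaling rule applied to 1440x1080; the exact defaults here are an assumption, so check the transform attached to your own model:

```python
def detection_scale(height, width, min_size=800, max_size=1333):
    # Scale so the shorter side reaches min_size, unless that would
    # push the longer side past max_size, in which case cap it there.
    scale = min_size / min(height, width)
    if scale * max(height, width) > max_size:
        scale = max_size / max(height, width)
    return scale

s = detection_scale(1080, 1440)
print(round(1080 * s), round(1440 * s))  # roughly 800 x 1067
```

So a 1440x1080 training image would be resized to roughly 1067x800 before hitting the backbone, regardless of what size you feed in.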
@jimjam-slam Alright, thanks, I'll give it a try tomorrow. I'm guessing
@cyrillkuettel Yeah, this is what I did:

```python
# `model` is already trained on gpu
torch.save(model, "./d4/dn-set2-d4-test-full.pt")

# remap to cpu
cpu_device = torch.device("cpu")
saved_model = torch.load("./d4/dn-set2-d4-test-full.pt",
    map_location=cpu_device)
ts_model = torch.jit.script(saved_model)
torch.jit.save(ts_model, "./d4/dn-set2-d4-test-full-CPU-scripted.pt")

from torch.utils.mobile_optimizer import optimize_for_mobile

# script and optimize (but still for torch, not torch lite? not sure)
scripted_module = torch.jit.script(saved_model)
optimized_model = optimize_for_mobile(scripted_module)
optimized_model.save("./d4/dn-set2-d4-test-full-CPU-scripted-optimized.pt")

# saving for pytorch lite produces an error:
# optimized_model._save_for_lite_interpreter("./d4/dn-set2-d4-test-lite.ptl")
```

I didn't add any image re-scaling or normalisation code, but I did find this comment in the original notebook:
I'll dig out the link to the original notebook!
I tried to load it, but unfortunately I encountered various stubborn build issues on Android: https://github.com/cyrillkuettel/flutter_pytorch_mobile
I haven't tried an Android build yet, unfortunately, although I'd like my app to support both platforms. I'll try to dive into it, but it might not be until the weekend. Thanks for pushing through this, though!
Actually, forget what I said previously about the output tensor. I looked at it again: the model output is a dictionary. (That follows from the sample prediction you've provided above.) Since we know it contains a dictionary with the keys 'boxes', 'labels' and 'scores', we can extract the values from those.
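Since the detections come back as a dictionary of 'boxes', 'labels' and 'scores', one way to get them across the platform channel is to pack each kept detection into a fixed-width row of floats. A minimal Python sketch of that packing; the row layout `[x1, y1, x2, y2, score, label]` and the `flatten_detections` helper are my own convention here, not something the plugin defines:

```python
def flatten_detections(det, score_threshold=0.5):
    """Pack a {'boxes', 'labels', 'scores'} dict into one flat float list,
    six values per kept detection: x1, y1, x2, y2, score, label."""
    flat = []
    for box, label, score in zip(det["boxes"], det["labels"], det["scores"]):
        if score < score_threshold:
            continue  # drop low-confidence detections
        flat.extend([*box, score, float(label)])
    return flat

example = {
    "boxes": [[10.0, 20.0, 110.0, 120.0], [5.0, 5.0, 50.0, 60.0]],
    "labels": [3, 1],
    "scores": [0.92, 0.31],
}
print(flatten_detections(example))  # [10.0, 20.0, 110.0, 120.0, 0.92, 3.0]
```

A fixed six-floats-per-detection layout is easy to reassemble on the Dart side: just walk the list in strides of six.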
Mmm, absolutely! If using
(I incidentally only heard about D2Go a few days ago, so I'm keen to give that a whirl too!) |
Good luck @jimjam-slam @cyrillkuettel! I'm a beginner at all this, trying to use my YOLOv8 model in my Flutter application. I encountered some issues with the plugin and am just following along with this deep conversation in the hope of a working plugin 😆
Good luck, @bigbaliboy! I'm still hoping to have some success, but man, PyTorch versioning sucks. It's just hard to get the bandwidth for a side project! 😅 I'm also hoping to give Detectron a go (or, if all else fails, make native apps).
@jimjam-slam I have my YOLOv8 model running with the flutter_vision package as a tflite model, and my YOLOv5 model running with the flutter_pytorch package as a TorchScript file. Neither has iOS support yet, but both work fine on Android; the latter is waiting on someone to submit a PR for the iOS part. Good luck with Detectron, and keep us updated on how it goes 🙌🏻