yolov5s.torchscript.ptl #10
Hello @bairock, thank you for your interest in PyTorch Live! As far as I’m aware, there isn’t yet a Live Spec defined by the community for YOLOv5. Are you planning to create one? You can follow the tutorial here for instructions on how to adapt existing models to PyTorch Live (https://pytorch.org/live/docs/tutorials/prepare-custom-model), which includes creating a Live Spec. A Live Spec defines how to convert from an input in JavaScript to the tensor the model expects. When you are successful, please do share a code pointer to the Live Spec you created! If you find that the Live Spec does not offer the operations you require, please let us know.
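For anyone landing here without context: a Live Spec is a `live.spec.json` bundled with the model that describes how JavaScript inputs are packed into tensors and how model outputs are unpacked back to JavaScript. The sketch below only illustrates the overall pack/unpack shape; the specific `type`, `name`, and transform fields are assumptions modeled on the classification example in the tutorial linked above and must be checked against the PyTorch Live docs:

```json
{
  "pack": {
    "type": "tensor_from_image",
    "image": "image",
    "transforms": [
      {
        "type": "image_to_image",
        "name": "scale",
        "width": 640,
        "height": 640
      },
      {
        "type": "image_to_tensor",
        "name": "rgb_norm",
        "mean": [0.0, 0.0, 0.0],
        "std": [1.0, 1.0, 1.0]
      }
    ]
  },
  "unpack": {
    "type": "tensor",
    "dtype": "float",
    "key": "output"
  }
}
```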
Is there an example of the tuple unpack type? I found no example anywhere.
Hi, here is an example of what the structure of the items in the tuple unpack should be:
Hi @Aisenapio, have you created a Live Spec for yolov5, or an example of the tuple unpack type?
Hello, I am also trying to create the spec.json for yolo_v5. How is your progress? @bairock
hachiko mode
Do you have any idea about the unpack? Currently, I guess the output type should be set as the object "models.common.Detections", but I don't think torch live recognizes this type as an output, according to the tutorial.
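One common workaround for custom output objects like `models.common.Detections` is to wrap the model so it returns plain tensors, which a tuple unpack can then describe. A hedged sketch, assuming a detector that returns a dict of tensors; both `DummyDetector` and `TupleWrapper` are hypothetical illustrations, not yolov5 code:

```python
import torch


class DummyDetector(torch.nn.Module):
    """Stand-in for a detector that returns a dict of tensors."""

    def forward(self, x):
        n = 3  # pretend we found three detections
        return {
            "boxes": torch.zeros(n, 4),
            "scores": torch.ones(n),
            "labels": torch.zeros(n, dtype=torch.int64),
        }


class TupleWrapper(torch.nn.Module):
    """Expose detections as a plain tuple of tensors for tuple unpacking."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        out = self.model(x)
        return out["boxes"], out["scores"], out["labels"]


wrapped = TupleWrapper(DummyDetector())
boxes, scores, labels = wrapped(torch.zeros(1, 3, 640, 640))
```

A wrapper like this can then be scripted and exported in place of the original model, so the Live Spec only ever sees tensors.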
Any progress on exporting yolov5 to .ptl?
Just FYI, the official torchscript exported by yolov5 only contains the general model inference part; one must implement the pre-processing and post-processing separately. We did some experiments on yolort to embed the pre-processing and post-processing into the torchscript graph, following the strategy of TorchVision's object detection models, so that we don't need to write the pre-process and post-process within torchlive. It would therefore be very easy to deploy YOLOv5 with yolort if we could deploy TorchVision models on torchlive.
We can take a look, but would need a working torchscript-ed model. Does anyone have a working torchscript-ed version of a YOLOv5 model that accepts an image tensor as input and can share it here for download?
Hi @raedle, I exported a YOLOv5 torchscript model that accepts an image tensor as input with yolort, and I uploaded the torchscript and optimized models (download links are in the script below).

I only embedded the post-processing into the graph for this model. You can use the following script to test the inference output of this torchscript-ed model (install yolort first):

```python
import cv2
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

from yolort.utils import read_image_to_tensor
from yolort.v5 import letterbox, scale_coords, attempt_download

img_size = 640
stride = 32
device = torch.device('cpu')

# Load the TorchScript model
export_scripted_source = "https://huggingface.co/spaces/zhiqwang/assets/resolve/main/yolov5s_scripted.pt"
export_scripted_path = attempt_download(export_scripted_source)
scripted_model = torch.jit.load(export_scripted_path)

# Optimize for the lite interpreter, or download the optimized model from
# https://huggingface.co/spaces/zhiqwang/assets/blob/main/yolov5s_scriptmodule.ptl
export_optimized_path = "yolov5s_scriptmodule.ptl"
optimized_model = optimize_for_mobile(scripted_model)
optimized_model._save_for_lite_interpreter(export_optimized_path)

scripted_model = scripted_model.eval()
scripted_model = scripted_model.to(device)

# Load the image
img_source = "https://huggingface.co/spaces/zhiqwang/assets/resolve/main/bus.jpg"
# img_source = "https://huggingface.co/spaces/zhiqwang/assets/resolve/main/zidane.jpg"
img_path = attempt_download(img_source)
img_raw = cv2.imread(img_path)

# Pre-process: letterbox to the model input size and convert to a batched tensor
image = letterbox(img_raw, new_shape=(img_size, img_size), stride=stride)[0]
image = read_image_to_tensor(image)
image = image.to(device)
image = image[None]

with torch.no_grad():
    out_script = scripted_model(image)

# Don't forget to rescale the coordinates back to the original image scale
scale_coords(image.shape[2:], out_script[1][0]['boxes'], img_raw.shape[:-1])
print(out_script)
```
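For readers unfamiliar with `scale_coords`: it undoes the letterbox transform, mapping boxes from the padded model input back to the original image. A minimal pure-Python sketch of the same arithmetic, assuming yolov5's symmetric-padding letterbox convention (`rescale_boxes` is a hypothetical helper, not part of yolort):

```python
def rescale_boxes(boxes, model_shape, orig_shape):
    """Map [x1, y1, x2, y2] boxes from the letterboxed model input
    back to the original image.

    model_shape and orig_shape are (height, width) tuples.
    """
    # letterbox scales by the smaller ratio and pads the rest symmetrically
    gain = min(model_shape[0] / orig_shape[0], model_shape[1] / orig_shape[1])
    pad_x = (model_shape[1] - orig_shape[1] * gain) / 2
    pad_y = (model_shape[0] - orig_shape[0] * gain) / 2
    rescaled = []
    for x1, y1, x2, y2 in boxes:
        rescaled.append([
            (x1 - pad_x) / gain,
            (y1 - pad_y) / gain,
            (x2 - pad_x) / gain,
            (y2 - pad_y) / gain,
        ])
    return rescaled
```

For example, a 320x640 image letterboxed into a 640x640 input gets 160 px of vertical padding on each side, which this removes from every y coordinate.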
Thanks for sharing the inference code, @zhiqwang! That was incredibly helpful! Here is a screencast of an early working version (yolov5s.mp4). I will need to finalize the changes and then make it available for everyone as soon as possible. Thanks again for the model and the example!
Hi @raedle, it would be my pleasure, and I can add a more detailed script for exporting the torchscript-ed module of YOLOv5 if you need it.
Hi, does anyone know how to deploy a custom model into PyTorch Live? I am encountering some bugs.
@raedle I also used yolort to generate the .ptl file for yolov5, but received the error "Could not convert downloaded file into Torch Module". I compared my model to the MobileNetV3 example, and they are both RecursiveScriptModule. Since yolort and yolov5 have the same model structure, I am not sure of the reason for this error. I checked the code of the package, and it says "TorchModule.load will set an empty string if the model file is not bundled inside the model file". Can you provide more explanation?
Hi @JohnZcp,

May I ask whether you used the yolort model with the pre-processing embedded? Here is the script for exporting the TorchScript-ed model:

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

from yolort.models import YOLO
from yolort.v5 import attempt_download

# Parameters for the exported torchscript-ed module
score_thresh = 0.25
nms_thresh = 0.45
device = torch.device("cpu")

# Downloaded from 'https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5s.pt'
model_path = "yolov5s.pt"
checkpoint_path = attempt_download(model_path)

model = YOLO.load_from_yolov5(checkpoint_path, score_thresh=score_thresh, nms_thresh=nms_thresh)
model = model.eval()
model = model.to(device)

# Script the model, then optimize it for the lite interpreter
export_scripted_path = "yolov5s_scripted.pt"
export_optimized_path = "yolov5s_scriptmodule.ptl"

scripted_model = torch.jit.script(model)
scripted_model.save(export_scripted_path)

optimized_model = optimize_for_mobile(scripted_model)
optimized_model._save_for_lite_interpreter(export_optimized_path)
```
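If a live.spec.json needs to travel with the .ptl file, TorchScript's `_extra_files` mechanism can bundle it at save time. A small self-contained sketch; the tiny module and the spec content below are placeholders, not the real YOLOv5 spec:

```python
import json

import torch
from torch.utils.mobile_optimizer import optimize_for_mobile


class TinyModel(torch.nn.Module):
    """Placeholder module standing in for the scripted YOLOv5 model."""

    def forward(self, x):
        return x * 2


# Placeholder spec content
spec = {"pack": {"type": "tensor_from_image"}, "unpack": {"type": "tensor"}}
extra_files = {"live.spec.json": json.dumps(spec)}

scripted = torch.jit.script(TinyModel())
optimized = optimize_for_mobile(scripted)

# Bundle the spec inside the .ptl file itself
optimized._save_for_lite_interpreter("tiny_scriptmodule.ptl", _extra_files=extra_files)

# Round-trip the extra file through a regular TorchScript save/load
scripted.save("tiny_scripted.pt", _extra_files=extra_files)
loaded_files = {"live.spec.json": ""}
torch.jit.load("tiny_scripted.pt", _extra_files=loaded_files)

content = loaded_files["live.spec.json"]
if isinstance(content, bytes):
    content = content.decode("utf-8")
```

After loading, `content` holds the spec JSON that was bundled into the archive.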
@zhiqwang Yes. And I think we still need to provide the live.spec.json as the extra file when generating the ptl version. But I don't think missing this json file would cause the error I mentioned above; it only affects the library's data processing for input and output.
@JohnZcp, @Aisenapio, @dongdv95, @bairock, @jslok, and @zhiqwang, with the latest release I created a YOLOv5 example app with the new PyTorch Live API. It would be fantastic if this could also work with the YOLOv5 Runtime Stack!
Closing this issue because it appears to be resolved. Feel free to re-open if more info is needed. Also note, there is now a YOLOv5 tutorial available: https://playtorch.dev/docs/tutorials/snacks/yolov5/ |
Version
1.1.0
Problem Area
react-native-pytorch-core (core package)
Steps to Reproduce
run model yolov5s.torchscript.ptl
Expected Results
get results (bbox, class)
Code example, screenshot, or link to repository
```
Possible Unhandled Promise Rejection (id: 0):
Error: End of input at character 0 of promiseMethodWrapper
```
Did I follow these instructions correctly to convert .pt to .ptl, or am I doing something wrong?
https://github.com/pytorch/android-demo-app/tree/master/ObjectDetection
I ran the Android demo project, and the PyTorch Mobile model worked there.
As I understand it, you need a live.spec.json, but where do you get one for yolov5?
Is it this? I used it, but it still gives an error. Or do I need something else?
```python
{'config.txt': '{"shape": [1, 3, 640, 640], "stride": 32, "names": ["person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush"]}'}
```
here is the link to the issue