U2-net cloth segmentation model #77
Hi @lavandaboy, I tried with your example and got this error:
The exception seems to be thrown when you load the model with
Hi @liuyinglao, thanks for the prompt reply. I re-converted the model; here is the link to it (https://cdn-144.anonfiles.com/85n1Ge0by6/622106f0-1658829111/cloth_segm_live.ptl), and I used the following code in Colab:
I also updated the path to the model in my Expo snack. Let me know if I need to share anything else with you. Thanks!
@lavandaboy, your model export code looks good. The "Format Error" in the log comes from this line, which failed to recognize a compatible model format (like zip): https://github.com/pytorch/pytorch/blob/v1.12.0/torch/csrc/jit/mobile/import.cpp#L623 That makes sense because the model URL used in the snack redirects to an HTML page:
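One way to catch this class of problem early is to check what the server actually returns before pointing the snack at the URL. A minimal sketch, assuming the `requests` package is available (not part of the original thread):

```python
import requests

# Verify that a model URL serves a raw binary file rather than an HTML
# download page (a common issue with file-sharing links).
url = "https://cdn-144.anonfiles.com/85n1Ge0by6/622106f0-1658829111/cloth_segm_live.ptl"
resp = requests.get(url, stream=True, allow_redirects=True)
print(resp.headers.get("Content-Type"))    # expect an octet-stream, not text/html
print(resp.headers.get("Content-Length"))  # expect the full model size, not a few KB
```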
Hi Chris, thanks for your message. OK, I changed the URL to the direct Dropbox file (https://www.dl.dropboxusercontent.com/s/k9mm1b0c5xewpd7/cloth_segm_live.ptl). Still no changes in the app. It seems to me that the app needs some time to download the model (176 MB). I also added an alert in my snack to tell me when the model is loaded, because I do not know how to access the logs. Unfortunately, I never got this alert, even after waiting for 10 minutes :) So I think the model is too big for the PlayTorch app. What are your thoughts about this? Thanks!
Hi @lavandaboy, the Dropbox download didn't work for me. I uploaded the model as a GitHub asset: https://github.com/raedle/test-some/releases/download/v0.0.2.0/cloth_segm_live.ptl The model downloads, but it fails with an error on loading:
The errors from the lite interpreter runtime can be opaque at times, and loading the model into the lite interpreter runtime in Python can help surface the real error. I created a Google Colab notebook, which downloads the model from GitHub assets and tries to load it into the lite interpreter runtime in Python. Loading the model fails there too, and the error says:
This means that the model was exported with … You can test with the Google Colab notebook, and if the model loads there in the lite interpreter runtime, it should also load in PlayTorch.
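For reference, a minimal sketch of that Python check, using PyTorch's private `_load_for_lite_interpreter` helper (the input shape below is an assumption, not from the thread):

```python
import torch
from torch.jit.mobile import _load_for_lite_interpreter

# Loading surfaces the underlying error if the file is not a valid
# lite-interpreter module (e.g., saved without _save_for_lite_interpreter).
model = _load_for_lite_interpreter("cloth_segm_live.ptl")

# Dummy forward pass to confirm inference works end to end.
out = model(torch.rand(1, 3, 768, 768))
```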
@lavandaboy, the following script exports a TorchScript model for the lite interpreter that loads in PlayTorch and returns inference results (e.g., run this in Google Colab):

```python
# Fetch the cloth-segmentation repo and the pretrained checkpoint.
%cd /content/
!rm -rf cloth-segmentation
!git clone https://github.com/levindabhi/cloth-segmentation.git
%cd cloth-segmentation
!gdown --id 1mhF3yqd7R-Uje092eypktNl-RoZNuiCJ

import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

from utils.saving_utils import load_checkpoint_mgpu
from networks import U2NET

# Load the pretrained weights into a 4-class U2NET (3 input channels).
checkpoint_path = 'cloth_segm_u2net_latest.pth'
net = U2NET(in_ch=3, out_ch=4)
net = load_checkpoint_mgpu(net, checkpoint_path)
net = net.eval()

# Script, optimize for mobile, and save for the lite interpreter.
scripted_model = torch.jit.script(net)
optimized_model = optimize_for_mobile(scripted_model)
optimized_model._save_for_lite_interpreter("cloth_segm_live.ptl")
print("model successfully exported")
```

The output is a tuple of 7 rank-4 tensors (e.g., …).

EDIT: The 4 channels are upper body clothes, lower body clothes, full body clothes, and background.
Source: https://github.com/levindabhi/cloth-segmentation

I also uploaded the exported model to GitHub assets: https://github.com/raedle/test-some/releases/download/v0.0.2.0/cloth_segm_live_cpu.ptl

Hope that helps
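For orientation, here is a small post-processing sketch (not from the original thread): it turns the first output into a per-pixel class map. The channel-to-class mapping follows the EDIT above, but the exact index order is an assumption worth verifying against the repo, as is the 768x768 input size.

```python
# Dummy forward pass; each of the 7 outputs has shape [1, 4, H, W].
with torch.no_grad():
    outputs = net(torch.rand(1, 3, 768, 768))

# Argmax over the 4 class channels of the first output gives a [H, W]
# map of class indices (background plus the 3 clothes classes).
class_map = outputs[0].argmax(dim=1).squeeze(0)
```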
Hi @raedle, thanks a lot for your help. As I understand it, I need to modify the code below to convert the tensor output into an actual mapping of clothes on the image captured by the camera?
@lavandaboy, yes, that's correct! One caveat is that PlayTorch doesn't yet support all the ops used in the post-processing for this cloth segmentation model. The good news is that you can still use the model if you create a model wrapper in Python that post-processes the model output and returns tensors that can be transformed in PlayTorch. I prepared a model that works and created a simple demo in PlayTorch:
RPReplay_Final1659329628.MP4

Note: I haven't optimized anything, and the model inference can take several seconds depending on your device. In my case, I used an iPhone 11 Pro, which takes ~11s for inference and another ~6s to convert masks to images.

Wrapped Model Export

The following shows at a high level what I did to prepare the model for PlayTorch. The full export is in this Google Colab: https://colab.research.google.com/drive/1pTLlcv2fSSQuO6ARdFACWfrttc5lZGQv?usp=sharing#scrollTo=ZwNRQ-38YhXg

Example model wrapper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from typing import List

class ModelWrapper(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def get_tensor(self, output_tensor: torch.Tensor) -> torch.Tensor:
        # Per-pixel class indices: log-softmax over channels, then argmax.
        output_tensor = F.log_softmax(output_tensor, dim=1)
        output_tensor = torch.max(output_tensor, dim=1, keepdim=True)[1]
        # Drop the leading dimensions: [1, 1, H, W] -> [H, W].
        output_tensor = torch.squeeze(output_tensor, dim=0)
        return torch.squeeze(output_tensor, dim=0)

    def forward(self, x: torch.Tensor) -> List[torch.Tensor]:
        return [self.get_tensor(res) for res in self.model(x)]

model = ModelWrapper(net)
```

TorchScript, optimize, and export the wrapped model:

```python
from torch.utils.mobile_optimizer import optimize_for_mobile

scripted_model = torch.jit.script(model)
optimized_model = optimize_for_mobile(scripted_model)
optimized_model._save_for_lite_interpreter("cloth_segm_live_wrapped.ptl")
```

The exported model can be used in PlayTorch and returns a list of 7 tensors, which are image masks with the predicted clothes in white. Hope that helps!
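A quick sanity check of the wrapped export (a sketch; the input size is an assumption):

```python
import torch
from torch.jit.mobile import _load_for_lite_interpreter

# Reload the exported file the way a mobile runtime would, and confirm it
# returns a list of 7 two-dimensional mask tensors.
wrapped = _load_for_lite_interpreter("cloth_segm_live_wrapped.ptl")
masks = wrapped(torch.rand(1, 3, 768, 768))
print(len(masks), masks[0].shape)  # expected: 7 torch.Size([H, W])
```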
Hi @raedle, huge thanks for your help! Everything works on my side. I am actually shocked by how easily PyTorch models can be deployed on mobile devices. I did not know about wrapping the model; now it is much easier to export models. Regarding optimization, I have already contacted the author of the model to convert it using u2net_p (the small 8 MB model). I hope it will work much faster and we won't see a huge drop in quality. As soon as I get an update from him, I will let you know in this thread. Thanks once more, @raedle!
Closing this issue due to inactivity. Please reopen if this is still ongoing!
Hi @raedle, I paused this project for a while, but now I am back to it. I am trying to get the coordinates of the area colored in white from the output image. Are there any solutions in PlayTorch to get these coordinates? Thanks in advance!
@lavandaboy, what are you trying to achieve? Depending on your goal, there might be other ways to approach the problem (e.g., if you want to subtract backgrounds and only keep salient objects). Are you looking for a way to get the bounding box, a convex hull, or something else?
Hi @raedle, thanks for your message. The idea is to get polygons based on the coordinates and then display new styles of clothes instead of the original ones.
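For the bounding-box route, a minimal sketch in plain PyTorch (not a PlayTorch API; extracting actual polygons or a convex hull would need extra tooling such as OpenCV's contour functions, which would have to run outside PlayTorch):

```python
import torch

def mask_bbox(mask: torch.Tensor):
    # Coordinates of all white (nonzero) pixels in a [H, W] mask.
    ys, xs = torch.nonzero(mask > 0, as_tuple=True)
    if ys.numel() == 0:
        return None  # empty mask: no pixels of this clothes class detected
    # Bounding box as (x_min, y_min, x_max, y_max).
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Example with one of the 7 masks returned by the wrapped model
# (hypothetical variable `masks` from the earlier sanity check):
# bbox = mask_bbox(masks[1])
```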
Tutorial Select: Prepare Custom Model

Feedback:
Hi PlayTorch community,

I am trying to implement this model. It is based on U2-net but does clothes segmentation. I converted it in the same way as I did the usual U2-net model, using the tutorial I previously posted.

I am using the U2-net snack as the core, which works perfectly on my device in the PlayTorch app. Then I changed the path to the converted model (https://cdn-128.anonfiles.com/v5l75ez1yc/431fccf2-1658318807/cloth_segm_live.ptl) in ImageMask.ts. When I take a picture, nothing happens; I just see the camera UI.

Here is the link to my Expo snack for the cloth segmentation model.

I would appreciate any help with this issue.