Testing on Custom Image #36
Comments
Thanks a lot @cuiaiyu for your kind and continuous support. Testing completed successfully.
Hello @cuiaiyu, I'm really sorry to bother you again. I think there is some issue with the keypoints generated using OpenPose: the human parsing result from SCHP for my custom image is perfectly fine, but the final try-on result for the custom image is not acceptable at all, even though it is great for the preprocessed demo data you provided. Kindly help me resolve this issue.
It would be easier to identify the problem if you could show example images of (pose, parse, img, generated_img).
I want to thank you for your great work. Here is the info you asked for. Source image:
Target image:
Thanks in advance.
@m-h34 In the given example, the pose figure does not seem aligned with the input image; it looks shifted a little to the left. If that is the case, please check the input of the pose conversion (coord -> heatmap) function below.

import json
import numpy as np
import torch
from utils import pose_utils  # assumed import path; use wherever cords_to_map lives in your copy of the repo

def load_pose_from_json(pose_json, target_size=(256,256), orig_size=(256,256)):
    with open(pose_json, 'r') as f:
        anno = json.load(f)
    # no person detected: return an empty 18-channel heatmap
    if len(anno['people']) < 1:
        a, b = target_size
        return torch.zeros((18, a, b))
    anno = list(anno['people'][0]['pose_keypoints_2d'])
    # OpenPose stores (x, y, confidence) triples; anno[1::3] are the OpenPose y values,
    # which go first because the (row, col) order is what cords_to_map expects
    x = np.array(anno[1::3])
    y = np.array(anno[::3])
    # drop the MidHip joint (index 8 in BODY_25) to match the 18-keypoint layout
    x[8:-1] = x[9:]
    y[8:-1] = y[9:]
    # mark undetected joints as -1
    x[x == 0] = -1
    y[y == 0] = -1
    coord = np.concatenate([x[:, None], y[:, None]], -1)
    pose = pose_utils.cords_to_map(coord, target_size, orig_size)
    pose = np.transpose(pose, (2, 0, 1))
    pose = torch.Tensor(pose)
    return pose[:18]
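For reference, a minimal usage sketch (the JSON path and sizes below are placeholders, not values from this thread; orig_size should be the resolution OpenPose actually ran on, and target_size the resolution the model expects):

pose = load_pose_from_json('custom_image_keypoints.json',
                           target_size=(256, 176),   # assumed model input size (height, width)
                           orig_size=(1101, 750))    # assumed original image size (height, width)
print(pose.shape)  # torch.Size([18, 256, 176])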
@m-h34 The hands issue is probably expected, as the training data doesn't specify finger joints in the pose, and this target pose is somewhat of a "rare pose" w.r.t. the training data. Besides, 1) the missing-sleeves problem could be an issue of overfitting. The checkpoints
Ok, I got it |
The source image for pose was taken as: |
If your keypoints are stored as a numpy array of x, y coords, you can convert keypoints_25 to keypoints_18 with index = [0,1,2,3,4,5,6,7,9,10,11,12,13,14,15,16,17,18] and keypoints_18 = keypoints_25[index]. You can also check issue #21, option 2. The return value of this load_pose_from_json function should then be passed to the pose variable in the load_img function, which is in the setup cell of the demo notebook. A sketch of the index-based conversion is shown below.
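A minimal sketch of that conversion, assuming keypoints_25 is a (25, 2) numpy array of (x, y) coordinates in OpenPose BODY_25 order (the variable names are placeholders):

import numpy as np

keypoints_25 = np.zeros((25, 2))  # placeholder: your (x, y) coords in BODY_25 order

# keep the 18 joints shared with the COCO-style layout
# (drops MidHip at index 8 and the foot keypoints 19-24)
index = [0, 1, 2, 3, 4, 5, 6, 7, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]
keypoints_18 = keypoints_25[index]
print(keypoints_18.shape)  # (18, 2)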
Thanks a lot @cuiaiyu. I was also having a mismatch between the target and original image sizes. I finally resolved it and got the output by following your note.
Hi @cuiaiyu , Thanks a lot for this great work.
I'm working on Cloth Virtual Try-On as my Final Year Project.
The demo worked fine for me but currently I'm facing some issues in performing virtual try-on on my own image.
Steps I followed:
Resized my full-size .jpg image to 750x1101 pixels (as all the images in the test folder have this dimension) and added it to the test folder.
Ran OpenPose on the image and obtained the keypoints in a .json file, manually separated the x and y keypoints from the (x0, y0, c0, x1, y1, c1, ...) list, and added the file name along with the 2D pose keypoints (y and x, respectively) to fasion-annotation-test.csv (see the sketch after these steps).
Obtained the human parsing using SCHP and added it to testM_lip.
Added the image name to test.lip and to standard_test_anns.txt under print, just for testing.
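A rough sketch of the keypoint step above, assuming fasion-annotation-test.csv keeps the colon-separated name: keypoints_y: keypoints_x layout of its existing rows (the file names, separator, and column order here are assumptions, so compare against a row that is already in the file):

import json
import numpy as np

# hypothetical OpenPose output for the custom image
with open('custom_image_keypoints.json') as f:
    people = json.load(f)['people']
kp = np.array(people[0]['pose_keypoints_2d']).reshape(-1, 3)  # (25, 3): x, y, confidence

# BODY_25 -> 18-keypoint layout (drop MidHip and feet), mark missing joints as -1
index = [0, 1, 2, 3, 4, 5, 6, 7, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]
x = kp[index, 0].astype(int)
y = kp[index, 1].astype(int)
x[x == 0] = -1
y[y == 0] = -1

# append one row: name, y keypoints, x keypoints (assumed column order)
with open('fasion-annotation-test.csv', 'a') as f:
    f.write(f"custom_image.jpg: {y.tolist()}: {x.tolist()}\n")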
After that I just ran demo.ipynb and got the following error in the data-loading step.
I have tried a lot to resolve this error but have not been able to, and I'm approaching my deadline. Kindly help me test the model on a custom image.
Also, I'm unable to understand the use of fasion-pairs-test.csv when running the demo.
Hoping for your kind reply.
Thanks a lot Cuiaiyu !!!
Originally posted by @Djvnit in #21 (comment)