
Some test results on images in the wild #7

Closed · Tetsujinfr opened this issue Oct 3, 2021 · 9 comments

Comments

@Tetsujinfr

First: thanks for this amazing repository and research, and for sharing it.

I tested some images with the code/pre-trained model you released and share some results below. I got some very good results and some more disappointing ones (I am being unreasonably demanding, sorry for that).

One question: I get slightly different results on the 3 default sample images from the repo (i.e. when running inference on 1.png, 2.png and 3.png). Is that due to differences between the model used in the paper and the newly trained model?

Another question: do you know why there is sometimes 100% background detection, e.g. with the last two images below?

I am on CUDA 10.2 and PyTorch 1.9, FYI.
thanks
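A quick way to print the exact PyTorch and CUDA versions when comparing environments (these are standard PyTorch attributes, not something specific to this repo):

```python
# Print the installed PyTorch version, the CUDA version it was built against,
# and whether a GPU is actually visible.
import torch

print(torch.__version__)          # e.g. 1.9.0
print(torch.version.cuda)         # e.g. 10.2
print(torch.cuda.is_available())  # True if a usable GPU is detected
```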

[Attached screenshots: 20211003_015310, 20211003_015348, 20211003_015529, 20211003_015709, 20211003_015209]

@roimulia2

Joining the question about the last two images (which are the most common use case). Any reason for the model to output this result?

@guomingjin

I am facing the same thing. Is there any progress?

@roimulia2

@Tetsujinfr Did you manage to work around this?

@eilon-sk

eilon-sk commented Oct 6, 2021

Joining the discussion, is there any progress?
P.S. Thank you for this amazing repository and research!

@Tetsujinfr
Author

@Tetsujinfr Did you manage to work around this?

No, I did not have time to investigate it further; I do not have the bandwidth to re-train or really deep-dive into this. I am not sure whether I did something wrong, but I do not understand why the 3 test images in the repo (teddy, drop and spider web) do not render exactly the same as the provided results in the repo. I suspect those repo renders might have been produced with the originally trained model, which has been lost, hence the differences when using the new pre-trained model on my machine, but this is just an assumption on my end.
Would love to hear from the repo owner.

@JizhiziLi
Owner

Hi there @Tetsujinfr,

Thanks for showing interest in our research and project!

First, for the 3 test images shown in the repo (teddy, drop and spider web), the results were generated with our released pre-trained model. Following the instructions in Inference Code - Test on your sample images should give exactly the same results. I don't know why you got different results; perhaps because of a different version of PyTorch? Please note that my test environment is CUDA 10.2, PyTorch 1.7.1 and Python 3.7.7. Please also make sure to set test_choice=HYBRID in core/scripts/test_samples.sh, and global_ratio=1/4, local_ratio=1/2 in core/test.py.

Second, for the other two images: I took screenshots of your images, converted them to .jpg, and ran the inference code on my side; I actually get some good results. I show the input image size/format, the test strategy I used, and the results below.

Image 1, input size: 1408x2868, image format: .jpg, test_choice: RESIZE, test resize ratio: 1/4:

Input image: [attached]

Test result: [attached]

Image 2, input size: 1500x746, image format: .jpg, test_choice: HYBRID, test resize ratio: global_ratio=1/4, local_ratio=1/2:

Input image: [attached]

Test result: [attached]

Third, please note that since our model has been trained only on a limited synthetic matting dataset, performance may vary on other kinds of images, e.g. low-resolution images, or images where the salient object occupies a large portion of the frame. To get better results on such images, you can try the following:

  1. modify your sample images' resolution and format to be similar to our sample images (e.g. shorter side of about 1080 pixels, .jpg format) -- see the preprocessing sketch after this list;
  2. use a different test_choice in core/scripts/test_samples.sh and adjust the resize ratios (global_ratio, local_ratio, resize_h, resize_w) in core/test.py to fit your images;
  3. re-train the model on another training set; the training code will be released once we finish cleaning up the codebase.
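For item 1, a minimal preprocessing sketch (not part of the repo; the target size, paths and function name are illustrative assumptions) that resizes an image so its shorter side is about 1080 px and saves it as JPEG before inference:

```python
# Resize so the shorter side is ~1080 px and save as JPEG.
# prepare_sample() and the paths below are hypothetical examples.
from PIL import Image

def prepare_sample(src_path: str, dst_path: str, short_side: int = 1080) -> None:
    img = Image.open(src_path).convert("RGB")
    w, h = img.size
    scale = short_side / min(w, h)
    new_size = (round(w * scale), round(h * scale))
    img.resize(new_size, Image.BILINEAR).save(dst_path, "JPEG", quality=95)

prepare_sample("my_photo.png", "samples/my_photo.jpg")
```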

Let me know if you still encounter problems, thanks!

Cheers,
Jizhizi Li

@bruinxiong

@JizhiziLi Thanks for your explanation of testing non-AIM-500 images. If I want to run inference on my own images, do I just change dataset_choice from AIM_500 to SAMPLES?

@JizhiziLi
Owner

Hi @bruinxiong, yes, changing dataset_choice from AIM_500 to SAMPLES will switch inference to your own sample images.

@Tetsujinfr
Author

@JizhiziLi , I just wanted to say thanks for investigating this issue. I saw your analysis on issue #10, I think it explains everything.
