Some test results on images in the wild #7
Joining the question about the last two images (which is the most common use case). Any reason for it to output this result?
I faced the same issue. Is there any progress?
@Tetsujinfr Did you manage to work around this?
Joining the discussion, is there any progress?
No, I did not have time to investigate it further. I do not have the bandwidth to re-train or really deep-dive on this. I am not sure if I did something wrong, but I do not understand why the 3 test images in the repo (teddy, drop and spider web) do not render exactly as the provided results in the repo. I suspect those repo renders might have been done with the trained model which has been lost, hence the differences when using the new pre-trained model on my machine, but this is just an assumption on my end.
Hi there @Tetsujinfr, thanks for showing interest in our research and project!

First, the results for the 3 test images shown in the repo (teddy, drop and spider web) were produced with our released pre-trained model. Following the instructions in Inference Code - Test on your sample images should reproduce exactly the same results. I don't know why you got different results; perhaps you are using a different version of PyTorch? Please note that my test environment is CUDA 10.2, PyTorch 1.7.1, and Python 3.7.7. Please also make sure the configuration is set up correctly.

Second, for the other two images: I took screenshots of your images, saved them as .jpg, and ran the inference code on my side, and I actually get some good results. The input image size/format, the test strategy I used, and the results are as follows.

Image 1, input size: 1408x2868, image format: .jpg, test_choice: RESIZE, test resize ratio: 1/4.

Image 2, input size: 1500x746, image format: .jpg, test_choice: HYBRID, test resize ratio: global_ratio=1/4, local_ratio=1/2.

Third, please note that since our model has been trained only on a limited synthetic matting dataset, the performance may vary when adapting to other images, e.g., low-resolution images, images where the salient object occupies a large portion of the frame, etc. To get better results for such images, you can try:
Let me know if you still encounter problems, thanks! Cheers,
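The RESIZE and HYBRID test strategies and ratios mentioned above can be sketched roughly as follows. This is a minimal, hypothetical illustration: the function names and the snap-to-multiple-of-32 alignment are my assumptions for the sketch, not the repo's actual API.

```python
# Hypothetical sketch of the RESIZE / HYBRID test strategies from the
# comment above. Names (resize_dims, plan_hybrid) and the multiple-of-32
# alignment are illustrative assumptions, not the repo's real code.

def resize_dims(h, w, ratio, align=32):
    """Scale (h, w) by `ratio`, snapping each side to a multiple of
    `align`, as many matting backbones require."""
    nh = max(align, int(round(h * ratio / align)) * align)
    nw = max(align, int(round(w * ratio / align)) * align)
    return nh, nw

def plan_hybrid(h, w, global_ratio=1/4, local_ratio=1/2):
    """HYBRID strategy: a low-resolution global pass for semantics plus
    a higher-resolution local pass for detail, fused afterwards."""
    return {
        "global": resize_dims(h, w, global_ratio),
        "local": resize_dims(h, w, local_ratio),
    }

# Image 1 above (1408x2868) with RESIZE at 1/4:
print(resize_dims(1408, 2868, 1/4))   # -> (352, 704)

# Image 2 above (1500x746) with HYBRID, global 1/4 / local 1/2:
print(plan_hybrid(1500, 746))
```

The point of the sketch is the trade-off the maintainer describes: a smaller global pass keeps the whole object in the network's receptive field, while the local pass preserves fine detail.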
@JizhiziLi Thanks for your explanation of testing non-AIM-500 images. If I want to run inference on my own images, do I just change the dataset_choice from AIM_500 to SAMPLES?
Hi @bruinxiong, yes, changing the data_choice from AIM_500 to SAMPLES will switch inference to your own sample images.
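As a toy illustration of the switch discussed above (note the thread spells the key both `dataset_choice` and `data_choice`; the dict-based config here is purely hypothetical, not the repo's actual configuration mechanism):

```python
# Purely illustrative config switch; the real repo's configuration
# structure may differ. The key name and values are taken from the thread.
AIM_CONFIG = {
    "dataset_choice": "AIM_500",  # benchmark test set
    "test_choice": "HYBRID",
}

def use_own_samples(cfg):
    """Return a copy of the config pointing inference at your own images."""
    out = dict(cfg)
    out["dataset_choice"] = "SAMPLES"
    return out

print(use_own_samples(AIM_CONFIG)["dataset_choice"])  # SAMPLES
```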
@JizhiziLi, I just wanted to say thanks for investigating this issue. I saw your analysis on issue #10; I think it explains everything.
First: thanks for this amazing repository and research and for sharing it.
So I did test some images with the code/pre-trained model you released. I share some results below.
I have some very good results and some more disappointing ones (I am unreasonably demanding, sorry for that).
One question: I get somewhat different results on the 3 default sample images from the repo (i.e. when running inference on 1.png, 2.png and 3.png). Is that due to differences between the model trained for the paper and the newly trained model?
Another question: do you know why sometimes there is 100% background detection, e.g. with the last two images below?
I am on CUDA 10.2 and PyTorch 1.9, FYI.
thanks