Hello, I noticed that in the stefann.py file you provide three methods for color transfer: transfer_color_pal, transfer_color_max, and the colornet implementation. I uncommented lines 480-487 in stefann.py to use the colornet model. When I tested the application on images from the 'sample_images' folder using the colornet method with the provided pretrained weights, it did not work well and produced blurry and inconsistent results. The colornet output did not match the results given in the 'editing_examples' folder.
Given Result (Left - Original Image, Right - stefann generated image)
Output using colornet
There are many more cases where the colornet model is not performing as expected. Could you please help me with this?
Your observation is correct. This happens due to inadequate domain adaptation: colornet is trained with only 800 synthetic color filters, which is merely ~0.005% of the ~16.8 million possibilities. While the results on the synthetic validation and test sets are quite appealing, the model struggles to generalize to real scene text on many occasions. This limitation is more prominent for solid colors than for gradient colors. For most scene text examples, the color distribution over a character is homogeneous (solid). For this reason, the demo application uses the color with maximum occurrence.
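The maximum-occurrence strategy can be sketched roughly as follows. This is a hypothetical re-implementation for illustration, not the actual code from stefann.py: it counts the exact RGB triples inside the source character mask and paints the target character with the most frequent one.

```python
import numpy as np

def transfer_color_max_sketch(src_rgb, src_mask, dst_mask):
    """Paint the target character region with the most frequent color
    of the source character region (illustrative sketch only)."""
    # collect the pixels belonging to the source character
    pixels = src_rgb[src_mask > 0]  # shape (N, 3)
    # find the most frequently occurring exact RGB triple
    colors, counts = np.unique(pixels.reshape(-1, 3), axis=0, return_counts=True)
    dominant = colors[np.argmax(counts)]
    # fill the target character mask with that single dominant color
    out = np.zeros((*dst_mask.shape, 3), dtype=src_rgb.dtype)
    out[dst_mask > 0] = dominant
    return out
```

Because it emits a single flat color, this scheme is robust to noise near character edges, but by construction it discards any gradient present in the source glyph.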
Here is a basic comparison among different color transfer schemes on a synthetic image with gradient colors:
Original image:
Edited images:
Note that colornet can reproduce color gradients but suffers from distortion and bleeding near edges. transfer_color_pal produces sharper edges but suffers from dirty patches created by direct interpolation. transfer_color_max produces sharper edges and cleaner images but completely ignores gradients. There is a trade-off in selecting any one scheme over another.
For most real scene text images, we found transfer_color_max to be the most appropriate.
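For contrast with the maximum-occurrence scheme, a palette-interpolation transfer in the spirit of transfer_color_pal can be sketched as below. Again, this is an illustrative approximation, not the code from stefann.py: it builds a row-wise palette from the source character and linearly resamples it over the rows of the target character, which preserves vertical gradients but can produce the interpolation artifacts mentioned above.

```python
import numpy as np

def transfer_color_pal_sketch(src_rgb, src_mask, dst_mask):
    """Row-wise palette transfer (illustrative sketch): take the mean
    source color of each character row and interpolate the resulting
    palette over the rows of the target character."""
    # one palette entry per non-empty row of the source character
    src_rows = [r for r in range(src_mask.shape[0]) if src_mask[r].any()]
    palette = np.array([src_rgb[r][src_mask[r] > 0].mean(axis=0)
                        for r in src_rows])
    out = np.zeros((*dst_mask.shape, 3), dtype=np.float64)
    dst_rows = [r for r in range(dst_mask.shape[0]) if dst_mask[r].any()]
    # resample the palette linearly over the target rows
    idx = np.linspace(0, len(palette) - 1, num=len(dst_rows))
    for r, i in zip(dst_rows, idx):
        lo, hi = int(np.floor(i)), int(np.ceil(i))
        t = i - lo
        out[r][dst_mask[r] > 0] = (1 - t) * palette[lo] + t * palette[hi]
    return out.astype(src_rgb.dtype)
```

On a character with a vertical gradient this reproduces the gradient row by row, but because each row gets one interpolated color, anti-aliased or noisy edge pixels pull the row means around and create the dirty patches described above.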