
I have a question about input_img size. #19

Closed

clscy opened this issue Feb 19, 2021 · 2 comments

clscy commented Feb 19, 2021

Hi, if I increase the input size from 128x128 to 256x256, can the project generate better-quality font images than at 128x128?
Thank you.

@SanghyukChun (Collaborator) commented:

I don't know how you define "qualitative" exactly, but I presume that you want to generate glyphs with "better" visual quality, e.g., fewer artifacts.

Also, I don't know how you define the input size. There are two possible cases:

Case 1. Train with 128 x 128, test with 256 x 256
Generally, it will not work. I presume that your question means case 2.

Case 2. Train with 256 x 256, test with 256 x 256
We cannot guarantee anything for this.
However, I think it does not bring meaningful advantages, while the memory and computation costs increase substantially.
Note that by doubling the input side length (4 times as many pixels), you will consume more than 4 times the memory and computation resources.

To sum up, I don't think increasing the input size will be helpful in your case.
However, in some applications such as image classification and object detection, a larger input size often brings higher performance (e.g., accuracy). So, if you want to check, I recommend training your own model with 256 x 256 inputs.
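In case it helps, here is a minimal sketch (not this repository's actual data pipeline) of how one might switch a torchvision-based glyph preprocessing pipeline from 128 x 128 to 256 x 256 inputs; the transform calls are standard torchvision, but the pipeline structure and normalization values are placeholders:

```python
# Hypothetical example: change the input resolution of a glyph preprocessing
# pipeline from 128x128 to 256x256. Doubling the side length quadruples the
# pixel count, so memory and compute typically grow by more than 4x.
from torchvision import transforms

IMG_SIZE = 256  # previously 128

glyph_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),   # glyph images are single-channel
    transforms.Resize((IMG_SIZE, IMG_SIZE)),       # resize to the new resolution
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5], std=[0.5]),   # placeholder normalization
])
```

Remember that any resolution-dependent parts of the model (e.g., the number of downsampling layers or the generator's output size) would also need to be adjusted, since they are usually tied to the training resolution.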


clscy commented Feb 25, 2021

OK, thank you very much.

clscy closed this as completed on Feb 25, 2021