The results of LFFont are not satisfactory on a custom Korean dataset #3
Comments
Hi, sorry for the late reply.
This did the trick for me. Setting (
Alright. Means setting
Thanks. I am training the network now and hope to have satisfactory results this time.
So, I finished the training, but the results are not satisfactory. I can see broken content.
As our experimental results show, our model may not generate perfect content.
It seems like there were some problems during the training. These results are much worse than ours.
Below is the LFFont phase 1 200k-iteration image, followed by the configurations I have used for training the LFFont model.
I cannot find any problem in the configuration file. How many fonts did you use for the training?
I used 60 font files for training, with the cfgs I mentioned. With the same cfgs, FUNIT did a good job. I am not sure about the issue. I can train it more and let you know if it gets better.
OK, thanks. I will check the code again.
Thanks for this tip. |
Closing the issue, assuming the answer resolves the problem.
I am trying to train LFFont on a custom Korean dataset consisting of 60 printed font styles.
For the phase 1 training I use all the default configurations of data (cfgs/data) and LFFont (cfgs/LF/p1), except the batch size, which is set to 4, and the number of workers, which is set to 8, on a single 3080 Ti GPU with 12 GB. The training goes normally for 200k iterations. The results are OK, considering that the phase 2 training will further improve them.

When I train the model for phase 2, I get an OOM error with the default configurations, even though the default batch size is 1. Finally, with some alterations to the p2 default configuration file (default.yaml), where I set num_workers to 2 and emb_dim to 6, I can train the model. However, the training results are really bad and do not seem to get better up to 200k iterations (in the paper, I think phase 2 was trained for 50k iterations for Korean characters). I personally assume it is probably down to the embedding dimension I adopted (6); however, the paper shows that reducing it from 8 to 6 does not affect performance much.
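For reference, here is a minimal sketch of the overrides described above; the exact key names and nesting in the LFFont cfgs may differ, so treat this as an illustration rather than a drop-in file:

```yaml
# Hypothetical sketch of the config overrides described above.
# Key names and placement are assumptions; check the actual cfgs in the repo.

# Phase 1 (cfgs/LF/p1), single 3080 Ti with 12 GB:
#   batch_size: 4
#   num_workers: 8

# Phase 2 (cfgs/LF/p2 default.yaml), after hitting OOM with the defaults:
batch_size: 1      # default for phase 2
num_workers: 2     # lowered
emb_dim: 6         # lowered from 8
```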
So, I tried to train p2 with multiple GPUs by only setting use_ddp to True in p2/train.yaml and gpus_per_node to 3 (in the train_LF file), but I still get OOM (see the sketch after the questions below).

Could you please help me with the following:
1. Why do I still get OOM with use_ddp: True and gpus_per_node=3?
2. What values should I use for emb_dim and num_workers?
3. Should --resume be set to the value of the p1 last checkpoint?

BTW, I trained FUNIT with your provided source on the same dataset and it performs well, so it is definitely not down to the dataset I am using.
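For clarity, here is a sketch of the multi-GPU settings I tried; the names reflect what I changed, and the actual layout of p2/train.yaml may differ:

```yaml
# Sketch of the DDP changes described above (names/placement as I set them;
# the repo's actual config layout may differ).
use_ddp: True    # added to p2/train.yaml
# gpus_per_node was set to 3 directly in the train_LF file, not in the yaml.
```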