How to transfer my handwriting font into another style #12

Closed
xsmxsm opened this issue May 8, 2017 · 36 comments


xsmxsm commented May 8, 2017

When running this step:
python font2img.py --src_font=src.ttf --dst_font=tgt.otf
with two downloaded fonts (a ttf and an otf), the characters in the 1000 generated images are all random. Can I choose the characters I want to generate? Also, can this model convert pictures of my own handwritten characters into a style I expect? Thank you!

In addition, when I run train.py it always fails like this:
2017-05-08 14:20:19.059104: I tensorflow/core/common_runtime/bfc_allocator.cc:700] Sum Total of in-use chunks: 1.41GiB
2017-05-08 14:20:19.059115: I tensorflow/core/common_runtime/bfc_allocator.cc:702] Stats:
Limit: 1605828608
InUse: 1517696256
MaxInUse: 1605828608
NumAllocs: 353
MaxAllocSize: 134217728

2017-05-08 14:20:19.059144: W tensorflow/core/common_runtime/bfc_allocator.cc:277] xx*****************************************xx__*******************xx
2017-05-08 14:20:19.059163: W tensorflow/core/framework/op_kernel.cc:1152] Resource exhausted: OOM when allocating tensor with shape[5,5,512,1024]
Traceback (most recent call last):
File "train.py", line 62, in
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "train.py", line 58, in main
sample_steps=args.sample_steps, checkpoint_steps=args.checkpoint_steps)
File "/home/xsm/zi2zi-master/model/unet.py", line 465, in train
tf.global_variables_initializer().run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1552, in run
_run_using_default_session(self, feed_dict, self.graph, session)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 3776, in _run_using_default_session
session.run(operation, feed_dict)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 778, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 982, in _run
feed_dict_string, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1032, in _do_run
target_list, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1052, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[512]
[[Node: generator/g_d4_deconv/b/Assign = Assign[T=DT_FLOAT, _class=["loc:@generator/g_d4_deconv/b"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/gpu:0"](generator/g_d4_deconv/b, generator/g_d4_deconv/b/Initializer/Const)]]

Caused by op u'generator/g_d4_deconv/b/Assign', defined at:
File "train.py", line 62, in
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "train.py", line 51, in main
model.build_model(is_training=True, inst_norm=args.inst_norm)
File "/home/xsm/zi2zi-master/model/unet.py", line 167, in build_model
inst_norm=inst_norm)
File "/home/xsm/zi2zi-master/model/unet.py", line 133, in generator
output = self.decoder(embedded, enc_layers, embedding_ids, inst_norm, is_training=is_training, reuse=reuse)
File "/home/xsm/zi2zi-master/model/unet.py", line 119, in decoder
d4 = decode_layer(d3, s16, self.generator_dim * 8, layer=4, enc_layer=encoding_layers["e4"])
File "/home/xsm/zi2zi-master/model/unet.py", line 98, in decode_layer
output_width, output_filters], scope="g_d%d_deconv" % layer)
File "/home/xsm/zi2zi-master/model/ops.py", line 35, in deconv2d
biases = tf.get_variable('b', [output_shape[-1]], initializer=tf.constant_initializer(0.0))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 1049, in get_variable
use_resource=use_resource, custom_getter=custom_getter)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 948, in get_variable
use_resource=use_resource, custom_getter=custom_getter)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 356, in get_variable
validate_shape=validate_shape, use_resource=use_resource)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 341, in _true_getter
use_resource=use_resource)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 714, in _get_single_variable
validate_shape=validate_shape)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variables.py", line 197, in init
expected_shape=expected_shape)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variables.py", line 306, in _init_from_args
validate_shape=validate_shape).op
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/state_ops.py", line 270, in assign
validate_shape=validate_shape)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_state_ops.py", line 47, in assign
use_locking=use_locking, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 768, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2336, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1228, in init
self._traceback = _extract_stack()

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[512]
[[Node: generator/g_d4_deconv/b/Assign = Assign[T=DT_FLOAT, _class=["loc:@generator/g_d4_deconv/b"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/gpu:0"](generator/g_d4_deconv/b, generator/g_d4_deconv/b/Initializer/Const)]]


xsmxsm commented May 8, 2017

2017-05-08 14:38:43.505012: I tensorflow/core/common_runtime/gpu/gpu_device.cc:887] Found device 0 with properties:
name: GeForce 810A
major: 3 minor: 5 memoryClockRate (GHz) 0.758
pciBusID 0000:01:00.0
Total memory: 1.96GiB
Free memory: 1.69GiB
2017-05-08 14:38:43.505032: I tensorflow/core/common_runtime/gpu/gpu_device.cc:908] DMA: 0
2017-05-08 14:38:43.505038: I tensorflow/core/common_runtime/gpu/gpu_device.cc:918] 0: Y
2017-05-08 14:38:43.505055: I tensorflow/core/common_runtime/gpu/gpu_device.cc:977] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce 810A, pci bus id: 0000:01:00.0)
2017-05-08 14:39:00.201712: W tensorflow/core/common_runtime/bfc_allocator.cc:273] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.0KiB. Current allocation summary follows.


windid commented May 8, 2017

Your GPU does not have enough memory. Either switch to a bigger graphics card or shrink the model.


xsmxsm commented May 8, 2017

Does "model size" refer to the --sample_count=1000 value? The author trained on 29,000 images, right? I changed the value to 500 or 300 and the problem above still occurs... is that the cause? Also, running train.py is very slow; it takes more than twenty minutes every time.
Could you give me your contact information? QQ, WeChat, or email are all fine. Thank you!


kaonashi-tyc commented May 9, 2017

@xsmxsm

  1. You can point --charset to a file which contains, on a single line, all the characters you want to generate images for. Note: do not set filter=1 or shuffle=1 in this case.
  2. If you wish to use your own handwriting, you need to write your own script to crop it into 256x256 images and then concatenate each with the corresponding source character (a minimal sketch follows this list). Once that is done, run the package.py script to generate the training data.
  3. When talking about model size, limiting the number of characters will not help. You can try two things: lower the batch_size, and reduce embedding_num to 1. The latter takes up a large portion of GPU memory because of the fully connected layer in the discriminator, so if you only have one font, set it to 1. However, your GPU is simply not powerful enough in my opinion; I would recommend trying at least a GTX 9X0 series card, otherwise training will not complete in a reasonable time.
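To make point 2 concrete, here is a minimal sketch of the pairing step. It assumes Pillow is installed and that, as in the images font2img.py produces, the target style sits on the left half and the source font on the right (check one of your generated samples to confirm the order); all file names here are hypothetical.

import os
from PIL import Image

def make_pair(handwriting_path, source_path, out_path, size=256):
    # Resize both glyph images to size x size, then place them side by
    # side on a white 2*size x size canvas, mimicking font2img.py output.
    hw = Image.open(handwriting_path).convert("L").resize((size, size))
    src = Image.open(source_path).convert("L").resize((size, size))
    pair = Image.new("L", (size * 2, size), 255)
    pair.paste(hw, (0, 0))      # target style (handwriting) on the left
    pair.paste(src, (size, 0))  # source font on the right
    pair.save(out_path)

# package.py reads the label from the "<label>_" file-name prefix,
# so name the output accordingly:
make_pair("hw_0001.png", "src_0001.png", os.path.join("dir", "0_0001.jpg"))

With such pairs generated for every character, package.py should produce train.obj/val.obj as usual.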


xsmxsm commented May 11, 2017

  1. Sorry! I changed batch_size to 5 and embedding_num to 1, but it does not work.
  2. Sorry, I cannot understand your points 1 and 2.
    By "use my own handwriting" I do not mean training on my handwriting.
    I just want to use it as a test image, like this:
    I write a character such as "大" on paper and take a picture of it; how can I transfer it into a style I want? Or, I type a character such as "大" and save it in a .txt; how can I change it into a style I want?
    Thank you very much!

To put it another way: I want to take pictures of my handwritten Chinese characters and convert them into the style I want. Or, like the txt files under charsets in your Rewrite project, type in some characters and then convert them into the style I want.


windid commented May 11, 2017

I tried it; lowering batch_size and embedding_num barely reduces GPU memory usage, which sits at 4.3 GB. I am not familiar with this model and am still studying it, so I cannot help you there.

However, infer only needs about 1.2 GB of GPU memory, so you could download the author's pretrained model and try infer.py. Or you could train your own model on an AWS or Google Cloud GPU instance; it does not cost much.

As for converting the style of handwritten characters: that is an interesting idea that had occurred to me too, e.g. turning handwritten regular script into handwritten cursive while keeping "certain features" of the original handwriting. The problem is letting the machine know which features to keep. This model apparently cannot do that yet.

If you wanted to build it, you would first need to collect a fair amount of data (various scripts handwritten by the same person, such as cursive, regular, and clerical) and then think about how to adapt the model.


xsmxsm commented May 15, 2017

When I use font2img.py, it generates many images with random characters in sample_dir. How can I make it generate the characters I want? And when I use infer.py, the characters in the generated images are the same ones packed into save_dir, right?


kaonashi-tyc commented May 15, 2017

  1. As I answered in my previous post: point charset to a one-line file that contains all the characters you want, like 'ABCDEF\n', then run font2img.py with filter=0 and shuffle=0 (see the example below this list).
  2. Correct.
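For instance, the one-line charset file can be created like this (the file name 1.txt and the characters are just examples; note the single line plus trailing newline):

# -*- coding: utf-8 -*-
import io

# Put every character you want generated on one line.
with io.open("1.txt", "w", encoding="utf-8") as f:
    f.write(u"千山鸟飞绝万径人踪灭孤舟蓑笠翁独钓寒江雪\n")

Then pass it to font2img.py via --charset=1.txt together with --filter=0 --shuffle=0.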


xsmxsm commented May 15, 2017

  1. I made a txt with "千山鸟飞绝万径人踪灭孤舟蓑笠翁独钓寒江雪" in it. After running font2img.py and package.py, the order of the characters is not what I want.
    When I use infer.py, it generates an image in save_dir, but the order of the characters is also not what I want (it changes to "万千钓山绝江飞雪孤翁笠鸟寒人独灭蓑踪径万").
    How do you keep them in the order of the poem?

  2. In infer.py, is there a difference between --source_obj=train.obj and val.obj as generated by package.py?
    And what does --embedding_ids mean? I changed it from 1 to 6 and it generates images in different styles, but I cannot see the connection between them...

Thank you very much.


xsmxsm commented May 16, 2017

I downloaded your pretrained model.

  1. python font2img.py --src_font=src.ttf --dst_font=tgt.otf --charset=1.txt --sample_count=20 --sample_dir=dir --label=0 --filter=0 --shuffle=0
    This generates images of the characters in 1.txt, and each image contains the character in two styles (one matching src.ttf, the other matching tgt.otf), right?

  2. python package.py --dir=dir --save_dir=save_dir --split_ratio=0.1
    This generates two .obj files in save_dir.

  3. python infer.py --model_dir=font27 --batch_size=5 --source_obj=save_dir/train.obj --embedding_ids=4 --save_dir=save_dir
    This generates one image in save_dir. Its characters are the same as in 1.txt but not in the order I want, and the style changes with different embedding_ids, yet it looks like neither src.ttf nor tgt.otf. Why?

Thanks a lot!

kaonashi-tyc (Owner) commented:

@xsmxsm
In python package.py --dir=dir --save_dir=save_dir --split_ratio=0.1, please change to --split_ratio=0.0 so that all the characters are packed into train.obj.


xsmxsm commented May 17, 2017

Thank you very much!
But I would like to know: how can I make infer.py generate the characters in the same order as in the 1.txt I entered? And what does embedding_ids mean?

Thanks a lot.

kaonashi-tyc (Owner) commented:

@xsmxsm I don't understand your first question; the order should be the same as the order in which the files are processed when you run package.py.

embedding_ids is the id of the corresponding font, so each number in [1, 27] selects one font style.
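For intuition only (this is not the repository's exact code): each id simply indexes a learned table of style embeddings, roughly like this TF 1.x sketch with hypothetical sizes:

import tensorflow as tf

embedding_num, embedding_dim = 27, 128  # hypothetical sizes
embeddings = tf.get_variable("style_emb", [embedding_num, embedding_dim])
ids = tf.placeholder(tf.int64, shape=[None])     # e.g. all 4 for --embedding_ids=4
style = tf.nn.embedding_lookup(embeddings, ids)  # one style vector per image

The generator is conditioned on the looked-up style vector, which is why changing --embedding_ids changes the output style.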


xsmxsm commented May 17, 2017

I entered "千山鸟飞绝万径人踪灭孤舟蓑笠翁独钓寒江雪" in 1.txt.
Running font2img.py generates 20 images in dir: "千" is No. 1, "山" is No. 2, and so on.
Running package.py, it processes "万" first, not "千". Why?
Running python infer.py --model_dir=font27 --batch_size=5 --source_obj=save_dir/train.obj --embedding_ids=4 --save_dir=save_dir
generates an image with
万 江 翁 独
千 飞 笠 灭
钓 舟 鸟 蓑
山 雪 寒 踪
绝 孤 人 径
so the order is not the same as in 1.txt.

kaonashi-tyc (Owner) commented:

This might be a problem with glob's implementation (it does not guarantee any file ordering). I will fix it later once I get time.


xsmxsm commented May 22, 2017

  1. Do different src.ttf and tgt.otf fonts influence the result of infer.py?
    For example:
    Condition 1: font2img.py with --src_font=迷你简启体.ttf --dst_font=123.otf
    package.py --dir=dir5 --save_dir=save_dir5 --split_ratio=0
    infer.py --model_dir=font27 --batch_size=1 --source_obj=train.obj --embedding_ids=1
    Condition 2: font2img.py with --src_font=楷体.ttf --dst_font=简楷体.otf
    package.py --dir=dir5 --save_dir=save_dir5 --split_ratio=0
    infer.py --model_dir=font27 --batch_size=1 --source_obj=train.obj --embedding_ids=1
    The two conditions differ only in src_font and dst_font. Should the images generated by infer.py be the same? In my experiment there is a small difference between them; can you tell me why?

  2. The images generated by infer.py are not perfect; some strokes are missing. Is that because my computer's memory is not enough, or is there another reason?


windid commented May 22, 2017

@xsmxsm
The embedding_ids in infer.py point to the fonts in the pretrained model, in your case font27.
So if you want to run inference on your own fonts, train them first.

@kaonashi-tyc
Am I right?
And what is source_obj in infer.py used for? Providing the characters?


xsmxsm commented May 22, 2017

@windid
Because my computer cannot handle training, I use the pretrained model. If I change embedding_ids from 1 to 27, does each value correspond to one of the 27 fonts in the pretrained model?

I think source_obj refers to the train.obj generated by package.py; it is used to provide the characters that you want to use.


kaonashi-tyc commented May 22, 2017

@windid Yes, your interpretation is correct. It refers to a single font id.

source_obj is a pickled object produced by package.py; it can be either train.obj or val.obj.
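If you are curious what is inside a source_obj, it can be read back with a few lines of Python. This sketch assumes the (label, image_bytes) record format that package.py's pickle_examples writes (the function is quoted later in this thread):

import pickle

with open("save_dir/train.obj", "rb") as f:
    while True:
        try:
            label, img_bytes = pickle.load(f)  # one record per image pair
            print(label, len(img_bytes))
        except EOFError:
            break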

kaonashi-tyc (Owner) commented:

@xsmxsm The source font should not be changed once the model is trained. infer.py runs an already-trained model, so you should stick with the SIMSUN font; otherwise the result will not be optimal, since the model was trained on that font.

For your second question: try using SIMSUN (中易宋体) as the source font. The current algorithm does not generalize beyond its source font.


xsmxsm commented May 22, 2017

@kaonashi-tyc
Sorry, I cannot download the SIMSUN font; it may have to be purchased. Can you provide it?

You say to use SIMSUN (中易宋体) as the source font. Does that mean using a downloaded SIMSUN font as --src_font in font2img.py, with --dst_font being any ttf or otf style I want?
But I still do not understand what embedding_ids means. Which values can it take? If I change embedding_ids from 1 to 27, does each value correspond to one of the 27 fonts in the pretrained model?


xsmxsm commented May 22, 2017

@kaonashi-tyc
I used --src_font=simsun.ttf --dst_font=simsun.ttf in font2img.py, and with embedding_ids from 1 to 27 it can generate 27 different font styles, right? But when I set embedding_ids=40 it also generates a styled font. Why? And what does "label[s] of the font, separated by comma" mean? Can I set embedding_ids=1,2,3 at the same time, and what would that mean?


jdsgomes commented Jun 5, 2017

Hi @kaonashi-tyc, nice work!
I have two questions about this network:

  1. When you mention "we can choose to fine-tune the interesting individual fonts", do you mean that you fine-tune one individual font at a time, or a selected number of them?
  2. Do the labels in the training data correspond to the target font? How are they translated into the embedding space?


wysuperfly commented Jun 6, 2017

@xsmxsm
You may edit line 42 of package.py like this:
pickle_examples(sorted(glob.glob(os.path.join(args.dir, "*.jpg"))), train_path=train_path, val_path=val_path, train_val_split=args.split_ratio)
This will solve the image-order problem.

vVVtreasure commented:

For reference, pickle_examples in package.py currently looks like this:

def pickle_examples(paths, train_path, val_path, train_val_split=0.1):
    """
    Compile a list of examples into pickled format, so during
    the training, all io will happen in memory
    """
    with open(train_path, 'wb') as ft:
        with open(val_path, 'wb') as fv:
            for p in paths:
                label = int(os.path.basename(p).split("_")[0])
                with open(p, 'rb') as f:
                    print("img %s" % p, label)
                    img_bytes = f.read()
                    r = random.random()
                    example = (label, img_bytes)
                    if r < train_val_split:
                        pickle.dump(example, fv)
                    else:
                        pickle.dump(example, ft)

Modify the paths it receives (sort them, as in the previous comment) and you get the correct order.


xsmxsm commented Jun 8, 2017

@wysuperfly
Thank you very much!
It solved my problems.

vVVtreasure commented:

I have put the solution in a new issue; take a look, it is easy to fix.


xsmxsm commented Jun 26, 2017

@kaonashi-tyc
font2img.py can generate the characters listed in a text file, but how can I use pictures of my own handwriting instead of the text file to generate the styled characters?

@wysuperfly @vVVtreasure
In font2img.py, the target characters come from a text file. How can I replace that text file with pictures of my handwritten characters, so as to generate characters that keep the content of my handwriting while carrying different styles?


xsmxsm commented Jun 26, 2017

@kaonashi-tyc
I skipped font2img.py, took pictures of my handwriting, saved them as 512*256 images in dir, ran package.py on dir to produce train.obj in save_dir, and then ran:
python infer.py --model_dir=font27 --batch_size=1 --source_obj=save_dir/train.obj --embedding_ids=8 --save_dir=save_dir
but the generated characters in the pictures are fuzzy and have broken strokes.

@wysuperfly @vVVtreasure
The generated characters are indeed similar to my handwriting, just in a different style, but they come out blurry and some strokes break. Can this be solved?

kaonashi-tyc (Owner) commented:

@xsmxsm
infer.py is for inference only, so it can only produce what the model was trained for, in this case a certain font. Since you want the model to output your own style, you need to train your own model that learns to transfer the source font style to yours; you cannot reuse the pretrained model, because it was trained on other fonts.

In short: infer reproduces the styles of the fonts the model was already trained on. Your handwriting is its own font, so you need to train a new model to achieve this.
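For illustration, a training run looks roughly like the following. The batch_size, sample_steps, checkpoint_steps, and inst_norm flags are confirmed by the traceback and comments earlier in this thread; the remaining flag names and all the values here are assumptions, so check python train.py --help for the real list:

python train.py --experiment_dir=experiment --batch_size=16 --epoch=40 --sample_steps=50 --checkpoint_steps=500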


xsmxsm commented Jul 4, 2017

@kaonashi-tyc

Sorry, what I mean is not transferring the source font style to my own. I mean feeding my own handwriting to the pretrained model, using my handwriting in place of the source font. Can I do that?

kaonashi-tyc (Owner) commented:

No, you cannot transfer your style to the pretrained model. That model is frozen; no learning happens during inference. And the pretrained model was trained on only one source font style, SIMSUN, so if you feed your own handwriting to it, chances are it will not work and the output will be of bad quality.


xsmxsm commented Jul 10, 2017

@kaonashi-tyc

Yes, you are right.
Then, if I want to transfer my handwriting into another downloaded artistic font style, do I need to handwrite more than 2000 characters for training? If so, I may not be able to keep the style of every handwritten character consistent; will that affect the training? Also, in your project there are two steps before train.py, font2img.py and package.py; in the first step the target characters come from a txt file. How can I use pictures of my handwritten characters instead? I hope for your detailed advice. Thank you so much!


windid commented Jul 10, 2017

@xsmxsm
I will reply to you one last time. The Chinese character set contains more than 20,000 characters, so producing a Chinese font, or any East Asian font, costs far more than producing a Western one. The author built this model precisely to reduce that cost: you produce only a small number of characters, and the model is trained to generate the rest.

So this model is not for converting your handwriting into other artistic styles. It transfers the style of your 2000 handwritten characters onto the entire Chinese character set of 20,000-plus characters.

As to whether you need 2000 handwritten characters: in theory, no; in practice it depends on whether you reach the result you want, and increasing the sample size is one possible way to improve it. What you need is experimentation.

Likewise, if the style varies too much across your 2000 characters, nobody knows what the trained result will look like. Again, what you need is experimentation.

Reading through this thread, you seem to have almost no background in machine learning, and it is unclear what you are actually trying to do. If you want to learn machine learning properly, this project is clearly not the place to start.


xsmxsm commented Jul 10, 2017

@windid

My goal is to give an ordinary person's handwritten Chinese characters the style of an artist, like the effect of the ougishi software: keeping the content of the original handwriting while adding the artist's style.

For images this effect has already been achieved: a photograph can be transferred into an artist's style while the original content is preserved. I would like to achieve the same effect for calligraphy.

I have only recently started with deep learning and have little background, so my questions may all be very basic. Please bear with me. Thank you!


kaonashi-tyc commented Jul 10, 2017

@xsmxsm @windid
I think we should avoid emotional talk in issue discussions, but I agree with @windid to some degree. @xsmxsm, your objective and your approach are inconsistent here. One of the model's assumptions is that there is a "standard", invariant source font plus a set of pre-written target fonts; through training, the model learns a mapping from the source font's strokes to the target font. The whole network is just a function f(x) = y, where x is the source font and y is the target font. If you substitute your own handwriting for the source font, the model has not been trained on it, so it will most likely fail. In other words, once the model is trained, the source font is fixed and should not be changed; otherwise there is no guarantee what the result will look like. I think this issue has diverged far enough, with a lot of unnecessary argument and no substantial solution that would be useful to other people, so I will close it for now. If you have any specific follow-up questions, feel free to open a new issue.
