Python 3.5,
TensorFlow r1.4 or r1.2,
svgwrite (installable with pip),
ipython (installable with pip),
Necessary for training or for copying a writing style from the training set:
from the IAM Handwriting Database, the ascii dataset ascii-all.tar.gz
and the xml stroke dataset lineStrokes-all.tar.gz.
Extract them into the data/ dir.
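Assuming both archives have been downloaded into the repo root, extraction might look like this (the loop simply skips any archive that is not there yet):

```shell
# unpack the IAM archives into data/, where the training code expects them
mkdir -p data
for f in ascii-all.tar.gz lineStrokes-all.tar.gz; do
  if [ -f "$f" ]; then
    tar -xzf "$f" -C data/
  fi
done
```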
Generating Sequences With Recurrent Neural Networks
https://github.com/hardmaru/write-rnn-tensorflow
Building on it, I implemented the synthesis net, enabling the model to generate the characters you specify. Mimicking a specific handwriting style from the training set is also possible.
I also drew some useful inspiration from the following repo:
https://github.com/snowkylin/rnn-handwriting-generation
Two modes are available: --mode prediction or --mode synthesis, for generating freely or generating under character supervision.
Run python train.py to train, and try python train.py -h for the possible input arguments.
Run python sample.py, and pass --texts "<the texts you want to write>" when a synthesis model is given.
--model_dir <your_model_dir> specifies which model should be loaded.
--bias is a non-negative float that controls how much risk the model takes while generating: e.g. --bias 0 samples as randomly as possible, while larger values give safer, neater strokes.
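For intuition, here is a minimal numpy sketch of biased sampling in the style of the Graves paper (variable names are illustrative, not taken from this repo): the bias shrinks each mixture component's spread and sharpens the component weights.

```python
import numpy as np

def apply_bias(pi_hat, sigma_hat, bias):
    """Graves-style biased sampling for a mixture-density output.

    pi_hat:    unnormalized mixture-weight logits (network output)
    sigma_hat: log standard deviations (network output)
    bias:      non-negative float; 0 keeps the predicted distribution,
               larger values make sampling less random.
    """
    sigma = np.exp(sigma_hat - bias)        # shrink each component's spread
    logits = pi_hat * (1.0 + bias)          # sharpen the component weights
    pi = np.exp(logits - logits.max())      # stable softmax
    return pi / pi.sum(), sigma

# bias 0 leaves the mixture as random as the network predicts;
# bias 3 concentrates probability on the dominant component
pi0, sig0 = apply_bias(np.array([1.0, 2.0]), np.array([-1.0, -1.0]), 0.0)
pi3, sig3 = apply_bias(np.array([1.0, 2.0]), np.array([-1.0, -1.0]), 3.0)
```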
--copy_style specifies a sample in the training set to copy the style from, e.g. --copy_style 2.
Run python sample.py, and the result will be a handwriting sample as follows.
A window alignment figure will also be generated, showing the connection between the characters (vertical axis) and the corresponding strokes (horizontal axis). It reveals whether the generation is performing well.
(For this you need to download the training dataset first, see dependencies.)
Run python sample.py --copy_style 2, and the result will be handwriting that mimics the style of the second training example.
Similarly, a window alignment figure will be generated, along with a drawing of the reference handwriting, so one can tell whether the copying was good enough.
If you run out of memory, the allocated tensors may be too big; try a smaller batch size, etc.
Copying only from good training examples can raise the quality a bit. Also, I didn't try many hyperparameter settings, so feel free to train a model of your own with better hyperparameters.