Implementation of pix2pix
Based on the TensorFlow implementation.
Use the tf_pix2pix.ipynb notebook for training. You can run it on Colab using this link
Training depends on the dataset. Here you can find many datasets.
Make sure to choose the correct direction, AtoB or
BtoA, depending on the dataset.
Convert the model to TensorFlow.js
- First export the model by changing the mode to
export. This will create the export files.
- Use the
convert_keras.py script to convert the model to Keras:
python convert_keras.py --dir input_dir --out output_dir
- Install the tensorflowjs package:
pip install tensorflowjs
- Convert the model:
tensorflowjs_converter --input_format keras keras.h5 output_directory
Train on your dataset
Use these scripts
The process first uses a Caffe model to create .mat files, which are then processed in MATLAB to generate the edges. If you run into difficulties with that, you can use
cv2.Canny to extract the edge map of the input images.
cats.zip contains 1000 images of cats, obtained from http://www.robots.ox.ac.uk/~vgg/data/pets/ by
first using the segmentation masks to extract the cats and replace the background with white. The previous step was then used
to generate the edges.
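The background-whitening step described above can be sketched as follows. This is an illustrative snippet, not the exact script used for the dataset; it assumes the segmentation is available as a boolean foreground mask.

```python
import numpy as np

def whiten_background(image, mask):
    """Keep only the masked foreground, replacing everything else with white.

    image: HxWx3 uint8 array; mask: HxW boolean array (True = foreground).
    """
    out = np.full_like(image, 255)  # start from an all-white canvas
    out[mask] = image[mask]         # copy only the segmented foreground pixels
    return out

# Tiny example: a 4x4 image whose central 2x2 region is "the cat"
img = np.random.randint(0, 255, (4, 4, 3), dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
result = whiten_background(img, mask)
```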
pokemon.zip contains 800 images of Pokémon obtained from https://www.kaggle.com/kvpratama/pokemon-images-dataset. The edges were extracted using the Canny edge detector.
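Once the edge maps exist, many pix2pix training pipelines expect each training example as a single image with input (A) and target (B) concatenated side by side. The helper below sketches that pairing step; the horizontal A|B layout is an assumption about the training script, so check the layout your training code expects.

```python
import numpy as np

def make_pair(edge_map, photo):
    """Concatenate the edge map (A) and the photo (B) horizontally.

    Assumes both are HxWx3 uint8 arrays of identical size, producing
    one Hx(2W)x3 combined training image.
    """
    assert edge_map.shape == photo.shape, "A and B must have the same size"
    return np.concatenate([edge_map, photo], axis=1)

# Example with dummy data: a black edge map paired with a white photo
a = np.zeros((256, 256, 3), dtype=np.uint8)
b = np.full((256, 256, 3), 255, dtype=np.uint8)
pair = make_pair(a, b)  # shape (256, 512, 3)
```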