
Manipulate Facial Attributes Using VAE

This repo presents the final project for the Computer Vision course at INE / UFSC.

The reports are written in Portuguese and stored in the reports folder.

The trained model is available here.

For environment compatibility, check out environment.yml.

The Solution Architecture

The proposed architecture has a standard VAE structure plus an additional decoder branch that predicts face masks. During training, face masks are also used as labels: they replace the background of the reconstructed image, so that the loss function is applied only to the face pixels. In prediction mode, on the other hand, the background replacement is done directly with the predicted mask, so no extra input beyond the image is required.
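The background-composition trick described above can be sketched in a few lines of NumPy. This is a minimal illustration of the idea only, with hypothetical function and variable names; the actual model applies this inside its Keras training loss.

```python
import numpy as np

def masked_reconstruction_loss(original, reconstructed, mask):
    """Compose the reconstruction with the original background so the
    MSE loss only penalizes differences inside the face region (mask == 1)."""
    composited = mask * reconstructed + (1 - mask) * original
    return np.mean((original - composited) ** 2)
```

With an all-zero mask the composite equals the original and the loss is zero, regardless of how bad the reconstruction is; only face pixels contribute.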

On training

On prediction

Playing on Colab

Test the project on Colab! Generate attribute spectrums or add specific attributes to your own images; give it a try here.

Dataset

This project used the CelebA dataset.

As mentioned in the Architecture section, the solution uses face masks during training, although they are not required for prediction. To extract the masks, I used the face-parsing.PyTorch project. The mask data is available for download here.

Training

To train new models, point the environment variable celeba to the CelebA dataset directory. Make sure this directory contains the images and masks under the folders imgs and mask_faceparsing, respectively.
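A quick way to double-check the expected layout before launching a run is a small Python snippet. The dataset path below is illustrative; substitute your own.

```python
import os
from pathlib import Path

# The training script reads the dataset root from the `celeba` env var
# (this path is only an example).
os.environ["celeba"] = "/data/celeba"

root = Path(os.environ["celeba"])
expected = [root / "imgs", root / "mask_faceparsing"]
missing = [str(d) for d in expected if not d.is_dir()]
if missing:
    print("Missing dataset folders:", missing)
```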

python training.py

The new training artifacts (logs and checkpoints) are stored in the traindir folder, inside a subfolder named after the training start timestamp.
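The naming scheme looks roughly like the sketch below; the exact timestamp format is an assumption, so check your own traindir after a run.

```python
from datetime import datetime
from pathlib import Path

# Illustrative run-folder naming: traindir/<start-timestamp>/
start = datetime(2020, 6, 1, 14, 30, 0)  # fixed time for the example
run_dir = Path("traindir") / start.strftime("%Y%m%d-%H%M%S")
print(run_dir.as_posix())  # traindir/20200601-143000
```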

Afterwards, the saved model needs to be converted so it loads correctly into the prediction NN architecture. To do so, run:

python convert_model_toprediction.py <NEW_TRAINFILE>.h5

It saves a new file in the same directory, with .predict. added to the filename.
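The renaming convention can be sketched as below. This is an assumption about how the output name is derived (insert .predict. before the .h5 extension), not the script's actual code.

```python
from pathlib import Path

def prediction_name(train_file: str) -> str:
    """Illustrative: turn weights.235.h5 into weights.235.predict.h5."""
    return str(Path(train_file).with_suffix(".predict.h5"))

print(prediction_name("weights.235.h5"))  # weights.235.predict.h5
```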

Testing

Ensure that you have the prediction version of the pretrained model:

python convert_model_toprediction.py traindir/pretrained/checkpoints/weights.235-1.23-1.30-1.12-0.07-0.10-0.00.h5

During testing, you don't need the ground-truth face mask; the NN generates a reconstructed version of the image and a mask in the cache directory.

python testing.py cache/samples/077771.jpg

Adding Attribute

The currently available attributes are: Bald, Bangs, Black_Hair, Blond_Hair, Eyeglasses, Gray_Hair, Heavy_Makeup, Mustache, Pale_Skin, Pointy_Nose, Smiling, Wearing_Hat, Young.

The command below generates a spectrum of the chosen attribute and saves a file spectrum.jpg in the cache directory.

python add_attributes.py -f cache/samples/077771.jpg -a Smiling

The attributes are represented by vectors stored in cache/vector_attrs/. You can generate more on your own.
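The spectrum comes from simple latent-space arithmetic: the encoder's latent code is shifted along the attribute vector by a range of scales, and each shifted code is decoded into an image. A minimal NumPy sketch of that idea, with hypothetical names and an assumed scale range:

```python
import numpy as np

def attribute_spectrum(z, attr_vec, steps=5, max_scale=2.0):
    """Return latent codes z + alpha * attr_vec for a range of alphas;
    decoding each one yields the images of the spectrum."""
    alphas = np.linspace(-max_scale, max_scale, steps)
    return np.stack([z + a * attr_vec for a in alphas])
```

Negative scales remove the attribute, positive scales intensify it, and alpha = 0 reproduces the plain reconstruction.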

Create new Vector Attributes

To generate vectors for other attributes available in CelebA, run:

python create_attributevectors.py Bags_Under_Eyes 

It creates a cache/Bags_Under_Eyes.npy file. Move it into the cache/vector_attrs/ subfolder to make it available to the add_attributes.py script.
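A common way to build such a vector, which this sketch illustrates under that assumption, is to take the difference between the mean latent code of images with the attribute and the mean latent code of images without it (all names here are hypothetical):

```python
import numpy as np

def attribute_vector(latents, has_attr):
    """Mean latent code with the attribute minus mean latent code without it."""
    latents = np.asarray(latents)
    has_attr = np.asarray(has_attr, dtype=bool)
    return latents[has_attr].mean(axis=0) - latents[~has_attr].mean(axis=0)

# vec = attribute_vector(encoded_celeba, labels["Bags_Under_Eyes"])
# np.save("cache/Bags_Under_Eyes.npy", vec)
```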
