VAE Face Generation

This project trains a Variational Autoencoder (VAE) on the CelebA dataset to generate facial images.
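As background: a VAE encodes an image into a mean and log-variance, samples a latent vector via the reparameterization trick, and trains on a reconstruction loss plus a KL-divergence term. A minimal NumPy sketch of those two pieces (illustrative only; the model in this repo is presumably built with PyTorch given the repo name, and its dimensions may differ):

```python
import numpy as np

def reparameterize(mu, logvar, rng):
    # Sample z = mu + sigma * eps with eps ~ N(0, I); written this way,
    # the sampling step stays differentiable w.r.t. mu and logvar.
    std = np.exp(0.5 * logvar)
    eps = rng.standard_normal(mu.shape)
    return mu + std * eps

def kl_divergence(mu, logvar):
    # KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior,
    # summed over latent dimensions and averaged over the batch.
    return np.mean(-0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar), axis=1))

rng = np.random.default_rng(0)
mu = np.zeros((4, 16))       # batch of 4, 16-dim latent space
logvar = np.zeros((4, 16))   # log-variance 0, i.e. unit variance
z = reparameterize(mu, logvar, rng)
print(z.shape)                    # (4, 16)
print(kl_divergence(mu, logvar))  # 0.0: posterior already matches N(0, I)
```

The KL term is what shapes the latent space into something close to a standard normal, which is what makes the sampling and interpolation steps below meaningful.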

Setup

  1. Install Dependencies: Ensure you have Python 3.x installed and then install the required Python packages:
pip install -r requirements.txt
  2. Data Preparation: Download the CelebA dataset and place it in ../data. The script expects this specific structure.

Training

To start the VAE training, run:

./train.sh

This script uses screen to run training in a detached session. Use screen -r vae_training_run to reattach to the session.

Sampling

After training, generate interpolated samples with:

python create_samples.py

This script loads the trained model, generates interpolations between two latent points, and saves the output as interpolations.png.
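Conceptually, the interpolation step amounts to decoding points along a straight line between two latent vectors. A hypothetical NumPy sketch of the latent-space part (the function name here is illustrative, not the script's actual API):

```python
import numpy as np

def interpolate_latents(z0, z1, steps=8):
    # Linearly interpolate between two latent vectors, returning
    # `steps` points including both endpoints. Each row would then
    # be passed through the trained decoder to render one face.
    ts = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - ts) * z0[None, :] + ts * z1[None, :]

z0 = np.zeros(16)   # first latent point
z1 = np.ones(16)    # second latent point
path = interpolate_latents(z0, z1, steps=5)
print(path.shape)               # (5, 16)
print(path[0, 0], path[-1, 0])  # 0.0 1.0 (endpoints are preserved)
```

Decoding each row and tiling the results side by side yields an image like interpolations.png.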

[Image: sample generation]

Utilities

  • tmux Sessions: If you prefer tmux over screen, you can use the provided tmux commands within train.sh for session management.
  • Reattach to Training Session: Use the provided reattach_to_vae_training_run.sh script tailored for tmux.

Note

The training script uses CUDA for GPU acceleration when available and falls back to CPU otherwise.
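The usual PyTorch pattern for that fallback looks like this (a sketch; main.py may implement it differently):

```python
import torch

# Prefer the GPU when CUDA is available, otherwise run on CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)

# The model and its input tensors must live on the same device, e.g.:
# model = VAE().to(device)          # VAE here is a placeholder name
# batch = batch.to(device)
```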

tmux new-session -d -s vae_training_run "python main.py"
echo "tmux VAE training started."

tmux attach-session -t vae_training_run
