This project trains a Variational Autoencoder (VAE) on the CelebA dataset to generate facial images.
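A VAE's training objective pairs a reconstruction term with a KL-divergence regularizer that pulls the latent posterior toward a standard-normal prior. As a reference point, here is a minimal sketch of the closed-form KL term for a diagonal Gaussian posterior in plain Python (the function name is illustrative, not taken from this repo):

```python
import math

def gaussian_kl(mu, logvar):
    """Closed-form KL(N(mu, diag(exp(logvar))) || N(0, I)), summed over latent dims."""
    return 0.5 * sum(m * m + math.exp(lv) - lv - 1 for m, lv in zip(mu, logvar))

# The KL term vanishes exactly when the posterior matches the prior.
print(gaussian_kl([0.0, 0.0], [0.0, 0.0]))  # → 0.0
```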
- Install Dependencies: Ensure you have Python 3.x installed, then install the required Python packages:

  ```
  pip install -r requirements.txt
  ```

- Data Preparation: Download the CelebA dataset and place it in `../data`. The script expects this specific structure.
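Before launching training, a quick sanity check that the expected directory is in place can save a failed run. A hedged sketch (the helper name is illustrative; the exact subdirectory layout CelebA unpacks into is not specified here):

```python
import os

def check_data_dir(path=os.path.join("..", "data")):
    """Return True if the dataset directory the scripts expect is present."""
    return os.path.isdir(path)

if not check_data_dir():
    print("CelebA data not found at ../data; download it before training.")
```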
To start the VAE training, run:

```
./train.sh
```

This script uses `screen` to run training in a detached session. Use `screen -r vae_training_run` to reattach to the session.
After training, generate interpolated samples with:

```
python create_samples.py
```

This script loads the trained model, generates interpolations between two latent points, and saves the output as `interpolations.png`.
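Interpolation between two latent points amounts to blending the two latent vectors and decoding each blend. A minimal sketch of the linear-interpolation step in plain Python (illustrative only — the actual script may use a different scheme, such as spherical interpolation):

```python
def lerp(z1, z2, t):
    """Linearly interpolate between two latent vectors at blend factor t in [0, 1]."""
    return [(1 - t) * a + t * b for a, b in zip(z1, z2)]

# Eight evenly spaced points between two toy 2-D latent codes;
# in practice each point would be passed through the trained decoder.
z_start, z_end = [0.0, 1.0], [4.0, -1.0]
path = [lerp(z_start, z_end, i / 7) for i in range(8)]
print(path[0], path[-1])  # endpoints equal z_start and z_end
```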
- tmux Sessions: If you prefer `tmux` over `screen`, you can use the provided `tmux` commands within `train.sh` for session management.
- Reattach to Training Session: Use the provided `reattach_to_vae_training_run.sh` script tailored for `tmux`.
Ensure CUDA is available for GPU acceleration; the scripts fall back to CPU if it is not present.
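The fallback can be sketched as a device-selection step at the top of the training script. This assumes PyTorch, which the repo does not explicitly confirm; the `try`/`except` also keeps the snippet runnable where `torch` is not installed:

```python
try:
    import torch
    # Prefer the GPU when CUDA is available; otherwise train on CPU.
    device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:
    device = "cpu"  # torch not installed in this environment

print(f"Training on: {device}")
```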
```
tmux new-session -d -s vae_training_run "python main.py"
echo "tmux VAE training started."
```

```
tmux attach-session -t vae_training_run
```