When people think of “generative AI”, most probably think of ChatGPT and other large language models. However, generating realistic images with artificial intelligence is on the rise and has become a critical area of research and development, not only in computer science but also in pop culture. Various machine learning models have been developed for image generation, including Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). This project develops, compares, and analyzes VAE and GAN models in terms of structure, strengths, weaknesses, and performance characteristics. Specifically, we look at the generation of images similar to “Pokémon” once the models have been properly developed and trained.

We developed, analyzed, and compared a Variational Autoencoder (VAE) and a Generative Adversarial Network (GAN) in terms of structure, strengths, weaknesses, and performance characteristics, looking specifically at the generation of images similar to “Pokémon” after proper development and training of each model. The main finding is that, for image generation, the choice between a VAE and a GAN ultimately comes down to user preference: the VAE is more accurate in its reconstructions of Pokémon, while the GAN is more creative in generating unseen images. While the models could not produce perfect output, it is possible they can at least offer inspiration to human artists in their creation of new Pokémon.

The dataset used can be found on Kaggle: https://www.kaggle.com/datasets/hlrhegemony/pokemon-image-dataset

The images in the dataset were extracted from their class folders and moved up into a single root folder (here called "pokemonallimages") using a Python script, along the lines of the sketch below. Make sure the dataset is in the same directory as this notebook.
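A minimal sketch of that flattening step is shown below. The source folder name `pokemon-image-dataset` is an assumption (use whatever name the unpacked Kaggle archive has on your machine); `pokemonallimages` is the flat target folder the notebook expects.

```python
import shutil
from pathlib import Path

# Assumed locations (adjust to your setup):
SOURCE_DIR = Path("pokemon-image-dataset")  # unpacked Kaggle dataset with one sub-folder per class
TARGET_DIR = Path("pokemonallimages")       # flat root folder the notebook reads from

TARGET_DIR.mkdir(exist_ok=True)

# Walk every class sub-folder and copy its images into the flat root folder.
for image_path in SOURCE_DIR.rglob("*"):
    if image_path.suffix.lower() in {".png", ".jpg", ".jpeg"}:
        # Prefix with the class folder name to avoid filename collisions.
        flat_name = f"{image_path.parent.name}_{image_path.name}"
        shutil.copy(image_path, TARGET_DIR / flat_name)
```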