-
In this project, I build a neural network from scratch to carry out a prediction problem on a real dataset! Building the network from the ground up gives a much better understanding of gradient descent, backpropagation, and other concepts that are important to know before moving to higher-level tools such as TensorFlow, and it shows how these networks can be applied to real prediction problems. For the implementation and the project results, please check the link above.
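As a rough illustration of what building a network from scratch involves (my own minimal sketch on toy data, not the project code), the snippet below trains a one-hidden-layer network with plain NumPy: a forward pass, backpropagation of the mean-squared-error gradient, and a gradient descent update. The layer size, learning rate, and data are arbitrary.

```python
import numpy as np

# Minimal sketch: one hidden layer with sigmoid activation, linear output,
# trained by batch gradient descent on made-up regression data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # toy inputs
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1      # toy targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_hidden, lr = 8, 0.05                         # assumed hyperparameters
W1 = rng.normal(scale=0.1, size=(3, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, 1))

for epoch in range(500):
    # Forward pass
    h = sigmoid(X @ W1)                        # hidden activations
    y_hat = (h @ W2).ravel()                   # network output
    error = y_hat - y

    # Backpropagation of the mean-squared-error gradient
    grad_W2 = h.T @ error[:, None] / len(X)
    delta_h = (error[:, None] @ W2.T) * h * (1 - h)
    grad_W1 = X.T @ delta_h / len(X)

    # Gradient descent update
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2
```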
-
Dog breed classification using a CNN and transfer learning
In this project, I will learn how to build a pipeline that can be used within a web or mobile app to process real-world, user-supplied images. Given an image of a dog, my algorithm will identify an estimate of the canine’s breed. If supplied with an image of a human, the code will identify the resembling dog breed.
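As a hedged sketch of the transfer-learning part of such a pipeline (not the project's actual code), the snippet below reuses a pretrained VGG16 from `tf.keras` as a frozen feature extractor and adds a new classification head; the input size, head layers, and the 133-class output are assumptions for illustration.

```python
import tensorflow as tf

NUM_BREEDS = 133  # assumed number of dog breed classes

# Pretrained convolutional base, frozen so only the new head is trained.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_BREEDS, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=valid_ds, epochs=...)  # hypothetical datasets
```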
-
In this project, I'll generate my own Simpsons TV scripts using RNNs. I'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network I'll build will generate a new TV script for a scene at Moe's Tavern.
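As a minimal sketch of the word-level generation idea (not the project code), the snippet below defines an Embedding + LSTM next-word model in `tf.keras` and a sampling loop; the vocabulary size, context length, and layer sizes are made up, and training is omitted.

```python
import numpy as np
import tensorflow as tf

VOCAB_SIZE, SEQ_LEN = 5000, 20   # assumed vocabulary size and context length

# Next-word language model: previous SEQ_LEN word ids -> distribution over the vocabulary.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),
    tf.keras.layers.LSTM(256),
    tf.keras.layers.Dense(VOCAB_SIZE, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(input_sequences, next_word_ids, ...)  # training omitted in this sketch

def generate(seed_ids, num_words):
    """Extend a list of word ids by sampling the model's next-word distribution."""
    ids = list(seed_ids)
    for _ in range(num_words):
        window = np.array(ids[-SEQ_LEN:])[None, :]
        probs = model.predict(window, verbose=0)[0]
        probs = probs / probs.sum()               # guard against float round-off
        ids.append(int(np.random.choice(len(probs), p=probs)))
    return ids
```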
-
In this project, I'll use generative adversarial networks to generate new images of faces.
-
In this project, I implemented a WGAN to generate human faces. It is an improved version of the DCGAN project (the previous face generation project).
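The main change relative to the DCGAN is the critic's Wasserstein objective and a Lipschitz constraint on the critic. The functions below are a minimal sketch of that objective (with weight clipping as in the original WGAN paper), not the code used in this project; `critic` is an assumed Keras model.

```python
import tensorflow as tf

def critic_loss(real_scores, fake_scores):
    # The critic maximizes the score gap between real and generated images,
    # so we minimize the negated gap.
    return tf.reduce_mean(fake_scores) - tf.reduce_mean(real_scores)

def generator_loss(fake_scores):
    # The generator tries to raise the critic's score on its samples.
    return -tf.reduce_mean(fake_scores)

def enforce_lipschitz(critic, clip_value=0.01):
    # The original WGAN keeps the critic approximately Lipschitz by clipping weights;
    # WGAN-GP replaces this step with a gradient penalty term.
    for w in critic.trainable_variables:
        w.assign(tf.clip_by_value(w, -clip_value, clip_value))
```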
-
Pix2PixHD model for style transfer
I implemented a Pix2PixHD model for style transfer, that is, translating an image from one style to another. However, my implementation is tightly coupled to the dataset I used, so you will need to adapt the data pipeline yourself before the model can be applied to your own project.
- Sentiment Analysis with Numpy: Andrew Trask leads you through building a sentiment analysis model, predicting if some text is positive or negative.
- Intro to TensorFlow: Start building neural networks with TensorFlow.
- Weight Initialization: Explore how initializing network weights affects performance (a short sketch of common schemes appears after this list).
- Autoencoders: Build models for image compression and denoising, using feed-forward and convolutional networks in TensorFlow.
- Transfer Learning (ConvNet): In practice, most people don't train their own large networks on huge datasets, but use pretrained networks such as VGGnet. Here you'll use VGGnet to classify images of flowers without training a network on the images themselves.
- Intro to Recurrent Networks (Character-wise RNN): Recurrent neural networks are able to use information about the sequence of data, such as the sequence of characters in text.
- Embeddings (Word2Vec): Implement the Word2Vec model to find semantic representations of words for use in natural language processing.
- Sentiment Analysis RNN: Implement a recurrent neural network that can predict if a text sample is positive or negative.
- Tensorboard: Use TensorBoard to visualize the network graph, as well as how parameters change through training.
- Reinforcement Learning (Q-Learning): Implement a deep Q-learning network to play a simple game from OpenAI Gym.
- Sequence to sequence: Implement a sequence-to-sequence recurrent network.
- Batch normalization: Learn how to improve training rates and network stability with batch normalization (a minimal sketch of the transform appears after this list).
- Generative Adversarial Network on MNIST: Train a simple generative adversarial network on the MNIST dataset.
- Deep Convolutional GAN (DCGAN): Implement a DCGAN to generate new images based on the Street View House Numbers (SVHN) dataset.
- Intro to TFLearn: A couple introductions to a high-level library for building neural networks.
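For the weight-initialization notebook above, the sketch below (my own NumPy example, not the notebook's code) contrasts a naive constant initialization with Xavier/Glorot and He schemes; the layer sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 784, 256  # assumed layer sizes

# Naive choice: the same small constant everywhere keeps units symmetric,
# so they all learn the same thing and training stalls.
w_constant = np.full((n_in, n_out), 0.01)

# Xavier/Glorot uniform: limits scaled by fan-in and fan-out so activation
# and gradient variances stay roughly constant across layers.
limit = np.sqrt(6.0 / (n_in + n_out))
w_xavier = rng.uniform(-limit, limit, size=(n_in, n_out))

# He initialization: variance 2 / fan_in, a common choice for ReLU layers.
w_he = rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_in, n_out))
```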
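For the batch-normalization notebook, the training-time transform itself is short; the NumPy sketch below (with made-up shapes) standardizes each feature over the batch and then applies a learned scale and shift. At inference time, running averages of the batch statistics are used instead.

```python
import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-5):
    # x: (batch, features). Standardize each feature over the batch,
    # then apply the learned scale (gamma) and shift (beta).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Example with assumed shapes: a batch of 32 examples with 4 features.
x = np.random.default_rng(0).normal(size=(32, 4))
y = batch_norm_train(x, gamma=np.ones(4), beta=np.zeros(4))
```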
Each directory has a `requirements.txt` describing the minimal dependencies required to run the notebooks in that directory. To install these dependencies with pip, you can issue `pip3 install -r requirements.txt`.
You can find Conda environment files for the Deep Learning program in the `environments` folder. Note that environment files are platform dependent. Versions with `tensorflow-gpu` are labeled in the filename with "GPU".