This repo holds my Python solutions for the programming assignments of edX's Deep Learning with Python and PyTorch course by IBM
Course URL: https://www.edx.org/course/deep-learning-with-python-and-pytorch
Covers the basics of 1-dimensional tensors: data types, indexing and slicing, basic operations in PyTorch such as addition, multiplication, dot product and broadcasting, and plotting functions
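A minimal sketch of these 1-D tensor operations (the values are mine, not from the course notebooks):

```python
import torch

# 1-D tensor creation and dtype
u = torch.tensor([1.0, 2.0, 3.0])
v = torch.tensor([4.0, 5.0, 6.0])
print(u.dtype)            # torch.float32

# Indexing and slicing
print(u[0].item())        # 1.0
print(u[1:].tolist())     # [2.0, 3.0]

# Elementwise addition and multiplication
print((u + v).tolist())   # [5.0, 7.0, 9.0]
print((u * v).tolist())   # [4.0, 10.0, 18.0]

# Dot product, and broadcasting a scalar across the tensor
print(torch.dot(u, v).item())  # 32.0
print((u + 1).tolist())        # [2.0, 3.0, 4.0]
```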
Covers examples of 2-dimensional tensors: tensor creation in 2D, indexing and slicing in 2D, and basic operations on 2D tensors
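A quick sketch of 2-D tensor creation, indexing/slicing, and a basic matrix operation (example values are mine):

```python
import torch

# 2-D tensor creation
A = torch.tensor([[1, 2, 3],
                  [4, 5, 6]])
print(A.shape)           # torch.Size([2, 3])
print(A.ndimension())    # 2

# Indexing and slicing in 2D
print(A[1, 2].item())    # 6
print(A[:, 1].tolist())  # second column: [2, 5]

# Matrix multiplication of a 2x3 by a 3x2
B = torch.ones(3, 2, dtype=torch.long)
print(torch.mm(A, B).tolist())  # [[6, 6], [15, 15]]
```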
Covers derivatives and how to find them in PyTorch using .backward(), partial derivatives with respect to different variables, and a cool way to find derivatives with respect to an entire function (not being limited by PyTorch's .backward(), which takes derivatives of scalar outputs only)
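A small sketch of these autograd ideas; the functions used (x², u·v + u²) are my own examples:

```python
import torch

# Derivative of y = x**2 at x = 2 via .backward()
x = torch.tensor(2.0, requires_grad=True)
y = x ** 2
y.backward()
print(x.grad.item())  # dy/dx = 2x = 4.0

# Partial derivatives of f = u*v + u**2 w.r.t. u and v
u = torch.tensor(1.0, requires_grad=True)
v = torch.tensor(2.0, requires_grad=True)
f = u * v + u ** 2
f.backward()
print(u.grad.item())  # df/du = v + 2u = 4.0
print(v.grad.item())  # df/dv = u = 1.0

# backward() needs a scalar, so for derivatives over a whole range of
# inputs we reduce (e.g. sum) first, then read the per-element gradients
t = torch.linspace(-1.0, 1.0, steps=5, requires_grad=True)
g = (t ** 2).sum()
g.backward()
print(t.grad.tolist())  # 2t at each point
```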
Covers how to build a simple dataset object. We define a dataset class and override Python's __getitem__() and __len__() methods in our class
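A minimal sketch of such a dataset class; the data it holds (y = 2x + 1) is hypothetical:

```python
import torch
from torch.utils.data import Dataset

class ToyDataset(Dataset):
    """A simple dataset holding (x, y) pairs with y = 2x + 1 (hypothetical data)."""
    def __init__(self, length=10):
        self.x = torch.arange(length, dtype=torch.float32).view(-1, 1)
        self.y = 2 * self.x + 1

    def __getitem__(self, index):
        # Return one (x, y) sample
        return self.x[index], self.y[index]

    def __len__(self):
        # Number of samples
        return self.x.shape[0]

dataset = ToyDataset()
print(len(dataset))           # 10
x0, y0 = dataset[0]
print(x0.item(), y0.item())   # 0.0 1.0
```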
Covers using prebuilt datasets (MNIST) and applying some Transform operations to the dataset
Covers linear regression in 1D and making a prediction for a given value of x, i.e. finding yhat, using 1. the PyTorch class nn.Linear and 2. custom modules
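A sketch of both prediction routes; the class name LR and the input value are mine:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# 1. Built-in Linear layer: yhat = w*x + b
model = nn.Linear(in_features=1, out_features=1)
x = torch.tensor([[2.0]])
yhat = model(x)
print(yhat.shape)        # torch.Size([1, 1])

# 2. The same model wrapped in a custom module
class LR(nn.Module):
    def __init__(self, in_size, out_size):
        super().__init__()
        self.linear = nn.Linear(in_size, out_size)

    def forward(self, x):
        return self.linear(x)

custom = LR(1, 1)
print(custom(x).shape)   # torch.Size([1, 1])
```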
Covers a lot of basics: assuming a linear relationship between x and y, we train a model. Covers loss functions, gradient descent to minimize the loss, the mean-squared-error cost function, and training parameters in PyTorch manually
Covers the case where both the weight (i.e. the slope) and the bias have to be trained
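A minimal sketch of manually training both the slope and the bias with gradient descent and MSE; the data (y = 3x - 1), learning rate, and initial values are hypothetical:

```python
import torch

# Hypothetical noiseless data generated from y = 3x - 1
X = torch.arange(-3.0, 3.0, 0.1).view(-1, 1)
Y = 3 * X - 1

# Trainable slope and bias, deliberately started far from the truth
w = torch.tensor(-10.0, requires_grad=True)
b = torch.tensor(-10.0, requires_grad=True)
lr = 0.1

def forward(x):
    return w * x + b

def mse(yhat, y):
    # Mean-squared-error cost function
    return torch.mean((yhat - y) ** 2)

for epoch in range(50):
    loss = mse(forward(X), Y)
    loss.backward()
    with torch.no_grad():      # manual gradient-descent update
        w -= lr * w.grad
        b -= lr * b.grad
    w.grad.zero_()             # clear accumulated gradients
    b.grad.zero_()

print(round(w.item(), 2), round(b.item(), 2))  # close to 3.0 and -1.0
```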
Covers the need for stochastic gradient descent, the problems with stochastic gradient descent, and implementing it in PyTorch
Implements mini-batch gradient descent in PyTorch (also includes batch GD and stochastic GD for comparison)
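A sketch of mini-batch gradient descent via a DataLoader, on hypothetical data (y = 2x + 1); note how batch_size alone switches between the three GD flavours:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

torch.manual_seed(0)

# Hypothetical linear data y = 2x + 1
X = torch.arange(-2.0, 2.0, 0.1).view(-1, 1)
Y = 2 * X + 1
dataset = TensorDataset(X, Y)

# batch_size controls the flavour of gradient descent:
#   len(dataset) -> batch GD, 1 -> stochastic GD, in between -> mini-batch
loader = DataLoader(dataset, batch_size=5, shuffle=True)

w = torch.tensor(0.0, requires_grad=True)
b = torch.tensor(0.0, requires_grad=True)
lr = 0.1

for epoch in range(20):
    for x, y in loader:                    # one parameter update per mini-batch
        loss = torch.mean((w * x + b - y) ** 2)
        loss.backward()
        with torch.no_grad():
            w -= lr * w.grad
            b -= lr * b.grad
        w.grad.zero_()
        b.grad.zero_()

print(round(w.item(), 1), round(b.item(), 1))  # approaches 2.0 and 1.0
```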
This covers implementing the above code the PyTorch way, using built-in PyTorch functions/methods for the loss function and the optimizer
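The same training loop, sketched with the built-in nn.MSELoss criterion and torch.optim.SGD optimizer (data and hyperparameters are again hypothetical):

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader

torch.manual_seed(0)

# Hypothetical data, y = 2x + 1
X = torch.arange(-2.0, 2.0, 0.1).view(-1, 1)
Y = 2 * X + 1
loader = DataLoader(TensorDataset(X, Y), batch_size=5, shuffle=True)

model = nn.Linear(1, 1)
criterion = nn.MSELoss()                                  # built-in loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)   # built-in optimizer

for epoch in range(20):
    for x, y in loader:
        optimizer.zero_grad()           # replaces the manual grad.zero_() calls
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()                # replaces the manual parameter updates

w, b = model.weight.item(), model.bias.item()
print(round(w, 1), round(b, 1))  # approaches 2.0 and 1.0
```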
This covers why we need separate training and validation datasets. It also shows how to write PyTorch code that finds a good hyperparameter (here: learning_rate) for the same training dataset using the validation dataset. Then, based on the lowest loss on the validation dataset, we finally pick one of the models
This covers the implementation of early stopping, i.e. using the model from the epoch with the lowest loss on the validation data instead of training for the maximum number of epochs. Also covers saving and loading a model
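A sketch of the early-stopping checkpoint idea with torch.save/load_state_dict; the train/validation data, noise level, and filename are all hypothetical:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical noisy training data and clean validation data, y = 2x + 1
X_train = torch.arange(-2.0, 2.0, 0.1).view(-1, 1)
Y_train = 2 * X_train + 1 + 0.3 * torch.randn_like(X_train)
X_val = torch.arange(-2.0, 2.0, 0.25).view(-1, 1)
Y_val = 2 * X_val + 1

model = nn.Linear(1, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)

best_val = float("inf")
for epoch in range(100):
    optimizer.zero_grad()
    criterion(model(X_train), Y_train).backward()
    optimizer.step()

    with torch.no_grad():
        val_loss = criterion(model(X_val), Y_val).item()
    if val_loss < best_val:    # early stopping: keep the best checkpoint so far
        best_val = val_loss
        torch.save(model.state_dict(), "best_model.pt")

# Load the checkpoint from the epoch with the lowest validation loss
best = nn.Linear(1, 1)
best.load_state_dict(torch.load("best_model.pt"))
print(round(best_val, 3))
```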
This covers an introduction to multiple linear regression for 2-dimensional x (i.e. x1, x2), particularly the prediction part, using the built-in nn.Linear as well as custom modules.
This covers training MLR of in_size=2 and out_size=1.
This covers multiple-output linear regression. Here we have in_size=1 and out_size=10, i.e. for each x-value we get 10 y-values
This covers training multiple-output linear regression with in_size=2 and out_size=2
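The shape bookkeeping for these multiple-input/multiple-output layers, as a short sketch (sample values are mine):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Multiple-output linear regression: 2 inputs -> 2 outputs
model = nn.Linear(in_features=2, out_features=2)

x = torch.tensor([[1.0, 2.0]])   # one sample with features x1, x2
yhat = model(x)
print(yhat.shape)                # torch.Size([1, 2]): 2 y-values per sample

X = torch.randn(5, 2)            # a batch of 5 samples
print(model(X).shape)            # torch.Size([5, 2])

# The layer holds a 2x2 weight matrix and a bias vector of length 2
print(model.weight.shape, model.bias.shape)  # torch.Size([2, 2]) torch.Size([2])
```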
Here, we cover logistic regression. We also cover a cool module called nn.Sequential for building models really quickly.
- There are two ways to get the sigmoid output: 1. using the nn.Sigmoid module and 2. using the functional sigmoid()
- nn.Sequential is a really fast way to build models, but not as flexible as custom nn.Module subclasses. To use it, we write something like model=nn.Sequential(first_layer, second_layer); here it is model=nn.Sequential(nn.Linear(1,1), nn.Sigmoid())
- If we use a custom module, we use the functional sigmoid on self.linear(x) in our forward function. Note that here the out_size should always be 1 and can be hardcoded in the class itself.
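Both logistic-regression variants, sketched side by side (class name and inputs are mine; I use torch.sigmoid as the functional form):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.tensor([[-1.0], [0.0], [1.0]])

# 1. nn.Sequential: a linear layer followed by a Sigmoid module
model = nn.Sequential(nn.Linear(1, 1), nn.Sigmoid())
print(model(x).shape)  # torch.Size([3, 1])

# 2. Custom module using the functional sigmoid; out_size hardcoded to 1
class LogisticRegression(nn.Module):
    def __init__(self, in_size):
        super().__init__()
        self.linear = nn.Linear(in_size, 1)

    def forward(self, x):
        return torch.sigmoid(self.linear(x))

custom = LogisticRegression(1)
yhat = custom(x)
# Sigmoid squashes every output into (0, 1), so yhat reads as a probability
print(((yhat > 0) & (yhat < 1)).all().item())  # True
```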
The course contents and code are provided by IBM under the MIT License