We started with the basics of AI. We learned the following:
- Why do neural networks work so well?
- The kind of data we "actually" deal with
- Feedforward Networks
- Neurons
- Mathematics
- Architectural Ingredients
Skill_Test:
- Rewrite the Colab file and:
- remove the last activation function
- make sure there are 44 parameters in total (see the parameter-counting sketch after this list)
- run it for 2001 epochs
- Upload your work to a public GitHub Repository and share the link
- Add a readme file to your project and describe these things:
- What is a neural network neuron?
- What is the use of the learning rate?
- How are weights initialized?
- What is "loss" in a neural network?
- What is the "chain rule" in gradient flow?
Put Questions: #1
GitHub: https://github.com/Code-Trees/END-GAME/tree/main/Session_1
Session 2 was shocking. Why? Because we built a working neural network in an Excel sheet.
We learned about:
- Back propagation
- Embeddings - An analogy from Visual Domain
- Embeddings
- Neural Networks
- Heavy Math - Forward Propagation
- Back propagation
- Word Embeddings
- Modern Embeddings
- Word Embeddings vs Language Model
- Word2Vec
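To make the embedding topics concrete, here is a toy lookup (my own example, not from the class): an embedding layer is just a trainable table that maps integer token ids to dense vectors, which is the core idea behind Word2Vec-style word embeddings.

```python
# A toy sketch (assumed example): an embedding layer is a trainable
# lookup table from token ids to dense vectors.
import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=10, embedding_dim=4)  # vocab of 10, 4-d vectors
ids = torch.tensor([1, 5, 5])   # token ids for a tiny "sentence"
vectors = emb(ids)              # shape (3, 4): one vector per token, repeated ids share a row
print(vectors.shape)
```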
Skill_Test:
- Rewrite the whole Excel sheet showing back propagation. Explain each major step, and write it up on GitHub.
- Use exactly the same values for all variables as used in the class
- Take a screenshot, and show that screenshot in the readme file
- Excel file must be there for us to cross-check the image shown on readme (no image = no score)
- Explain each major step
- Show what happens to the error graph when you change the learning rate across [0.1, 0.2, 0.5, 0.8, 1.0, 2.0] (see the sweep sketch after this list)
- Submit the GitHub link. The GitHub link must be public (open the link without logging in; only if it opens should you share it).
- Sanity, please.
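Here is a minimal sketch of that learning-rate sweep in PyTorch rather than Excel. The 2-2-2 sigmoid network and the input/target values are assumptions for illustration, not necessarily the exact numbers from the class sheet.

```python
# A minimal sketch of the learning-rate sweep, assuming a tiny 2-2-2
# sigmoid network; values are illustrative, not the class Excel sheet.
import torch

def train(lr, steps=200):
    torch.manual_seed(0)
    x = torch.tensor([[0.05, 0.10]])   # assumed input
    t = torch.tensor([[0.01, 0.99]])   # assumed target
    w1 = torch.randn(2, 2, requires_grad=True)
    w2 = torch.randn(2, 2, requires_grad=True)
    losses = []
    for _ in range(steps):
        h = torch.sigmoid(x @ w1)              # forward pass, hidden layer
        y = torch.sigmoid(h @ w2)              # forward pass, output layer
        loss = ((t - y) ** 2).sum() / 2        # squared error
        loss.backward()                        # back propagation via autograd
        with torch.no_grad():                  # plain gradient-descent update
            w1 -= lr * w1.grad
            w2 -= lr * w2.grad
            w1.grad.zero_()
            w2.grad.zero_()
        losses.append(loss.item())
    return losses

for lr in [0.1, 0.2, 0.5, 0.8, 1.0, 2.0]:
    print(f"lr={lr}: final loss {train(lr)[-1]:.4f}")
```

Typically the smaller rates converge slowly, the middle rates converge fastest, and a rate like 2.0 can overshoot and oscillate; the error graphs in the skill test should make this visible.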
Put Questions: #1
GitHub: https://github.com/Code-Trees/END-GAME/tree/main/Session_2
Session 3 is all about PyTorch. We learned a lot about PyTorch and TensorFlow, and worked with tensors, autograd, NumPy functionality, tensor indexing, and building models in PyTorch step by step.
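As a quick illustration (a toy example of mine, not one of the class notebooks), here are those pieces in a few lines: tensors, NumPy interop, indexing, and autograd.

```python
# A toy sketch (assumed example) of tensors, NumPy interop,
# indexing, and autograd in PyTorch.
import numpy as np
import torch

t = torch.from_numpy(np.arange(6.0)).reshape(2, 3)  # NumPy array -> tensor
print(t[1, :2])                                     # tensor indexing

x = torch.tensor(3.0, requires_grad=True)
y = x ** 2 + 2 * x                                  # forward pass
y.backward()                                        # autograd computes dy/dx
print(x.grad)                                       # 2*x + 2 = 8
```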
After learning, we got a task to do. Trust me, it is really easy. Joking... :-) :-)
Skill_Test:
- Write a neural network that can:
- take 2 inputs:
- an image from the MNIST data set, and
- a random number between 0 and 9
- and gives two outputs:
- the digit represented by the MNIST image, and
- the sum of that digit and the random number
- you can mix fully connected layers and convolution layers
- you can use one-hot encoding to represent the random number input as well as the "summed" output (a minimal model sketch follows this list)
- Your code MUST be:
- well documented (via readme file on GitHub and comments in the code)
- must mention the data representation
- must mention your data generation strategy
- must mention how you have combined the two inputs
- must mention how you are evaluating your results
- must mention "what" results you finally got and how did you evaluate your results
- must mention what loss function you picked and why!
- training MUST happen on the GPU
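A minimal sketch of one workable design follows; the `MnistAdder` class, layer sizes, and head widths are my assumptions, not the graded solution. It concatenates CNN features from the image with the one-hot random number, then branches into two classification heads.

```python
# A minimal sketch (assumed design, not the graded solution) of a network
# taking an MNIST image plus a one-hot digit, with two output heads:
# the image's digit (10 classes) and the sum (19 classes, 0..18).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MnistAdder(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 28 -> 14
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 14 -> 7
        )
        self.fc = nn.Linear(32 * 7 * 7 + 10, 64)  # image features + one-hot number
        self.digit_head = nn.Linear(64, 10)       # which digit is in the image
        self.sum_head = nn.Linear(64, 19)         # digit + random number: 0..18

    def forward(self, image, onehot_number):
        feats = self.conv(image).flatten(1)
        x = F.relu(self.fc(torch.cat([feats, onehot_number], dim=1)))
        return self.digit_head(x), self.sum_head(x)

device = "cuda" if torch.cuda.is_available() else "cpu"  # train on GPU when available
model = MnistAdder().to(device)
img = torch.randn(8, 1, 28, 28, device=device)
num = F.one_hot(torch.randint(0, 10, (8,)), 10).float().to(device)
digit_logits, sum_logits = model(img, num)
print(digit_logits.shape, sum_logits.shape)  # (8, 10), (8, 19)
```

Since both heads are classifications (digit 0-9, sum 0-18), cross-entropy on each head, summed, is a natural loss choice, though the skill test asks you to justify your own pick.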
Put Questions: #1
GitHub: https://github.com/Code-Trees/END-GAME/blob/main/Session_3
Session 4 was about Recurrent Neural Networks and LSTMs. But let's keep in mind how far RNNs/LSTMs are from the modern state of the art.
Skill_Test:
- Remove the RNN and add an LSTM to the model (see the swap sketch after this list).
- Refer to this.
- The questions this time are already mentioned in the file. Once you are done, write up your solutions.
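For reference, here is a generic sketch of the swap (my own example, not the course file): `nn.LSTM` takes the same `input_size`/`hidden_size` arguments as `nn.RNN`, but it returns a tuple of hidden state and cell state instead of a hidden state alone.

```python
# A generic sketch (assumed shapes) of swapping nn.RNN for nn.LSTM.
import torch
import torch.nn as nn

x = torch.randn(5, 3, 8)                 # (seq_len, batch, input_size)

rnn = nn.RNN(input_size=8, hidden_size=16)
out_rnn, h_rnn = rnn(x)                  # hidden state only

lstm = nn.LSTM(input_size=8, hidden_size=16)
out_lstm, (h_lstm, c_lstm) = lstm(x)     # hidden state AND cell state
print(out_rnn.shape, out_lstm.shape)     # both (5, 3, 16)
```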
Put Questions: #1
GitHub: https://github.com/Code-Trees/END-GAME/tree/main/Session_4