This repository contains my personal notes, experiments, and code implementations while reading *Deep Learning with Python* by François Chollet.
I’m using this repo as a structured learning journal — combining theory with practice and pushing all code snippets, exercises, and summaries as I progress through the book.
I’ve completed the following chapters so far:
- Chapter 1 – What is Deep Learning? An overview of the core ideas behind deep learning, its historical context, and why neural networks work.
- Chapter 2 – The Mathematical Building Blocks of Neural Networks (Tensors). Covered tensors, tensor operations, broadcasting, tensor shapes, and how they form the foundation of all deep learning computations.
- Chapter 3 – The Engine of Neural Networks: Gradient Descent and Backpropagation. Understood how forward and backward passes work, what loss functions do, and how gradient descent optimizes weights.
- Chapter 4 – Classifying Data with Neural Networks. Implemented basic models for binary and multiclass classification using Keras and NumPy, and learned about activation functions, loss metrics, and model evaluation.
- Chapter 5 – The Fundamentals of Machine Learning. Learned about generalization, overfitting, underfitting, validation sets, and how to make models robust and efficient.
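As a small taste of the Chapter 2 material, here is a minimal NumPy sketch of tensor rank, shape, and broadcasting (the array values are made up purely for illustration):

```python
import numpy as np

# A rank-2 tensor (a matrix): 3 samples, 2 features each
X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
print(X.ndim)   # rank of the tensor: 2
print(X.shape)  # (3, 2)

# Broadcasting: the (2,)-shaped vector is "stretched"
# across all 3 rows of X during the addition
bias = np.array([10.0, 20.0])
print(X + bias)  # first row becomes [11. 22.]
```

This shape-matching behavior is exactly what lets a layer add one bias vector to a whole batch of samples at once.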
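Chapter 3's training loop can be sketched in plain NumPy: a toy one-weight model fitted by gradient descent on a mean-squared-error loss. The data, learning rate, and step count below are my own made-up choices, not the book's:

```python
import numpy as np

# Toy data generated from y = 3x, so the optimal weight is 3
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 3.0 * x

w = 0.0    # initial weight
lr = 0.01  # learning rate
for _ in range(200):
    pred = w * x                         # forward pass
    grad = 2 * np.mean((pred - y) * x)   # dLoss/dw for MSE loss
    w -= lr * grad                       # gradient descent step

print(round(w, 2))  # 3.0 — the loop recovers the true weight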
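For Chapter 4, here is a minimal Keras binary classifier in the spirit of the book's examples; the synthetic data, layer sizes, and epoch count are illustrative choices of mine, not the book's exact code:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Synthetic binary-classification data: 200 samples, 8 features;
# the label is 1 when the features sum to a positive value
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8)).astype("float32")
y = (X.sum(axis=1) > 0).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(8,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # sigmoid for binary output
])
model.compile(optimizer="rmsprop",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```

For multiclass problems the recipe changes in two places: a `softmax` output layer with one unit per class, and a categorical cross-entropy loss.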
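Chapter 5's validation workflow boils down to holding data out that the model never trains on. A minimal hold-out split in plain NumPy (the data and 80/20 ratio are just example choices):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 4))
y = rng.integers(0, 2, size=1000)

# Shuffle indices, then hold out 20% as a validation set
idx = rng.permutation(len(X))
split = int(0.8 * len(X))
train_idx, val_idx = idx[:split], idx[split:]
X_train, y_train = X[train_idx], y[train_idx]
X_val, y_val = X[val_idx], y[val_idx]

print(X_train.shape, X_val.shape)  # (800, 4) (200, 4)
```

Watching the validation loss diverge from the training loss is the book's practical signal that a model has started to overfit.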
Tools I'm using:
- Language: Python 3.x
- Main Libraries: TensorFlow / Keras, NumPy, Matplotlib
- Editor: VS Code (with Jupyter Notebooks extension)
My goal is not just to read the book but to deeply understand each concept through code — replicating examples, experimenting with parameters, and documenting every insight here.
This repo will eventually serve as my personal deep learning reference and showcase of foundational understanding before I move into large language models (LLMs) and advanced AI topics.
Next steps:
- Continue from Chapter 6 – Introduction to Deep Learning for Computer Vision.
- Start experimenting with small CNNs on datasets like MNIST or CIFAR-10.
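When I get there, a small CNN for MNIST might start out like this; the layer sizes are my own guess at a reasonable starting point, not the book's exact architecture:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),          # MNIST: 28x28 grayscale images
    layers.Conv2D(32, 3, activation="relu"), # learn local spatial patterns
    layers.MaxPooling2D(2),                  # downsample feature maps
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),  # one unit per digit class
])
model.compile(optimizer="rmsprop",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Swapping in CIFAR-10 mostly means changing the input shape to `(32, 32, 3)` and expecting a harder optimization problem.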
Feel free to fork, open issues, or discuss concepts — I’m keeping this repo beginner-friendly but hands-on.