KerasLearning
Setting up a clean virtual environment is the right first step. Since you’re on Windows, here’s what you need to do in the Command Prompt (cmd):
cd C:\Users\IAGhe\OneDrive\Documents\Learning\Python
python -m venv keras-env
keras-env\Scripts\activate
Once activated, you should see (keras-env) at the start of your command line. Next, upgrade pip:
python -m pip install --upgrade pip
Keras is now part of TensorFlow, so you only need to install TensorFlow. I'll also include some common scientific packages useful for learning:
pip install tensorflow keras numpy pandas matplotlib scikit-learn jupyter
python -c "import tensorflow as tf; print(tf.__version__)"
Once your virtual environment is activated ((keras-env) should appear in your prompt), just run:
jupyter notebook
📘 Keras Learning Curriculum
Phase 1: Foundations
Environment setup & basics
Install TensorFlow/Keras (done ✅).
Learn about Tensors and how TensorFlow represents data.
Understand the Sequential API (stacking layers linearly).
First model: a simple feed-forward network for MNIST digit classification.
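To make these items concrete, here is a minimal sketch (assuming TensorFlow 2.x with its bundled Keras; the layer sizes are arbitrary choices for illustration):

import tensorflow as tf

# Tensors: TensorFlow represents data as multi-dimensional arrays with a shape and dtype
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(x.shape, x.dtype)  # (2, 2) float32

# Sequential API: stack layers linearly, here for 28x28 MNIST digits
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),                        # 28x28 image -> 784-vector
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit class
])
model.summary()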
Core building blocks
Layers: Dense, Activation, Dropout, Flatten.
Loss functions & metrics.
Optimisers (SGD, Adam, RMSProp).
Training loop (model.fit, model.evaluate, model.predict).
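Putting the building blocks together, a hedged end-to-end sketch on MNIST (the optimiser choice, dropout rate, and epoch count are illustrative, not prescriptive):

import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0     # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",                         # or "sgd" / "rmsprop"
              loss="sparse_categorical_crossentropy",   # integer labels
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5, validation_split=0.1)
test_loss, test_acc = model.evaluate(x_test, y_test)
predictions = model.predict(x_test[:5])                 # per-class probabilities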
Goal: Be able to build, train, and evaluate a simple neural network.
Phase 2: Deep Learning Essentials
Data handling
Use ImageDataGenerator / tf.data pipelines.
Normalisation, one-hot encoding, shuffling, batching.
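One way to express these steps as a tf.data pipeline; a sketch using MNIST again (the batch size and shuffle buffer are arbitrary):

import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
num_classes = 10

def preprocess(image, label):
    image = tf.cast(image, tf.float32) / 255.0                        # normalisation
    label = tf.one_hot(tf.cast(label, tf.int32), depth=num_classes)   # one-hot encoding
    return image, label

train_ds = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
            .map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)
            .shuffle(10_000)                                          # shuffling
            .batch(32)                                                # batching
            .prefetch(tf.data.AUTOTUNE))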
Model Architectures
Convolutional Neural Networks (CNNs): for image data.
Recurrent Neural Networks (RNNs): LSTM/GRU for sequences.
Embedding layers for text.
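A minimal sketch of each architecture family (the filter counts, vocabulary size, and unit counts are placeholder values):

import tensorflow as tf
from tensorflow.keras import layers

# CNN for 28x28 grayscale images
cnn = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),            # height, width, channels
    layers.Conv2D(32, 3, activation="relu"),      # learn local image features
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])

# RNN with an Embedding layer for sequences of token ids
text_model = tf.keras.Sequential([
    tf.keras.Input(shape=(None,), dtype="int32"),        # variable-length sequences
    layers.Embedding(input_dim=10_000, output_dim=64),   # 10k-word vocabulary (illustrative)
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),
])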
Regularisation & generalisation
Dropout, L2 regularisation, batch normalisation.
Early stopping & checkpointing.
Goal: Understand how to handle different data types (images, text, time-series).
Phase 3: Advanced Keras Usage
Functional API
Building non-sequential models (multi-input, multi-output, skip connections).
Residual connections (ResNet-style).
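A small Functional API sketch with a residual-style skip connection (the layer widths are arbitrary):

import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(64,))
x = layers.Dense(64, activation="relu")(inputs)
x = layers.Dense(64)(x)
x = layers.Add()([x, inputs])              # skip connection: add the input back in
x = layers.Activation("relu")(x)
outputs = layers.Dense(10, activation="softmax")(x)

model = tf.keras.Model(inputs=inputs, outputs=outputs)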
Customisation
Writing custom layers and loss functions.
Using callbacks (ReduceLROnPlateau, TensorBoard).
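A sketch of a custom layer, a custom loss, and the callbacks mentioned above (the layer, loss, and log directory are made-up examples):

import tensorflow as tf

class ScaleLayer(tf.keras.layers.Layer):
    # Custom layer: learns a single trainable scaling factor
    def build(self, input_shape):
        self.scale = self.add_weight(shape=(), initializer="ones", trainable=True)

    def call(self, inputs):
        return inputs * self.scale

def huber_like_loss(y_true, y_pred, delta=1.0):
    # Custom loss: quadratic for small errors, linear for large ones
    error = y_true - y_pred
    small = tf.abs(error) <= delta
    return tf.where(small, 0.5 * tf.square(error),
                    delta * (tf.abs(error) - 0.5 * delta))

callbacks = [
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=2),
    tf.keras.callbacks.TensorBoard(log_dir="logs"),
]
# model.compile(optimizer="adam", loss=huber_like_loss)
# model.fit(..., callbacks=callbacks)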
Transfer Learning & Fine-tuning
Using pre-trained models (VGG, ResNet, MobileNet).
Feature extraction vs fine-tuning.
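A feature-extraction sketch using a pre-trained MobileNetV2 backbone (the input size and single-unit head are illustrative; the ImageNet weights are downloaded on first use):

import tensorflow as tf
from tensorflow.keras import layers

base = tf.keras.applications.MobileNetV2(input_shape=(160, 160, 3),
                                         include_top=False,
                                         weights="imagenet")
base.trainable = False                        # feature extraction: freeze the backbone

model = tf.keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),    # e.g. a binary classification head
])
# For fine-tuning, later set base.trainable = True and re-compile with a low learning rate.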
Goal: Gain flexibility in designing complex and efficient models.
Phase 4: Research & Production
Advanced architectures
Attention mechanisms.
Transformer models with Keras.
Autoencoders & Variational Autoencoders (VAEs).
Generative Adversarial Networks (GANs).
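As a first taste of this phase, a minimal autoencoder sketch for flattened 28x28 images (the 32-dimensional bottleneck is an arbitrary choice):

import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(784,))
encoded = layers.Dense(32, activation="relu")(inputs)       # compressed representation
decoded = layers.Dense(784, activation="sigmoid")(encoded)  # reconstruction

autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
# autoencoder.fit(x_train_flat, x_train_flat, ...)  # target is the input itself (placeholder array name)

# Keras also ships the attention building block used in Transformers:
attention = layers.MultiHeadAttention(num_heads=2, key_dim=32)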
Deployment & Scaling
Export models (SavedModel, HDF5).
Serving models with TensorFlow Serving.
Converting to TensorFlow Lite / ONNX for edge deployment.
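An export sketch (a tiny stand-in model is built here so the snippet runs on its own; the file and directory names are placeholders):

import tensorflow as tf

# Stand-in model purely for demonstration
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)), tf.keras.layers.Dense(1)])

model.save("my_model.keras")               # native Keras format
model.save("my_model.h5")                  # legacy HDF5 format

# SavedModel directory, the format TensorFlow Serving consumes
model.export("saved_model_dir")            # Keras 3; on older versions use tf.saved_model.save(model, ...)

# Convert to TensorFlow Lite for edge devices
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
with open("model.tflite", "wb") as f:
    f.write(converter.convert())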
Best practices
Hyperparameter tuning (Keras Tuner).
Experiment tracking (TensorBoard, Weights & Biases).
Reproducibility & version control.
Goal: Be able to adapt Keras for research experiments and deployment.
Phase 5: Capstone Projects
Choose projects aligned with your interests:
Computer Vision: Image classifier (e.g. medical imaging).
NLP: Sentiment analysis, machine translation, or text summarisation.
Time Series: Stock prediction, energy demand forecasting.
Generative AI: Train a GAN or VAE on custom data.
Hybrid Models: Multi-modal input (e.g. text + image).
Suggested Timeline
Weeks 1–2: Phase 1 (basics).
Weeks 3–4: Phase 2 (CNNs, RNNs, embeddings).
Weeks 5–6: Phase 3 (functional API, transfer learning).
Weeks 7–9: Phase 4 (advanced models & deployment).
Weeks 10+: Capstone project.
👉 Along the way, use the official Keras tutorials: 🔗 https://keras.io/examples/