
# TensorFlow in Python — Full Syllabus

## Course goals

- Understand tensors, automatic differentiation, and computational graphs in TF2.
- Build, train, evaluate, and deploy deep learning models using tf.keras and custom training loops.
- Optimise input pipelines and training performance on CPU/GPU/TPU.
- Apply TF to vision, text, time series, recommendation, and probabilistic modelling.
- Package, serve, and monitor models in production.

## Prerequisites

- Python (functions, classes, virtual environments), NumPy, basic pandas & plotting.
- Maths: linear algebra (vectors/matrices), calculus (gradients), probability.
- ML basics: train/validation/test splits; losses and optimisers; overfitting and regularisation.

## Environment & tools

- Python 3.x, virtualenv/conda; Jupyter/Colab or VS Code.
- tensorflow (GPU build if available), tensorflow-datasets, matplotlib, pandas, scikit-learn.
- Optional: tensorflow-addons, tensorflow-probability, tensorflow-text, keras-nlp, tensorflow-recommenders, ml-dtypes.

## Core programme (12 weeks)

### Week 1 — Setup & Tensor fundamentals

- Outcomes: install TF; manipulate tensors; grasp eager execution and broadcasting.
- Topics: installing TF CPU/GPU; tf.Tensor basics; dtypes/shapes; slicing; broadcasting; tf.math (warm-up sketch below).
- Lab: re-implement basic linear regression with tensors and manual gradient checks.
- Assessment: short quiz + notebook.
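
For orientation, a minimal warm-up in the spirit of this week; the values and shapes are illustrative only:

```python
import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # rank-2 tensor, shape (2, 2)
row = tf.constant([10.0, 20.0])            # shape (2,)

# Broadcasting stretches `row` across both rows of `x`.
print(x + row)                 # shape (2, 2)
print(x.dtype, x.shape)        # dtype/shape introspection
print(x[:, 0])                 # slicing: the first column
print(tf.math.reduce_mean(x))  # a tf.math reduction
```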

### Week 2 — AutoDiff & Custom training loops

- Outcomes: use tf.GradientTape; write a full training step from scratch.
- Topics: tf.Variable; tf.GradientTape; custom losses/metrics; numerical stability.
- Lab: softmax regression on MNIST with a from-scratch loop (no Keras Model.fit); see the sketch below.
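
A minimal sketch of the custom-loop pattern, on a toy linear-regression problem rather than MNIST so it stays self-contained; the data and hyper-parameters are placeholders:

```python
import tensorflow as tf

# Synthetic data for a linear model y = xW + b.
X = tf.random.normal([256, 3])
y_true = tf.reduce_sum(X, axis=1, keepdims=True)

W = tf.Variable(tf.random.normal([3, 1]))
b = tf.Variable(tf.zeros([1]))
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        y_pred = x @ W + b
        loss = tf.reduce_mean(tf.square(y - y_pred))  # MSE
    grads = tape.gradient(loss, [W, b])               # autodiff
    optimizer.apply_gradients(zip(grads, [W, b]))
    return loss

for step in range(100):
    loss = train_step(X, y_true)
print("final loss:", float(loss))
```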

### Week 3 — Keras models, layers & training API

- Outcomes: build models with the Sequential/Functional APIs; compile/fit/evaluate.
- Topics: tf.keras.layers; the Functional API; model summaries; callbacks (EarlyStopping, ReduceLROnPlateau, ModelCheckpoint); saving as SavedModel vs H5.
- Lab: MLP on Fashion-MNIST; experiment with dropout/batch norm; learning-rate schedules (skeleton below).
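
A Functional-API skeleton with the callbacks named above. The Fashion-MNIST shapes (28×28 greyscale, 10 classes) are assumed, and the fit call is left commented out since loading the data is the lab's job:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(28, 28))
x = tf.keras.layers.Flatten()(inputs)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # check shapes early (see the pitfalls section)

callbacks = [
    tf.keras.callbacks.EarlyStopping(patience=3, restore_best_weights=True),
    tf.keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=2),
    # the .keras format needs a recent TF2; use "best.h5" on older versions
    tf.keras.callbacks.ModelCheckpoint("best.keras", save_best_only=True),
]
# model.fit(x_train, y_train, validation_split=0.1, epochs=20, callbacks=callbacks)
```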

### Week 4 — Data pipelines with tf.data & TFRecords

- Outcomes: build performant input pipelines.
- Topics: Dataset.from_tensor_slices/from_generator; map/batch/prefetch/cache; AUTOTUNE; interleave; TFRecords + tf.train.Example; deterministic vs non-deterministic ordering.
- Lab: convert CIFAR-10 to TFRecords; profile throughput improvements (pipeline sketch below).
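
The canonical pipeline shape, sketched on in-memory random tensors; swap in a TFRecord reader for the actual lab:

```python
import tensorflow as tf

AUTOTUNE = tf.data.AUTOTUNE

# Stand-in data with CIFAR-10-like shapes.
images = tf.random.uniform([1000, 32, 32, 3], maxval=256, dtype=tf.int32)
labels = tf.random.uniform([1000], maxval=10, dtype=tf.int32)

def preprocess(image, label):
    return tf.cast(image, tf.float32) / 255.0, label

ds = (tf.data.Dataset.from_tensor_slices((images, labels))
      .map(preprocess, num_parallel_calls=AUTOTUNE)
      .cache()               # cache once the cheap per-example work is done
      .shuffle(1000)         # placed after cache so each epoch reshuffles
      .batch(64)
      .prefetch(AUTOTUNE))   # overlap input work with training
```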

### Week 5 — CNNs for computer vision

- Outcomes: train CNNs; apply data augmentation & transfer learning.
- Topics: Conv/Pool layers; padding/stride; RandomFlip/RandomCrop/RandomContrast; transfer learning from keras.applications (e.g. ResNet50); fine-tuning vs feature extraction.
- Lab: CIFAR-10 baseline CNN, then fine-tune a pretrained network; confusion matrices & error analysis (skeleton below).
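
The feature-extraction half of the lab in skeleton form; the input size and 10-class head are placeholders for whatever dataset is used:

```python
import tensorflow as tf

base = tf.keras.applications.ResNet50(include_top=False,
                                      weights="imagenet",
                                      input_shape=(224, 224, 3),
                                      pooling="avg")
base.trainable = False  # feature extraction: freeze the backbone

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)  # keep batch-norm statistics frozen
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

# For fine-tuning, later unfreeze the top of `base` and recompile
# with a much smaller learning rate.
```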

### Week 6 — Sequence models: RNNs, LSTM/GRU & 1D CNNs

- Outcomes: build sequence models for text/time series.
- Topics: embeddings; masking; LSTM/GRU; teacher forcing; 1D CNNs; sequence padding/bucketing.
- Lab: IMDB sentiment classifier with embeddings + LSTM; compare to a 1D CNN (core model sketched below).
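
The embedding-plus-LSTM core of the lab; the vocabulary size and sequence length below are illustrative:

```python
import tensorflow as tf

VOCAB, MAXLEN = 20_000, 200  # placeholder sizes

model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAXLEN,)),
    tf.keras.layers.Embedding(VOCAB, 128, mask_zero=True),  # 0 = padding id
    tf.keras.layers.LSTM(64),                               # mask propagates
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
```

For the 1D-CNN comparison, swap the LSTM for Conv1D followed by GlobalMaxPooling1D and keep the rest unchanged.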

### Week 7 — Transformers with Keras

- Outcomes: implement and train Transformer-based models using high-level APIs.
- Topics: self-attention; positional encoding; keras-nlp text preprocessing/tokenisers; transfer via pretrained encoders; fine-tuning best practices.
- Lab: fine-tune a small Transformer for news classification; examine attention patterns (encoder-block sketch below).
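
To make the attention/residual/feed-forward pattern concrete before reaching for keras-nlp, here is one encoder block built from stock Keras layers; the dimensions are arbitrary:

```python
import tensorflow as tf

def encoder_block(d_model=128, num_heads=4, d_ff=512):
    inputs = tf.keras.Input(shape=(None, d_model))
    attn = tf.keras.layers.MultiHeadAttention(
        num_heads=num_heads, key_dim=d_model // num_heads)(inputs, inputs)
    x = tf.keras.layers.LayerNormalization()(inputs + attn)  # residual 1
    ff = tf.keras.Sequential([
        tf.keras.layers.Dense(d_ff, activation="relu"),
        tf.keras.layers.Dense(d_model),
    ])(x)
    out = tf.keras.layers.LayerNormalization()(x + ff)       # residual 2
    return tf.keras.Model(inputs, out)

block = encoder_block()
print(block(tf.random.normal([2, 10, 128])).shape)  # (2, 10, 128)
```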

### Week 8 — Regularisation, optimisation & experiment tracking

- Outcomes: improve generalisation; run controlled experiments.
- Topics: L1/L2, dropout, label smoothing, data augmentation; optimisers (SGD with momentum, Adam, AdamW, RMSProp); cosine/OneCycle LR schedules; gradient clipping; mixed precision (sketch below).
- Lab: hyper-parameter sweeps (manual or with KerasTuner); integrate TensorBoard (scalars, histograms, PR curves).
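
Three of these knobs in a few lines each; the step counts and rates are arbitrary placeholders:

```python
import tensorflow as tf

# Cosine learning-rate decay feeding an optimiser with gradient clipping.
schedule = tf.keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=1e-3, decay_steps=10_000)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule,
                                     clipnorm=1.0)  # clip each gradient's norm

# Label smoothing lives on the loss side.
loss = tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1)
```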

### Week 9 — Performance & distributed training

- Outcomes: scale training; profile bottlenecks.
- Topics: tf.function & AutoGraph; XLA; mixed precision (dtype policies); tf.distribute strategies (MirroredStrategy, MultiWorkerMirroredStrategy); TPUs (Cloud/Colab); checkpointing and determinism.
- Lab: port a Week-5 model to multi-GPU; profile with the TensorBoard profiler (sketch below).
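
The shape of the multi-GPU port, assuming a recent TF2; with zero or one GPU the same code simply runs on fewer replicas:

```python
import tensorflow as tf

tf.keras.mixed_precision.set_global_policy("mixed_float16")
strategy = tf.distribute.MirroredStrategy()
print("replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():  # variables and optimiser must be created in scope
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
        # keep the final activation in float32 for numerical stability
        tf.keras.layers.Activation("softmax", dtype="float32"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
# Remember to scale the global batch size with num_replicas_in_sync.
```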

### Week 10 — Model evaluation, robustness & responsible ML

- Outcomes: evaluate beyond accuracy; handle drift & bias.
- Topics: calibration; ROC/PR; class imbalance; robustness (augmentations, adversarial noise at a high level); dataset shift & drift monitors; fairness metrics; reproducibility (seeds, hashing datasets).
- Lab: build an evaluation suite; create a simple data-drift detector on validation features (metrics sketch below).
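
A taste of "beyond accuracy", using scikit-learn (already in the toolchain); the labels and scores below are synthetic stand-ins for a model's validation outputs:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score
from sklearn.calibration import calibration_curve

# Synthetic binary labels and imperfect probability scores.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_prob = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=1000), 0, 1)

print("ROC AUC:", roc_auc_score(y_true, y_prob))
print("PR  AUC:", average_precision_score(y_true, y_prob))

# Reliability-diagram data: well-calibrated scores track the diagonal.
frac_pos, mean_pred = calibration_curve(y_true, y_prob, n_bins=10)
for p, f in zip(mean_pred, frac_pos):
    print(f"predicted {p:.2f} -> observed {f:.2f}")
```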

### Week 11 — Deployment: Serving, TF Lite, TF.js & ONNX

- Outcomes: export and serve models across environments.
- Topics: SavedModel signatures; TensorFlow Serving REST/gRPC; batching; model versioning; TFLite conversion and quantisation (dynamic/int8); edge constraints; TF.js for the web; ONNX export basics.
- Lab: serve a CNN via TF Serving locally; convert a model to TFLite and test on sample inputs (export sketch below).
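
Export plus dynamic-range quantisation in miniature; the toy model and paths are placeholders (TF Serving expects the numbered version directory):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])

tf.saved_model.save(model, "serving/1")  # versioned dir for TF Serving

converter = tf.lite.TFLiteConverter.from_saved_model("serving/1")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # dynamic-range quant
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```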

### Week 12 — End-to-end pipelines with TFX (intro) & MLOps basics

- Outcomes: automate the data → train → evaluate → deploy cycle.
- Topics: TFX components (ExampleGen, Transform, Trainer, Evaluator, Pusher); metadata & lineage; CI for models; model registry; A/B or shadow deployments.
- Lab: build a minimal TFX pipeline locally; automate evaluation gates before “push”.

## Advanced electives (pick any 3–5)

- TensorFlow Probability (TFP) & Bayesian deep learning: variational inference, the reparameterisation trick, distributions, probabilistic layers, uncertainty estimation; a VAE with TFP.
- Reinforcement learning with TF-Agents: bandits, DQN/PPO; replay buffers; environment wrappers; logging & evaluation.
- Time series & forecasting: sliding windows with tf.data; seq2seq; probabilistic forecasting with TFP; evaluation (sMAPE, MASE).
- Recommendation systems with TensorFlow Recommenders (TFRS): two-tower retrieval, ranking tasks, negative sampling, evaluation at k, candidate generation.
- Object detection & segmentation: TF Object Detection API, data annotation formats, training/fine-tuning SSD/Faster R-CNN; U-Net/DeepLab for segmentation.
- Graph neural networks (GNNs): graph tensors, message passing, popular layers (GCN/GAT); datasets; training tricks.
- Audio & speech: spectrogram/log-mel features with tf.signal; models for keyword spotting; pre-trained audio embeddings.
- Large-scale training: pipeline-parallelism concepts; sharded datasets; checkpointing strategies; reading from cloud storage.

## Capstone project (3–4 weeks)

Choose a real-world problem and deliver:

- Proposal (problem, data, metrics, risks).
- Reproducible repo (env file, scripts, seed control).
- Model card (intended use, limitations, fairness/robustness notes).
- Deployment artefact (SavedModel + simple serving demo, or TFLite).
- Report (methods, experiments/ablations, results, error analysis, next steps).

### Example capstones

- Vision: defect detection with transfer learning; saliency maps for error analysis.
- NLP: multi-label news classifier with a Transformer; calibration and threshold tuning.
- Time series: probabilistic demand forecasting with TFP; decision-focused metrics.
- Recsys: retrieval + ranking with TFRS; offline vs simulated online metrics.

## Assessments & milestones

- Weekly labs (12): hands-on notebooks with ✅ pass/fail targets plus stretch goals.
- Two mini-projects (after Weeks 6 and 10): small end-to-end tasks (data → model → evaluation).
- Capstone: graded on problem framing (15%), methodology (35%), empirical rigour (30%), and deployment readiness (20%).

## Reading & reference (any recent edition)

- TensorFlow & Keras official guides and API docs.
- *Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow* — Aurélien Géron.
- *Deep Learning with Python* — François Chollet.
- *Probabilistic Machine Learning* — Kevin P. Murphy (for the TFP track).
- TensorBoard & TFX official tutorials.

## Common pitfalls & “gotchas”

- Silent shape mismatches → assert shapes and call model.summary() early.
- Data-input bottlenecks → always add prefetch(AUTOTUNE); watch CPU ↔ GPU utilisation.
- Non-determinism in experiments → set seeds, and control inter-op/intra-op threads when needed; see the sketch after this list.
- Metric leakage → compute metrics on held-out data; beware of data augmentation at eval time.
- Saving/loading → prefer SavedModel; keep preprocessing inside the graph when deploying.
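
A reproducibility preamble along those lines, assuming a recent TF2 (tf.keras.utils.set_random_seed needs ≥ 2.7, enable_op_determinism ≥ 2.8; older versions can seed Python/NumPy/TF individually instead):

```python
import tensorflow as tf

tf.keras.utils.set_random_seed(42)  # seeds Python, NumPy, and TF at once
tf.config.experimental.enable_op_determinism()  # deterministic kernels (slower)

# Pin thread pools for run-to-run stability; call before any ops execute.
tf.config.threading.set_inter_op_parallelism_threads(1)
tf.config.threading.set_intra_op_parallelism_threads(1)
```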

## Suggested progression checklist

1. Install TF; verify GPU/TPU if available.
2. Master tensors & GradientTape.
3. Get comfortable with the Keras Functional API + callbacks.
4. Build efficient tf.data pipelines and TFRecords.
5. Train/fine-tune CNNs; do serious error analysis.
6. Train an LSTM/Transformer; understand tokenisation & masking.
7. Use mixed precision + tf.distribute for speed.
8. Export a SavedModel and run TF Serving; convert to TFLite.
9. Automate evaluation gates; track experiments.
10. Deliver a robust capstone with a model card.
