Embedded Machine Learning Courseware

Welcome to the Edge Impulse open courseware for embedded machine learning! This repository houses a collection of slides, reading material, project prompts, and sample questions to get you started creating your own embedded machine learning course. You will also have access to videos that cover much of the material. You are welcome to share these videos with your class, either by showing them in the classroom or by having students watch them on their own time.

This repository is part of the Edge Impulse University Program. Please see this page for more information on how to join: edgeimpulse.com/university.

How to Use This Repository

Please note that the content in this repository is not intended to be a full semester-long course. Rather, you are encouraged to pull from the modules, rearrange the ordering, make modifications, and use them as you see fit to integrate the content into your own curriculum.

For example, many of the lectures and examples from the TinyML Courseware (given by [3]) go into detail about how TensorFlow Lite works along with advanced topics like quantization. Feel free to skip those sections if you would just like an overview of embedded machine learning and how to use it with Edge Impulse.

In general, content from [3] covers theory and hands-on Python coding with Jupyter Notebooks to demonstrate these concepts. Content from [1] and [2] covers hands-on demonstrations and projects that use Edge Impulse to deploy machine learning models to embedded systems.

Content is divided into separate modules. Each module is intended to cover about a week's worth of material, and each section within a module contains about 60 minutes of presentation material. Modules also contain example quiz/test questions, practice problems, and hands-on assignments.

If you would like to see more content than what is available in this repository, please refer to the Harvard TinyMLedu site for additional course material.

License

Unless otherwise noted, slides, sample questions, and project prompts are released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. You are welcome to use and modify them for educational purposes.

The YouTube videos in this repository are shared via the standard YouTube license. You are allowed to show them to your class or provide links for students (and others) to watch.

Professional Development

Much of the material found in this repository is curated from a collection of online courses with permission from the original creators. You are welcome to take the courses (as professional development) to learn the material in a guided fashion or refer students to the courses for additional learning opportunities.

Prerequisites

Students should be familiar with the following topics to complete the example questions and hands-on assignments:

  • Algebra
    • Solving linear equations
  • Probability and Statistics
    • Expressing probabilities of independent events
    • Normal distributions
    • Mean and median
  • Programming
    • Arduino/C++ programming (conditionals, loops, arrays/buffers, pointers, functions)
    • Python programming (conditionals, loops, arrays, functions, NumPy)

Optional prerequisites: many machine learning concepts can be quite advanced. While these advanced topics are briefly discussed in the slides and videos, they are not required for quiz questions and hands-on projects. If you would like to dig deeper into such concepts in your course, students may need to be familiar with the following:

  • Linear algebra
    • Matrix addition, subtraction, and multiplication
    • Dot product
    • Matrix transposition and inversion
  • Calculus
    • The derivative and chain rule are important for backpropagation (a part of model training)
    • Integrals and summation are used to find the area under a curve (AUC) for some model evaluations
  • Digital signal processing (DSP)
    • Sampling rate
    • Nyquist–Shannon sampling theorem
    • Fourier transform and fast Fourier transform (FFT)
    • Spectrogram
  • Machine learning
    • Logistic regression
    • Neural networks
    • Backpropagation
    • Gradient descent
    • Softmax function
    • K-means clustering
  • Programming
    • C++ programming (objects, callback functions)
    • Microcontrollers (hardware interrupts, direct memory access, double buffering, real-time operating systems)

Feedback and Contributing

If you find errors or have suggestions about how to make this material better, please let us know! You may create an issue describing your feedback or create a pull request if you are familiar with Git.

This repo uses automatic link checking and spell checking. If continuous integration (CI) fails after a push, find the dead links or misspelled words, fix them, and push again to re-trigger CI. If a dead link or misspelled word is a false positive (e.g., a purposely malformed link or a proper noun), add the link to the ignore list in .mlc_config.json or the word to .wordlist.txt.

Required Hardware and Software

Students will need a computer and Internet access to perform machine learning model training and hands-on exercises with the Edge Impulse Studio and Google Colab. Students are encouraged to use the Arduino Tiny Machine Learning kit to practice performing inference on an embedded device.

A Google account is required for Google Colab.

An Edge Impulse account is required for the Edge Impulse Studio.

Students will need to install the latest Arduino IDE.

Preexisting Datasets and Projects

This is a collection of preexisting datasets, Edge Impulse projects, and curation tools to help you get started with your own edge machine learning projects. Note that you can clone any public Edge Impulse project to your own account and/or download its dataset from the project's Dashboard.

Motion

Sound

Image Classification

Object Detection

Syllabus

Course Material

Module 1: Machine Learning on the Edge

This module provides an overview of machine learning and how it can be used to solve problems. It also introduces the idea of running machine learning algorithms on resource-constrained devices, such as microcontrollers. It covers some of the limitations and ethical concerns of machine learning. Finally, it demonstrates a few Python examples in Google Colab, which is used in the early modules to showcase how machine learning programming is commonly done with TensorFlow and Keras.
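
To give a flavor of these early Colab examples, here is a minimal sketch of learning a rule from data rather than programming it directly: a single-neuron Keras model fit to examples of y = 2x - 1. The dataset and layer sizes are illustrative placeholders, not the course's actual notebook code.

```python
import numpy as np
import tensorflow as tf

# Example inputs and outputs that follow the (unstated) rule y = 2x - 1
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

# A single dense neuron: the model must learn the mapping from the data
model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer="sgd", loss="mean_squared_error")
model.fit(xs, ys, epochs=500, verbose=0)

print(model.predict(np.array([[10.0]])))  # expect a value close to 19
```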

Learning Objectives

  1. Describe the differences between artificial intelligence, machine learning, and deep learning
  2. Provide examples of how machine learning can be used to solve problems (that traditional deterministic programming cannot)
  3. Provide examples of how embedded machine learning can be used to solve problems (where other forms of machine learning would be limited or inappropriate)
  4. Describe the limitations of machine learning
  5. Describe the ethical concerns of machine learning
  6. Describe the differences between supervised and unsupervised machine learning

Section 1: Machine Learning on the Edge

Lecture Material
ID Description Links Attribution
1.1.1 What is machine learning video slides [1]
1.1.2 Machine learning on embedded devices video slides [1]
1.1.3 What is tiny machine learning slides [3]
1.1.4 Tinyml case studies doc [3]
1.1.5 How do we enable tinyml slides [3]
Exercises and Problems
ID Description Links Attribution
1.1.7 Example assessment questions doc [3]

Section 2: Limitations and Ethics

Lecture Material
ID Description Links Attribution
1.2.1 Limitations and ethics video slides [1]
1.2.2 What am I building? slides [3]
1.2.3 Who am I building this for? slides [3]
1.2.4 What are the consequences? slides [3]
1.2.5 The limitations of machine learning blog
1.2.6 The future of AI; bias amplification and algorithm determinism blog
Exercises and Problems
ID Description Links Attribution
1.2.7 Example assessment questions doc [3]

Section 3: Getting Started with Colab

Lecture Material
ID Description Links Attribution
1.3.1 Getting Started with Google Colab video
1.3.2 Intro to colab slides [3]
1.3.3 Welcome to Colab! colab
1.3.4 Colab tips doc [3]
1.3.5 Why TensorFlow? video
1.3.6 Sample tensorflow code doc [3]
Exercises and Problems
ID Description Links Attribution
1.3.7 101 exercises for Python fundamentals colab

Module 2: Getting Started with Deep Learning

This module provides an overview of neural networks and how they can be used to make predictions. Simple examples are given in Python (Google Colab) for students to play with and learn from. If you do not wish to explore basic machine learning with Keras in Google Colab, you may skip this module and move on to using the Edge Impulse graphical environment. Note that some later exercises rely on Google Colab for curating data, visualizing neural networks, etc.
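
As a rough illustration of what the Colab exercises in this module build toward, the sketch below trains a small dense network in Keras with a held-out validation split. The synthetic dataset and layer sizes are placeholders, not the course's notebook code.

```python
import numpy as np
import tensorflow as tf

# Synthetic two-class dataset standing in for the course's Colab data
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 4)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("int32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hold out 20% as validation; diverging training/validation loss is one sign of overfitting
history = model.fit(X, y, epochs=20, validation_split=0.2, verbose=0)
print(history.history["val_accuracy"][-1])
```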

Learning Objectives

  1. Provide examples of how machine learning can be used to solve problems (that traditional deterministic programming cannot)
  2. Provide examples of how embedded machine learning can be used to solve problems (where other forms of machine learning would be limited or inappropriate)
  3. Describe challenges associated with running machine learning algorithms on embedded systems
  4. Describe broadly how a mathematical model can be used to generalize trends in data
  5. Explain how the training process results from minimizing a loss function
  6. Describe why datasets should be broken up into training, validation, and test sets
  7. Explain how overfitting occurs and how to identify it
  8. Demonstrate the ability to train a dense neural network using Keras and TensorFlow

Section 1: Machine Learning Paradigm

Lecture Material
ID Description Links Attribution
2.1.1 The machine learning paradigm slides [3]
2.1.2 Finding patterns doc [3]
2.1.3 Thinking about loss slides [3]
2.1.4 Minimizing loss slides [3]
2.1.5 First neural network slides [3]
2.1.6 More neural networks doc [3]
2.1.7 Neural networks in action doc [3]
Exercises and Problems
ID Description Links Attribution
2.1.8 Exploring loss colab [3]
2.1.9 Minimizing loss colab [3]
2.1.10 Linear regression colab [3]
2.1.11 Solution linear regression doc [3]
2.1.12 Example assessment questions doc [3]

Section 2: Building Blocks of Deep Learning

Lecture Material
ID Description Links Attribution
2.2.1 Introduction to neural networks slides [1]
2.2.2 Initialization and learning doc [3]
2.2.3 Understanding neurons in code slides [3]
2.2.4 Neural network in code doc [3]
2.2.5 Introduction to classification slides [3]
2.2.6 Training validation and test data slides [3]
2.2.7 Realities of coding doc [3]
Exercises and Problems
ID Description Links Attribution
2.2.8 Neurons in action colab [3]
2.2.9 Multi layer neural network colab [3]
2.2.10 Dense neural network colab [3]
2.2.11 Challenge: explore neural networks colab [3]
2.2.12 Solution: explore neural networks doc [3]
2.2.13 Example assessment questions doc [3]

Section 3: Embedded Machine Learning Challenges

Lecture Material
ID Description Links Attribution
2.3.1 Challenges for tinyml a slides [3]
2.3.2 Challenges for tinyml b slides [3]
2.3.3 Challenges for tinyml c slides [3]
2.3.4 Challenges for tinyml d slides [3]
Exercises and Problems
ID Description Links Attribution
2.3.5 Example assessment questions doc [3]

Module 3: Machine Learning Workflow

In this module, students will get an understanding of how data is collected and used to train a machine learning model. They will have the opportunity to collect their own dataset, upload it to Edge Impulse, and train a model using the graphical interface. From there, they will learn how to evaluate a model using a confusion matrix to calculate precision, recall, accuracy, and F1 scores.
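
For instance, given a binary confusion matrix, the evaluation metrics mentioned above can be computed as in this short sketch (the counts are made up for illustration):

```python
# Hypothetical confusion-matrix counts for a binary classifier
tp, fp = 45, 5    # predicted positive: correct / incorrect
fn, tn = 10, 40   # predicted negative: incorrect / correct

accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)            # of predicted positives, how many were right
recall    = tp / (tp + fn)            # of actual positives, how many were found
f1        = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```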

Learning Objectives

  1. Provide examples of how embedded machine learning can be used to solve problems (where other forms of machine learning would be limited or inappropriate)
  2. Describe challenges associated with running machine learning algorithms on embedded systems
  3. Describe why datasets should be broken up into training, validation, and test sets
  4. Explain how overfitting occurs and how to identify it
  5. Describe broadly what happens during machine learning model training
  6. Describe the difference between model training and inference
  7. Describe why test and validation datasets are needed
  8. Evaluate a model's performance by calculating accuracy, precision, recall, and F1 scores
  9. Demonstrate the ability to train a machine learning model with a given dataset and evaluate its performance

Section 1: Machine Learning Workflow

Lecture Material
ID Description Links Attribution
3.1.1 Tinyml applications slides [3]
3.1.2 Role of sensors doc [3]
3.1.3 Machine learning lifecycle slides [3]
3.1.4 Machine learning lifecycle doc [3]
3.1.5 Machine learning workflow doc [3]
Exercises and Problems
ID Description Links Attribution
3.1.6 Example assessment questions doc [3]

Section 2: Data Collection

Lecture Material
ID Description Links Attribution
3.2.1 Introduction to data engineering doc [3]
3.2.2 What is data engineering slides [3]
3.2.3 Using existing datasets slides [3]
3.2.4 Responsible data collection slides [3]
3.2.5 Getting started with edge impulse video slides [1]
3.2.6 Data collection with edge impulse video slides [1]
Exercises and Problems
ID Description Links Attribution
3.2.7 Example assessment questions doc [1]

Section 3: Model Training and Evaluation

Lecture Material
ID Description Links Attribution
3.3.1 Feature extraction from motion data video slides [1]
3.3.2 Feature selection in Edge Impulse video tutorial [1]
3.3.3 Machine learning pipeline video slides [1]
3.3.4 Model training in edge impulse video slides [1]
3.3.5 How to evaluate a model video slides [1]
3.3.6 Underfitting and overfitting video slides [1]
Exercises and Problems
ID Description Links Attribution
3.3.7 Project: Motion detection doc [1]
3.3.8 Example assessment questions doc [1]

Module 4: Model Deployment

This module covers why quantization is important for models running on embedded systems and some of the limitations. It also shows how to use a model for inference and set an appropriate threshold to minimize false positives or false negatives, depending on the system requirements. Finally, it covers the steps to deploy a model trained on Edge Impulse to an Arduino board.
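
As a rough sketch of the post-training quantization step, the example below uses the standard TensorFlow Lite converter API; the tiny model and random calibration data are placeholders, not Edge Impulse's deployment pipeline.

```python
import numpy as np
import tensorflow as tf

# Placeholder model standing in for a trained network
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Representative samples let the converter calibrate integer ranges
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]        # enable quantization
converter.representative_dataset = representative_dataset
tflite_model = converter.convert()                          # quantized model, small enough for MCUs

with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```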

Learning Objectives

  1. Provide examples of how embedded machine learning can be used to solve problems (where other forms of machine learning would be limited or inappropriate)
  2. Describe challenges associated with running machine learning algorithms on embedded systems
  3. Describe broadly what happens during machine learning model training
  4. Describe the difference between model training and inference
  5. Demonstrate the ability to perform inference on an embedded system to solve a problem

Section 1: Quantization

Lecture Material
ID Description Links Attribution
4.1.1 Why quantization doc [3]
4.1.2 Post-training quantization slides [3]
4.1.3 Quantization-aware training slides [3]
4.1.4 TensorFlow vs TensorFlow Lite slides [3]
4.1.5 TensorFlow computational graph doc [3]
Exercises and Problems
ID Description Links Attribution
4.1.6 Post-training quantization colab [3]
4.1.7 Example assessment questions doc [3]

Section 2: Embedded Microcontrollers

Lecture Material
ID Description Links Attribution
4.2.1 Embedded systems slides [3]
4.2.2 Diversity of embedded systems doc [3]
4.2.3 Embedded computing hardware slides [3]
4.2.4 Embedded microcontrollers doc [3]
4.2.5 TinyML kit peripherals doc [3]
4.2.6 TinyML kit peripherals slides [3]
4.2.7 Arduino core, frameworks, mbedOS, and bare metal doc [3]
4.2.8 Embedded ML software slides [3]
Exercises and Problems
ID Description Links Attribution
4.2.9 Example assessment questions doc [3]

Section 3: Deploying a Model to an Arduino Board

Lecture Material
ID Description Links Attribution
4.3.1 Using a model for inference video slides [1]
4.3.2 Testing inference with a smartphone video [1]
4.3.3 Deploy model to arduino video slides [1]
4.3.4 Deploy model to Arduino tutorial
Exercises and Problems
ID Description Links Attribution
4.3.5 Example assessment questions doc [1]

Module 5: Anomaly Detection

This module describes several approaches to anomaly detection and why we might want to use it in embedded machine learning.
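
One of the approaches covered, clustering-based anomaly detection, can be sketched as follows: fit k-means on "normal" data and flag new samples whose distance to the nearest cluster center exceeds a threshold. The synthetic data and cutoff below are for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 3))      # "normal" operating data

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(normal)

def anomaly_score(samples):
    # Distance from each sample to its nearest cluster center
    return kmeans.transform(samples).min(axis=1)

threshold = np.percentile(anomaly_score(normal), 99)         # arbitrary cutoff for this sketch
new_samples = np.array([[0.1, -0.2, 0.3], [8.0, 8.0, 8.0]])
print(anomaly_score(new_samples) > threshold)                # expect [False  True]
```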

Learning Objectives

  1. Provide examples of how embedded machine learning can be used to solve problems (where other forms of machine learning would be limited or inappropriate)
  2. Describe challenges associated with running machine learning algorithms on embedded systems
  3. Describe broadly what happens during machine learning model training
  4. Describe the difference between model training and inference
  5. Demonstrate the ability to perform inference on an embedded system to solve a problem
  6. Describe how anomaly detection can be used to solve problems

Section 1: Introduction to Anomaly Detection

Lecture Material
ID Description Links Attribution
5.1.1 Introduction to anomaly detection doc [3]
5.1.2 What is anomaly detection? slides [3]
5.1.3 Challenges with anomaly detection slides [3]
5.1.4 Industry and TinyML doc [3]
5.1.5 Anomaly detection datasets slides [3]
5.1.6 MIMII dataset paper doc [3]
5.1.7 Real and synthetic data doc [3]
Exercises and Problems
ID Description Links Attribution
5.1.8 Example assessment questions doc [3]

Section 2: K-means Clustering and Autoencoders

Lecture Material
ID Description Links Attribution
5.2.1 K-means clustering slides
5.2.2 Autoencoders slides [3]
5.2.3 Autoencoder model architecture doc [3]
Exercises and Problems
ID Description Links Attribution
5.2.4 K-means clustering for anomaly detection colab [3]
5.2.5 Autoencoders for anomaly detection colab [3]
5.2.6 Challenge autoencoders colab [3]
5.2.7 Solution autoencoders doc [3]
5.2.8 Example assessment questions doc [3]

Section 3: Anomaly Detection in Edge Impulse

Lecture Material
ID Description Links Attribution
5.3.1 Anomaly detection in edge impulse video slides [1]
5.3.2 Industrial embedded machine learning demo video [1]
Exercises and Problems
ID Description Links Attribution
5.3.3 Project: Motion classification and anomaly detection doc [1]

Module 6: Image Classification with Deep Learning

This module introduces the concept of image classification, why it is important in machine learning, and how it can be used to solve problems. Convolution and pooling operations are covered, which form the building blocks for convolutional neural networks (CNNs). Saliency maps and Grad-CAM are offered as two techniques for visualizing the inner workings of CNNs. Data augmentation is introduced as a method for generating new data from existing data to train a more robust model. Finally, transfer learning is shown as a way of reusing pretrained models.
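
As a minimal sketch of the convolution-and-pooling building blocks described above, here is a small Keras CNN for image classification; the input shape and layer sizes are illustrative, and the course's own Colabs cover the full training workflow.

```python
import tensorflow as tf

# Small CNN: convolution extracts local features, pooling downsamples the feature maps
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation="relu", input_shape=(96, 96, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),   # e.g., 3 image classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```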

Learning Objectives

  1. Describe the differences between image classification, object detection, and image segmentation
  2. Describe how embedded computer vision can be used to solve problems
  3. Describe how convolution and pooling operations are used to filter and downsample images
  4. Describe how convolutional neural networks differ from dense neural networks and how they can be used to solve computer vision problems

Section 1: Image Classification

Lecture Material
ID Description Links Attribution
6.1.1 What is computer vision? video slides [2]
6.1.2 Overview of digital images video slides [2]
6.1.3 Dataset collection video slides [2]
6.1.4 Overview of image classification video slides [2]
6.1.5 Training an image classifier with Keras video [2]
Exercises and Problems
ID Description Links Attribution
6.1.6 Example assessment questions doc [2]

Section 2: Convolutional Neural Network (CNN)

Lecture Material
ID Description Links Attribution
6.2.1 Image convolution video slides [2]
6.2.2 Pooling layer video slides [2]
6.2.3 Convolutional neural network video slides [2]
6.2.4 CNN in keras slides [3]
6.2.5 Mapping features to labels doc [3]
6.2.6 Training a CNN in Edge Impulse video doc [2]
Exercises and Problems
ID Description Links Attribution
6.2.7 Exploring convolutions colab [3]
6.2.8 Convolutional neural networks colab [3]
6.2.9 Challenge: CNN colab [3]
6.2.10 Solution: CNN doc [3]
6.2.11 Example assessment questions doc [2]

Section 3: Analyzing CNNs, Data Augmentation, and Transfer Learning

Lecture Material
ID Description Links Attribution
6.3.1 CNN visualizations video slides [2]
6.3.2 Data augmentation video slides [2]
6.3.3 TensorFlow datasets doc [3]
6.3.4 Avoiding overfitting with data augmentation slides [3]
6.3.5 Dropout regularization doc [3]
6.3.6 Exploring loss functions and optimizers doc [3]
6.3.7 Transfer learning and MobileNet video slides [2]
6.3.8 Transfer learning with Edge Impulse video slides [2]
Exercises and Problems
ID Description Links Attribution
6.3.9 Saliency and Grad-CAM colab [2]
6.3.10 Image transforms demo colab [2]
6.3.11 Challenge: image data augmentation colab [2]
6.3.12 Solution: image data augmentation colab [2]
6.3.13 Example assessment questions doc [2]

Module 7: Object Detection and Image Segmentation

In this module, we look at object detection, how it differs from image classification, and the unique set of problems it solves. We also briefly examine image segmentation and discuss constrained object detection. Finally, we look at responsible AI as it relates to computer vision and AI at large.

Learning Objectives

  1. Describe the differences between image classification, object detection, and image segmentation
  2. Describe how embedded computer vision can be used to solve problems
  3. Describe how image segmentation can be used to solve problems
  4. Describe how convolution and pooling operations are used to filter and downsample images
  5. Describe how convolutional neural networks differ from dense neural networks and how they can be used to solve computer vision problems
  6. Describe the limitations of machine learning
  7. Describe the ethical concerns of machine learning
  8. Describe the requirements for collecting a good dataset and what factors can create a biased dataset

Section 1: Introduction to Object Detection

Lecture Material
ID Description Links Attribution
7.1.1 Introduction to object detection video slides [2]
7.1.2 Object detection performance metrics video slides [2]
7.1.3 Object detection models video slides [2]
7.1.4 Training an object detection model video slides [2]
7.1.5 Digging deeper into object detection doc [2]
Exercises and Problems
ID Description Links Attribution
7.1.6 Example assessment questions doc [2]

Section 2: Image Segmentation and Constrained Object Detection

Lecture Material
ID Description Links Attribution
7.2.1 Image segmentation video slides [2]
7.2.2 Multi-stage Inference Demo video [2]
7.2.3 Reusing Representations with Mat Kelcey video [2]
7.2.4 tinyML Talks: Constrained Object Detection on Microcontrollers with FOMO video
Exercises and Problems
ID Description Links Attribution
7.2.5 Project: Deploy an object detection model doc [2]
7.2.6 Example assessment questions doc [2]

Section 3: Responsible AI

Lecture Material
ID Description Links Attribution
7.3.1 Dataset collection slides [3]
7.3.2 The many faces of bias doc [3]
7.3.3 Biased datasets slides [3]
7.3.4 Model fairness slides [3]
Exercises and Problems
ID Description Links Attribution
7.3.5 Google what if tool colab [3]
7.3.6 Example assessment questions doc [3]

Module 8: Keyword Spotting

In this module, we create a functioning keyword spotting (also known as "wake word detection") system. To do so, we introduce several concepts unique to audio digital signal processing and combine them with image classification techniques.
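
To give a flavor of the audio DSP involved, the sketch below computes MFCC features from a short synthetic tone. Librosa is used here as one common choice of library, and the parameters are illustrative rather than those used in the course notebooks or by Edge Impulse.

```python
import numpy as np
import librosa

# One second of a synthetic 440 Hz tone standing in for a recorded keyword
sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
audio = 0.5 * np.sin(2 * np.pi * 440 * t).astype(np.float32)

# MFCCs summarize the spectrogram on a perceptual (mel) scale; the resulting
# 2D feature matrix can be fed to a CNN much like a small grayscale image
mfccs = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13, n_fft=512, hop_length=256)
print(mfccs.shape)  # (13, number of frames)
```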

Learning Objectives

  1. Describe how machine learning can be used to classify sounds
  2. Describe how sound classification can be used to solve problems
  3. Describe the major components in a keyword spotting system
  4. Demonstrate the ability to train and deploy a sound classification system

Section 1: Audio Classification

Lecture Material
ID Description Links Attribution
8.1.1 Introduction to audio classification slides [1]
8.1.2 Audio data capture slides [1]
8.1.3 What is keyword spotting slides [3]
8.1.4 Keyword spotting challenges slides [3]
8.1.5 Keyword spotting application architecture doc [3]
8.1.6 Keyword spotting datasets slides [3]
8.1.7 Keyword spotting dataset creation doc [3]
Exercises and Problems
ID Description Links Attribution
8.1.8 Example assessment questions doc [1] [3]

Section 2: Spectrograms and MFCCs

Lecture Material
ID Description Links Attribution
8.2.1 Keyword spotting data collection slides [3]
8.2.2 Spectrograms and mfccs doc [3]
8.2.3 Keyword spotting model slides [3]
8.2.4 Audio feature extraction slides [1]
8.2.5 Review of convolutional neural networks slides [1]
8.2.6 Modifying the neural network slides [1]
Exercises and Problems
ID Description Links Attribution
8.2.7 Spectrograms and mfccs colab [3]
8.2.8 Example assessment questions doc [1] [3]

Section 3: Deploying a Keyword Spotting System

Lecture Material
ID Description Links Attribution
8.3.1 Deploy audio classifier slides [1]
8.3.2 Implementation strategies slides [1]
8.3.3 Metrics for keyword spotting slides [3]
8.3.4 Streaming audio slides [3]
8.3.5 Cascade architectures slides [3]
8.3.6 Keyword spotting in the big picture doc [3]
Exercises and Problems
ID Description Links Attribution
8.3.7 Project: Sound classification doc [1]
8.3.8 Example assessment questions doc [1] [3]

Attribution

[1] Slides and written material for "Introduction to Embedded Machine Learning" by Edge Impulse are licensed under CC BY-NC-SA 4.0

[2] Slides and written material for "Computer Vision with Embedded Machine Learning" by Edge Impulse are licensed under CC BY-NC-SA 4.0

[3] Slides and written material for "TinyML Courseware" by Prof. Vijay Janapa Reddi of Harvard University are licensed under CC BY-NC-SA 4.0
