
Improve Coral Edge TPU guide #8160

Merged
merged 4 commits into from
Feb 12, 2024
Changes from 2 commits
37 changes: 22 additions & 15 deletions docs/en/guides/coral-edge-tpu-on-raspberry-pi.md
@@ -1,26 +1,26 @@
---
comments: true
-description: Using Ultralytics with a Coral Edge TPU on a Raspberry Pi for increased inference performance.
+description: Guide on how to use Ultralytics with a Coral Edge TPU on a Raspberry Pi for increased inference performance.
keywords: Ultralytics, YOLOv8, Object Detection, Coral, Edge TPU, Raspberry Pi, embedded, edge compute, sbc, accelerator, mobile
---

-# Coral Edge TPU on a Raspberry Pi with Ultralytics YOLOv8. 🚀
+# Coral Edge TPU on a Raspberry Pi with Ultralytics YOLOv8 🚀

<p align="center">
<img width="800" src="https://images.ctfassets.net/2lpsze4g694w/5XK2dV0w55U0TefijPli1H/bf0d119d77faef9a5d2cc0dad2aa4b42/Edge-TPU-USB-Accelerator-and-Pi.jpg?w=800" alt="Raspberry Pi single board computer with USB Edge TPU accelerator">
</p>

## What is a Coral Edge TPU?

-The Coral USB Accelerator is a compact device that adds an Edge TPU coprocessor to your system. It enables low-power, high-performance ML inferencing for TensorFlow Lite models. Read more at the [Coral Edge TPU home page](https://coral.ai/products/accelerator).
+The Coral Edge TPU is a compact device that adds an Edge TPU coprocessor to your system. It enables low-power, high-performance ML inferencing for TensorFlow Lite models. Read more at the [Coral Edge TPU home page](https://coral.ai/products/accelerator).

-## Why?
+## Boost Raspberry Pi Model Performance with Coral Edge TPU

-Many people want to run their models on an embedded or mobile device such as a Raspberry Pi, since they are very power efficient and can be used in many different applications. However, the inference performance on these devices is usually poor even when using formats like `onnx` or `openvino`. The Coral Edge TPU is a great solution to this problem, since it can be used with a Raspberry Pi and accelerate inference performance greatly.
+Many people want to run their models on an embedded or mobile device such as a Raspberry Pi, since they are very power efficient and can be used in many different applications. However, the inference performance on these devices is usually poor even when using formats like [onnx](../integrations/onnx.md) or [openvino](../integrations/openvino.md). The Coral Edge TPU is a great solution to this problem, since it can be used with a Raspberry Pi and accelerate inference performance greatly.

-## Why this guide?
+## Edge TPU on Raspberry Pi with TensorFlow Lite (New) ⭐

-The [current guide](https://coral.ai/docs/accelerator/get-started/) by Coral on how to use the Edge TPU with a Raspberry Pi is outdated, and the current Coral Edge TPU runtime builds do not work with the current TensorFlow Lite runtime versions anymore. In addition to that, Google seems to have completely abandoned the Coral project, and there have not been any updates between 2021 and 2024. This guide will show you how to get the Edge TPU working with the latest versions of the TensorFlow Lite runtime and an updated Coral Edge TPU runtime on a Raspberry Pi single board computer (SBC).
+The [existing guide](https://coral.ai/docs/accelerator/get-started/) by Coral on how to use the Edge TPU with a Raspberry Pi is outdated, and the current Coral Edge TPU runtime builds do not work with the current TensorFlow Lite runtime versions anymore. In addition to that, Google seems to have completely abandoned the Coral project, and there have not been any updates between 2021 and 2024. This guide will show you how to get the Edge TPU working with the latest versions of the TensorFlow Lite runtime and an updated Coral Edge TPU runtime on a Raspberry Pi single board computer (SBC).

## Prerequisites

@@ -29,13 +29,13 @@ The [current guide](https://coral.ai/docs/accelerator/get-started/) by Coral on
- [Coral USB Accelerator](https://coral.ai/products/accelerator/)
- A non-ARM-based platform for exporting an Ultralytics PyTorch model

-## Installation
+## Installation Walkthrough

This guide assumes that you already have a working Raspberry Pi OS install and have installed `ultralytics` and all dependencies. To get `ultralytics` installed, visit the [quickstart guide](../quickstart.md) to get set up before continuing here.

### Installing the Edge TPU runtime

-First, we need to install the Edge TPU runtime. There are many different versions available, so you need to choose the version for your device and operating system.
+First, we need to install the Edge TPU runtime. There are many different versions available, so you need to choose the right version for your operating system.

| Raspberry Pi OS | High frequency mode | Version to download |
|-----------------|:-------------------:|------------------------------------------|
@@ -48,19 +48,20 @@ First, we need to install the Edge TPU runtime. There are many different version
| Bookworm 32bit | Yes | libedgetpu1-max_ ... .bookworm_armhf.deb |
| Bookworm 64bit | Yes | libedgetpu1-max_ ... .bookworm_arm64.deb |

-[Download the latest version](https://github.com/feranick/libedgetpu/releases).
+[Download the latest version from here](https://github.com/feranick/libedgetpu/releases).
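The table above maps the OS release and frequency mode to a package filename. As a quick illustration (not part of the official guide), a small helper could assemble the expected name; `<version>` stands in for the release version elided in the table, and the Bullseye rows, which are collapsed in this diff, are assumed to follow the same pattern:

```python
def edgetpu_package_name(codename: str, arch: str, high_frequency: bool) -> str:
    """Build the libedgetpu .deb filename for a Raspberry Pi OS release.

    `<version>` is a placeholder for the release version elided in the table.
    """
    if codename not in {"bullseye", "bookworm"}:
        raise ValueError(f"unexpected OS codename: {codename}")
    if arch not in {"armhf", "arm64"}:
        raise ValueError(f"unexpected architecture: {arch}")
    # High frequency mode uses the "max" runtime, otherwise the "std" one
    variant = "max" if high_frequency else "std"
    return f"libedgetpu1-{variant}_<version>.{codename}_{arch}.deb"
```

For example, 64-bit Bookworm with high frequency mode maps to the `libedgetpu1-max_ ... .bookworm_arm64.deb` row of the table.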

After downloading the file, you can install it with the following command:

```bash
-sudo dpkg -i path/to/file
+sudo dpkg -i path/to/package.deb
```

After installing the runtime, you need to plug your Coral Edge TPU into a USB 3.0 port on your Raspberry Pi. This is because, according to the official guide, a new `udev` rule needs to take effect after installation.

???+ warning "Important"

If you already have the Coral Edge TPU runtime installed, uninstall it using the following command.

```bash
# If you installed the standard version
sudo apt remove libedgetpu1-std
@@ -69,9 +70,9 @@ After installing the runtime, you need to plug in your Coral Edge TPU into a USB
sudo apt remove libedgetpu1-max
```

-## Exporting your model to a Edge TPU compatible model
+## Export your model to an Edge TPU compatible model

-To use the Edge TPU, you need to convert your model into a compatible format. It is recommended that you run export on Google Colab, x86_64 Linux machine, using the official [Ultralytics Docker container](docker-quickstart.md), or using [Ultralytics HUB](../hub/quickstart.md), since the Edge TPU compiler is not available on ARM. See the [Export Mode](../modes/export.md) for the available arguments.
+To use the Edge TPU, you need to convert your model into a compatible format. It is recommended that you run the export on Google Colab, an x86_64 Linux machine, using the official [Ultralytics Docker container](docker-quickstart.md), or using [Ultralytics HUB](../hub/quickstart.md), since the Edge TPU compiler is not available on ARM. See the [Export Mode](../modes/export.md) page for the available arguments.
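As an editorial aside on what the export step produces: in exports I have observed with recent Ultralytics versions, the compiled model lands inside a `<stem>_saved_model` directory with a `_full_integer_quant_edgetpu.tflite` suffix. This naming is an observation, not a documented contract, so the sketch below is an assumption rather than guaranteed behavior:

```python
from pathlib import Path


def expected_edgetpu_output(weights: str) -> str:
    """Predict the output path of an Edge TPU export for a given weights file.

    Assumption: Ultralytics writes the compiled model into a <stem>_saved_model
    directory using the <stem>_full_integer_quant_edgetpu.tflite name; this
    matches observed behavior but is not documented, so verify on your machine.
    """
    stem = Path(weights).stem
    return f"{stem}_saved_model/{stem}_full_integer_quant_edgetpu.tflite"
```

After an export of, say, `yolov8n.pt`, this is the path you would pass to `YOLO(...)` for inference on the Pi.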

!!! Exporting the model

@@ -107,13 +108,19 @@ After exporting your model, you can run inference with it using the following co
from ultralytics import YOLO

# Load a model
-model = YOLO('path/to/model.tflite') # Load a official model or custom model
+model = YOLO('path/to/edgetpu_model.tflite')  # Load an official or custom model

# Run Prediction
model.predict("path/to/source.png")
```

-For a more detailed guide on how to use the Predict Mode for inference, visit the [Prediction Mode](../modes/predict.md) page.
+=== "CLI"

+    ```bash
+    yolo predict model=path/to/edgetpu_model.tflite source=path/to/source.png  # Run inference with an official or custom model
+    ```

+Find comprehensive information on the [Predict](../modes/predict.md) page for full details on prediction mode.

???+ warning "Important"
