41 changes: 38 additions & 3 deletions docs/source/backends-nxp.md
@@ -1,5 +1,40 @@
# NXP eIQ Neutron Backend

See
[NXP eIQ Neutron Backend](https://github.com/pytorch/executorch/blob/main/backends/nxp/README.md)
for current status about running ExecuTorch on NXP eIQ Neutron Backend.
This page introduces the use of ExecuTorch with the NXP eIQ Neutron Backend.
NXP offers accelerated inference of machine learning models on edge devices.
To learn more about NXP's machine learning acceleration platform, please refer to [the official NXP website](https://www.nxp.com/applications/technologies/ai-and-machine-learning:MACHINE-LEARNING).

<div class="admonition tip">
For the up-to-date status of running ExecuTorch on the Neutron Backend, please visit the <a href="https://github.com/pytorch/executorch/blob/main/backends/nxp/README.md">backend README</a>.
</div>

## Features

ExecuTorch v1.0 supports running machine learning models on selected NXP chips (currently only the i.MX RT700).
Among the currently supported machine learning models are:
- Convolution-based neural networks
- Full support for MobileNetV2 and CifarNet

## Prerequisites (Hardware and Software)

In order to successfully build the ExecuTorch project and convert models for the NXP eIQ Neutron Backend, you will need a computer running Windows or Linux.

If you want to test the runtime, you'll also need:
- Hardware with NXP's [i.MXRT700](https://www.nxp.com/products/i.MX-RT700) chip, or an evaluation board like the MIMXRT700-EVK
- [MCUXpresso IDE](https://www.nxp.com/design/design-center/software/development-software/mcuxpresso-software-and-tools-/mcuxpresso-integrated-development-environment-ide:MCUXpresso-IDE) or [MCUXpresso Visual Studio Code extension](https://www.nxp.com/design/design-center/software/development-software/mcuxpresso-software-and-tools-/mcuxpresso-for-visual-studio-code:MCUXPRESSO-VSC)

## Using NXP backend

To test converting a neural network model for inference on the NXP eIQ Neutron Backend, you can use our example script:

```shell
# cd to the root of the executorch repository
./examples/nxp/aot_neutron_compile.sh [model (cifar10 or mobilenetv2)]
```

For a quick overview of how to convert a custom PyTorch model, take a look at our [example Python script](https://github.com/pytorch/executorch/tree/release/1.0/examples/nxp/aot_neutron_compile.py).
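
At a high level, the ahead-of-time flow exports the model with `torch.export`, lowers the Neutron-compatible parts of the graph through a partitioner, and serializes the result to a `.pte` file. Below is a minimal sketch of that flow. The `backends.nxp` import paths and the `"imxrt700"` target string are assumptions based on the `backends/nxp` source tree; the example script above remains the authoritative version (it also performs the int8 quantization that the Neutron NPU expects).

```python
# Minimal sketch: lowering a small PyTorch model to the eIQ Neutron backend.
# NOTE: the two backends.nxp imports and the "imxrt700" target string are
# assumptions based on the executorch/backends/nxp source tree; see
# examples/nxp/aot_neutron_compile.py for the tested, authoritative flow.
import torch

from executorch.backends.nxp.neutron_partitioner import NeutronPartitioner  # assumed path
from executorch.backends.nxp.nxp_backend import generate_neutron_compile_spec  # assumed path
from executorch.exir import to_edge_transform_and_lower


class TinyConvNet(torch.nn.Module):
    """A toy convolutional model standing in for your custom network."""

    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, kernel_size=3)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))


model = TinyConvNet().eval()
example_inputs = (torch.randn(1, 3, 32, 32),)

# Export to an ATen dialect graph, then lower Neutron-compatible subgraphs
# to the Neutron delegate via the partitioner.
exported = torch.export.export(model, example_inputs)
edge = to_edge_transform_and_lower(
    exported,
    partitioner=[NeutronPartitioner(generate_neutron_compile_spec("imxrt700"))],
)

# Serialize the final program to a .pte file consumable by the runtime.
with open("tiny_conv_net.pte", "wb") as f:
    f.write(edge.to_executorch().buffer)
```

The produced `.pte` file is the artifact that the runtime projects in the next section deploy to the board.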

## Runtime Integration

To learn how to run the converted model on NXP hardware, use one of the ExecuTorch runtime example projects from the MCUXpresso IDE example projects list.
For a more fine-grained tutorial, visit [this manual page](https://mcuxpresso.nxp.com/mcuxsdk/latest/html/middleware/eiq/executorch/docs/nxp/topics/example_applications.html).
2 changes: 1 addition & 1 deletion docs/source/index.md
@@ -56,7 +56,7 @@ ExecuTorch provides support for:
- [MediaTek](backends-mediatek)
- [Cadence](backends-cadence)
- [OpenVINO](build-run-openvino)
- [NXP](backend-nxp)
- [NXP](backends-nxp)
Contributor:
We just did a big refactoring (and will cherry-pick onto release/1.0 branch soon)

#14720

here's the preview:
https://docs.pytorch.org/executorch/main/

Please rebase to latest main and change appropriately.

Collaborator (Author):
I was not sure whether the new look of the documentation was supposed to be part of the release, so I raised PRs against both main and release/1.0. I am closing this one since the PR to main was already merged.

#### Developer Tools
- [Overview](devtools-overview)
- [Bundled IO](bundled-io)