Extension package of TVM Deep Learning Compiler for Renesas DRP-AI accelerators powered by EdgeCortix MERA™ (DRP-AI TVM)

TVM Documentation | TVM Community | TVM GitHub

DRP-AI TVM¹ is a machine learning compiler plugin for Apache TVM that targets the DRP-AI accelerator provided by Renesas Electronics Corporation.


(C) Copyright EdgeCortix, Inc. 2022
(C) Copyright Renesas Electronics Corporation 2022
Contributors licensed under the Apache-2.0 license.

Supported Embedded Platforms



This compiler stack is an extension of the DRP-AI Translator to the TVM backend. The CPU and DRP-AI work together to run inference for AI models.

File Configuration

Directory   Details
tutorials   Sample compile scripts
apps        Sample inference applications for the target board
setup       Setup scripts for building a TVM environment
obj         Pre-built runtime binaries
docs        Documents, e.g., the model list and API list
img         Image files used in this document
tvm         TVM repository from GitHub
3rd party   3rd-party tools
how-to      Samples for solving specific problems, e.g., how to run validation between x86 and DRP-AI


Deploy AI models on DRP-AI


The following video shows a brief tutorial on how to deploy an ONNX model on an RZ/V series board:
RZ/V DRP-AI TVM Tutorial - How to Run ONNX Model (YouTube)

Tutorial Video


To deploy an AI model to DRP-AI on the target board, you need to compile the model with DRP-AI TVM¹ to generate Runtime Model Data (Compile).
The SDK generated from the RZ/V Linux Package and the DRP-AI Support Package is required to compile the model.

After compiling the model, copy the generated files to the target board (Deploy).
You also need to copy the C++ inference application and the DRP-AI TVM¹ Runtime Library to run AI model inference.

The following pages show an example of compiling the ResNet18 model and running it on the target board.

Compile model with DRP-AI TVM¹

Please see Tutorial.

Run inference on board

Please see Application Example page.

Sample Application

To find more AI examples, please see the How-to page.

Confirmed AI Model

If you want to know which models have been tested by Renesas, please refer to the Model List.


Error List

If an error occurs during compilation or runtime, please refer to the Error List.


The How-to page includes explanations of the following topics:

  • profiler
  • validation between x86 and DRP-AI
  • etc.


If you have any questions, please contact Renesas Technical Support.


  1. DRP-AI TVM is powered by the EdgeCortix MERA™ Compiler Framework.