
[Micro] Run model for Keyword Spotting with microTVM #24

Open
Red-Caesar opened this issue Aug 12, 2023 · 3 comments

@Red-Caesar

I used these two tutorials: tvmc and demo.

For implementation, I use the following commands:

(1)

tvmc compile models/yes_no.tflite \
    --target='c -keys=cpu -model=host' \
    --runtime=crt \
    --runtime-crt-system-lib 1 \
    --executor='aot' \
    --output output/model.tar \
    --output-format mlf \
    --pass-config tir.disable_vectorize=1

(2)

tvmc micro create -f output/project output/model.tar arduino \
		--project-option project_type=example_project board=nano33ble

Before the next command, we should take the following files from examples/yes_no: yes_no.ino, yes.h, no.h, src. Add them to the project directory and rename yes_no.ino to project.ino.

(3)

tvmc micro build project arduino

After the command we will have:

(screenshot)

And the last command:

(4)

tvmc micro flash project arduino

It will work fine.

This part covers what happens if we try to compile the model on our own.

Repeat commands (1) and (2). We will have the following files:

project.ino

#include "src/standalone_crt/include/tvm/runtime/crt/platform.h"

void setup() {
  TVMPlatformInitialize();
  // If desired, initialize the RNG with random noise
  // randomSeed(analogRead(0));
}

void loop() {
  //TVMExecute(input_data, output_data);
}

src/platform.h

#ifdef __cplusplus
extern "C" {
#endif

/* TODO template this function signature with the input and output
 * data types and sizes. For example:
 *
 * void TVMExecute(uint8_t input_data[9216], uint8_t output_data[3]);
 *
 * Note this can only be done once MLF has JSON metadata describing
 * inputs and outputs.
 */
void TVMExecute(void* input_data, void* output_data);

#ifdef __cplusplus
}  // extern "C"
#endif

src/platform.c

#include "Arduino.h"
#include "standalone_crt/include/dlpack/dlpack.h"
#include "standalone_crt/include/tvm/runtime/crt/stack_allocator.h"

#define TVM_WORKSPACE_SIZE_BYTES $workspace_size_bytes

// AOT memory array, stack allocator wants it aligned
static uint8_t g_aot_memory[TVM_WORKSPACE_SIZE_BYTES]
    __attribute__((aligned(TVM_RUNTIME_ALLOC_ALIGNMENT_BYTES)));
tvm_workspace_t app_workspace;

// Called when an internal error occurs and execution cannot continue.
// Blink code for debugging purposes
void TVMPlatformAbort(tvm_crt_error_t error) {
  TVMLogf("TVMPlatformAbort: 0x%08x\n", error);
  for (;;) {
#ifdef LED_BUILTIN
    digitalWrite(LED_BUILTIN, HIGH);
    delay(250);
    digitalWrite(LED_BUILTIN, LOW);
    delay(250);
    digitalWrite(LED_BUILTIN, HIGH);
    delay(250);
    digitalWrite(LED_BUILTIN, LOW);
    delay(750);
#endif
  }
}

// Allocate memory for use by TVM.
tvm_crt_error_t TVMPlatformMemoryAllocate(size_t num_bytes, DLDevice dev, void** out_ptr) {
  return StackMemoryManager_Allocate(&app_workspace, num_bytes, out_ptr);
}

// Free memory used by TVM.
tvm_crt_error_t TVMPlatformMemoryFree(void* ptr, DLDevice dev) {
  return StackMemoryManager_Free(&app_workspace, ptr);
}

// Internal logging API call implementation.
void TVMLogf(const char* msg, ...) {}

unsigned long g_utvm_start_time_micros;
int g_utvm_timer_running = 0;

// Start a device timer.
tvm_crt_error_t TVMPlatformTimerStart() {
  if (g_utvm_timer_running) {
    return kTvmErrorPlatformTimerBadState;
  }
  g_utvm_timer_running = 1;
  g_utvm_start_time_micros = micros();
  return kTvmErrorNoError;
}

// Stop the running device timer and get the elapsed time (in microseconds).
tvm_crt_error_t TVMPlatformTimerStop(double* elapsed_time_seconds) {
  if (!g_utvm_timer_running) {
    return kTvmErrorPlatformTimerBadState;
  }
  g_utvm_timer_running = 0;
  unsigned long g_utvm_stop_time = micros() - g_utvm_start_time_micros;
  *elapsed_time_seconds = ((double)g_utvm_stop_time) / 1e6;
  return kTvmErrorNoError;
}

// Fill a buffer with random data.
tvm_crt_error_t TVMPlatformGenerateRandom(uint8_t* buffer, size_t num_bytes) {
  for (size_t i = 0; i < num_bytes; i++) {
    buffer[i] = rand();
  }
  return kTvmErrorNoError;
}

// Initialize TVM inference.
tvm_crt_error_t TVMPlatformInitialize() {
  StackMemoryManager_Init(&app_workspace, g_aot_memory, sizeof(g_aot_memory));
  return kTvmErrorNoError;
}

void TVMExecute(void* input_data, void* output_data) {
  int ret_val = tvmgen_default___tvm_main__(input_data, output_data);
  if (ret_val != 0) {
    TVMPlatformAbort(kTvmErrorPlatformCheckFailure);
  }
}

And we already have differences from the demo.

In the demo's src/model.h there is a definition of void TVMInitialize(). Our project doesn't have one, so in project.ino we should include this file instead: #include "src/standalone_crt/include/tvm/runtime/crt/platform.h"

Also, the demo's src/model.c has the include #include "standalone_crt/include/tvm/runtime/crt/internal/aot_executor/aot_executor.h", which we don't have:

(screenshot)

So I have a question: what is the tvmgen_default___tvm_main__() function? We can find a mention of it here, and find it in our model's libs.

src/model/default_lib0.c

TVM_DLL int32_t tvmgen_default___tvm_main__(TVMValue* args, int* type_code, int num_args, TVMValue* out_value, int* out_type_code, void* resource_handle);

int32_t tvmgen_default_run(TVMValue* args, int* type_code, int num_args, TVMValue* out_value, int* out_type_code, void* resource_handle) {
  TVMValue tensors[4];
  tensors[0] = ((TVMValue*)args)[0];
  tensors[1] = ((TVMValue*)args)[1];
  DLTensor global_const_workspace_dltensor = {
    .data = &global_const_workspace
  };
  TVMValue global_const_workspace_tvm_value = {
    .v_handle = &global_const_workspace_dltensor
  };
  tensors[2] = global_const_workspace_tvm_value;
  DLTensor global_workspace_dltensor = {
    .data = &global_workspace
  };
  TVMValue global_workspace_tvm_value = {
    .v_handle = &global_workspace_dltensor
  };
  tensors[3] = global_workspace_tvm_value;
  return tvmgen_default___tvm_main__((void*)tensors, type_code, num_args, out_value, out_type_code, resource_handle);
}

In the demo: lib.

We can also compile with generate_project.py, but it will have the same result.

The flags for compiling the .tflite model I took from this tutorial.

I also tried installing TVM versions 0.8.0 and 0.9.0.

With version 0.8.0 I had a problem with the Python setup:

(screenshot)

With version 0.9.0 I had the following problem when creating a project:

(screenshot)

I need a file (launch_microtvm_api_server.sh) that appears only in version 0.12.0. I don't know why, to be honest.

Also, here is my discussion on the TVM forum.

@Red-Caesar
Author

About the 0.8.0 TVM version: I found this discussion, which says there was a problem with the Python setup earlier. I also found this commit, and the Python setup problem was solved after changing the code to the state of that commit.

But after this I also had a problem with launch_microtvm_api_server.sh (there was only one Python version (3.10) and one TVM version on the PC).

So I just copied this file from the next version and changed the generate_project.py script to find the new location of the file.

After this there were some problems with libraries, but they were fixable. Now I can compile a .tflite model whose lib files are similar to guberti's lib files. I've also checked that micro_speech.tflite, compiled by this TVM version, works (compilation doesn't work with the latest version). But kws.tflite still has a problem, so I will check what else I can do. My suggestions:

  • check the 0.9.0 version
  • maybe check how it all works with some new network, just to test the workflow
  • it could be a memory problem

@Red-Caesar
Author

@vvchernov

@Red-Caesar
Author

Red-Caesar commented Aug 21, 2023

The next update will be about building and flashing with Zephyr.

I've again used this tutorial for installing Zephyr and the tvmc commands.

After the step where we should compile micro_speech.tflite, I used the following command:

tvmc micro create -f project model.tar zephyr \
    --project-option project_type=host_driven board=nrf5340dk_nrf5340_cpuapp \
    zephyr_base=/home/andrey/Documents/ZephyrTutorial/content/zephyrproject/zephyr/

After that, we should put the path to the SDK in FindZephyr-sdk.cmake:

(screenshot)

And run the following commands from the directory content/zephyrproject/zephyr:

tvmc micro build -f ../../project zephyr
tvmc micro flash ../../project zephyr

It gives an error:

(screenshot)

It happens because the command nrfjprog --ids can't find the board (even though the board is fine, which can be checked with lsusb). My suggestion: the nrfjprog utility doesn't work for the Arduino Nano.

If we use west for flashing instead, we will have:

west flash --bossac=/home/andrey/snap/arduino/85/.arduino15/packages/arduino/tools/bossac/1.9.1-arduino2/bossac  --build-dir=../project/build

And we get an error:

(screenshot)

I think it happens because of tvmc micro create and the flag board=nrf5340dk_nrf5340_cpuapp. My suggestion: the tvmc command creates a project with flashing settings specific to that board, so we can't just use it with the Arduino Nano.
Also, I can't directly use the flag board=nano33ble, because this board is not supported by the Zephyr base we pass in.

So, to solve the problem, I can try to add nano33ble support.
