I created a simple MNIST test case to evaluate the performance of different DL compilers on a Raspberry Pi 4. I was able to build a standalone bundle by modifying the bundle example at /apps/bundle.
However, this fails if I generate the model.o, params.bin, and graph.json with autoTVM.
The binary is still created, but it fails at runtime on the device with six variants of the error: function handle for fused_... not found
The six variants name these fused functions:

transpose_multiply_round_clip_cast_layout_transform
nn_contrib_conv2d_NCHWc_add_right_shift_clip_cast_1
cast_multiply_add_nn_relu_1
nn_contrib_conv2d_NCHWc_add_right_shift_clip_cast
nn_max_pool2d_multiply_round_clip_cast
cast_multiply_add_nn_relu

I guess these functions were created by the auto-tuning step but somehow do not end up in the executable. How can I add them?

Thanks for reporting the problem. The community uses https://discuss.tvm.ai/ for related troubleshooting questions; please open a new thread there and provide a few more steps.
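One quick way to narrow this down is to check whether the fused_* kernels actually ended up in the compiled library before the graph runtime tries to look them up. Below is a minimal sketch using Python's ctypes; since the model library only exists on the device, it is demonstrated here against libc, and the bundle path and fused function name in the comment are assumptions, not taken from this thread:

```python
import ctypes
import ctypes.util

def has_symbol(lib_path, name):
    """Return True if the shared library at lib_path exports `name`."""
    try:
        lib = ctypes.CDLL(lib_path)
    except OSError:
        return False
    # Attribute lookup on a CDLL triggers a dlsym() call under the hood;
    # a missing symbol raises AttributeError, which hasattr turns into False.
    return hasattr(lib, name)

# Demonstrated against libc; on the device one would probe the compiled
# bundle instead, e.g. (hypothetical names):
#   has_symbol("./bundle.so", "fused_cast_multiply_add_nn_relu")
libc_path = ctypes.util.find_library("c")
print(has_symbol(libc_path, "printf"))            # True on Linux
print(has_symbol(libc_path, "fused_no_such_op"))  # False
```

On the device, `nm -D <library>.so` gives the same information without Python: if the fused_* names are absent from the dynamic symbol table, the auto-tuned operators were never linked into the bundle in the first place.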