
ns_autodeploy [ERROR] Model Configuration Send Status = 1 #258

Open
@pfeatherstone

Description


I'm evaluating my TFLite model and I get the following debug messages:

ns_autodeploy --tflite-file=classifier.tflite
2025-06-27 13:04:09.425420: I external/local_tsl/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2025-06-27 13:04:09.428410: I external/local_tsl/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2025-06-27 13:04:09.438774: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:479] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2025-06-27 13:04:09.460133: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:10575] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2025-06-27 13:04:09.460197: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1442] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2025-06-27 13:04:09.472212: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2025-06-27 13:04:10.151783: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
projects/autodeploy
[NS] Model Name automatically set to: classifier
[NS] Running 2 Stage Autodeploy for Platform: apollo510_evb
[NS] Max Arena Size for apollo510_evb: 2458 KB
[NS] Best 142KB model location for apollo510_evb: TCM
[NS] Using AmbiqSuite Version: R5.2.0
[NS] Using TensorFlow Version: ns_tflm_2025_03_06
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
[NS] Static Analysis Warning: TANH is not in optimized in mainstream TFLM, consider using Ambiq NS-TFLM
[NS] *** Stage [1/2]: Create and fine-tune EVB model characterization image
[NS] Compiling and deploying Baseline image: arena size = 2458k, arena location = SRAM model_location = TCM, Resource Variables count = 0
[ERROR] Model Configuration Send Status = 1
[ERROR] This may be caused by allocating too little memory for the tensor arena.
[ERROR] This script uses TFLM's arena_used_bytes() function to determine the arena size,
[ERROR] which has a bug where it does not account for scratch buffer padding.
[ERROR] To manually add a padding for scratch buffers, use the --arena-size-scratch-buffer-padding option.
Model Configuration Failed

Is there a way to get a more helpful message than:

[ERROR] Model Configuration Send Status = 1

The TFLite model is correct and runs fine in a normal environment using LiteRT.
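For what it's worth, the error text itself points at a workaround: TFLM's `arena_used_bytes()` can undercount scratch-buffer padding, and the tool exposes `--arena-size-scratch-buffer-padding` to compensate. A hedged sketch of a retry (the padding value is an arbitrary starting guess, not a documented recommendation):

```shell
# Retry autodeploy with extra tensor-arena headroom.
# --arena-size-scratch-buffer-padding is quoted from the error
# message above; the value 20 is an arbitrary guess, not a
# documented recommendation. Increase it if the
# "Model Configuration Send Status = 1" error persists.
ns_autodeploy --tflite-file=classifier.tflite \
              --arena-size-scratch-buffer-padding=20
```

If the deploy then succeeds, the real arena requirement lies between the tool's estimate and the padded size, so the padding can be reduced again by bisection.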
