Hi everyone! I tried to run export.py but got some errors. In the end I was able to get the blob model by using the online blobconverter. The error I got is:
python3 export.py --architecture ewasr_resnet18_imu --weights-file models/ewasr_resnet18_imu.pth --output-dir output
/opt/anaconda/anaconda3/envs/yolo/lib/python3.9/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
warnings.warn(
/opt/anaconda/anaconda3/envs/yolo/lib/python3.9/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=ResNet18_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet18_Weights.DEFAULT` to get the most up-to-date weights.
warnings.warn(msg)
/opt/anaconda/anaconda3/envs/yolo/lib/python3.9/site-packages/torchvision/transforms/functional.py:1603: UserWarning: The default value of the antialias parameter of all the resizing transforms (Resize(), RandomResizedCrop(), etc.) will change from None to True in v0.17, in order to be consistent across the PIL and Tensor backends. To suppress this warning, directly pass antialias=True (recommended, future default), antialias=None (current default, which means False for Tensors and True for PIL), or antialias=False (only works on Tensors - PIL will still use antialiasing). This also applies if you are using the inference transforms from the models weights: update the call to weights.transforms(antialias=True).
warnings.warn(
============= Diagnostic Run torch.onnx.export version 2.0.1+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
ONNX stored at: output/ewasr_resnet18_imu.onnx
Downloading /home/jm/.cache/blobconverter/ewasr_resnet18_imu_openvino_2022.1_6shave.blob...
{
"exit_code": 1,
"message": "Command failed with exit code 1, command: /app/venvs/venv2022_1/bin/python /app/model_compiler/openvino_2022.1/converter.py --precisions FP16 --output_dir /tmp/blobconverter/6f2c9fed7eeb4019a2d2c3c3c8eeedf0 --download_dir /tmp/blobconverter/6f2c9fed7eeb4019a2d2c3c3c8eeedf0 --name ewasr_resnet18_imu --model_root /tmp/blobconverter/6f2c9fed7eeb4019a2d2c3c3c8eeedf0",
"stderr": "usage: main.py [options]\nmain.py: error: unrecognized arguments: --mean_values image[123.675,116.28,103.53],imu[0,0,0] --scale_values image[58.395,57.12,57.375],imu[1,1,1] --output prediction\n",
"stdout": "========== Converting ewasr_resnet18_imu to IR (FP16)\nConversion command: /app/venvs/venv2022_1/bin/python -- /app/venvs/venv2022_1/bin/mo --framework=onnx --data_type=FP16 --output_dir=/tmp/blobconverter/6f2c9fed7eeb4019a2d2c3c3c8eeedf0/ewasr_resnet18_imu/FP16 --model_name=ewasr_resnet18_imu --input= --reverse_input_channels '--mean_values image[123.675,116.28,103.53],imu[0,0,0]' '--scale_values image[58.395,57.12,57.375],imu[1,1,1]' '--output prediction' --data_type=FP16 --input_model=/tmp/blobconverter/6f2c9fed7eeb4019a2d2c3c3c8eeedf0/ewasr_resnet18_imu/FP16/ewasr_resnet18_imu.onnx\n\nFAILED:\newasr_resnet18_imu\n"
}
Traceback (most recent call last):
File "/home/jm/Programming/CollisionAvoidence/mods-yolov5/segmentation/eWaSR/export.py", line 102, in <module>
main()
File "/home/jm/Programming/CollisionAvoidence/mods-yolov5/segmentation/eWaSR/export.py", line 99, in main
export(args)
File "/home/jm/Programming/CollisionAvoidence/mods-yolov5/segmentation/eWaSR/export.py", line 81, in export
blob_path_temp = blobconverter.from_onnx(
File "/opt/anaconda/anaconda3/envs/yolo/lib/python3.9/site-packages/blobconverter/__init__.py", line 424, in from_onnx
return compile_blob(blob_name=Path(model_name).stem, req_data={"name": Path(model_name).stem}, req_files=files, data_type=data_type, **kwargs)
File "/opt/anaconda/anaconda3/envs/yolo/lib/python3.9/site-packages/blobconverter/__init__.py", line 318, in compile_blob
response.raise_for_status()
File "/opt/anaconda/anaconda3/envs/yolo/lib/python3.9/site-packages/requests/models.py", line 943, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: BAD REQUEST for url: https://blobconverter.luxonis.com/compile?version=2022.1&no_cache=False
Thanks in advance!
@JuanFuriaz
Thanks, this should be fixed in the latest commit. There was an issue with the input names and with how mean_values and scale_values were provided to blobconverter.
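For anyone hitting the same 400 error: in the failing log, mo received '--mean_values image[...]' as a single quoted token, which is why it reported "unrecognized arguments". A minimal sketch of passing the values correctly, assuming the two ONNX inputs are named image and imu and the output node is prediction (as shown in the log); the function name convert is hypothetical, not the actual code in export.py:

```python
# Each Model Optimizer flag must be one "--flag=value" token (note the "="),
# otherwise mo sees the flag and its value fused together and rejects them.
optimizer_params = [
    "--mean_values=image[123.675,116.28,103.53],imu[0,0,0]",
    "--scale_values=image[58.395,57.12,57.375],imu[1,1,1]",
    "--output=prediction",
]

def convert(onnx_path: str) -> str:
    # Imported lazily so the sketch above runs without the package installed.
    import blobconverter
    # Hands the ONNX model to the Luxonis blobconverter service with the
    # per-input normalization parameters forwarded to Model Optimizer.
    return blobconverter.from_onnx(
        model=onnx_path,
        data_type="FP16",
        shaves=6,
        optimizer_params=optimizer_params,
    )
```

Calling convert("output/ewasr_resnet18_imu.onnx") then returns the path of the compiled .blob on success.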
The code worked without problems both with and without the IMU, thanks!
While running export.py, I also found a minor issue in the readme: the parser flag is --output-dir, not --output_dir. Should I make a pull request for this?