## Overview

`scripts/export_model.py` exports PyTorch models to ONNX, but the API still loads `.pth` checkpoints directly. ONNX Runtime would provide faster inference and lower memory usage.

## Scope
- Update `_load_model()` in `src/climatevision/inference/pipeline.py` to prefer `.onnx` files when present
- Fall back to the PyTorch `.pth` checkpoint if the ONNX file is missing
- Verify that the ONNX input/output tensor signatures match per analysis type:
  - `(N, n_channels, 256, 256)` → `(N, n_classes, 256, 256)`
  - `(N, C, 256, 256)` × 2 → `(N, 2, 256, 256)`
- Update `scripts/export_model.py` to export all analysis-specific models
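The checkpoint-preference logic above could be sketched as follows. `resolve_checkpoint` is a hypothetical helper name, not an existing function; the real `_load_model()` would hand the resolved path to `onnxruntime.InferenceSession` or `torch.load` respectively:

```python
from pathlib import Path

def resolve_checkpoint(stem: Path) -> Path:
    """Prefer an ONNX export sitting next to the PyTorch checkpoint.

    `stem` is the checkpoint path without its suffix, e.g.
    Path("checkpoints/some_model"). Hypothetical helper for
    illustration only.
    """
    onnx_path = stem.with_suffix(".onnx")
    pth_path = stem.with_suffix(".pth")
    if onnx_path.is_file():
        return onnx_path  # prefer ONNX Runtime when the export exists
    if pth_path.is_file():
        return pth_path   # fall back to the PyTorch checkpoint
    raise FileNotFoundError(f"no checkpoint found for {stem}")
```

Keeping path resolution separate from model loading makes the fallback easy to unit-test without instantiating either runtime.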
## Acceptance Criteria

- When a `.onnx` checkpoint exists, the API loads it via ONNX Runtime
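One way to verify the tensor signatures listed in the scope: ONNX Runtime reports each input's shape via `session.get_inputs()[i].shape`, where dynamic axes (such as the batch dimension) appear as strings. A minimal, hypothetical comparison helper, assuming `None` marks a wildcard dimension in the expected signature:

```python
def shapes_match(reported, expected):
    """Compare an ONNX-reported shape against an expected signature.

    ONNX Runtime reports dynamic axes as strings (e.g. 'N'), so any
    string or None on either side is treated as a wildcard.
    Hypothetical helper, not part of the existing pipeline.
    """
    if len(reported) != len(expected):
        return False
    for got, want in zip(reported, expected):
        if want is None or got is None or isinstance(got, str):
            continue  # symbolic/dynamic dimension matches anything
        if got != want:
            return False
    return True
```

For example, a reported input shape of `['N', 4, 256, 256]` would match an expected signature of `(None, 4, 256, 256)` but not `(None, 3, 256, 256)`.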
## Resources

- `src/climatevision/inference/pipeline.py` — the `_load_model()` function
- `scripts/export_model.py` — existing ONNX export script
- `references/model-architectures.md` — tensor shape specs

Difficulty: Intermediate
Labels: help wanted, backend, mlops, performance