
ONNX-FastACVNet-Stereo-Depth-Estimation

Python scripts performing stereo depth estimation using the Fast-ACVNet model in ONNX.

!Fast-ACVNet depth estimation — Stereo depth estimation on the cones images from the Middlebury dataset (https://vision.middlebury.edu/stereo/data/scenes2003/)

Requirements

  • Check the requirements.txt file.
  • For ONNX, if you have an NVIDIA GPU, install the onnxruntime-gpu package; otherwise, use the onnxruntime library.

Installation

git clone https://github.com/ibaiGorordo/ONNX-FastACVNet-Depth-Estimation.git
cd ONNX-FastACVNet-Depth-Estimation
pip install -r requirements.txt

ONNX Runtime

For NVIDIA GPU computers: pip install onnxruntime-gpu

Otherwise: pip install onnxruntime
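When both packages are options, the session's execution providers decide whether inference runs on the GPU or the CPU. Below is a minimal sketch (not part of this repo) of how one might pick providers in priority order; `pick_providers` is a hypothetical helper, and with onnxruntime installed you would feed it `onnxruntime.get_available_providers()`.

```python
# Hypothetical helper: choose ONNX Runtime execution providers in priority
# order, preferring CUDA when onnxruntime-gpu is installed.
def pick_providers(available):
    """Return the preferred providers that are present in `available`."""
    preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    return [p for p in preferred if p in available]

# With plain onnxruntime, only the CPU provider is available:
print(pick_providers(["CPUExecutionProvider"]))
# -> ['CPUExecutionProvider']

# With onnxruntime-gpu, CUDA is preferred with a CPU fallback:
print(pick_providers(["TensorrtExecutionProvider",
                      "CUDAExecutionProvider",
                      "CPUExecutionProvider"]))
# -> ['CUDAExecutionProvider', 'CPUExecutionProvider']
```

The returned list can then be passed as the `providers` argument when creating an `onnxruntime.InferenceSession`.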

ONNX model

The models were converted from the PyTorch implementation below by PINTO0309. Download the models using the download script in his repository and save them into the models folder.

PyTorch model

The original PyTorch model can be found in this repository: https://github.com/gangweiX/Fast-ACVNet

Examples

  • Image inference:
python image_depth_estimation.py
  • Video inference:
python video_depth_estimation.py

!Fast-ACVNet depth estimation

Original video: Málaga Stereo and Laser Urban dataset, reference below

  • Driving stereo dataset inference:
python driving_stereo_test.py

!Fast-ACVNet depth estimation

Original video: Driving stereo dataset, reference below
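The scripts above visualize depth maps derived from the model's disparity output. As an aside, converting a predicted disparity to metric depth only needs the camera focal length and stereo baseline; the sketch below illustrates the standard relation depth = focal_length × baseline / disparity. The camera parameters used are made-up values, not those of any dataset above.

```python
# Illustrative only: convert a stereo disparity value (in pixels) to metric
# depth using the standard pinhole-stereo relation.
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """depth [m] = focal length [px] * baseline [m] / disparity [px]."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Made-up example: 64 px disparity, 1000 px focal length, 0.12 m baseline.
print(disparity_to_depth(64.0, 1000.0, 0.12))
# -> 1.875 (meters)
```

Note the inverse relationship: nearby objects produce large disparities, while distant ones approach zero disparity, which is why zero or negative disparities are rejected.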

References:
