Merged
2 changes: 2 additions & 0 deletions .gitignore
@@ -0,0 +1,2 @@
dataset/
depthai.calib
11 changes: 5 additions & 6 deletions python-api/README.md
@@ -5,7 +5,7 @@ This is the Python API of DepthAI and examples.
Files with the extension `.so` are Python modules:
- `depthai.cpython-36m-x86_64-linux-gnu.so` built for Ubuntu 18.04 & Python 3.6
- `depthai.cpython-37m-arm-linux-gnueabihf.so` built for Raspbian 10 & Python 3.7

# Examples
`test.py` - depth example
`test_cnn.py` - CNN inference example
@@ -24,10 +24,10 @@ Example of the conversion:

# Disparity Depth Calibration
For better depth image quality, you need to perform stereo calibration. To do so:
1. Print the chessboard for calibration. The image can be found in the `resources` folder (`resources/calibration-chess-board.png`). Measure the square size in centimeters and insert the value in the command below.
2. Start the python3 script: type `python3 calibrate.py -s [SQUARE_SIZE_IN_CM]` in the terminal. Two streams, left and right, will show up. Each window will contain a polygon.
3. Put the printed chessboard within the polygon and press the spacebar to take a photo. There are 13 polygon positions in total.
4. Afterwards, calibration will start automatically based on the captured images. If calibration is successful, a file named `depthai.calib` will be generated.
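The capture counts above multiply out simply: total captures = images per polygon × number of polygon positions. A minimal sketch of that bookkeeping (the 13-position default and the per-polygon `-c` count mirror `calibrate.py` in this PR):

```python
# Sketch: how many images a calibration run captures.
# Mirrors `total_images = args['count'] * len(args['polygons'])` in calibrate.py.
def total_capture_count(images_per_polygon=1, num_polygons=13):
    return images_per_polygon * num_polygons

print(total_capture_count())   # default run: 13 images
print(total_capture_count(3))  # with `-c 3`: 39 images
```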

DepthAI ships with a default calibration file. There are two ways to change it:
1. Easy way: rename your calibration file to `default.calib` and move it to the `resources` folder.
@@ -36,12 +36,11 @@ Depthai has the default calibration file. There are two ways to change it:
# Issue reporting
We are developing the DepthAI framework, and it's crucial for us to know what kinds of problems users are facing.
So thanks for testing DepthAI! The more information you give us, the faster we can help you and make DepthAI better!

Please follow these steps:
1. Run the script `log_system_information.sh` and provide us the output (`log_system_information.txt`; it contains the system version & module versions);
2. Take a photo of the device you are using (or provide us the device model);
3. Describe the expected results;
4. Describe the actual results (what you see after starting your script with depthai);
5. Provide us information about how you are using the depthai Python API (a code snippet, for example);
6. Send us the console outputs;

199 changes: 199 additions & 0 deletions python-api/calibrate.py
@@ -0,0 +1,199 @@
import depthai
from calibration_utils import *
import argparse
from argparse import ArgumentParser
from time import time
import numpy as np
import os
from pathlib import Path
import shutil
import consts.resource_paths

use_cv = True
try:
    import cv2
except ImportError:
    use_cv = False

def parse_args():
    epilog_text = '''
    Captures and processes images for disparity depth calibration, generating a `depthai.calib` file
    that should be loaded when initializing depthai. By default, captures one image across 13 polygon positions.

    Image capture requires the use of a printed 7x9 OpenCV checkerboard applied to a flat surface (ex: sturdy cardboard).
    When taking photos, ensure the checkerboard fits within both the left and right image displays. The board does not need
    to fit within each drawn red polygon shape, but it should mimic the display of the polygon.

    If the checkerboard does not fit within a captured image, the calibration will generate an error message with instructions
    on re-running calibration for polygons without valid checkerboard captures.

    The script requires an RMS error < 1.0 to generate a calibration file. If RMS exceeds this threshold, an error is displayed.

    Example usage:

    Run calibration with a checkerboard square size of 2.35 cm:
        python calibrate.py -s 2.35

    Run calibration for only the first and third polygon positions:
        python calibrate.py -p 0 2

    Only run image processing (not image capture) with a 2.35 cm square size. Requires an existing set of polygon images:
        python calibrate.py -s 2.35 -m process

    Delete all existing images before starting image capture:
        python calibrate.py -i delete

    Capture 3 images per polygon:
        python calibrate.py -c 3
    '''
    parser = ArgumentParser(epilog=epilog_text, formatter_class=argparse.RawDescriptionHelpFormatter)
    parser.add_argument("-p", "--polygons", default=list(np.arange(len(setPolygonCoordinates(1000, 600)))), nargs='*',
                        type=int, required=False,
                        help="Space-separated list of polygons (ex: 0 5 7) to restrict image capture. Default is all polygons.")
    parser.add_argument("-c", "--count", default=1,
                        type=int, required=False,
                        help="Number of images per polygon to capture. Default is 1.")
    parser.add_argument("-s", "--square_size_cm", default=2.5,
                        type=float, required=False,
                        help="Square size of the calibration pattern in centimeters. Default is 2.5.")
    parser.add_argument("-i", "--image_op", default="modify",
                        type=str, required=False,
                        help="Whether existing images should be kept or deleted before running image capture. The default is 'modify'. Change to 'delete' to delete all image files.")
    parser.add_argument("-m", "--mode", default=['capture', 'process'], nargs='*',
                        type=str, required=False,
                        help="Space-separated list of calibration steps to run. By default, executes the full 'capture process' pipeline. To execute a single step, enter just that step (ex: 'process').")
    options = parser.parse_args()

    return options

args = vars(parse_args())
print("Using Arguments=",args)

if 'capture' in args['mode']:

    # Delete the dataset directory if asked
    if args['image_op'] == 'delete':
        shutil.rmtree('dataset/')

    # Create directories to save captured images
    try:
        for path in ["left", "right"]:
            Path("dataset/" + path).mkdir(parents=True, exist_ok=True)
    except OSError as e:
        print("An error occurred trying to create image dataset directories:", e)
        exit(1)

    cmd_file = consts.resource_paths.device_depth_cmd_fpath

    # Create the DepthAI pipeline to start video streaming
    streams_list = ['left', 'right', 'depth']
    pipeline = depthai.create_pipeline(
        streams=streams_list,
        cmd_file=cmd_file,
        calibration_file=consts.resource_paths.calib_fpath,
        config_file=consts.resource_paths.pipeline_config_fpath
    )

    num_of_polygons = 0
    polygons_coordinates = []

    image_per_polygon_counter = 0  # tracks how many images were captured for the current polygon
    complete = False  # indicates if images have been captured for all polygons

    polygon_index = args['polygons'][0]  # tracks which polygon is currently in use
    total_num_of_captured_images = 0  # total number of captured images

    capture_images = False  # tracks the state of the capture button (spacebar)
    captured_left_image = False  # whether an image from the left camera was captured
    captured_right_image = False  # whether an image from the right camera was captured

    run_capturing_images = True  # becomes False and stops the main loop once all polygon indexes were used

    calculate_coordinates = False  # tracks if the polygon coordinates have been calculated
    total_images = args['count'] * len(args['polygons'])

    while run_capturing_images:
        data_list = pipeline.get_available_data_packets()
        for packet in data_list:
            if packet.stream_name == 'left' or packet.stream_name == 'right':
                frame = packet.getData()
                if not calculate_coordinates:
                    height, width = frame.shape
                    polygons_coordinates = setPolygonCoordinates(height, width)
                    # polygons_coordinates = select_polygon_coords(polygons_coordinates, args['polygons'])
                    num_of_polygons = len(args['polygons'])
                    print("Will take %i total images, %i per each polygon." % (total_images, args['count']))
                    calculate_coordinates = True

                frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

                if capture_images:
                    if packet.stream_name == 'left':
                        filename = image_filename(packet.stream_name, polygon_index, total_num_of_captured_images)
                        cv2.imwrite("dataset/left/" + str(filename), frame)
                        print("py: Saved image as: " + str(filename))
                        captured_left_image = True

                    elif packet.stream_name == 'right':
                        filename = image_filename(packet.stream_name, polygon_index, total_num_of_captured_images)
                        cv2.imwrite("dataset/right/" + str(filename), frame)
                        print("py: Saved image as: " + str(filename))
                        captured_right_image = True

                    if captured_right_image and captured_left_image:
                        capture_images = False
                        captured_left_image = False
                        captured_right_image = False
                        total_num_of_captured_images += 1
                        image_per_polygon_counter += 1

                        if image_per_polygon_counter == args['count']:
                            image_per_polygon_counter = 0
                            try:
                                polygon_index = args['polygons'][args['polygons'].index(polygon_index) + 1]
                            except IndexError:
                                complete = True

                if not complete:
                    cv2.putText(frame, "Align cameras with the calibration board and press spacebar to capture the image", (0, 25), cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0))
                    cv2.putText(frame, "Polygon Position: %i. " % (polygon_index) + "Captured %i of %i images." % (total_num_of_captured_images, total_images), (0, 700), cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 0, 0))
                    cv2.polylines(frame, np.array([getPolygonCoordinates(polygon_index, polygons_coordinates)]), True, (0, 0, 255), 4)
                    # the original image is 1280x720; scale it down so it fits better
                    aspect_ratio = 1.5
                    new_x, new_y = int(frame.shape[1] / aspect_ratio), int(frame.shape[0] / aspect_ratio)
                    resized_image = cv2.resize(frame, (new_x, new_y))
                    cv2.imshow(packet.stream_name, resized_image)
                else:
                    # all polygons used, stop the loop
                    run_capturing_images = False

        key = cv2.waitKey(1)

        if key == ord(" "):
            capture_images = True

        elif key == ord("q"):
            print("py: Calibration has been interrupted!")
            exit(0)

    del pipeline  # the object must be deleted manually, otherwise the HostDataPacket queue runs out of space ("Not enough free space to save {stream}")

    cv2.destroyWindow("left")
    cv2.destroyWindow("right")

else:
    print("Skipping capture.")

if 'process' in args['mode']:
    print("Starting image processing")
    cal_data = StereoCalibration()
    try:
        cal_data.calibrate("dataset", args['square_size_cm'], "./depthai.calib")
    except AssertionError as e:
        print("[ERROR] " + str(e))
        exit(1)
else:
    print("Skipping process.")

print('py: DONE.')
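For illustration, the capture loop's polygon bookkeeping above (incrementing the per-polygon counter and advancing `polygon_index` through `args['polygons']`) can be sketched as a standalone function. This is a simplified extract under assumed names, not the script's exact code:

```python
def advance_capture_state(polygons, count, polygon_index, per_polygon_counter):
    """Update counters after one stereo pair has been saved.

    Once `count` images have been taken for the current polygon, move to the
    next entry in `polygons`; report completion when the list is exhausted.
    Returns (polygon_index, per_polygon_counter, complete).
    """
    per_polygon_counter += 1
    complete = False
    if per_polygon_counter == count:
        per_polygon_counter = 0
        try:
            polygon_index = polygons[polygons.index(polygon_index) + 1]
        except IndexError:
            complete = True  # all requested polygons captured
    return polygon_index, per_polygon_counter, complete

# Simulate a run over polygons 0 and 2 with one image per polygon:
state = advance_capture_state([0, 2], 1, 0, 0)                    # -> (2, 0, False)
state = advance_capture_state([0, 2], 1, state[0], state[1])      # -> (2, 0, True)
```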