
Fixes and improvements for StereoDepth, ColorCamera #82

Merged
merged 32 commits into develop from stereo_fixes on Apr 28, 2021

Conversation

alex-luxonis (Contributor)

  • StereoDepth: allow streaming both disparity and depth (though it is still not possible to have both when Subpixel mode is enabled)
  • Fix some crashes when the Stereo extended modes (LR-check/extended-disparity/subpixel) were used in parallel with other Shaves (NN or RGB postprocessing). May not be fully solved yet.
  • Fix a crash in Stereo standard mode with depth output enabled but median filtering disabled.
  • Add API to override the baseline and FOV config (from EEPROM / defaults), useful for depth calculation with .setEmptyCalibration() and already-rectified frames streamed from the host
  • Deprecated setOutputRectified/setOutputDepth; the outputs are now enabled automatically when connected in the pipeline (see the sketch after this list)
  • ColorCamera: add isp and raw outputs, and a scale config for isp. Currently, when ISP scaling is enabled, preview scaling needs to be disabled (if the preview output is used). TODO: fix.
  • Other FW changes picked from PR Wip gen2 detection 3d #74:
    • undefined/invalid depth is set to 0 instead of infinity (65535 mm)
    • focal length for depth calculation is taken from calibration data (right camera intrinsics) instead of the H-FOV in the board config
    • depth output changed to U16 instead of FP16 for all modes
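
For illustration, a minimal sketch of the deprecation noted above, assuming the usual gen2 C++ pipeline API (variable and stream names are hypothetical): connecting an output in the pipeline is what enables it, with no setOutputDepth()/setOutputRectified() calls.

dai::Pipeline pipeline;
auto stereo = pipeline.create<dai::node::StereoDepth>();
auto xoutDepth = pipeline.create<dai::node::XLinkOut>();
auto xoutDisparity = pipeline.create<dai::node::XLinkOut>();
xoutDepth->setStreamName("depth");
xoutDisparity->setStreamName("disparity");
// Linking the outputs enables them automatically:
stereo->depth.link(xoutDepth->input);
stereo->disparity.link(xoutDisparity->input);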

Related PR: luxonis/depthai-shared#23

Works with LR-check or LR-check + Subpixel for now.
The updated FW also fixes some crashes with LR-check mode enabled.
/**
 * @param align Set the disparity/depth alignment to the perspective of one camera
 */
void setDepthAlign(Properties::DepthAlign align);
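
A minimal usage sketch (the exact qualified enum name is an assumption; the available values are discussed in the review thread below):

// Hypothetical call: align depth to the virtual center of the stereo baseline
stereo->setDepthAlign(dai::StereoDepthProperties::DepthAlign::CENTER);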
Collaborator

Should we use CameraBoardSocket to select which camera to align to, instead of a new DepthAlign enum?

Contributor Author

Currently DepthAlign contains: { AUTO = -1, RIGHT, LEFT, CENTER, RGB }

  • RIGHT (default) and LEFT would map to the respective 'rectified' streams.
  • CENTER would just displace the disparity map (each pixel by its disparity_value / 2, as sketched below), while RGB (not yet implemented, defaulting with a warning to CENTER) would need an additional 'warp' step (extrinsics alignment / resolution scaling). For boards like the BW1092, where the RGB camera is not centered, the disparity displacement would need to be done by a different amount.
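
A rough sketch of that CENTER displacement, assuming a U16 disparity map and OpenCV on the host (buffer names are hypothetical; the shift direction depends on which rectified view the disparity is referenced to):

// Shift each disparity pixel by half its own value toward the virtual center view
cv::Mat center = cv::Mat::zeros(disp.size(), CV_16UC1);
for(int y = 0; y < disp.rows; y++) {
    for(int x = 0; x < disp.cols; x++) {
        uint16_t d = disp.at<uint16_t>(y, x);
        int xc = x + d / 2;  // half-disparity displacement
        if(xc >= 0 && xc < disp.cols) center.at<uint16_t>(y, xc) = d;
    }
}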

Mapping to the real LEFT and RIGHT (from CameraBoardSocket), we would need to do an inverse homography transform, which should be easily doable with the mechanism for RGB in place. But it incurs the additional warp overhead, and in many cases it's not needed (as the rectified streams could be used directly).

Should we then extend DepthAlign as: { AUTO = -1, RECTIFIED_RIGHT, RECTIFIED_LEFT, CENTER, RGB, LEFT, RIGHT } ?

Collaborator

What if we separate these two:

  • CameraBoardSocket <- which camera it references (can be up to the number of MIPI lanes; not added to the enum yet)
  • DepthAlign <- the "mode" to reference, like: RECTIFIED, ORIGINAL, CENTER, ... (TBD)

I think we'd cover all cases with this, without too much repetition.

Contributor Author

Definitely, adding a separate config with CameraBoardSocket is good; I will add it.

But one case where combining these two might be confusing is e.g. CameraBoardSocket::RGB with mode RECTIFIED.

Or with images from the host, or running Stereo on OAK-1: one input from the RGB camera, the other from the host (actually forwarded by another OAK-1), as one customer wants to do. Actually, I think here we should have a way to pass a minimal StereoCalibration structure to StereoDepth that won't refer to CameraBoardSocket entries, but simply to the 'left' and 'right' inputs of the node. We can discuss further on luxonis/depthai-shared#19

So I was thinking of referencing based on the I/Os of the StereoDepth node. Something like:
DepthAlign: RECTIFIED_RIGHT, RECTIFIED_LEFT, RECTIFIED_CENTER, INPUT_LEFT, INPUT_RIGHT, OTHER_CAMERA.

When OTHER_CAMERA is set, the camera would be specified by CameraBoardSocket (sketched below). For example, running standard stereo on OAK-D, INPUT_RIGHT would be equivalent to OTHER_CAMERA + CameraBoardSocket::RIGHT.
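
A hypothetical sketch of the enum proposed here (naming and underlying type are assumptions, not a final API):

enum class DepthAlign : int32_t {
    AUTO = -1,
    RECTIFIED_RIGHT,
    RECTIFIED_LEFT,
    RECTIFIED_CENTER,
    INPUT_LEFT,
    INPUT_RIGHT,
    OTHER_CAMERA  // the actual camera is then given via CameraBoardSocket
};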

Thoughts on that? Also, how should the API look: two separate functions, one function with two parameters, or an overloaded function (pass either DepthAlign or CameraBoardSocket, and then remove OTHER_CAMERA)?
CC @saching13

Contributor

What Martin suggested works best if we drop the notion that 'left' and 'right' are our two global-shutter cameras.

In the stereo node we set the left and right cameras using CameraBoardSocket, and we have:

DepthAlign <- the "mode" to reference, like: RECTIFIED, ORIGINAL, CENTER, ... (TBD)

In the DepthAlign mode let's have only RECTIFIED_LEFT, RECTIFIED_RIGHT and CENTER.

And setAlignment(cameraBoardSocket) will make sure to align to any camera that we have.
And another overloaded function, or something like setExtendedAlignment, takes the DepthAlign mode. In this case RECTIFIED_LEFT would be the rectified_left frame of the camera/image passed to the stereo node's left input, and RECTIFIED_RIGHT the rectified_right frame of the camera/image passed as the right input.

One such case is where the RGB camera can be stereo left and the right camera stereo right.
And CENTER would not be the center of the device but the center of the stereo baseline.

Thoughts?

Contributor

Any resolution here? I think I've understood, and a lot of these are sounding good to me.

Contributor Author

Changed to an overloaded function, as the options (DepthAlign and CameraBoardSocket) cannot be combined, and it would be confusing to set both.
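
The resulting pair of overloads would look roughly like this (a sketch based on the discussion above, not verbatim from the merged header):

// Align to a logical view (e.g. RECTIFIED_LEFT, RECTIFIED_RIGHT, CENTER)
void setDepthAlign(Properties::DepthAlign align);
// Align to a specific camera (e.g. CameraBoardSocket::RGB)
void setDepthAlign(CameraBoardSocket camera);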

@themarpe (Collaborator) left a comment

Also, don't forget to add the 'isp' & 'raw' outputs to the list of outputs:

std::vector<Node::Output> ColorCamera::getOutputs() {
    return {video, preview, still};
}
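
That is, a minimal sketch of the suggested change:

std::vector<Node::Output> ColorCamera::getOutputs() {
    return {video, preview, still, isp, raw};
}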

Comment on lines 137 to 150
void ColorCamera::setIspScale(int numerator, int denominator) {
    properties.ispScale.horizNumerator = numerator;
    properties.ispScale.horizDenominator = denominator;
    properties.ispScale.vertNumerator = numerator;
    properties.ispScale.vertDenominator = denominator;
}

void ColorCamera::setIspScaleFull(int horizNum, int horizDenom, int vertNum, int vertDenom) {
    properties.ispScale.horizNumerator = horizNum;
    properties.ispScale.horizDenominator = horizDenom;
    properties.ispScale.vertNumerator = vertNum;
    properties.ispScale.vertDenominator = vertDenom;
}

Collaborator

Let's overload these two as well, and add tuple overloads (they play nicely with initializer lists, and with tuples/lists in Python):

setIspScale(int numerator, int denominator)
setIspScale(std::tuple<int, int> scale)

setIspScale(int horizNum, int horizDenom, int vertNum, int vertDenom)
setIspScale(std::tuple<int, int> horizScale, std::tuple<int, int> vertScale)
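
A sketch of how the tuple overloads could forward to the existing setters (assuming the four-argument variant is also named setIspScale, as listed above):

#include <tuple>

void ColorCamera::setIspScale(std::tuple<int, int> scale) {
    setIspScale(std::get<0>(scale), std::get<1>(scale));
}

void ColorCamera::setIspScale(std::tuple<int, int> horizScale, std::tuple<int, int> vertScale) {
    setIspScale(std::get<0>(horizScale), std::get<1>(horizScale),
                std::get<0>(vertScale), std::get<1>(vertScale));
}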

Collaborator

Also, the same for setPreviewSize, setVideoSize and setStillSize.
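
The same forwarding pattern would apply there; a hypothetical sketch for setPreviewSize:

void ColorCamera::setPreviewSize(std::tuple<int, int> size) {
    setPreviewSize(std::get<0>(size), std::get<1>(size));
}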

Comment on lines 75 to 76
Output isp{*this, "isp", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}};
Output raw{*this, "raw", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}};
Collaborator

Missing docstrings.
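
A hedged sketch of what the docstrings could look like (the exact wording is an assumption; the isp output carrying Y, U, V planar frames is confirmed later in this PR):

/**
 * Outputs ImgFrame message that carries YUV420 planar (Y, U, V plane order) frame data from the ISP.
 */
Output isp{*this, "isp", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}};

/**
 * Outputs ImgFrame message that carries raw sensor data.
 */
Output raw{*this, "raw", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}};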

- add `isp` and `raw` to ColorCamera list of outputs, add docstrings
- overloaded `setIspScale`, with tuple inputs as options
- also overloaded `setPreviewSize`, `setVideoSize` and `setStillSize` with tuple inputs
- remove non-implemented `setNoiseReductionStrength`
- Updated FW: all initial controls can be applied for Mono/ColorCamera (no longer limited to focus settings for Color)
@alex-luxonis marked this pull request as ready for review March 31, 2021 13:20
@@ -118,7 +118,7 @@ cv::Mat ImgFrame::getCvFrame() {
         } break;

         case Type::YUV420p:
-            cv::cvtColor(frame, output, cv::ColorConversionCodes::COLOR_YUV420p2BGR);
+            cv::cvtColor(frame, output, cv::ColorConversionCodes::COLOR_YUV2BGR_IYUV);
Contributor Author

About the change here: it may look counter-intuitive, but the YUV420p frames we receive from the device (currently from the ColorCamera::isp output added in this PR) have the plane order Y, U, V.

In OpenCV, the format YUV420p is defined as an equivalent of YV12:
cv::COLOR_YUV420p2BGR = COLOR_YUV2BGR_YV12,
https://docs.opencv.org/4.5.1/d8/d01/group__imgproc__color__conversions.html

While YV12 has the plane order Y, V, U:
https://www.fourcc.org/pixel-format/yuv-yv12/
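
For reference, a minimal host-side sketch of the conversion (width, height and dataPtr are hypothetical): an I420/IYUV buffer is width x height x 3/2 bytes, Y plane first, then U, then V.

#include <opencv2/imgproc.hpp>

cv::Mat yuv(height * 3 / 2, width, CV_8UC1, dataPtr);  // planar Y, U, V buffer
cv::Mat bgr;
cv::cvtColor(yuv, bgr, cv::COLOR_YUV2BGR_IYUV);  // not COLOR_YUV420p2BGR (== YV12)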

@themarpe (Collaborator) left a comment

I'd only apply the --parallel 8 option in CI (main.workflow.yml), and leave plain --parallel elsewhere.

@alex-luxonis (Contributor Author)

I added --parallel 8 in the Readme command snippets more as a guideline for users to limit the number of build jobs. By default, --parallel seems to create as many processes as the underlying build tool (make, etc.) wants (I see over 40 build jobs fired at once on my machine with 8 CPU threads), and that needs several GB of RAM; if that memory isn't free/available, it can easily hang the system.
But I can remove it. Not sure then if we should put a note somewhere else.

@themarpe (Collaborator) commented Apr 2, 2021

The problem with --parallel 8 is that it won't solve the issue for e.g. RPi, etc., and the value is arbitrary.
Should we leave it as --parallel but add a note, something like:

ℹ️ Use --parallel [num CPU cores] to reduce memory consumption

Or add this directly into the code block.

On the other hand, for a script like depthai-python/docs/docker/update_docs.sh we can do:

cmake --build build --target sphinx --parallel $(nproc)

Also, apparently Ninja uses the number of CPUs + 2 as its default number of concurrent build jobs, which is nice :)

@alex-luxonis merged commit 8b7eaf7 into develop Apr 28, 2021
@alex-luxonis deleted the stereo_fixes branch April 28, 2021 11:28
@Luxonis-Brandon added the Gen2 label (Feature for or issue with Gen2) Apr 28, 2021