This repository has been archived by the owner on Feb 1, 2022. It is now read-only.
Currently, web developers are expected to do some math themselves (see below) to convert the quantized depth value they get from the API into the actual depth measurement, in millimeter units, that gives the distance from the camera to the object at that particular point.
I'm asking for feedback on whether we should:

Option 1: provide a convenience function that takes a quantized depth value d8bit and returns the corresponding depth measurement d in millimeters, or

Option 2: add a non-normative section that gives explicit guidance to web developers on how to do this conversion themselves, along with some practical examples.
This is what web developers are currently expected to do (option 2):

The depth measurement d (in millimeter units) is recovered by solving the depth-to-grayscale conversion for d as follows:

If the conversion is linear, then given d8bit, near, and far, we first normalize d8bit to the [0, 1] range: ... and then solve the rules to convert using range linear for d:

If the conversion is inverse, then given d8bit, near, and far, we similarly first normalize d8bit to the [0, 1] range: ... and then solve the rules to convert using range inverse for d:
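For illustration, here is a minimal JavaScript sketch of both conversions. The function names are hypothetical, and the formulas assume the usual normalization by 255 plus the commonly used linear and inverse (z-buffer-style) mappings between near and far; the normative formulas are the ones in the spec.

```javascript
// Normalize an 8-bit quantized depth value to the [0, 1] range.
// (Assumes an 8-bit quantization, hence division by 255.)
function normalize(d8bit) {
  return d8bit / 255;
}

// Linear conversion: quantization steps are spread evenly over [near, far].
// Returns the depth measurement d in millimeters.
function depthLinear(d8bit, near, far) {
  const dNorm = normalize(d8bit);
  return near + dNorm * (far - near);
}

// Inverse conversion: more quantization steps land close to the camera,
// fewer far away (think GPU z-buffer). Returns d in millimeters.
function depthInverse(d8bit, near, far) {
  const dNorm = normalize(d8bit);
  return (near * far) / (far - dNorm * (far - near));
}
```

Both mappings agree at the endpoints: d8bit = 0 maps to near and d8bit = 255 maps to far; they differ only in how the intermediate values are distributed.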
This translates into a couple of lines of boilerplate JavaScript code that, I assume, will be rolled into a JS library as usual at some point. I think I'm leaning toward option 2, at least for v1, but wanted to loop you in before baking this into the spec.
(As a recap, the reason we have two ways to do the depth-to-grayscale conversion is that inverse allocates more bits to the near depth values and fewer bits to the far values (think GPU z-buffer), which is more appropriate if the source depth map's bit depth is greater than 8, while linear allocates the bits evenly, which is better if the source depth map's bit depth is 8 or less.)
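To make that trade-off concrete, here is a small sketch comparing how much depth range a single 8-bit quantization step covers at the near end versus the far end. It reuses the same hypothetical inverse formula sketched above, with example near/far values chosen purely for illustration.

```javascript
// Hypothetical inverse (z-buffer-style) mapping, as described above.
function depthInverse(d8bit, near, far) {
  const dNorm = d8bit / 255;
  return (near * far) / (far - dNorm * (far - near));
}

const near = 200;  // millimeters (example value)
const far = 1000;  // millimeters (example value)

// Depth range covered by one 8-bit step at each end of the range.
const stepNear = depthInverse(1, near, far) - depthInverse(0, near, far);
const stepFar = depthInverse(255, near, far) - depthInverse(254, near, far);

// With a linear mapping, every step covers the same distance:
const stepLinear = (far - near) / 255; // ~3.14 mm per step, everywhere
```

With these example values, the inverse mapping's step size is well under a millimeter near the camera and over a centimeter at the far end, while the linear mapping is a constant ~3.14 mm throughout, which is the bit-allocation difference the recap describes.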