
Platform review of server #34

Merged
jwnimmer-tri merged 1 commit into RobotLocomotion:main from review-server on May 16, 2023

Conversation

jwnimmer-tri
Contributor

@jwnimmer-tri jwnimmer-tri commented May 11, 2023

The whitespace is to nudge Reviewable.
It will be reverted prior to merge.


This change is Reviewable

@SeanCurtis-TRI SeanCurtis-TRI self-assigned this May 15, 2023
Contributor

@SeanCurtis-TRI SeanCurtis-TRI left a comment


+@SeanCurtis-TRI

Reviewable status: LGTM missing from assignee SeanCurtis-TRI

Contributor

@SeanCurtis-TRI SeanCurtis-TRI left a comment


:LGTM: With some small stuff and some feature-ish comments.

Reviewed 5 of 5 files at r1, all commit messages.
Reviewable status: 9 unresolved discussions (waiting on @jwnimmer-tri)


CONTRIBUTING.md line 8 at r1 (raw file):

When you make a contribution to this project, you are agreeing to offer your
contribition under the same [LICENSE.TXT](LICENSE.TXT) as the project.

nit: typo

Suggestion:

contribution

server.py line 141 at r1 (raw file):

        # Import a glTF file. Note that the Blender glTF importer imposes a
        # 90-degree rotation along X-axis when loading meshes. Thus, we

nit: A couple of notes:

  1. The rotation would be around the X-axis and not "along".
  2. While the document isn't wrong, per se, it's imprecise. The sign of rotation matters and, in this case, the comment language implies a positive rotation. (However, the code likewise imparts a positive rotation, so that suggests the text is overly vague.)

Does this feel like a defect in VTK's glTF exporter? glTF is documented as y-up. Blender is z-up. If we need to rotate positive 90 degrees around the x-axis, it's because blender applied, correctly, the negative 90 degree rotation to go back from y-up to z-up.

Admittedly, as long as the server knows about the idiosyncrasies of the client's glTF flavor, they can form a coherent whole. But it does mean we're producing non-compliant glTF files.

Code quote:

90-degree rotation along X-axis
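
For concreteness, a small numpy sketch of the round-trip argument (illustrative only, not server code): the exporter's and importer's up-axis corrections are opposite 90-degree rotations about X, so a compliant round trip composes to identity and no extra fix-up should be needed.

import numpy as np

def rot_x(deg):
    # Rotation matrix about the X axis by `deg` degrees.
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, c, -s],
                     [0.0, s, c]])

# A compliant z-up <-> y-up round trip is a no-op.
assert np.allclose(rot_x(+90) @ rot_x(-90), np.eye(3))

# Sanity check: a 90-degree rotation about X exchanges the Y and Z axes,
# which is exactly the y-up (glTF) vs z-up (Blender) difference.
assert np.allclose(rot_x(-90) @ np.array([0.0, 0.0, 1.0]),
                   np.array([0.0, 1.0, 0.0]))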

server.py line 171 at r1 (raw file):

        # Set the clipping planes using {min, max}_depth when rendering depth
        # images; otherwise, `near` and `far` are set for color and label
        # images.

This constitutes a change in the depth image that Drake would return.

In VTK (and GL), it is possible to get a depth image where the value is "too close" or "too far". This is because the clipping range can lie outside of the depth range. When the depth range becomes the clipping range, the behavior changes.

Where this matters most is on geometry that is too close. With a near clipping plane closer than the min depth plane, we can't get "too close" returns. Now, they'll all get clipped away and depending on what's behind the clipped geometry, we could get arbitrary depth returns (including up to infinity). But they definitely won't be "too close".

This is a significant enough change that, even if it's necessary, it should be documented in some way here. If it's not necessary, we should put in a TODO to address it.
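
For concreteness, a sketch of the mapping I'm describing (the sentinel values here, 0 for "too close" and inf for "too far", are my shorthand, not necessarily the server's actual encoding):

import numpy as np

def encode_depth(depth_m, min_depth, max_depth):
    # Intended mapping of metric depth to depth-image returns: anything
    # nearer than min_depth becomes 0.0 ("too close"); anything farther
    # than max_depth becomes inf ("too far").
    out = np.asarray(depth_m, dtype=np.float32).copy()
    out[out < min_depth] = 0.0
    out[out > max_depth] = np.inf
    return out

# This can only flag "too close" if the near clipping plane sits closer
# than min_depth; if the clipping range equals the depth range, the
# too-close geometry is clipped away before this mapping ever sees it.
print(encode_depth([0.05, 1.0, 5.0], min_depth=0.1, max_depth=3.0))
# -> [ 0.  1. inf]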


server.py line 187 at r1 (raw file):

        # be set if the values of fov_x and fov_y are different. The code path
        # needs to be tested.
        if params.fov_x == params.fov_y:

It's worth noting that fov_{x|y} and focal_{x|y} are duals of each other. Given the image size and one of those values, you can compute the others. This code seems to imply that they are independent degrees of freedom (focal used above, fov used here).

I believe that you should always set the lens_unit to "FOV" and define the vertical field of view and rely on the rendering pixel aspect ratio to handle non-symmetrical lenses.

Am I missing something such that this is not the case?
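
For reference, the duality I mean (standard pinhole-camera relations, not code from the server):

import math

def fov_from_focal(focal_px, image_size_px):
    # Field of view (radians) from a focal length and an image dimension,
    # both measured in pixels, for a pinhole camera.
    return 2.0 * math.atan2(image_size_px / 2.0, focal_px)

def focal_from_fov(fov_rad, image_size_px):
    # The inverse relation.
    return (image_size_px / 2.0) / math.tan(fov_rad / 2.0)

# Given the image width, focal_x and fov_x carry the same information.
width, focal_x = 640, 579.4  # example numbers only
fov_x = fov_from_focal(focal_x, width)
assert abs(focal_from_fov(fov_x, width) - focal_x) < 1e-6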


server.py line 189 at r1 (raw file):

        if params.fov_x == params.fov_y:
            camera.data.lens_unit = "FOV"
            camera.data.angle = params.fov_x

nit: This is correct (in this specific case) but misleading. I believe the semantics of the camera's field of view are defined on the vertical (or y) axis. So, let's assign fov_y to be clear that we understand those semantics.

(Although, this comment probably gets subsumed in my previous comment.)


server.py line 297 at r1 (raw file):

        # Convert depth measurements via a MapValueNode. The depth values are
        # measured in meters, and thus they are converted to millimeters first.
        # Blender scales the pixel values by 65535 (2^16 -1) when producing an

nit: It looks right orthographically, but reads horribly phonetically.

Suggestion:

a
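
(Separate from the nit: the arithmetic in the quoted comment, sketched with illustrative names in case it helps a future reader. The 16-bit output format is my assumption based on the 65535 factor.)

# Depth in meters -> millimeters, then divided by 65535 because Blender
# multiplies normalized pixel values by 65535 (2^16 - 1) when writing a
# 16-bit image, so the stored pixel ends up holding millimeters.
DEPTH_SCALE = 1000.0 / 65535.0

def expected_pixel_value(depth_meters):
    return round(depth_meters * DEPTH_SCALE * 65535)

assert expected_pixel_value(1.5) == 1500  # 1.5 m -> 1500 mm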

test/server_test.py line 185 at r1 (raw file):

    # Test color and depth image rendering given a rgba and a textured mesh.
    # Note: Rendering a label image is not applicable for any textured objects.

I'm not sure what this means. Not applicable in what sense?

Code quote:

 Note: Rendering a label image is not applicable for any textured objects.

test/server_test.py line 206 at r1 (raw file):

        requests. Each image type is first rendered and compared with the
        ground truth images. A second image is then rendered and expected to be
        pixel-identical as the first one.

BTW This assertion does depend on rendering properties. (E.g., if the rendering did global illumination via sampling and denoising, it wouldn't be pixel-identical.) Probably worth being clear about this requirement.
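
Something like this is what I have in mind (the helper name and the use of PIL/numpy are illustrative, not the test's actual code):

import numpy as np
from PIL import Image

def assert_pixel_identical(path_a, path_b):
    # Only valid for deterministic renders; a sampled global-illumination
    # pass with denoising would generally not reproduce identical pixels.
    a = np.asarray(Image.open(path_a))
    b = np.asarray(Image.open(path_b))
    np.testing.assert_array_equal(a, b)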


test/server_test.py line 230 at r1 (raw file):

        for index, test_case in enumerate(test_cases):
            second_image = self._render_and_check(

nit: second_image isn't used.

Contributor Author

@jwnimmer-tri jwnimmer-tri left a comment


Checkpoint on fixes, just to get the whitespace revert pushed to CI.

I also staged a README tidy-up that I've been meaning to do for a little while.

Reviewable status: 7 unresolved discussions (waiting on @SeanCurtis-TRI)


test/server_test.py line 185 at r1 (raw file):

Previously, SeanCurtis-TRI (Sean Curtis) wrote…

I'm not sure what this means. Not applicable in what sense?

Hopefully better now?

Contributor

@SeanCurtis-TRI SeanCurtis-TRI left a comment


Reviewed 5 of 5 files at r2, all commit messages.
Reviewable status: 7 unresolved discussions (waiting on @jwnimmer-tri)


test/server_test.py line 185 at r1 (raw file):

Previously, jwnimmer-tri (Jeremy Nimmer) wrote…

Hopefully better now?

The counter argument is that the glTF has a textured object. What we should be testing is that the "make a label image" logic doesn't get interfered with in the presence of broadcast images, right?

Contributor Author

@jwnimmer-tri jwnimmer-tri left a comment


Reviewable status: 6 unresolved discussions (waiting on @SeanCurtis-TRI)


server.py line 141 at r1 (raw file):

Previously, SeanCurtis-TRI (Sean Curtis) wrote…

nit: A couple of notes:

  1. The rotation would be around the X-axis and not "along".
  2. While the document isn't wrong, per se, it's imprecise. The sign of rotation matters and, in this case, the comment language implies a positive rotation. (However, the code likewise imparts a positive rotation, so that suggests the text is overly vague.)

Does this feel like a defect in VTK's glTF exporter? glTF is documented as y-up. Blender is z-up. If we need to rotate positive 90 degrees around the x-axis, it's because blender applied, correctly, the negative 90 degree rotation to go back from y-up to z-up.

Admittedly, as long as the server knows about the idiosyncrasies of the client's glTF flavor, they can form a coherent whole. But it does mean we're producing non-compliant glTF files.

Working

I am not sure if this is a bug in Drake's glTF exporter, or in Blender's glTF importer.


server.py line 171 at r1 (raw file):

Previously, SeanCurtis-TRI (Sean Curtis) wrote…

This constitutes a change in the depth image that Drake would return.

In VTK (and GL), it is possible to get a depth image where the value is "too close" or "too far". This is because the clipping range can lie outside of the depth range. When the depth range becomes the clipping range, the behavior changes.

Where this matters most is on geometry that is too close. With a near clipping plane closer than the min depth plane, we can't get "too close" returns. Now, they'll all get clipped away and depending on what's behind the clipped geometry, we could get arbitrary depth returns (including up to infinity). But they definitely won't be "too close".

This is a significant enough change that, even if it's necessary, it should be documented in some way here. If it's not necessary, we should put in a TODO to address it.

This is a good point. I've added a TODO for kTooClose.

From what I can tell, there is no bug relating to kTooFar? From code reading at least, it seems correct. For now the TODO calls for testing it but does not claim there is a bug. Let me know if that matches your understanding.


test/server_test.py line 185 at r1 (raw file):

Previously, SeanCurtis-TRI (Sean Curtis) wrote…

The counter argument is that the glTF has a textured object. What we should be testing is that the "make a label image" logic doesn't get interfered with in the presence of broadcast images, right?

I am not sure what you mean. If an RPC label request contains a texture, it's a precondition violation and a bug in the client. Are you saying that the server should detect this condition and fail-fast with an exception?


test/server_test.py line 206 at r1 (raw file):

Previously, SeanCurtis-TRI (Sean Curtis) wrote…

BTW This assertion does depend on rendering properties. (E.g., if the rendering did global illumination via sampling and denoising, it wouldn't be.) Probably worth being clear about this requirement.

Hopefully better now?

Contributor Author

@jwnimmer-tri jwnimmer-tri left a comment


Reviewable status: 5 unresolved discussions (waiting on @SeanCurtis-TRI)


server.py line 141 at r1 (raw file):

Previously, jwnimmer-tri (Jeremy Nimmer) wrote…

Working

I am not sure if this is a bug in Drake's glTF exporter, or in Blender's glTF importer.

Done (via TODO).

Contributor Author

@jwnimmer-tri jwnimmer-tri left a comment


All done now (hopefully).

Reviewable status: 5 unresolved discussions (waiting on @SeanCurtis-TRI)


server.py line 187 at r1 (raw file):

Previously, SeanCurtis-TRI (Sean Curtis) wrote…

It's worth noting that fov_{x|y} and focal_{x|y} are duals of each other. Given the image size and one of those values, you can compute the others. This code seems to imply that they are independent degrees of freedom (focal used above, fov used here).

I believe that you should always set the lens_unit to "FOV" and define the vertical field of view and rely on the rendering pixel aspect ratio to handle non-symmetrical lenses.

Am I missing something such that this is not the case?

Hopefully somewhat better now?

I also filed #40 to track the missing test coverage.

I didn't make any meaningful change to the aspect ratio (computed using focal lengths). Did I miss anything?


server.py line 189 at r1 (raw file):

Previously, SeanCurtis-TRI (Sean Curtis) wrote…

nit: This is correct (in this specific case) but misleading. I believe the semantics of the camera's field of view are defined on the vertical (or y) axis. So, let's assign fov_y to be clear that we understand those semantics.

(Although, this comment probably gets subsumed in my previous comment.)

Hopefully better now?

Contributor

@SeanCurtis-TRI SeanCurtis-TRI left a comment


One last concern regarding the camera intrinsics: fov vs pixel aspect ratio.

Reviewed 1 of 2 files at r3, 1 of 1 files at r5, all commit messages.
Reviewable status: 1 unresolved discussion (waiting on @jwnimmer-tri)


server.py line 141 at r1 (raw file):

Previously, jwnimmer-tri (Jeremy Nimmer) wrote…

Done (via TODO).

Definitely VTK export. Blender does the right thing on export and import of gltf. gltf has a clear spec. Blender transforms when it writes out, and transforms back when it imports. VTK is the one that isn't playing the game. I'll elaborate on the issue as necessary.


server.py line 171 at r1 (raw file):
Correct, the far side is perfectly happy. I'd rephrase it slightly:

When there is geometry in the range [near, min], it should return zero (too close). As is, it will return non-zero.

That said, this references an issue, so all elaboration can go there.


server.py line 187 at r1 (raw file):

Previously, jwnimmer-tri (Jeremy Nimmer) wrote…

Hopefully somewhat better now?

I also filed #40 to track the missing test coverage.

I didn't make any meaningful change to the aspect ratio (computed using focal lengths). Did I miss anything?

See my comment below.


server.py line 189 at r1 (raw file):

Previously, jwnimmer-tri (Jeremy Nimmer) wrote…

Hopefully better now?

Hmmm....now this has me worried. The combination of independent fields of view + pixel aspect ratio seems highly suspicious. I'd expect either symmetric fields of view or symmetric pixel ratio, but both being asymmetric seems like an error.

If these fields of view work as expected, then I suspect the pixel aspect ratio should go away entirely.

Out of curiosity, what documentation are you using to discover these members? When I look on pages like https://docs.blender.org/api/current/bpy.types.Camera.html#bpy.types.Camera, it looks different from what's here (and the semantics are woefully lacking). Is there a better resource I haven't stumbled across?


test/server_test.py line 185 at r1 (raw file):

Previously, jwnimmer-tri (Jeremy Nimmer) wrote…

I am not sure what you mean. If an RPC label request contains a texture, it's a precondition violation and a bug in the client. Are you saying that the server should detect this condition and fail-fast with an exception?

Ah...I hadn't realized we produce a different gltf based on the image type.


test/server_test.py line 206 at r1 (raw file):

Previously, jwnimmer-tri (Jeremy Nimmer) wrote…

Hopefully better now?

Definitely.

Contributor

@SeanCurtis-TRI SeanCurtis-TRI left a comment


Reviewable status: 1 unresolved discussion (waiting on @jwnimmer-tri)


server.py line 189 at r1 (raw file):

Previously, SeanCurtis-TRI (Sean Curtis) wrote…

Hmmm....now this has me worried. The combination of independent fields of view + pixel aspect ratio seems highly suspicious. I'd expect either symmetric fields of view or symmetric pixel ratio, but both being asymmetric seems like an error.

If these fields of view work as expected, then I suspect the pixel aspect ratio should go away entirely.

Out of curiosity, what documentation are you using to discover these members? When I look on pages like https://docs.blender.org/api/current/bpy.types.Camera.html#bpy.types.Camera, it looks different from what's here (and the semantics are woefully lacking). Is there a better resource I haven't stumbled across?

Ok, I just tinkered with blender directly and I'm now convinced that this is wrong.

The fix is easy. Set either angle_x or angle_y but not both and leave the pixel aspect ratio intact.

details: it appears that angle_x and angle_y are merely indirections for setting angle. Setting them does not introduce anisotropy.
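
e.g., in a console session (the data-block name is illustrative):

import math
import bpy

cam = bpy.data.cameras["Camera"]
cam.lens_unit = "FOV"

# Assigning angle_y writes through to the single underlying `angle`; it
# does not create an independent vertical field of view, so any asymmetry
# has to come from the render's pixel aspect ratio instead.
cam.angle_y = math.radians(45.0)
print(cam.angle, cam.angle_x, cam.angle_y)  # all three change together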

Contributor

@SeanCurtis-TRI SeanCurtis-TRI left a comment


Reviewable status: 2 unresolved discussions (waiting on @jwnimmer-tri)


server.py line 156 at r5 (raw file):

        scene.render.resolution_y = params.height
        scene.render.pixel_aspect_x = 1.0
        scene.render.pixel_aspect_y = params.focal_y / params.focal_x

In retrospect, I now believe this is wrong.

  1. The values for pixel aspect ratio must be >= 1.
  2. Therefore, if focal_x > focal_y, we will attempt to assign an invalid value to pixel_aspect_y (which will silently turn into 1).
  3. Furthermore, even if focal_y > focal_x so that the ratio > 1, we've got the values reversed. If focal_y > focal_x, pixel_aspect_y should be 1 and, counterintuitively, pixel_aspect_x should be focal_y / focal_x.
    • The converse is likewise true to make sure one value is always one and the other is always >= 1.

Contributor Author

@jwnimmer-tri jwnimmer-tri left a comment


Reviewable status: 2 unresolved discussions (waiting on @SeanCurtis-TRI)


server.py line 189 at r1 (raw file):
I've pushed my best attempt at the fix.

If I set only one of angle_x or angle_y, the regression tests fail. This was true no matter how I adjusted the pixel aspect code in the other discussion.

If you have a better idea, possibly making a patch file would best communicate what I've missed.

Out of curiosity, what documentation are you using to discover these members? ...

(Yes, that's the link I was using as a reference.)


server.py line 156 at r5 (raw file):

Previously, SeanCurtis-TRI (Sean Curtis) wrote…

In retrospect, I now believe this is wrong.

  1. The values for pixel aspect ratio must be >= 1.
  2. Therefore, if focal_x > focal_y, we will attempt to assign an invalid value to pixel_aspect_y (which will silently turn into 1).
  3. Furthermore, even if focal_y > focal_x so that the ratio > 1, we've got the values reversed. If focal_y > focal_x, pixel_aspect_y should be 1 and, counterintuitively, pixel_aspect_x should be focal_y / focal_x.
    • The converse is likewise true to make sure one value is always one and the other is always >= 1.

I removed this stanza entirely; my understanding is that setting the fov should cover it, and the test passes without this.

Contributor

@SeanCurtis-TRI SeanCurtis-TRI left a comment


Reviewed 1 of 1 files at r6, all commit messages.
Reviewable status: 2 unresolved discussions (waiting on @jwnimmer-tri)


server.py line 189 at r1 (raw file):

Previously, jwnimmer-tri (Jeremy Nimmer) wrote…

I've pushed my best attempt at the fix.

If I set only one of angle_x or angle_y, the regression tests fail. This was true no matter how I adjusted the pixel aspect code in the other discussion.

If you have a better idea, possibly making a patch file would best communicate what I've missed.

Out of curiosity, what documentation are you using to discover these members? ...

(Yes, that's the link I was using as a reference.)

This works just fine for me:

--- a/server.py
+++ b/server.py
@@ -152,6 +152,12 @@ class Blender:
         scene.render.filepath = str(output_path)
         scene.render.resolution_x = params.width
         scene.render.resolution_y = params.height
+        if params.focal_x > params.focal_y:
+            scene.render.pixel_aspect_x = 1.0
+            scene.render.pixel_aspect_y = params.focal_x / params.focal_y
+        else:
+            scene.render.pixel_aspect_x = params.focal_y / params.focal_x
+            scene.render.pixel_aspect_y = 1.0
 
         # Set camera parameters.
         camera = bpy.data.objects.get("Camera Node")
@@ -184,7 +190,6 @@ class Blender:
         ) / params.width
 
         camera.data.lens_unit = "FOV"
-        camera.data.angle_x = params.fov_x
         camera.data.angle_y = params.fov_y
 
         # Set image_type specific functionality.

Furthermore, I evaluated this logic in a blender console session so I could see the immediate impact of these. I'm pretty sure this is all correct.


As for API documentation, I now have seen the whole "object container" vs "data type data" thing; in the server, those are camera and camera.data, respectively.
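
In other words (a sketch using the object name from the patch above):

import bpy

camera = bpy.data.objects["Camera Node"]   # the object "container"
assert isinstance(camera.data, bpy.types.Camera)

camera.location = (0.0, 0.0, 1.0)          # object-level property (pose)
camera.data.lens_unit = "FOV"              # data-level (Camera) property
camera.data.angle_y = 0.8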


server.py line 156 at r5 (raw file):

Previously, jwnimmer-tri (Jeremy Nimmer) wrote…

I removed this stanza entirely; my understanding is that setting the fov should cover it, and the test passes without this.

I've explicitly set those camera settings in a blender session. Setting the fovs does not create anisotropic lenses.

Contributor Author

@jwnimmer-tri jwnimmer-tri left a comment


Reviewable status: 2 unresolved discussions (waiting on @jwnimmer-tri)


server.py line 189 at r1 (raw file):

Previously, SeanCurtis-TRI (Sean Curtis) wrote…

This works just fine for me:

--- a/server.py
+++ b/server.py
@@ -152,6 +152,12 @@ class Blender:
         scene.render.filepath = str(output_path)
         scene.render.resolution_x = params.width
         scene.render.resolution_y = params.height
+        if params.focal_x > params.focal_y:
+            scene.render.pixel_aspect_x = 1.0
+            scene.render.pixel_aspect_y = params.focal_x / params.focal_y
+        else:
+            scene.render.pixel_aspect_x = params.focal_y / params.focal_x
+            scene.render.pixel_aspect_y = 1.0
 
         # Set camera parameters.
         camera = bpy.data.objects.get("Camera Node")
@@ -184,7 +190,6 @@ class Blender:
         ) / params.width
 
         camera.data.lens_unit = "FOV"
-        camera.data.angle_x = params.fov_x
         camera.data.angle_y = params.fov_y
 
         # Set image_type specific functionality.

Furthermore, I evaluated this logic in a blender console session so I could see the immediate impact of these. I'm pretty sure this is all correct.


As for API documentation, I now have seen the whole "object container" vs "data type data" thing; in the server, those are camera and camera.data, respectively.

Working

Ah. I only tried setting fov_x and leaving fov_y untouched. (I had the same aspect code.) I didn't think to try only "y".

Contributor Author

@jwnimmer-tri jwnimmer-tri left a comment


Reviewable status: 1 unresolved discussion (waiting on @SeanCurtis-TRI)


server.py line 189 at r1 (raw file):

Previously, jwnimmer-tri (Jeremy Nimmer) wrote…

Working

Ah. I only tried setting fov_x and leaving fov_y untouched. (I had the same aspect code.) I didn't think to try only "y".

Yup, your patch works. Doing camera.data.angle_x = params.fov_x instead does not work.


server.py line 156 at r5 (raw file):

Previously, SeanCurtis-TRI (Sean Curtis) wrote…

I've explicitly set those camera settings in a blender session. Setting the fovs does not create anisotropic lenses.

Done.

Contributor

@SeanCurtis-TRI SeanCurtis-TRI left a comment


Reviewed 1 of 1 files at r7, all commit messages.
Reviewable status: :shipit: complete! all discussions resolved, LGTM from assignee SeanCurtis-TRI (waiting on @jwnimmer-tri)


server.py line 189 at r1 (raw file):

Previously, jwnimmer-tri (Jeremy Nimmer) wrote…

Yup, your patch works. Doing camera.data.angle_x = params.fov_x instead does not work.

To be perfectly frank, I have no idea why setting fov_x doesn't work. If you change any one of angle, angle_x, and angle_y then the other two change automatically. angle and angle_x match (I won't say "always" yet, but in all of my experiments, they are kept in lockstep). angle_y changes as well, based on a different relationship. With a square image, I'd expect them to stay equal. However, they have a ratio of approximately sqrt(2). I'm really not sure why.

(Also, changing the field of view via any of those values will likewise change the focal length...they are all views on the same underlying value.) Typically, the fov ratio should be equal to the image aspect ratio. But not here...not sure why not.

Contributor

@SeanCurtis-TRI SeanCurtis-TRI left a comment


:LGTM:

Reviewable status: :shipit: complete! all discussions resolved, LGTM from assignee SeanCurtis-TRI (waiting on @jwnimmer-tri)

@jwnimmer-tri jwnimmer-tri merged commit 1aced27 into RobotLocomotion:main May 16, 2023
1 check passed
@jwnimmer-tri jwnimmer-tri deleted the review-server branch May 16, 2023 20:50
@SeanCurtis-TRI
Contributor

server.py line 189 at r1 (raw file):

Previously, SeanCurtis-TRI (Sean Curtis) wrote…

To be perfectly frank, I have no idea why setting fov_x doesn't work. If you change any one of angle, angle_x, and angle_y then the other two change automatically. angle and angle_x match (I won't say "always" yet, but in all of my experiments, they are kept in lockstep). angle_y changes as well, based on a different relationship. With a square image, I'd expect them to stay equal. However, they have a ratio of approximately sqrt(2). I'm really not sure why.

(Also, changing the field of view via any of those values will likewise change the focal length...they are all views on the same underlying value.) Typically, the fov ratio should be equal to the image aspect ratio. But not here...not sure why not.

This may still not be correct. The camera has a couple more fields that matter:

When setting something like camera.data.angle = pi / 2, the values of camera.data.angle_? depend on those properties. In this case, the sensor defaults to a 1.5 width-to-height ratio. A non-unit aspect ratio matters because the sensor_fit decides which measure of the sensor to use when computing the field of view.

Options:

  • We can assume that the camera created by the gltf import is going to have its fit set to auto and its sensor dimensions defined in such a way that we can always safely set angle_y.
  • We can explicitly set the sensor size "appropriately". I'm not quite sure what the "appropriate" value is -- although I have my suspicions. I suspect if sensor aspect matches image aspect ratio we're good. But I don't understand yet how the sensor dimensions feed into anything else (possibly nothing else). See the sketch at the end of this comment.
    • I don't think we can programmatically set the fit (I haven't figured out where to find bpy enumeration values).
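
A hypothetical sketch of that second option (the numbers, and the claim that matching aspects is sufficient, are untested assumptions):

import bpy

camera = bpy.data.objects["Camera Node"]
width, height = 640, 480                          # the render resolution
camera.data.sensor_height = 24.0                  # mm; Blender's default height
camera.data.sensor_width = 24.0 * width / height  # match the image aspect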

@jwnimmer-tri
Contributor Author

server.py line 189 at r1 (raw file):

Previously, SeanCurtis-TRI (Sean Curtis) wrote…

This may still not be correct. The camera has a couple more fields that matter:

When setting something like camera.data.angle = pi / 2, the values of camera.data.angle_? depend on those properties. In this case, the sensor defaults to a 1.5 width-to-height ratio. A non-unit aspect ratio matters because the sensor_fit decides which measure of the sensor to use when computing the field of view.

Options:

  • We can assume that the camera created by the gltf import is going to have its fit set to auto and its sensor dimensions defined in such a way that we can always safely set angle_y.
  • We can explicitly set the sensor size "appropriately". I'm not quite sure what the "appropriate" value is -- although I have my suspicions. I suspect if sensor aspect matches image aspect ratio we're good. But I don't understand yet how the sensor dimensions feed into anything else (possibly nothing else).
    • I don't think we can programmatically set the fit (I haven't figured out where to find bpy enumeration values).

Possibly it's worth copying that comment into #40 for posterity?

@SeanCurtis-TRI
Contributor

server.py line 189 at r1 (raw file):

Previously, jwnimmer-tri (Jeremy Nimmer) wrote…

Possibly it's worth copying that comment into #40 for posterity?

Will do.
