
Add test infrastructure for WebVR reftests #22840

Open · wants to merge 6 commits into base: master
Conversation (7 participants)
asajeffrey (Member) commented Feb 6, 2019

Adds a dom.webvr.test pref, which puts WebVR content into fullscreen mode, so we can use our shiny new fullscreen reftesting apparatus for writing WebVR reftests.


  • ./mach build -d does not report any errors
  • ./mach test-tidy does not report any errors
  • These changes fix #22795 and fix #22804
  • There are tests for these changes

This change is Reviewable

highfive commented Feb 6, 2019

Heads up! This PR modifies the following files:

  • @KiChjang: components/script/dom/vrdisplay.rs
  • @emilio: components/layout/display_list/items.rs, components/style/properties/longhands/box.mako.rs, components/layout/display_list/builder.rs, components/style/properties/properties.mako.rs
highfive commented Feb 6, 2019

⚠️ Warning

  • These commits modify unsafe code. Please review it carefully!
asajeffrey (Member, Author) commented Feb 6, 2019

This PR is based on #22825 so should land after it does.

asajeffrey (Member, Author) commented Feb 6, 2019

As @jdm said on IRC (https://mozilla.logbot.info/servo/20190206#c15932019), we should get feedback from @paulrouget about whether we should enter fullscreen and use the browser's rAF for all devices without an external display, or whether we should add another property to the rust-webvr API.

bors-servo (Contributor) commented Feb 6, 2019

☔️ The latest upstream changes (presumably #22825) made this pull request unmergeable. Please resolve the merge conflicts.

@asajeffrey asajeffrey force-pushed the asajeffrey:script-webvr-reftest branch from ede4822 to 336769d Feb 6, 2019

@asajeffrey asajeffrey assigned paulrouget and unassigned avadacatavra Feb 6, 2019

asajeffrey (Member, Author) commented Feb 6, 2019

Rebased. #22825 has landed, so this can land too, once @paulrouget has commented.

asajeffrey (Member, Author) commented Feb 6, 2019

Also cc @kearwood.

bors-servo (Contributor) commented Feb 7, 2019

☔️ The latest upstream changes (presumably #22773) made this pull request unmergeable. Please resolve the merge conflicts.

asajeffrey (Member, Author) commented Feb 7, 2019

IRC conversation with @paulrouget at https://mozilla.logbot.info/servo/20190207#c15934602

TL;DR: We should update rust-webvr to have three kinds of displays: external, internal-using-webrender, and internal-without-webrender. That last kind is used by oculusvr and googlevr.

@asajeffrey asajeffrey force-pushed the asajeffrey:script-webvr-reftest branch from 336769d to da78317 Feb 7, 2019

asajeffrey (Member, Author) commented Feb 7, 2019

@paul: Added a flag to rust-webvr over in servo/rust-webvr#49, and used it here to decide whether or not to do presentation inside servo.

Resolved review threads (outdated): components/webvr/webvr_thread.rs, components/script/dom/vrdisplay.rs

Review context in components/script/dom/vrdisplay.rs:
let js_sender = self.global().script_chan();
let address = Trusted::new(&*self);
let near_init = self.depth_near.get();
let far_init = self.depth_far.get();
let pipeline_id = self.global().pipeline_id();

// For displays with no external display, we use the regular window
// rAF, so we don't need to spin up a dedicated thread.

paulrouget (Contributor) commented Feb 8, 2019

How hard would it be to make the TestDisplay's sync_poses behave in a way that would remove the need for this specific code path? I'm asking because, by re-using the window's rAF, the test will cover less code.

asajeffrey (Member, Author) left a comment

Addressed most of the comments; just sync_poses left to deal with.

bors-servo (Contributor) commented Feb 10, 2019

☔️ The latest upstream changes (presumably #22861) made this pull request unmergeable. Please resolve the merge conflicts.

paulrouget (Contributor) commented Feb 11, 2019

Here are some questions:

  • How will we handle a canvas that is not part of the DOM?
  • What if other JS code triggers fullscreen mode?
  • Besides the fact that the code path doesn't exist yet, is there another reason not to bypass WebRender? (WebRender adds a little bit of overhead.)
  • Do you run Servo's compositor on the main thread? If so, can you move it to a separate thread and, in immersive mode, run WebVR code on the main thread?

This general approach feels a bit like a work-around to me. And ML and FxR share a similar setup, so whatever we end up doing for ML, we will do the same for FxR.

I think your approach might be incompatible with two parts of a longer-term plan:

  • use the WebGL texture directly and pause Servo's compositor
  • move rAF/VR calls to a WebVR process/thread, with a shared texture (SurfaceTexture)

I'll be happy to r+ this as long as we agree that it might be a temporary approach until we manage to make WebGL draw directly on a shared surface texture.

MortimerGoro (Contributor) commented Feb 11, 2019

I talked with Paul this morning. Adding some feedback/questions:

  • Using the fullscreen approach may not be WebXR compliant. The API allows using a canvas that is not added to the DOM, as well as an OffscreenCanvas. What happens in those situations? We could hack it to add the canvas to the DOM, but that would change the parentNode properties, or trigger other events which may not be spec compliant (DOM listeners, fullscreenchange). cc @kearwood and @Manishearth, who may have an opinion about this.

  • I'm concerned about the extra blit required by waiting for the WebRender output. IMO the goal should be to reduce the latency of the render path as much as possible. Right now we have 1 extra blit on the oculusvr/googlevr platforms: WebGL texture <-> SDK Eye Buffer FBO. By using the WebRender output I think we'd have 2 blits: WebGL texture <-> WR Surface <-> SDK Eye Buffer FBO. The ideal situation would be 0 extra blits. For example on Oculus, we can render directly to a SurfaceTexture swapChain, making WebGL render directly to a Surface provided by the SDK.

> We should update rust-webvr to have three kinds of displays: external, internal-using-webrender, and internal-without-webrender. That last kind is used by oculusvr and googlevr.

I agree IF we can't avoid the Magic Leap main-thread limitations. Other platforms should not be affected by this. For the Firefox Reality backends I think the best render path would be to use the render target provided by the SDK (e.g. the SurfaceTexture swapChain from Oculus) and make WebGL render directly to that Surface, bypassing all the extra blits required in other situations.

asajeffrey (Member, Author) commented Feb 11, 2019

The goal of minimizing the overhead of each rAF loop seems like a good one to me. AFAICT there are three sources of overhead:

  • Unnecessary blitting. I'm confused as to why using WR requires an extra blit for devices like MagicLeap. WR is rendering into the surface provided by the embedder, so there isn't a separate WR Surface <-> SDK Eye Buffer blit.

  • WR itself. There is overhead from running WR, e.g. WR has to wait for layout to complete before rendering, even in cases like fullscreen or immersive mode where the DOM isn't contributing to the scene.

  • Context switching between threads. (#22855)

I'm not sure what the trade-off is between these, e.g. how much of a bottleneck there is in waiting for WR.

MortimerGoro (Contributor) commented Feb 15, 2019

> Unnecessary blitting. I'm confused as to why using WR requires an extra blit for devices like MagicLeap. WR is rendering into the surface provided by the embedder, so there isn't a separate WR Surface <-> SDK Eye Buffer blit.

The ideal path would be that WebGL renders to a surface provided by the VR SDK (e.g. a swapChain on Oculus). That would be 0 blits.

Adding WR would add an extra blit in that situation: WebGL renders to a texture, and then the texture is blitted onto the WR surface.

For the 2 blits (WebGL texture <-> WR Surface <-> SDK Eye Buffer), I was thinking of platforms that don't allow creating a swapChain for an Android Surface, where the "WebView" lives in a different process from the main app that renders the VR UI (e.g. GeckoView lives in another process in Firefox Reality).
