Allow multiple VR devices to be used simultaneously. #6286
Thanks!

It's not a great solution, but at least allows for the scenario where there …
@borismus @cyee @vvuk @jcarpenter These changes are not compatible with how things work on Firefox Nightly at the moment. The WebVR API always returns a Cardboard device besides any other devices that might be connected. On desktop, the Cardboard device is reported in second place and always returns a zero quaternion (the value is never null). When iterating over the list, the Cardboard orientation overrides the Rift's value. We have to improve three.js to better handle multiple VR inputs without relying on the order of the device list returned by the API. We have to allow the application to decide which device is going to be used for orientation and position (via hardware ID?).
@dmarcos IMO the real problem is that Firefox Nightly reports a phantom Cardboard sensor. The sensible fix is for Firefox to remove that device on desktops. Is that going to happen anytime soon? If not, I can tweak #6340 to ignore the Cardboard device (by looking for a specific deviceId) if it also finds an Oculus device.
@brianpeiris My understanding is that the phantom Cardboard sensor is something temporary; @vvuk can provide more info on the rationale behind it. This strange behavior of Firefox Nightly shows the kind of problems that this patch introduces when dealing with multiple inputs: it iterates over all the reported devices and sets the camera orientation and position for each of them. The last in the list wins, and there's no guaranteed order in which VR inputs are reported. It is going to confuse people. We need to give the developer control over which input, or combination of inputs, is used to modify the camera.
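The "last device wins" behavior described above can be sketched as follows. The mock device objects and the zero-quaternion Cardboard entry are assumptions for illustration, not the actual WebVR API:

```javascript
// Naive VRControls-style update: every sensor writes the camera orientation,
// so whichever device is listed last silently wins.
function naiveUpdate(devices) {
  let orientation = null;
  for (const device of devices) {
    if (device.getState) {
      orientation = device.getState().orientation; // last one overrides
    }
  }
  return orientation;
}

// Mock devices standing in for what navigator.getVRDevices() might report:
const rift = {
  deviceName: "Oculus VR HMD Sensor",
  getState: () => ({ orientation: { x: 0, y: 0.7, z: 0, w: 0.7 } }),
};
const phantomCardboard = {
  deviceName: "Cardboard HMD Sensor",
  // Firefox Nightly's phantom entry: a zero quaternion, never null.
  getState: () => ({ orientation: { x: 0, y: 0, z: 0, w: 0 } }),
};

// With the Cardboard sensor reported second, the Rift's pose is clobbered:
naiveUpdate([rift, phantomCardboard]); // → zero quaternion
```

Swapping the list order flips the result, which is exactly why relying on device order is fragile.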
On the larger discussion: VRControls currently makes a lot of assumptions about position/orientation sensors. I think fusing orientation and position from two separate devices is a good idea; the headtrackr implementation is a good example. I can see a similar solution being used for other combinations, such as @mkeblx's goggle-paper fiducial tracking library adding positional tracking to Cardboard devices, or Sixense's STEM/Hydra adding positional tracking to a DK1 or Gear VR.

I don't know if it makes sense to combine orientations from multiple devices, or positions from multiple devices, though; how do you decide which one is the "base" position/orientation? Furthermore, with the current Razer Hydra and STEM systems, and in the near future with the Vive headset and maybe even the Rift CV1, sensor devices are going to represent more than just the user's head position and orientation: they are going to represent hands, arms, and torsos, maybe even fingers. These assumptions will not hold for long, but I don't think we need to account for everything right now.
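The "orientation from one device, position from another" idea mentioned here could be sketched like this; the mock sensors and their `getState()` shapes are assumptions for illustration, not the real API:

```javascript
// Fuse a pose from two separate sensors: one trusted for orientation,
// one trusted for position (e.g. an HMD IMU plus a webcam head tracker).
function fusePose(orientationSensor, positionSensor) {
  const o = orientationSensor.getState();
  const p = positionSensor.getState();
  return { orientation: o.orientation, position: p.position };
}

// Hypothetical devices: a Cardboard-style IMU and a fiducial/webcam tracker.
const imu = { getState: () => ({ orientation: { x: 0, y: 0, z: 0, w: 1 } }) };
const tracker = { getState: () => ({ position: { x: 0.1, y: 1.6, z: 0 } }) };

fusePose(imu, tracker); // orientation from the IMU, position from the tracker
```

The open question in the comment above remains: this only works when the app explicitly declares which sensor is the "base" for each component.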
@dmarcos From the point of view of a WebVR API user, I'd like to think that the API takes care of this for me automatically, since there is probably going to be a typical set of configurations; each dev shouldn't have to repeat the setup code in their application. Perhaps the API spec should include a sensorLocation property that can be used to differentiate between a "head-orientation" sensor, a "head-position" sensor, a "hand" sensor, etc., so that VRControls can just filter to find the ones that should control the camera.
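The proposed filtering might look like the sketch below. Note that `sensorLocation` is a hypothetical property suggested in this comment, not part of the current spec:

```javascript
// Filter the device list down to the sensors that should drive the camera,
// using the (hypothetical) sensorLocation property instead of list order.
function cameraSensors(devices) {
  const cameraRoles = ["head-orientation", "head-position"];
  return devices.filter((d) => cameraRoles.includes(d.sensorLocation));
}

// Mock devices for illustration:
const devices = [
  { deviceName: "HMD IMU", sensorLocation: "head-orientation" },
  { deviceName: "Camera tracker", sensorLocation: "head-position" },
  { deviceName: "Hydra controller", sensorLocation: "hand" },
];

cameraSensors(devices); // → the two head sensors; the hand controller is excluded
```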
The Cardboard device showing up on desktop is temporary, but multiple devices showing up is not; these issues should be resolved early on, before we have content out there that blindly picks the first device in the list (or lets the last one win). Whatever HMD is used for output should have its associated position sensor(s) used (via matching hardwareUnitId). Any fusing between different devices/IDs will likely need to be done at the webapp level, with specific understanding of the sensors involved (with helper libraries, certainly), instead of just taking all the provided ones. If there are multiple devices, it's reasonable to pick the first presented HMD, or provide the user with a list to choose from, and then use its associated sensor(s). Or try to choose a more sensible default based on device name (not ID; the IDs are not constant in any way and will essentially change for every page). In the future, you may be able to drive two HMDs (for two users) from one browser, so choosing explicitly will become important for those applications.
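The hardwareUnitId pairing suggested here can be sketched as follows; the plain-object device shapes are assumptions standing in for HMDVRDevice/PositionSensorVRDevice:

```javascript
// Given a chosen HMD, keep only the position sensor(s) that share its
// hardwareUnitId, instead of applying every sensor in the list.
function sensorsForHmd(devices, hmd) {
  return devices.filter(
    (d) => d !== hmd && d.hardwareUnitId === hmd.hardwareUnitId
  );
}

// Mock device list: one HMD plus two sensors from different hardware units.
const vrDevices = [
  { type: "hmd", hardwareUnitId: "unit-A" },
  { type: "sensor", hardwareUnitId: "unit-A" },
  { type: "sensor", hardwareUnitId: "unit-B" }, // unrelated device
];

const hmd = vrDevices.find((d) => d.type === "hmd");
sensorsForHmd(vrDevices, hmd); // → only the unit-A sensor
```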
@borismus Maybe the polyfill should fuse the mouse sensor and the headtrackr sensor itself, instead of VRControls doing it, since the polyfill is already aware of their details.
@vvuk So is deviceName guaranteed to be a constant for each device going forward? Should it be enumerated in the spec?
Doesn't this contradict the spec?
@dmarcos this hasn't reached the …
@mrdoob Neither the dev nor the master branch works at the moment with Firefox Nightly, and we are getting reports of Firefox being broken. I want to revert VRControls to a functional state and then come up with a good solution for multiple inputs. It's going to take a few days to come up with something palatable. Should we patch VRControls with @brianpeiris's patch in the meantime?
Wait, I can't revert this. There have been other commits after this. Could you guys do a new PR that makes it work on Firefox Nightly again? |
I'm fine with reverting this. The main objective of the patch was to kick off a broader discussion, so mission accomplished :)

First, as far as I can tell, PositionSensors are designed to be general enough to correspond to various objects: the head, an input controller, etc. It's pretty clear to me that PositionSensors corresponding to the head are a necessity, and they should somehow be identifiable as such. To solve this, we can have an enum in the PositionSensor corresponding to the body part; to start, the options could be HEAD and OTHER. (cc: @vvuk)

Next, the question is whether or not the WebVR spec should even support multiple simultaneous PositionSensors for each body part. One design choice is to say that there can only be one PositionSensor active at a time. If we go with this direct mapping and want to implement the Rift default of being able to HMD-look and mouse-look simultaneously, we can special-case the mouse-look and keyboard input and not try to polyfill those. Otherwise, we allow multiple simultaneous PositionSensors, which is my preferred option. If we keep the spec as is, the right way to do this is to keep track of cumulative positionDeltas and orientationDeltas contributed by each PositionSensor.

I'll work on a patch unless someone objects.
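The cumulative-delta idea for position could be sketched like this: each sensor contributes the change since its own last reading, so multiple simultaneous sensors compose instead of overwriting each other. The names and shapes are assumptions for illustration:

```javascript
// Accumulate per-sensor position deltas into one combined offset, so
// e.g. an HMD tracker and mouse-look can both move the camera.
function makeAccumulator() {
  const last = new Map(); // previous reading per sensor id
  const total = { x: 0, y: 0, z: 0 };
  return function apply(sensorId, position) {
    const prev = last.get(sensorId) || position; // first reading → zero delta
    total.x += position.x - prev.x;
    total.y += position.y - prev.y;
    total.z += position.z - prev.z;
    last.set(sensorId, { ...position });
    return { ...total };
  };
}

const apply = makeAccumulator();
apply("hmd", { x: 1, y: 0, z: 0 });   // baseline reading, no delta yet
apply("hmd", { x: 2, y: 0, z: 0 });   // hmd contributes +1 on x
apply("mouse", { x: 10, y: 0, z: 0 }); // baseline for the second sensor
apply("mouse", { x: 10.5, y: 0, z: 0 }); // mouse adds +0.5; total x is 1.5
```

The same scheme would apply to orientationDeltas, though composing rotation deltas needs quaternion multiplication rather than addition.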
Yes, that would make sense to me.
It's not -- it's just a user-readable name. You could put the name directly in a dropdown UI to allow the user to select between different devices.
Yeah, the spec needs to change; I'll do that today. Without the change, this accidentally introduces a UUID for the Web, letting any content track you across any website, thus throwing out all sorts of privacy guidelines. Whoops! |
This enables applications such as this one: https://googledrive.com/host/0B4Nj-yDXjBs_fmNpbDdKMGlvLWg4RU05eWdKWURSZWs0d0ZzYURCemtodTdKeVprYy0ySFE.
Generally, it's worth thinking about the right way of supporting multiple position sensors; we should sync on a full rewrite of VREffect, VRControls, and the WebVR boilerplate.