Instant placement update #2279
Conversation
Hey team! Quick feedback: this latest UX update (now part of >=v1.7.0) is pretty confusing for users. Instant placement usually results in the model initially floating mid-air instead of actually being placed on the surface around you. It then takes too long to recalibrate and find the floor, so the experience looks and feels stuck. After finding the floor, the re-placement of the model usually overshoots the actual location. Tested on multiple Pixel Android devices (Pixel 3a, Pixel 3 XL). We reverted back to 1.6.0; I hope you reconsider this merge, or the general UX choice, to improve the experience.
I would generally echo this sentiment. I can see both instant placement and "floating until plane detection" as suitable approaches if the user could simply re-place the model with a tap on a detected plane (plus some sort of visualization to denote that the plane has been found). Users may not have their camera initially lined up where they will want the model to be placed, and that camera location may never be suitable for finding a plane.
AR UX is not easy, and I'd highly recommend adding some DOM based on
I don't understand your comment about not accessing planes in WebXR, @elalish. You clearly know when a plane has been found, otherwise you wouldn't know when to "drop" the model. Do you mean you don't have access to hit detection for the planes in WebXR? I believe the ultimate problem with this new approach is that you cannot grab or interact with the model until a plane has been found. Do you have a link to the discussion/ticket that prompted this change, for reference? Thank you!
Yes, we can only access the planes via the hit test API, so I can't tell where a plane is until my ray happens to intersect it, nor can I tell what shape it is (for visualization). In the previous version you also could not grab or interact with the model until a plane was found. However, I am starting to detect a feature request now: the ability to move the object before it is placed on the floor? That sounds doable. As for the tickets, there were several. I bet you'll find some if you search a bit.
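The constraint described here, that planes are only observable where a hit-test ray happens to intersect them, means the per-frame logic reduces to checking whether the frame's hit-test results are non-empty. A minimal TypeScript sketch of that idea (the names and simplified types are illustrative, not model-viewer's internals):

```typescript
// Simplified stand-in for an XRHitTestResult: we never see a plane's
// extent or shape, only the point where this frame's ray intersected it.
interface HitResult {
  y: number; // world-space height of the intersection point
}

// Returns the floor height once any hit is available, else null.
// Until this returns non-null, there is no surface to drop the model onto.
function floorHeightFromHits(hits: HitResult[]): number | null {
  return hits.length > 0 ? hits[0].y : null;
}
```

In a real session the `hits` array would come from `XRFrame.getHitTestResults()` each frame, so the floor height simply remains unknown until the user's ray crosses a detected plane.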
That's a reasonable request, @elalish. @echoARxyz, do you think the ability to move an object before it is placed would also assist in your use case?
True that.
Definitely. Along with the ar-tracking event, this is a big, big help to users. In comparison:
I personally like hiding the object until placed. Simple. Clean. No confusion.
@milesgreen Let's move this chat to the discussion where it will have some visibility: #2413. Our UX used to have no model at first, but we got a lot of feedback that people thought it was just broken. Scene Viewer has gotten this feedback as well. If you try AR in a space with bad lighting or poor texture, it may never find the floor. We decided it was better to give people some experience with their object, rather than showing them nothing at all if ARCore fails.
Definitely yes - check out the overall UX on my end with 1.6.0 vs. latest. Same model, same spot, same surface. Feedback we got on the latest version consistently mentioned people thinking things are just stuck or broken. Moving the model prior to detection definitely helps with the overall experience, even before tracking is done.
Agreed - we can honestly make "AR UX is not easy" t-shirts 👕 I think hiding the model and removing the ability to move it around is too confusing. It is definitely "better to give people some experience with their object, rather than showing them nothing," as @elalish mentioned.
This is a significant UX update to our WebXR placement flow: instead of the model floating in a screen-attached way before a floor is found, it is now placed at a fixed location in space, identical to the camera's placement in 3D mode. Once the floor is found, the object is moved up or down to meet its height. This is based on user feedback that the object did not appear to be in AR. Also, with the object fixed in space, the user is encouraged to move side-to-side to explore it, rather than just turning the phone, which is key to helping ARCore find the floor.
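The placement step described above (object fixed in space, with only its height adjusted once the floor is known) could be sketched as a pure per-frame update like the following. This is a hypothetical illustration, not model-viewer's actual code; `updatePlacement` and the easing factor are assumptions:

```typescript
interface Vec3 {
  x: number;
  y: number;
  z: number;
}

// Hypothetical sketch of the update described in the PR: the object sits at
// a fixed world-space position; once a floor height is reported, only its y
// coordinate is eased toward the floor, leaving x and z untouched.
function updatePlacement(
  object: Vec3,
  floorY: number | null, // null until ARCore has found the floor
  alpha: number = 0.2    // per-frame easing factor (assumed value)
): Vec3 {
  if (floorY === null) {
    return object; // floor not found yet: the object stays fixed in space
  }
  const y = object.y + (floorY - object.y) * alpha; // ease toward the floor
  return { ...object, y };
}
```

Because only `y` changes, the object never appears to jump sideways when the floor is detected, which matches the "fixed location in space" behavior the description aims for.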
This change also allowed us to switch to Three's XRManager, which in turn will allow us to upgrade to r127+. During this process there was some debugging where I ended up reverting #2296 and #1711, since the latter was the root cause of the shiny models. Therefore #1678 has to be reopened, and I'll do a proper fix for that in a follow-up PR.