Support simultaneous viewing of multiple models #482
Yeah, this is a good question! We have discussed adding basic scene composition through a mix of tools (#12) and hot-swap/configuration support (my comment on #481). A single modular sofa might be a good use case here, but a full interactive room designer (with many different sofas, couches, and lights) might not. Longer-term, I think there may be some interesting scene composition features we could add that are more in line with what you may be thinking of. A-Frame has some really interesting work. That needs a fair bit more discussion, though.
I want to document some design discussion that has taken place side-band related to this topic. At the W3C Inclusive Design for Immersive Web standards workshop I gave a presentation that contains this slide: it suggests a declarative API for incorporating multiple models into a single `<model-viewer>`.
So theoretically no limit to the number of `<model-node>`s, right? Good start. Lots of considerations for how the properties operate.
Theoretically no limit beyond what is feasible with DOM nodes and the resource limitations related to loading 3D models in a browser. Keep in mind that a single glTF model of modest complexity incurs a lot of memory overhead.
Are there any plans to implement @cdata's suggestion?
@pedrobergamini We don't really have the resources to do the declarative approach in the near term, though contributions would be welcome. I'm hoping we can get a JS API together first to add models to the scene.
@elalish any update on this? Would love this feature, especially in AR, so you could add a new table/chairs to your dining room.
@jensdev Not yet; it's definitely on the roadmap, but it'll be a fair amount of work. I'm currently working on some smaller items for our upcoming release; hopefully I can start on this thereafter.
@elalish Thanks for the update. It would be really nice to build a scene in a declarative way with different model files which can be moved/rotated. Can we see the current roadmap somewhere?
@jensdev The near-term roadmap is the ToDo column in our GitHub project. This will be a bigger piece of work that I'm not sure when I'll have time for. Any interest in collaborating on it?
Hello, is there any update on this feature? Thanks.
Would be interesting.
Consider this another upvote. I'd love to collaborate, but I'm not a web developer by trade. I would gladly provide content, UX, and testing/validation support, however.
Hello there, just here to add a +1 on @cdata's suggestion; it could be really interesting!
@david-rhodes UX would be enormously helpful, as this is pretty complicated. If you can mock up some interactions and we can talk about edge cases here, that would be great. Even basics like selection are tricky in 3D when you have to deal with small, large, and thin objects, occlusions, collisions, etc. We currently use a bounding box instead of a ray-cast, but that may get trickier with multiple objects.
This initiative seems like it could be split into multiple phases. For example, the scene-graph composition to support multiple models at once could be Phase 1. This would already support many use cases (even without UX/interaction requirements); for example, simple configurators could swap modules or components with predetermined transformations relative to the scene root. Phase 2 could focus on more specific UX improvements and functionality. I suspect bounding-box interaction is sufficient for the majority of use cases as well. Side question: is it currently possible to find a scene node from a tap/click? It looks like the API only returns the point and normal.
Agreed. And no, we'll need an API that represents scene nodes in order to return a reference to one from a ray-cast. Definitely an important feature, though. FYI @timmmeh
Any news on this topic?
Any updates?
This would be super useful.
What we really need here is detailed feedback on use cases and desired UX. This would add a lot of complexity, so we need to have a clear idea of what problems we're trying to solve. Are the models independently movable by user interaction, or just by script? Just in AR, or in 3D as well? Are they supposed to collide or overlap?
The best would be if someone could link a compelling three.js / WebXR example that we could work from.
I think it could be super useful for exchangeable model parts. Let's say a glTF/GLB 3D character can have exchangeable hair and shoes, or that one can construct new 3D models by combining multiple GLBs. The easiest way could be that the "src" attribute takes an array of GLB models and desired positioning in the scene. What do you think @elalish?
@sprengerst That's yet a different use case / API. This issue is about separate pieces of furniture you can move around separately in AR. What you're referring to is more like a game engine, which we don't want to become. We give control of swapping parts via hiding and showing materials; it's unlikely we'll go beyond that anytime soon.
The example of changing a character's hair or shoes is certainly something that should be handled by a game engine, but I think changing parts is in the realm of 3D commerce, and model-viewer is often used in the area of 3D commerce.
I would like to have a little more discussion on this suggestion by @cdata:

```html
<model-viewer alt="I will assume for now that there is no 'src' in root">
  <model-node name="table" src="table.glb" data-position="1.1m 0m -0.5m" data-orientation="0deg 0deg 50deg"></model-node>
  <model-node name="chair" src="chair.glb" data-position="1.1m 0m -0.5m" data-orientation="0deg 0deg -110deg"></model-node>
</model-viewer>
```

I'm excited to hear your opinions as well!
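Purely as an illustration of how a page script might drive such an API: note that `<model-node>` and its `data-position` attribute exist only in this proposal, not in model-viewer today, and the formatting helper below is my own.

```javascript
// Format a position in the "Xm Ym Zm" syntax used by the proposed
// (hypothetical) data-position attribute.
function formatPosition(x, y, z) {
  return `${x}m ${y}m ${z}m`;
}

// In a browser, a script could then rearrange the proposed <model-node>
// elements declaratively, e.g. sliding the chair along the x axis.
// Guarded so the sketch is also loadable outside a browser.
if (typeof document !== 'undefined') {
  const chair = document.querySelector('model-node[name="chair"]');
  if (chair) {
    chair.setAttribute('data-position', formatPosition(1.6, 0, -0.5));
  }
}
```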
@futahei Yes, that could certainly work. Can you give a sketch of how you'd make use of this on a website? It seems like it would require a lot of JS to make something more interesting than a single combined GLB. I think the original posters were asking us to do more: actually create the UX for multiple-object placement, which your API would not.
Do you have a link to some documentation where model-viewer supports that functionality? I'm working on a project for ordering custom beds, and being able to swap out and reposition different bed posts, etc. would be super helpful.
We don't have that functionality yet, but you can always get our
Hey @elalish, what's the best way to show and hide pieces of an object? I have variants of these pieces, and I created the materials image-based. What do I need to do, code-wise, to show and hide pieces of an object? That would also enable things like swapping piece versions by showing one model and hiding the others. Can you point me in the right direction, please?
See here: #2776
Thank you for pointing me to this! I understand the concept of setting its baseColorFactor to [1, 1, 1, 0] to make it completely transparent, and probably setting alphaBlendMode to MASK. But how do I do that exactly? Thanks in advance!
Generally, starting a new discussion is the best place for these kinds of questions. You can access the GLB materials and edit them using our material API: https://modelviewer.dev/examples/scenegraph/#pickMaterialExample You can also create/select variant materials, which is what we do in our editor. You might want to look at its code base for inspiration: https://github.com/google/model-viewer/blob/master/packages/space-opera/src/components/materials_panel/materials_panel.ts
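To make the material-hiding approach concrete, here is a rough sketch using model-viewer's scene-graph material API (`model.getMaterialByName`, `setAlphaMode`, `pbrMetallicRoughness.setBaseColorFactor`); the element query and the material name "BedPostMaterial" are placeholders for illustration, not names from any real model.

```javascript
// Hide or show a part by making its material fully transparent,
// along the lines discussed in #2776. Works on any object exposing
// the model-viewer material API surface used below.
function setPartVisible(material, visible) {
  // BLEND lets a zero-alpha base color actually render as invisible.
  material.setAlphaMode('BLEND');
  const alpha = visible ? 1 : 0;
  material.pbrMetallicRoughness.setBaseColorFactor([1, 1, 1, alpha]);
}

// Browser-only usage sketch (guarded so the file loads anywhere).
if (typeof document !== 'undefined') {
  const viewer = document.querySelector('model-viewer'); // assumed element
  viewer.addEventListener('load', () => {
    // "BedPostMaterial" is a hypothetical material name.
    const mat = viewer.model.getMaterialByName('BedPostMaterial');
    if (mat) setPartVisible(mat, false);
  });
}
```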
I have another use case for this: I have a model which has, for example, a field with 100 sheep. The sheep model weighs 0.1 MB, but since there are 100 of them, it weighs 10 MB total. It would be much better for me to download the 0.1 MB sheep and instance it 100 times than to force the user to download a 10 MB model. In my case it isn't sheep but robotic equipment, but you get the idea.
@calumk Actually glTF already takes care of this. You can store a single sheep mesh and reference it from many nodes.
@elalish - oh really? That's great to know; I hadn't even considered it might be supported by glTF. I will take a look!
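The mesh reuse described above is core glTF: many nodes may point at the same mesh index, so the geometry is stored (and downloaded) once. A minimal sketch, built in JavaScript with illustrative positions:

```javascript
// Build a minimal glTF-style scene description in which `count` nodes
// all reference mesh 0, so the sheep geometry exists only once.
function makeHerd(count) {
  const nodes = [];
  for (let i = 0; i < count; i++) {
    // Each node reuses mesh 0 with its own translation (here, a grid).
    nodes.push({ mesh: 0, translation: [i % 10, 0, Math.floor(i / 10)] });
  }
  return {
    asset: { version: '2.0' },
    scenes: [{ nodes: nodes.map((_, i) => i) }],
    nodes,
    // A single mesh entry; real glTF would also carry accessors/buffers.
    meshes: [{ name: 'sheep' }],
  };
}
```

For GPU-side instancing of very large counts, the `EXT_mesh_gpu_instancing` extension goes further, but plain node reuse already solves the download-size problem described above.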
Any update regarding this feature?
@nirajmohanrana It's a pretty broad feature; you could help us by explaining your use case and what kind of API might help. I do plan to add a large feature to help enable this, but if you're hoping for more multi-model placement UX in, say, AR, then we could certainly use help there.
I want an API to handle multi-model placement; it would be great to have it.
@nirajmohanrana Excellent, this is basically the next big feature I want to work on. A very helpful contribution right now would be to work on the UX design for placement. If you can hack together any kind of demo, that would be excellent. The main thing we need to figure out is where the user's click/touch should do what. We currently use hits on the placement box (ground outline) to move things in AR, and hits outside it to turn them. It's unclear how to map that once multiple objects are in the scene. And what happens when they overlap? Are collisions detected and avoided?
@elalish I think there should be collision detection (we could turn the hover red when it occurs). As for the demo, I found this example online:
Also, here we are talking specifically about the dining set, for which AR placement is on the floor; it should also support walls - consider multiple photo frames or paintings as examples.
I like the example. I think collisions will be hard in practice - bounding-box collisions are cheap, but they would preclude e.g. pushing a chair under a table. Full 3D collisions will get expensive, especially for fairly high-poly e-commerce models. Perhaps the red box based on bounding-box collisions, but without actually blocking the motion, is a good compromise. Currently we choose between translation and rotation by hitting the floor box; for object selection we may need to do a hit test against their meshes instead.
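The cheap bounding-box test mentioned above is only a few comparisons per pair. A sketch, representing axis-aligned boxes as `{min, max}` triples (my own representation, not a model-viewer API):

```javascript
// Axis-aligned bounding boxes intersect iff their intervals
// overlap on all three axes.
function aabbsOverlap(a, b) {
  for (let axis = 0; axis < 3; axis++) {
    if (a.max[axis] < b.min[axis] || b.max[axis] < a.min[axis]) {
      return false; // separated on this axis
    }
  }
  return true;
}

// Flag a moved object (e.g. render its box red) without blocking the
// motion, by checking it against every other placed object.
function collidesWithAny(moved, others) {
  return others.some((box) => aabbsOverlap(moved, box));
}
```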
@elalish Do you know of any related prototyping tools for 3D content?
I'd probably just start (or fork) a three.js project in Glitch or similar. No need for AR - the UX can be worked out in 3D with a simple ground plane and a hit test.
@elalish any update about the multi-object placement feature?
Not planned yet, and as I say, we haven't even gotten a clear UX design for what we ought to build here. Lots of details to think about - any thoughts appreciated!
We'll be prototyping an e-commerce configurator-like experience and will probably be using the glTF extension system to embed the configuration / relations of models to each other directly in the GLB. For example, you're building a grill:
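The grill example itself did not survive extraction; purely as a hypothetical sketch of the idea (the `configurator` key, socket fields, and values below are invented for illustration and are not any real glTF extension), per-part metadata embedded in the GLB's node `extras` might look like:

```json
{
  "nodes": [
    {
      "name": "grill_body",
      "extras": {
        "configurator": {
          "sockets": [
            { "id": "leg_front_left", "accepts": "leg", "translation": [-0.4, 0, 0.3] },
            { "id": "side_shelf", "accepts": "shelf", "translation": [0.6, 0.8, 0] }
          ]
        }
      }
    }
  ]
}
```

A runtime could then load a separately chosen leg GLB and snap its root node to the matching socket's translation.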
Hope this helps anyone, even if just to jump-start some totally different idea. :)
That does sound interesting, though it'll be a fair bit of work to spec and implement. Have you considered turning your directions into a single long animation of the GLB? If you marked time points for different steps, it would be easy to move between them.
🤔 @elalish I'm not sure I understand, unless you mean the animation is basically components being connected together? That wouldn't work, IIUC, because which components you need to connect is a runtime decision by the user; it's not a prebaked thing. You pick which legs you want from the UI, we load the corresponding GLB file and snap them together; it's not a single file containing all the decisions already made.
Yes, fair enough - with my approach you'd need to make your choices first and load up a corresponding animated GLB, which is a different UX. As another thought: there has been talk within the glTF working group of making a mesh-variants extension, akin to the existing material-variants extension. That could be another solution, and there's also a physics extension being worked on that might help with the joint-motion idea. I'd recommend you get connected in Khronos and start coming to some of those meetings, because you could help drive the design in a use-case-focused way, which is always best.
Got a suggestion for how we'd go about doing that? Where do we start?
Check out the "Join the glTF Community" section of https://www.khronos.org/gltf/
I am also looking to build a configurator (e.g. a car configurator) where the user can customize some parts, like rims, wheels, sunroof, steering wheel, etc. In my case there is no need to define snapping or joint points; for example, the different variants of the wheels will be modeled to take their position in the main model at the same coordinates, so simply loading the wheel GLB and combining it with the main object will work. The user may also change the color and size of the newly selected variant, so accessing and changing the materials of the loaded object must be possible. At the end I will save the customization settings as a JSON file containing the selected part names and materials. I know this can be done using a single GLB file that contains all the different variants for the customizable parts, but imagine there are 50 different variants of the wheels alone! This would increase the file size a lot, and it may not load at all after some point. So being able to combine different GLB files at run time would be a great addition :)
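The settings file described above could be as simple as the following sketch (the file names, part keys, and material names are placeholders, not from any real project):

```json
{
  "model": "car_base.glb",
  "parts": {
    "wheels": { "variant": "wheel_sport_19.glb", "material": "GlossBlack" },
    "sunroof": { "variant": "sunroof_panoramic.glb" }
  },
  "bodyColor": "#1a2b3c"
}
```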
Support for simultaneous viewing of multiple models in the viewer at once is required for augmented reality experiences. Consider the example of a "try-and-buy" experience where the user would like to see how multiple pieces of furniture look in their living room. Modular sofas are an example whereby different components can be combined in myriad configurations based on the dimensions of the space and the tastes of the customer. The creator of the experience should be able to add all of the sofa module models to the package. The implication is that once added to the viewer, the user would need to be able to select the bounding box for each model individually in order to manipulate it.
This is related to issue #481 in that there need to be multiple models packaged with the experience and available for selection, but it differs in that this use case requires multiple models to be simultaneously present in the viewer and then selectively manipulated.