We don't support obtaining the result with this RGB (image-based) method.
May I ask what your scenario is? Is there any other way to implement this function in your project besides the one described in your issue? We will consider adding this capability if there is strong demand for it in the future. Note, however, that additional sensors are required to achieve it; such sensors place high demands on the hardware and would need to be tested with appropriate equipment.
There is increasing demand for composing scenes out of created 3D models. The problem is that the models come in different scales, and restoring the correct scale manually usually goes wrong.
Imagine, for example, scanning objects in a furniture shop and using AR to place them in your room. If the scale is wrong, you might make the wrong decision.
I tried to rescale the results automatically myself, but the AR point cloud I collected had a different orientation and contained background points. I would need, at least for some vertices of the scan, UV coordinates in the captured photos...
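For reference, once correspondences between scan vertices and AR point-cloud positions are available (which is exactly what those UV coordinates would enable), the scale factor has a closed-form least-squares solution that is independent of the orientation mismatch: the ratio of the RMS spreads of the two point sets about their centroids (this is the scale term of the Umeyama similarity-transform solution). A minimal sketch, assuming matched `(N, 3)` arrays of corresponding points:

```python
import numpy as np

def estimate_scale(src, dst):
    """Least-squares uniform scale mapping src points onto dst points.

    src, dst: (N, 3) arrays of corresponding points, e.g. scan vertices
    and their matched AR point-cloud positions (hypothetical inputs).
    Because rotation and translation preserve spread about the centroid,
    the scale is simply the ratio of RMS spreads, so a differently
    oriented point cloud is not a problem once correspondences exist.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_centered = src - src.mean(axis=0)
    dst_centered = dst - dst.mean(axis=0)
    return np.sqrt((dst_centered ** 2).sum() / (src_centered ** 2).sum())
```

The background points you mention would still have to be filtered out first (e.g. by a robust variant such as RANSAC), since outliers bias the spread.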
Would it be possible to receive the result with an approximately correct scale?
Considering that the mobile camera's focal length is provided, you should be able to do the triangulation and calculate the scale of the model, right?
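To illustrate the idea: with the pinhole model, a focal length in pixels plus a known metric baseline between two shots (e.g. from the phone's AR pose tracking — an assumption, not something the issue confirms is available) lets you triangulate a metric depth from the disparity of one matched feature, and from that depth recover metric object sizes. A rough sketch for the rectified two-view case:

```python
def triangulate_depth(f_px, baseline_m, x1_px, x2_px):
    """Metric depth of a feature seen in two rectified views.

    f_px       : focal length in pixels (from camera intrinsics)
    baseline_m : metric camera translation between the two shots
                 (assumed to come from AR pose tracking)
    x1_px, x2_px : horizontal pixel coordinates of the same feature
                   in the first and second image
    Pinhole stereo relation: Z = f * B / disparity.
    """
    disparity = x1_px - x2_px
    if disparity == 0:
        raise ValueError("zero disparity: feature at infinity or mismatched")
    return f_px * baseline_m / disparity

def metric_size(extent_px, depth_m, f_px):
    """Metric extent of an object spanning extent_px pixels at depth_m."""
    return extent_px * depth_m / f_px
```

Dividing such a metric size by the corresponding extent of the reconstructed model would then give the rescaling factor you are after.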