
Feature Request: Dense Pointcloud from depth sensors or stereo #120

Open
SimonScholl opened this issue Jan 10, 2018 · 133 comments

@SimonScholl commented Jan 10, 2018

I know there were already several requests related to the possible support of Tango devices with ARCore. My question is based on the fact that we want to use the already available depth sensors to capture dense depth data with our devices. So is there any plan (short, mid, or long term) for ARCore to use the hardware capabilities of Tango devices?

Is it possible that ArFrame_acquirePointCloud() could not only deliver depth data about feature points, but also give us the dense point cloud we already got via Tango, when the hardware is there?
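
For context, this is roughly how the existing sparse point cloud is read through the ARCore Java API (the NDK call ArFrame_acquirePointCloud referenced above has the same semantics); the request is for this same data path to return a dense cloud when depth hardware is present. A minimal sketch:

```java
import com.google.ar.core.Frame;
import com.google.ar.core.PointCloud;
import java.nio.FloatBuffer;

// Sparse point cloud as ARCore exposes it today: one (x, y, z, confidence)
// tuple per tracked visual feature, not a dense depth image.
static int countFeaturePoints(Frame frame) {
    PointCloud pointCloud = frame.acquirePointCloud();
    try {
        FloatBuffer points = pointCloud.getPoints();
        return points.remaining() / 4;  // 4 floats per point
    } finally {
        pointCloud.release();  // acquired frame resources must be released
    }
}
```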

@inio commented Jan 10, 2018

This is definitely something we could do, but currently ARCore doesn't even run on any devices with depth cameras, so it's a bit early to even think about it.

@helipiotr commented Jan 30, 2018

Copied from the Google Tango homepage:

> In addition to working with our OEM partners on new devices, Google is also working closely with Asus, as one of our early Tango partners, to ensure that ZenFone AR will work with ARCore. We'll be back with more information soon on how existing and new ZenFone AR users will get access to ARCore apps.

It seems that at some point ARCore will be running on the ZenFone AR. It would be a great feature for the existing hardware.

@SimonScholl (Author) commented Jan 31, 2018

Yeah, I already read that, but to clear things up: this feature request is not about "running ARCore on the Asus"; that much I expect anyway. It is about getting dense point cloud data from devices that have a hardware component like a time-of-flight sensor, so that ARCore would offer us the best of all development possibilities.

@jonomacd commented Feb 23, 2018

I see the ARCore release for the ZenFone is imminent. Is it going to have point cloud support? Will the release deprecate the point cloud support that the phone currently has? I use Matterport scenes all the time; I want that to still work, but I would also like the new features of ARCore.

How is this going to work?

@inio commented Feb 23, 2018

Sorry, not yet. Point clouds on the ZenFone are derived from visual features, just like on other ARCore phones.

@SimonScholl (Author) commented Feb 26, 2018

I hope it doesn't override our Tango Core on the framework. Until dense depth data from the related sensors is supported, no one should be forced to use ARCore. A lot of developers especially need that data, for existing and upcoming applications.

@inio commented Feb 26, 2018

ARCore DP2 and later can coexist with TangoCore.

@kevhill commented Mar 30, 2018

Any update on this, with regard to the Snapdragon 845 / Galaxy S9 compatibility being unveiled "within weeks"?

@inio commented Mar 30, 2018

@kevhill Wrong bug? (I think you meant #250)

@kevhill commented Mar 30, 2018

No, but certainly lacking clarity.

I meant that one of the features Qualcomm is promoting for the 845 is accurate and dense point clouds based on dual cameras as opposed to IR (through the Spectra 280 ISP). As the S9 has all of the required hardware, and in theory the 845 has software support for it, it seems like this should be a straightforward feature to expose through ARCore.

@kevhill commented Mar 30, 2018

If anyone hasn't seen the demo video, it is pretty sweet. https://www.youtube.com/watch?v=16vz3_6-tbM

@inio commented Mar 30, 2018

Ah, sorry, didn't read that clearly. Makes sense now.

No, no update on dense depth from stereo. Updated the FR description to include it.

@inio inio changed the title Feature Request: Dense Pointcloud from depth sensors Feature Request: Dense Pointcloud from depth sensors or stereo Mar 30, 2018

@lvonasek commented May 11, 2018

@inio Any update on this? The Asus ZenFone AR has already been supported for a few months.

This app is waiting for it: https://play.google.com/store/apps/details?id=com.lvonasek.arcore3dscanner

@inio commented May 18, 2018

@lvonasek Nope.

@Thaina commented Aug 6, 2018

How has this progressed?

I think almost every device on which ARCore is available contains a stereo camera, so the point cloud could be handled better.

@lvonasek commented Nov 3, 2018

This video https://youtu.be/7ZSm95naghw?t=127 shows a 3D scanner using a depth sensor on a non-Tango device.
Is it based on ARCore? Or shall we migrate to another technology?

@SimonScholl (Author) commented Nov 5, 2018

> In this video https://youtu.be/7ZSm95naghw?t=127 is a 3d scanner using depth sensor on non Tango device.
> Is it based on ARCore? Or shall we migrate to another technology?

Hi, I also had an eye on the new Oppo. As far as I know it is not based on ARCore; there is not even an open development kit available to use this depth sensor in new apps. I still hope ARCore will offer Tango-like functions for phones with the right hardware components, as depth sensors are becoming more common on Android devices.

@lvonasek commented Nov 9, 2018

It is written in #603 that they are not going to implement it in the near future. About the Oppo: I watched the same presentation in several languages. At least some information was shared here:
https://youtu.be/-Vz5US3vt5E?t=3274

@SimonScholl (Author) commented Nov 12, 2018

> It is written in #603 that they are not going to implement it in the near future. About Oppo I watched the same presentation in more languages. Here was at least some informations shared:
> https://youtu.be/-Vz5US3vt5E?t=3274

Thanks for the link, an informative video. The question is what Google means by 'near term': that could be months we have to wait, or a year. My hope is that as hardware-enabled devices rise again, the demand from our developer community to use those sensors will force them to give this topic a higher priority.

They already have the knowledge to integrate such sensors, as they did with Tango; maybe it was not perfect, but it was working. So why not bring together the best of both worlds?

@lvonasek commented Nov 12, 2018

The demand from developers is quite big. Every time I talk with someone about ARCore I hear words like disappointment, useless, etc. Tango was working well in the business area; that's why I tried this request: #638

@Thaina commented Nov 12, 2018

@lvonasek It's not really that big at all. Computer vision with a stereo camera is enough for everything we all need. It was just a strange, paradoxical decision by the ARCore team, and I don't know who came up with it.

The point is that ARCore always limits the devices it can run on for no reason. Almost all of its devices have a stereo camera, as if it were required, yet they don't utilize the stereo camera even though they should. ARCore could run on a normal phone with a single camera, but they just said the quality on those phones is not acceptable to them. However, the quality of ARCore running on my phone right now, even Google's Measure app, is not acceptably accurate or usable either.

@lvonasek commented Nov 12, 2018

@Thaina The difference between structure from motion with a single camera (the current implementation) and a passive stereo camera is that with stereo the transformation from the first photo to the second photo is known. Then you can reach a much better (more accurate) point cloud. But there would be the same problems as with the current implementation: white walls won't be visible to the device. Occlusion or collision with the real world would not work better either. That's why I am saying that demand for a depth sensor is quite big: other systems are just compromises, not full solutions. Of course a stereo-camera solution would be better, but Google cannot make rivals of Google's Pixel better than its own device. Maybe Google will solve it later using AI, but I am not aware of any ready-for-production system for it.

And about limiting device support: there is a really good reason for it. Computer vision needs camera calibration, and Google needs IMU calibration. Google has a lot of work with every supported device (calibrating all possible device variants, device firmware fixes, firmware updates, whitelisting the device on Google Play). I would say we need patience; Google has enough work to do.
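
As an aside, a worked illustration of why the known baseline matters: for a calibrated, rectified stereo pair, depth follows directly from disparity, whereas structure from motion first has to estimate the baseline from motion and IMU data. A minimal sketch with an idealized pinhole model (parameter names are assumptions for illustration):

```java
// Idealized rectified stereo: Z = f * B / d.
// focalPx     - focal length in pixels (from calibration)
// baselineM   - distance between the two cameras in meters (known for stereo,
//               estimated from motion/IMU for structure from motion)
// disparityPx - horizontal pixel offset of the same feature between both images
static float depthFromDisparity(float focalPx, float baselineM, float disparityPx) {
    if (disparityPx <= 0f) {
        return Float.POSITIVE_INFINITY;  // zero disparity: point at infinity
    }
    return focalPx * baselineM / disparityPx;  // depth in meters
}
// Example: f = 500 px, B = 0.012 m, d = 10 px  ->  Z = 0.6 m.
```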

@Murded commented Aug 5, 2019

Great, thanks again. By the way, you may already be aware, but Samsung plans to release their ToF in the coming months.

@lvonasek commented Aug 5, 2019

Do you have a source of this information?

@Murded commented Aug 5, 2019

> Do you have a source of this information?

Hi, yes, I contacted Samsung technical support through the developer forum and got the following response.

[two screenshots of the Samsung support response]

@lvonasek commented Aug 5, 2019

Thank you, that means that ARCore ToF support is most likely further away than I thought.

I will keep using Huawei AREngine for AR.

@kexar commented Aug 6, 2019

This is from the ARCore 1.11 changelog released yesterday:

> Added MinFPS, MaxFPS, and DepthSensorUsage properties to CameraConfig.

There is no description of what DepthSensorUsage means, though. I will test a demo app.

@lvonasek commented Aug 7, 2019

I tested the depth sensor function and it is currently not supported on any ToF device I have here.

@Murded commented Aug 7, 2019

Why does the 240x180 resolution not work when I test your Night Vision / ToF Viewer?

@lvonasek commented Aug 7, 2019

It depends on the device software. Huawei and Honor enabled 240x180 on most devices with software update 9.1.xxx.

@Murded commented Aug 7, 2019

> It depends on device software. Huawei and Honor enabled 240x180 on most devices with software update 9.1.xxx

Ahh okay, it's a brand new Honor 20, so I will see if there is an update.

@Murded commented Aug 7, 2019

> It depends on device software. Huawei and Honor enabled 240x180 on most devices with software update 9.1.xxx

All working great after updating. Do you know what the maximum resolution of the Honor 20 is? Can it go above 240x180 and it just hasn't been enabled yet, or is that the highest resolution of the sensor?

@lvonasek commented Aug 7, 2019

240x180 is 0.04 MP. I heard that the Huawei P30 Pro has the same ToF sensor as the Honor View 20 (not sure if that is true).
However, the Huawei P30 Pro reports a ToF resolution of 992x558, which is 0.55 MP, but it returns no data (just like 240x180 on the Honor before updating).

Note that the Huawei P30 Pro gets all ToF updates earlier than the Honor View 20.

@kexar commented Aug 8, 2019

In the GoogleARCore.ARCoreCameraConfigFilter.DepthSensorUsageFilter class reference there is this description:

> bool RequireAndUse = true
> Filters for camera configs that require a depth sensor to be present on the device, and that will be used by ARCore.
>
> See the ARCore Supported Devices (https://developers.google.com/ar/discover/supported-devices) page for a list of devices that currently have supported depth sensors.

Unfortunately there is no information about depth-sensor-compatible devices at that link.
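
For reference, the Java-side equivalent of that Unity filter looks roughly like the sketch below (assuming ARCore 1.11+; on devices Google has not enabled, the returned list is simply empty):

```java
import com.google.ar.core.CameraConfig;
import com.google.ar.core.CameraConfigFilter;
import com.google.ar.core.Session;
import java.util.EnumSet;
import java.util.List;

// Ask ARCore only for camera configs that require and use a depth sensor.
// At the time of writing this comes back empty on every device.
static List<CameraConfig> depthCameraConfigs(Session session) {
    CameraConfigFilter filter = new CameraConfigFilter(session);
    filter.setDepthSensorUsage(EnumSet.of(CameraConfig.DepthSensorUsage.REQUIRE_AND_USE));
    return session.getSupportedCameraConfigs(filter);
}
```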

@lvonasek commented Aug 8, 2019

@kexar there is no information there because Google does not currently support a single device.

@Sheng-Xuan commented Aug 13, 2019

> 240x180 is 0.04MP. I heard that Huawei P30 Pro has the same ToF like Honor View 20 (not sure if it is true).
> However Huawei P30 Pro shows ToF resolution 992x558 which is 0.55MP but it returns no data (just like 240x180 on Honor before updating).
>
> Note that Huawei P30 Pro is getting all ToF updates earlier than Honor View 20.

Hi, I have tried your Night Vision app and it works well on my P30 Pro. I am just curious how you detected the supported resolution of the ToF camera. I checked the Camera2 info: only cameraId=0 supports DEPTH16 output, and there are more resolutions in the list. How did you determine that it actually only supports 240x180? Was it trial and error, or is there a trick to do it? Thank you for your great work!

@lvonasek commented Aug 13, 2019

Hi @Sheng-Xuan,

there is no way to test it. The fact that the Huawei P30 Pro reports an unsupported resolution as supported is wrong, and we as app developers should not have to deal with it at all.

What I did in my app is detect whether the resolution is higher than 240x180, and if so I label it as unsupported. I did this to avoid thousands of users writing me "it does not work". Now it is only hundreds :D

The app helps me keep an overview of which devices currently support ToF via the Camera2 API. I have reached 10k installations, and it is currently supported only on the Huawei P30 Pro and the Honor View 20.
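
A sketch of that check against the Camera2 API; the 240x180 cut-off is purely the empirical heuristic described above, not anything documented:

```java
import android.content.Context;
import android.graphics.ImageFormat;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraManager;
import android.hardware.camera2.params.StreamConfigurationMap;
import android.util.Size;

// Log every camera that advertises a DEPTH16 stream and flag resolutions
// above 240x180, which in practice have returned no data so far.
static void listTofCameras(Context context) throws CameraAccessException {
    CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
    for (String id : manager.getCameraIdList()) {
        CameraCharacteristics cc = manager.getCameraCharacteristics(id);
        StreamConfigurationMap map = cc.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
        if (map == null) continue;
        Size[] depthSizes = map.getOutputSizes(ImageFormat.DEPTH16);
        if (depthSizes == null) continue;  // this camera exposes no depth stream
        for (Size s : depthSizes) {
            boolean likelyWorking = s.getWidth() * s.getHeight() <= 240 * 180;
            android.util.Log.d("ToF", "camera " + id + ": DEPTH16 " + s
                    + (likelyWorking ? "" : " (advertised, but may return no data)"));
        }
    }
}
```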

@Sheng-Xuan commented Aug 15, 2019

> Hi @Sheng-Xuan,
>
> there is no way how to test it. The thing that Huawei P30 Pro gives you unsupported resolution as supported is wrong and we as app developers should not have to deal with this at all.
>
> What I did in my app is detecting if the resolution is higher than 240x180 and if so then I label it as unsupported. I did this to avoid thousands of users writing me "it does not work". Now it is hundreds only :D
>
> The app helps me to have an overview which devices currently support ToF using Camera2 API. I reached 10k installations and it is currently supported only on Huawei P30 Pro and Honor View 20.

I also found that in AREngine the depth image I can get is 240x180. By the way, I am trying to make an app that produces both an RGB image and a depth image. It seems quite troublesome to do this with the Camera2 API. I found I can use ARFrame.acquireDepthImage() and acquireCameraImage() in AREngine, but I could not get a correct RGB image and depth image from the ARFrame: the DEPTH16 value after decoding is wrong, and I could not convert the YUV_420_888 format to JPEG successfully either. I am not sure if you have tried these methods. There is too little discussion about AREngine online. Thank you in advance if you have any hint on this, but it's okay if you don't have time to explain. 😃
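
For the DEPTH16 part, the per-pixel layout is documented on android.graphics.ImageFormat.DEPTH16: the low 13 bits are the range in millimeters and the top 3 bits a confidence value. A minimal decode sketch for a Camera2 Image (the same decode should apply to what AREngine's acquireDepthImage() returns, assuming it really is DEPTH16):

```java
import android.media.Image;
import java.nio.ByteOrder;
import java.nio.ShortBuffer;

// Decode one DEPTH16 pixel into meters. Layout per ImageFormat.DEPTH16:
// bits 0-12 = range in millimeters, bits 13-15 = confidence (0 = unknown).
static float depthAtMeters(Image depthImage, int x, int y) {
    Image.Plane plane = depthImage.getPlanes()[0];
    ShortBuffer samples = plane.getBuffer().order(ByteOrder.nativeOrder()).asShortBuffer();
    int rowStrideShorts = plane.getRowStride() / 2;  // row stride is given in bytes
    short sample = samples.get(y * rowStrideShorts + x);
    int rangeMm = sample & 0x1FFF;
    // int confidence = (sample >> 13) & 0x7;  // use if the confidence bits are needed
    return rangeMm / 1000.0f;
}
```

For the YUV_420_888 side, a common route is to repack the Y/U/V planes into NV21 (respecting each plane's row and pixel stride) and compress with android.graphics.YuvImage.compressToJpeg.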

@lvonasek commented Aug 19, 2019

I have successfully converted YUV to RGB and DEPTH16 to a float array (only the depth information); however, the Z is somehow wrong: in the center of the camera it seems to be correct, but at the sides it is not.

@mpottinger commented Sep 15, 2019

@lvonasek So even in AREngine we can't reliably use depth for AR? Is the depth in AREngine inaccurate?

Why did they even advertise the AR benefits of a ToF sensor if that is true?

Things are progressing much more slowly in this area than I had hoped.

@lvonasek commented Sep 15, 2019

@mpottinger - if you just enable the depth sensor, then in AREngine you get depth data instead of feature points; however, you get fewer than 300 points per frame. If you use CPU access to the depth data, you get the full-resolution depth map, which works great, but it is really not easy to convert it into world coordinates.
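
The usual recipe for that conversion, as a hedged sketch: back-project each depth pixel with the camera intrinsics into camera space, then transform by the camera pose of the same frame. ARCore's Pose type is used here for illustration; fx, fy, cx, cy are the depth camera intrinsics in pixels and are assumed to be known:

```java
import com.google.ar.core.Pose;

// Back-project one depth pixel (u, v) with depth z (meters) into camera space,
// then into world space using the camera pose of the same frame.
static float[] depthPixelToWorld(float u, float v, float z,
                                 float fx, float fy, float cx, float cy,
                                 Pose cameraPose) {
    float xCam = (u - cx) * z / fx;   // pinhole back-projection
    float yCam = (v - cy) * z / fy;
    // The camera frame looks down -Z with +Y up, so image-space y and the depth
    // sign usually need flipping; the exact signs are device-dependent.
    float[] pointInCamera = {xCam, -yCam, -z};
    return cameraPose.transformPoint(pointInCamera);
}
```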

@mpottinger commented Sep 16, 2019

@lvonasek Oh I see, thanks. So the issue is converting the depth map to a point cloud or world coordinates.

From what I know so far, that requires the intrinsic parameters of the camera. Is that what is missing in this case?

With my depth sensor for the PC, I have those available and it is easier to get world coordinates.

@lvonasek commented Sep 16, 2019

@mpottinger I have it working on only two devices, and on those the depth camera has the same intrinsic parameters as the color camera.

@mpottinger commented Sep 16, 2019

@lvonasek Strange, so there is something else going on there?

Usually it should be straightforward to get a point cloud from a depth sensor as long as you have the intrinsics, correct?

If it were possible, I would be very tempted to buy a P30 Pro. I am struggling with PC-based solutions for other reasons.

@lvonasek commented Sep 16, 2019

@mpottinger - yeah, you just do not know the depth map orientation, but you can hardcode it or detect it from the projection matrix.

@mpottinger commented Sep 17, 2019

@lvonasek So you mean the rotation of the phone causes the issue? If that is the case, wouldn't locking auto-rotate solve it?

I had to lock the rotation when I was testing computer vision in ARCore, to keep it consistent and predictable. Would that solve it in this case? If so, I'm getting a P30 Pro asap ;)

@lvonasek commented Sep 17, 2019

@mpottinger Basically yes. You get the depth map in DEPTH16 format (how to parse it is documented); in portrait the orientation was something like (x, y) -> (-y, x), and then you transform the points into world coordinates.
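
In code, that portrait remap would look something like the sketch below (heavily device-dependent; it just expresses the (x, y) -> (-y, x) swap described above, applied before the back-projection step):

```java
// Remap a depth-image pixel into portrait orientation: a 90-degree rotation,
// i.e. roughly (x, y) -> (-y, x), re-offset to stay inside the image bounds.
static int[] remapToPortrait(int x, int y, int depthWidth, int depthHeight) {
    int xPortrait = depthHeight - 1 - y;
    int yPortrait = x;
    return new int[] {xPortrait, yPortrait};
}
```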

@mpottinger commented Sep 19, 2019

@lvonasek Ok based on that I decided to just go and buy a P30 Pro.

I was not disappointed! It is exactly what I needed. Your 3D scanner app works much faster and more accurately on it.

Tried the Night vision app, and TOF viewer. Yes the range and resolution are limited, but for my use, after testing I found that is not a problem at all for what I need.

Then I tried AREngine and was very pleased with the results. It perfectly fixes what was missing in ARCore for my use.

I haven't tried using the DEPTH16 image yet. Even without that it performs very well for me. I just disable plane detection and change the hit test on points to accept any point, and I can place AR markers wherever I want instantly, on any surface with no detection delay and without needing to wave the phone around.

It seems it already uses the depth sensor to do what I wanted out of the box, minus occlusion.

Perfect! Not everyone will have the same use case as me, but for anyone who just wants to place markers/placeholders in AR without limitation, the Huawei P30 Pro and AREngine are the way to go, 100%.

Hope Google takes note.
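
For anyone wanting to reproduce that setup, the hit-test change amounts to dropping the usual "estimated surface normal" filter and anchoring to any tracked point. A sketch with ARCore's class names (AREngine mirrors these with AR-prefixed equivalents):

```java
import com.google.ar.core.Anchor;
import com.google.ar.core.Frame;
import com.google.ar.core.HitResult;
import com.google.ar.core.Point;

// Anchor to the first point hit at the tap position, accepting any tracked
// point instead of only points with an estimated surface normal.
static Anchor anchorOnAnyPoint(Frame frame, float tapX, float tapY) {
    for (HitResult hit : frame.hitTest(tapX, tapY)) {
        if (hit.getTrackable() instanceof Point) {
            return hit.createAnchor();
        }
    }
    return null;  // nothing trackable at this screen position
}
```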

@davidhsv commented Sep 19, 2019

@mpottinger commented Sep 19, 2019

@davidhsv Yes, definitely, after I play around with it more. I want to see if occlusion is possible as well using the raw depth.

In plane detection mode it detects planes all over everything: walls, ceiling, etc., which would be good for some people.

The big advantage for me is that I do not need or want plane detection, and when I just do hit testing on feature points, there are feature points to anchor to everywhere, without even needing to move the phone around.

In ARCore I need to wave the phone around a lot, and often on smooth surfaces there are no points to set an anchor; it is hit or miss. In AREngine I can always place an object on any surface, except maybe a mirror surface, with no delay and no phone waving.

For sure I will post a comparison later.

@davidhsv commented Sep 19, 2019

@lvonasek commented Sep 19, 2019

@mpottinger - there are two ways to do occlusion. The easier way is to enable meshing and use a material that renders no color, only depth, before rendering the camera background (this is how 6d.ai does it); however, this way does not occlude dynamic objects. The second way is to use the depth map and render it into the depth buffer (this is how it was done on Tango).

@davidhsv - you can buy an Honor View 20; it has the same capability as the Huawei P30 Pro.
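
A sketch of the first approach with plain GLES 2.0: write the environment mesh into the depth buffer only, then draw the camera image and the virtual content with depth testing enabled. The renderer interface here is a placeholder for whatever actually issues the draw calls:

```java
import android.opengl.GLES20;

/** Placeholder for whatever component actually issues the draw calls. */
interface Drawable { void draw(); }

final class OcclusionPass {
    // Depth-only pre-pass: the reconstructed mesh (or the depth map rendered as
    // geometry) fills the depth buffer, so virtual content drawn afterwards is
    // clipped wherever the real world is closer.
    static void drawFrame(Drawable environmentMesh, Drawable cameraBackground, Drawable virtualScene) {
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
        GLES20.glEnable(GLES20.GL_DEPTH_TEST);

        // 1. Depth-only pass: color writes off, depth writes on.
        GLES20.glColorMask(false, false, false, false);
        environmentMesh.draw();
        GLES20.glColorMask(true, true, true, true);

        // 2. Camera image as a full-screen quad, ignoring and not writing depth.
        GLES20.glDisable(GLES20.GL_DEPTH_TEST);
        cameraBackground.draw();
        GLES20.glEnable(GLES20.GL_DEPTH_TEST);

        // 3. Virtual content, occluded wherever the pre-pass wrote a nearer depth.
        virtualScene.draw();
    }
}
```

The second approach replaces step 1 with rendering the per-frame depth map into the depth buffer directly, which also handles dynamic objects.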
