
Randomly simulate real scenes and random poses in UE4 #45

Closed
zhanghui-hunan opened this issue May 23, 2019 · 36 comments


@zhanghui-hunan

It's apparent that the FAT dataset simulates three real-world scenes in UE4: a kitchen, a sun temple, and a forest. Can I find these scenes in NDDS, or do I need to make them myself? My other question is: how can I generate a random pose and position for an object? I am manually modifying the values of RX, RY, and RZ, but the results are not satisfactory, which makes me think I am missing something. I hope I can get help!
PS: I have already generated my data, but when I tested it there was no detection at all. I now suspect there is a problem with the data being produced.

@saratrajput

You can make use of DR_AnnotatedActor_BP under DomainRandomizationDNN Content for generating random poses. Replace the mesh with your custom object mesh.

@mintar mintar mentioned this issue May 24, 2019
@zhanghui-hunan
Author

@saratrajput Yes, thank you, you are right! I now have an idea: change the sample images of the Domain Randomization map to real-world images instead of a photorealistic dataset. Do you think that is feasible? If so, could you please tell me how to do it? Hoping for your patient reply!

@zhezhou1993

Do any of you guys know where to download the sun temple and kitchen maps used in NDDS? I found the Sun Temple scene published by NVIDIA, but it only contains the mesh model and textures; there is no lighting or other configuration available. Thanks

@TontonTremblay
Collaborator

TontonTremblay commented Jun 14, 2019 via email

@zhezhou1993

@TontonTremblay Thanks, that's very helpful. But I'm still a little confused about how to use the Sun Temple files. All the lighting configuration is included in the .fscene file, which is a Falcor file, and I am not sure how to import it into UE4.

Another question, regarding generating random backgrounds from image files: in the paper you mentioned that you used COCO images as backgrounds. Is this function (directly loading .png/.jpg images as backgrounds) also included in NDDS? If yes, could you point me to the name of the blueprint/component I should use? I'd appreciate it a lot. (I know there is a Randombackground_actor_BP, but its components only randomize the color of selected .uasset material files.)

@thangt

thangt commented Jun 14, 2019

SunTemple_demo
You can get the SunTemple demo through the "Epic Games Launcher" (which currently only exists on Windows). Go to the "Unreal Engine" tab, then "Learn", scroll down a bit, and you will see the "Sun Temple" demo. The attached image shows where it is.

Right now NDDS only works with UE4 textures. We imported the COCO images as UE4 textures and used those. Ref: https://docs.unrealengine.com/en-US/Engine/Content/ImportingContent/ImportingTextures/index.html
In the RandomBackground_actor_BP, select the RandomMaterialParam_Texture and set the "Texture Directories" to the directory where those COCO textures are stored (note: the path is relative to the project content directory; don't use an absolute path).

@aditya2592

@saratrajput How do you prevent object overlap when using DR_AnnotatedActor_BP? I enabled collision checking in the RandomMovement component and set spawn collision handling to not spawn if colliding, but overlaps are still happening.

@saratrajput

You should change the collision settings for the Box from OverlapDynamic to BlockDynamic. Also check the box for Check Collision under RandomMovementBP. (I'm not running UE4 currently, so some of these names might not be accurate.)

@aditya2592

aditya2592 commented Jun 27, 2019

Thanks. Also, I am trying to use the FE_DepthQuantized_16bits feature extractor, but all depth images come out as 0 bytes. I tried the RealSense capturer and the ZED capturer. However, images from FE_DepthQuantized_8bits look fine. What could be the reason for this? (I ultimately want the same quantization level as the FAT dataset, and looking at the point cloud I think the FAT dataset is 16-bit.)

@aditya2592

Screenshot_2019-06-28_14-14-49
For the above scene, I get the point clouds shown below (each point cloud is made by combining the color image and the 8-bit quantized depth image). They look significantly inaccurate in terms of depth data:
Screenshot_2019-06-28_14-14-29
Screenshot_2019-06-28_14-00-21

Is this because of the 8-bit quantization? Or does the type of lighting affect the accuracy?

@thangt

thangt commented Jun 28, 2019

@aditya2592: Can you create an issue about these problems in the NDDS GitHub? I will investigate and answer you there so we don't hijack this repo.

@aditya2592

@thangt I wanted to do that, but when I checked this link - https://github.com/NVIDIA/Dataset_Synthesizer - I couldn't see an issues tab there. Is it not public?

@thangt

thangt commented Jun 28, 2019

I need to check with the NDDS repo manager. In the meantime, I found the problem with the 16-bit depth: it only happens in the Linux version; on Windows it should work as intended. We will fix it in the next patch.
About the 8-bit depth: there are only 256 values to represent the distance, so each value is quite a jump; that's why you get those distanced slices like above. Using 16-bit quantized or absolute (mm units) depth should give you better results (we verified it using ROS's PointCloud2 in RViz and the results looked good).
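For a rough sense of the difference (a sketch, assuming the default 1000 cm maximum depth range mentioned above; not taken from the NDDS source):

```python
# Depth quantization step sizes, assuming a 1000 cm max depth range.
MAX_DEPTH_CM = 1000.0

step_8bit = MAX_DEPTH_CM / 256.0     # ~3.9 cm between adjacent pixel values
step_16bit = MAX_DEPTH_CM / 65536.0  # ~0.015 cm between adjacent pixel values

print(f"8-bit step:  {step_8bit:.4f} cm")
print(f"16-bit step: {step_16bit:.4f} cm")
```

The ~3.9 cm step of the 8-bit extractor would explain the visible slicing in the point clouds above.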

@aditya2592

Thanks @thangt. I also tried the 16-bit absolute depth, and that fails on Linux as well. I'm guessing it's because of the same bug.

@mintar
Contributor

mintar commented Jun 29, 2019

I had these problems too with the Zed capturer, but the simple capturer works fine.

@zhezhou1993

zhezhou1993 commented Jun 29, 2019

I also found that the stereo images captured by the multi-view capturers (e.g. the ZED capturer) are identical. In the preview window the different viewpoints look correct, but the exported images are wrong. Does anyone know why that happens?
After several tests, I found that the RGB capture settings for the later viewpoints get overwritten by the first viewpoint, which may cause this problem. I am not sure whether it is a bug in the viewpoint components or I did something wrong.

@thangt

thangt commented Jul 1, 2019

@mintar @zhezhou1993 Can either of you create a separate issue for the multi-viewpoint capturer? It will help me keep track of the issues better. Until the issues feature is enabled in the NDDS repo, you can create the issue in this repo.

@thangt

thangt commented Jul 1, 2019

In the meantime, I have an internal fix for the multi-viewpoint capturers. We will include it in the next NDDS patch.

@aditya2592

aditya2592 commented Jul 2, 2019

Thanks @thangt . When can we expect this patch to be released?

@zhezhou1993

@thangt Thanks a lot. I just opened a separate issue about the multi-viewpoint problem. Hope the next patch comes soon.

@thangt

thangt commented Jul 4, 2019

NVIDIA/Dataset_Synthesizer@de83376 => The patch is online. It should fix all the above issues.
The issues tab in the NDDS repo should be opened sometime next week.

@thangt thangt closed this as completed Jul 4, 2019
@aditya2592

To apply the patch, we should pull the commit and use the Compile button in the GUI to recompile, right?

@zhezhou1993

@aditya2592 For plugins, you should delete the compiled files; the plugin should recompile automatically at startup.

@aditya2592

Thanks @zhezhou1993. I was able to generate depth images using the 16-bit feature extractor, as shown below. However, the 16-bit images look lighter in intensity (objects seem to be further away) compared to the 8-bit ones. Is this expected? While 16-bit should be more accurate, I would guess the pixel intensity values shouldn't change, since the actual depth is still the same:

  • 8 bit depth image :
    000000 left depth 8
  • 16 bit depth image :
    000000 left depth

@thangt

thangt commented Jul 8, 2019

@aditya2592 The 16-bit depth should have smoother value transitions compared to the 8-bit one. Make sure you set the MaxDepthDistance of both to the same value (it defaults to 1000 cm). If you use RViz to visualize the depth values, does the 16-bit result look right?

@aditya2592

aditya2592 commented Jul 9, 2019

ad
The above image shows how the values look. The white area is the point cloud created from the depth image, and the axis at the center of the grid is the camera location. (The depth values read from the image are divided by the max depth, which is 1000.) The point cloud looks farther from the camera than it should be. I am reading the image using this command:

depth_image = cv2.imread(depth_img_path, cv2.IMREAD_ANYDEPTH)

@thangt

thangt commented Jul 11, 2019

@aditya2592 The quantized 16-bit depth uses 16 bits (65536 values) to represent depth values in the range [0, X] (X defaults to 1000 centimeters). To get the absolute depth value you need to do the conversion: RealDepth = (PixelValue / 65536.0) * 1000 (centimeters). For ROS, if you use meters, you can divide the value by 100.0.
I think in the image above, since you just divided it by 100, the whole depth range got flattened out. Can you try the conversion equation and see how it goes?
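The conversion above can be sketched in Python like this (a sketch only; the sample pixel values are made up, and in practice they would come from an image loaded with cv2.imread(path, cv2.IMREAD_ANYDEPTH)):

```python
import numpy as np

MAX_DEPTH_CM = 1000.0  # the default MaxDepthDistance

def quantized16_to_cm(pixels, max_depth_cm=MAX_DEPTH_CM):
    """Map raw 16-bit quantized depth pixels back to centimeters."""
    return pixels.astype(np.float32) / 65536.0 * max_depth_cm

raw = np.array([0, 32768, 65535], dtype=np.uint16)  # made-up sample values
depth_cm = quantized16_to_cm(raw)   # [0.0, 500.0, ~999.98] cm
depth_m = depth_cm / 100.0          # meters for ROS
```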

@aditya2592

Screen Shot 2019-07-12 at 12 24 45 PM

Above is how the point cloud looks if I use RealDepth = (PixelValue / 65536.0) * 1000 and then convert to meters by dividing by 100. It still looks inaccurate in terms of both distance from the camera and orientation with respect to the camera.

@Abdul-Mukit

Abdul-Mukit commented Jul 12, 2019

Thank you for providing information on how to use images as backgrounds. I imported the JPG images into the UE4 content directory, and UE4 converted them to textures. The problem is that when I collect images, I often see images with a gray background like the following in the dataset; sometimes about 50% of the images are like this. Do you have any suggestions on how to fix this, or am I doing something wrong? Any suggestion would be very helpful. I am using Linux and UE4.21.

image

> SunTemple_demo
> You can get the SunTemple demo through the "Epic Games Launcher" (which currently only exists on Windows). Go to the "Unreal Engine" tab, then "Learn", scroll down a bit, and you will see the "Sun Temple" demo. The attached image shows where it is.
>
> Right now NDDS only works with UE4 textures. We imported the COCO images as UE4 textures and used those. Ref: https://docs.unrealengine.com/en-US/Engine/Content/ImportingContent/ImportingTextures/index.html
> In the RandomBackground_actor_BP, select the RandomMaterialParam_Texture and set the "Texture Directories" to the directory where those COCO textures are stored (note: the path is relative to the project content directory; don't use an absolute path).

@thangt

thangt commented Jul 15, 2019

@Abdul-Mukit It looks to me like UE4 was streaming the texture and didn't stream it fast enough, so the texture appears grayish like that. You can right-click the texture folder in the content browser in the editor and select 'Size Map ...' from the menu. It will scan and load all the textures, and the background should appear correctly.
@aditya2592 Can you try 'FE_Depth_mm_16bits'? It captures the depth in mm units in 16 bits (max 65535 mm, or 65.535 meters). When I have more time I will try to debug further.
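For FE_Depth_mm_16bits the pixel values are already absolute millimeters, so (as a sketch, with made-up sample values) only a unit conversion is needed, with no quantization scale involved:

```python
import numpy as np

# Absolute 16-bit depth in millimeters: max 65535 mm = 65.535 m.
raw_mm = np.array([0, 1500, 65535], dtype=np.uint16)  # made-up sample values
depth_m = raw_mm.astype(np.float32) / 1000.0          # 1500 mm -> 1.5 m
```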

@Abdul-Mukit

Thank you for your reply. I tried that but still the same problem.

@zhanghui-hunan
Author

@Abdul-Mukit thangt's suggestion is right: select the RandomMaterialParam_Texture and set the "Texture Directories" to the directory, rather than importing the .jpg files into the scene. PS: Win10 and UE4.22.

@Abdul-Mukit

@liangzhicong456 Thank you, I'll update UE4 and NDDS on Linux and see if the problem persists. I'll also look into the Windows version.

Thank you very much for NDDS and DOPE. I have been relying on them quite a lot.

@aditya2592

@thangt This is how it looks with FE_Depth_mm_16bits. The point cloud is flatter than it should be:
16_absolute

@thangt

thangt commented Aug 29, 2019

@aditya2592 I updated the NDDS with the fix for this 16-bit depth problem. Can you pull the latest NDDS and try it again?

Other people with a similar problem (NVIDIA/Dataset_Synthesizer#10) also confirmed it's fixed.

@chongyi-zheng

> SunTemple_demo
> You can get the SunTemple demo through the "Epic Games Launcher" (which currently only exists on Windows). Go to the "Unreal Engine" tab, then "Learn", scroll down a bit, and you will see the "Sun Temple" demo. The attached image shows where it is.
>
> Right now NDDS only works with UE4 textures. We imported the COCO images as UE4 textures and used those. Ref: https://docs.unrealengine.com/en-US/Engine/Content/ImportingContent/ImportingTextures/index.html
> In the RandomBackground_actor_BP, select the RandomMaterialParam_Texture and set the "Texture Directories" to the directory where those COCO textures are stored (note: the path is relative to the project content directory; don't use an absolute path).

I tried to randomize the background with my own images (I just downloaded 3 images of size 512x512 in JPG format from Google and renamed them to 0.jpg, 1.jpg, 2.jpg) and followed your guidance to set the "Texture Directories" in the RandomMaterialParam_Texture, but I didn't see my images in the background. Any suggestions? By the way, I used the "TestCapturer" demo.
