
Error during memory allocation in SIFT. Use window-based processing? #10

lionlai1989 opened this issue Nov 2, 2022 · 2 comments

lionlai1989 commented Nov 2, 2022

Hi sat-bundleadjust team,
I've run this software on a small raster (52 MB, 6685×8237 pixels) and it runs to completion, but I noticed that peak memory usage during processing reaches 32 GB.
Later, I ran it again on a much bigger raster (600 MB, 40000×51200 pixels) and it failed with the following error:

Error during the allocation.

The source code shows this error stems from this line in the SIFT algorithm. My questions are:

  1. I am new to this project, but it looks like the pipeline reads the whole stereo pair into memory and runs the SIFT algorithm on the full images, which consumes a large amount of memory? (A single 40000×51200 band stored as 64-bit floats is already ~16 GB.)
  2. If 1. is affirmative, is there a way to run this pipeline in a window-based manner, meaning it splits a raster into subtiles and runs SIFT (collecting and matching keypoints) on each subtile? Afterwards, it would gather all valid keypoints and calculate the adjusted RPCs. I believe the memory footprint would then be much smaller than when reading the whole raster into memory. (See the sketch after the config below for what I have in mind.)
  3. The spec of my environment is 16 CPUs and 64 GB of memory.
  4. Here is my config.json. I use the default settings as the README indicates; I didn't use any other customized setting.
{
    "geotiff_dir": "/home/ubuntu/tmp/images",
    "rpc_dir": "/home/ubuntu/tmp/rpcs",
    "rpc_src": "geotiff",
    "output_dir": "/home/ubuntu/tmp/output"
}
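
To make 2. concrete, here is a minimal sketch of what I mean, assuming rasterio for windowed reads and OpenCV's SIFT (neither is necessarily what sat-bundleadjust uses internally):

# Hypothetical sketch: window-based SIFT keypoint detection to bound memory
# usage. Not the sat-bundleadjust implementation; assumes rasterio + OpenCV.
import cv2
import numpy as np
import rasterio
from rasterio.windows import Window

def detect_keypoints_tiled(path, tile=5000, overlap=200):
    """Detect SIFT keypoints tile by tile instead of on the full raster."""
    sift = cv2.SIFT_create()
    all_kps, all_descs = [], []
    with rasterio.open(path) as src:
        for row in range(0, src.height, tile - overlap):
            for col in range(0, src.width, tile - overlap):
                h = min(tile, src.height - row)
                w = min(tile, src.width - col)
                band = src.read(1, window=Window(col, row, w, h))
                # SIFT needs an 8-bit image; rescale the tile's dynamic range.
                img = cv2.normalize(band, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
                kps, descs = sift.detectAndCompute(img, None)
                if descs is None:
                    continue
                # Shift keypoint coordinates back into full-raster pixel space.
                all_kps.extend((kp.pt[0] + col, kp.pt[1] + row) for kp in kps)
                all_descs.append(descs)
    return np.array(all_kps), (np.vstack(all_descs) if all_descs else None)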

Thank you.

rogermm14 (Collaborator) commented Nov 15, 2022

Hi @lionlai1989!
Thank you for your comment. This method/code was designed to correct RPCs of rasters covering areas of interest of at most a few tens of square kilometers.
Certain assumptions may not hold for larger areas (e.g., the estimation of a single center of projection, as detailed in our article).
Here are some quick suggestions if you still want to try to address your problem.

  1. Your 600 MB dataset looks quite large. Do you really need all of it? If not, you could crop a subregion or specify an area of interest in your config.json using "aoi_geojson": <path to AOI.json>, where AOI.json contains a GeoJSON polygon in longitude and latitude coordinates (note that the ring is closed by repeating the first vertex at the end). Example:
{
  "coordinates": [
    [
      [
        pt1_lon,
        pt1_lat
      ],
      [
        pt2_lon,
        pt2_lat
      ],
      [
        pt3_lon,
        pt3_lat
      ],
      [
        pt4_lon,
        pt4_lat
      ],
      [
        pt1_lon,
        pt1_lat
      ]
    ]
  ],
  "type": "Polygon"
}
  2. Try playing with the configuration parameters dedicated to feature tracking, which are listed and commented here. For instance, you could try adding "FT_sift_detection": "opencv" and "FT_kp_max": <max number of SIFT points per image> to your config.json. If you specify an "aoi_geojson" in your config.json, you can also add "FT_kp_aoi" to consider only those keypoints found in your area of interest. (See the config sketch after this list.)
  3. You could also check what happens if you downsample your rasters and the associated RPCs by a factor of 2 or 4 (see the second sketch after this list).
  4. Window-based processing for this kind of raster would be interesting, yes. Maybe someday I will implement it. Otherwise, contributions are welcome :)
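
To combine suggestions 1. and 2., your config.json might look like this; the values for "FT_kp_max" and "FT_kp_aoi" are only illustrative guesses, not recommended settings:

{
    "geotiff_dir": "/home/ubuntu/tmp/images",
    "rpc_dir": "/home/ubuntu/tmp/rpcs",
    "rpc_src": "geotiff",
    "output_dir": "/home/ubuntu/tmp/output",
    "aoi_geojson": "/home/ubuntu/tmp/AOI.json",
    "FT_sift_detection": "opencv",
    "FT_kp_max": 60000,
    "FT_kp_aoi": true
}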
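
And a minimal sketch of suggestion 3., assuming GDAL and that the RPC lives in the GeoTIFF's RPC metadata domain (this helper is hypothetical, not part of sat-bundleadjust). When the image shrinks by a factor z, the RPC's image-space offsets and scales shrink by z, while the ground-side coefficients are unchanged:

# Hypothetical helper: downsample a raster by an integer factor z and
# adapt its RPC metadata to the new pixel grid.
from osgeo import gdal

def downsample_with_rpc(src_path, dst_path, z=2):
    src = gdal.Open(src_path)
    # Resample the pixels themselves.
    gdal.Translate(dst_path, src,
                   width=src.RasterXSize // z,
                   height=src.RasterYSize // z)
    # Image coordinates are divided by z, so the RPC's row/col offsets and
    # scales must be divided by z too; lon/lat/alt terms stay untouched.
    rpc = src.GetMetadata("RPC")
    for key in ("LINE_OFF", "SAMP_OFF", "LINE_SCALE", "SAMP_SCALE"):
        rpc[key] = str(float(rpc[key]) / z)
    dst = gdal.Open(dst_path, gdal.GA_Update)
    dst.SetMetadata(rpc, "RPC")
    dst = None  # close the dataset to flush metadata to disk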

lionlai1989 (Author) commented:

Hi @rogermm14, thanks for your detailed answer. I still have a few questions; I hope you can clarify them for me.

  1. Here is more context on the experimental data for this issue.
    Data source: Pleiades (0.5 m) and Pleiades Neo (0.3 m).
    Their camera model is push-broom. The stereo imagery is generally around 50000 × 50000 pixels, but sometimes more. The geographic footprint is thus around 25 km × 25 km, roughly 625 square kilometers, and sometimes more than 1000 square kilometers.

  2. In section 3.2.1, the paper says its aim is to refine the input RPC model by correcting a rotation around the camera center. I have also been reading the discussion thread of s2p, and I conclude the following points (I may have misunderstood or overlooked something):

  • The method described in the paper uses the pinhole-model assumption to refine the RPC model and generate a new one, so it can only be applied to push-broom imagery with a small footprint.
  • For push-broom imagery with large footprints (described in 1.), the method in theory cannot work well, because it is not possible to approximate a large push-broom satellite image with a single pinhole model.

  3. If 2. is true, do you know whether it is possible to globally refine the RPC model of large stereo satellite imagery with a push-broom camera model (as described in 1.)?

  4. Back to your suggestion 1.: we need all of the raster, since we create a DSM from it. We are using the s2p framework (window-based processing). It seems to me there are two different ways to do RPC refinement for very large rasters (please let me know if this makes sense to you 🙏):

  • Method 1: use global RPC refinement (described in 3.) if possible. I would prefer this one because it separates the different modules clearly (first refine the RPC model globally, then use the corrected RPC model for stereo processing).
  • Method 2: during window-based stereo processing, refine the RPC model for each window/tile (e.g., 500×500 pixels) and use the locally corrected RPC model for the subsequent processing, as in the toy sketch below.
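
To illustrate what I mean by Method 2, here is a toy sketch that reduces the local RPC correction to a single image-space translation estimated from keypoint matches inside one tile; this is only a stand-in under that simplifying assumption, not the s2p or sat-bundleadjust implementation:

# Toy sketch: correct an RPC locally for one tile via a 2D image-space
# translation (a stand-in for a real local pointing correction).
import numpy as np

def local_translation(kps_pred, kps_obs):
    """kps_pred: Nx2 (col, row) tie points projected with the current RPC;
    kps_obs: Nx2 matched positions observed in the tile. Returns the median
    translation aligning predictions with observations (robust to outliers)."""
    return np.median(np.asarray(kps_obs) - np.asarray(kps_pred), axis=0)

def shift_rpc_offsets(samp_off, line_off, t):
    # Shifting the RPC's image-space offsets by t shifts every projected
    # point by t, since projection = polynomial * scale + offset.
    return samp_off + t[0], line_off + t[1]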

I am not sure I am describing things clearly and sensibly here. I would highly appreciate your feedback. Thank you.
Best,
