
m(o)ve over smvs #921

Merged
merged 57 commits into from Apr 22, 2019

Conversation

@smathermather (Contributor) commented Dec 2, 2018

There is one big limitation: I have replaced smvs with mve, but perhaps it should be an option instead.

@smathermather (Contributor Author) commented Dec 3, 2018

OK @pierotofy and @dakotabenjamin -- this will require some heavy testing. mve seems to have lower memory requirements than smvs but longer processing times. I think the additional fidelity is worth the extra processing time (especially in light of the reduced memory usage), but we should test on bigger datasets to be sure.

And if we need to have them all run side by side, I am open to that too. We could have low-, medium-, and high-fidelity 3D data with opensfm, smvs, and mve point clouds respectively. But, for the sake of simplicity, this pull request is an outright replacement of smvs.

Also, I pass no parameters to mve -- the results are quite nice with the defaults. We may want to add parameters later, but I decided to keep this simple: given the quality of the improvement, it seemed sensible not to over-complicate things.

Stephen Mather and others added 3 commits December 3, 2018 04:38
@pierotofy (Member)

I think less is more: if quality is improved and memory usage is lowered, then even with the increased processing time it's a win, and smvs can be set aside for the time being.

Is it ready to be tested then? I'd love to start running some tests.

scripts/mve.py Outdated

# config
config = [
"-t%s" % self.params.threads,
Member:

Should we respect the --max-concurrency parameter when invoking dmrecon?

Contributor Author:

Ah yes, this reminds me of the other hitch -- I am not sure dmrecon respects concurrency values. It'd have to be set with OMP: https://github.com/simonfuhrmann/mve/blob/master/apps/dmrecon/dmrecon.cc#L285

Member:

Interesting, I don't have much experience with OpenMP. Maybe setting the OMP_NUM_THREADS environment variable when calling dmrecon could limit the threads. https://www.ibm.com/support/knowledgecenter/SSGH2K_13.1.3/com.ibm.xlc1313.aix.doc/compiler_ref/ruomprun4.html
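For reference, a minimal sketch of that idea in Python; the paths and concurrency value are placeholders (not code from this PR), and whether dmrecon actually honors OMP_NUM_THREADS is exactly what is in question here:

import os
import subprocess

# sketch: cap OpenMP threads for dmrecon via the environment
# (dmrecon_path, mve_scene_dir and max_concurrency are placeholders)
dmrecon_path = "/path/to/mve/apps/dmrecon/dmrecon"
mve_scene_dir = "/path/to/mve/scene"
max_concurrency = 4

env = os.environ.copy()
env["OMP_NUM_THREADS"] = str(max_concurrency)   # standard OpenMP thread cap
subprocess.check_call([dmrecon_path, mve_scene_dir], env=env)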

Contributor Author:

Yes, I tried that and manually set it to 5, but it used all cores anyway... But it is C++, so there's a good chance I was doing something wrong.

Contributor Author:

This is one I would need help on -- I'm a bit in over my head.

Contributor Author:

We might just be able to port the approach smvs uses, if their approaches to concurrency are similar enough.

Member:

If the user selects mve as the processing option and also specifies --max-concurrency, then we could show a warning ("Thread limiting is not available for this option and it will run on all threads; are you sure you want to continue? [Y/n]").

Member:

If we can't limit the threads, a warning is fine, but I'd caution against the Y/n prompt; it could cause NodeODM to get stuck indefinitely.

@smathermather (Contributor Author)

It is ready to be tested. It should just install and run.

@pierotofy (Member)

I will make some time for this in the next few days.

@smathermather (Contributor Author)

👍

@smathermather (Contributor Author)

> I will make some time for this in the next few days.

I am pretty sure I did something wrong here. It usually works as I have it configured, but it sometimes failed (I'm sorry I can't be more specific):

# run mve

@pierotofy (Member)

Testing in progress, will post updates when I get results.

@pierotofy (Member)

My findings so far:

  • Sharp edges are much better preserved compared to SMVS, which leads to better DSMs.
  • Noise levels are elevated; in particular, there's a consistent pattern of points being "raised" from the contours of objects.

(screenshots)

  • The number of points generated is often overkill; there's probably a flag to use a lower-resolution depthmap, and we should take advantage of it.
  • The output from dmrecon uses special characters to display the progress bar; if I recall correctly, there's a flag for a more conventional/silent progress output without special characters. We should use that as well.

I'll look for flags that could help us solve one or more of these problems.

@pierotofy (Member)

Here's another example:

(screenshot)

@pierotofy (Member) commented Dec 10, 2018

The flag we should always pass: --progress=silent. It defaults to fancy.

We should also pass --force for good measure.

--max-pixels uses the same logic as smvs (the code is the same, so perhaps you can re-add it?). This should reduce memory usage and processing time, and perhaps the noise as well.

> It is rarely useful to reconstruct at full resolution as it will produce less complete depth maps with more noise at a highly increased processing time.

https://github.com/simonfuhrmann/mve/wiki/MVE-Users-Guide#multi-view-stereo

All options for reference:

Available options: 
  -n, --neighbors=ARG    amount of neighbor views (global view selection)
  -m, --master-view=ARG  reconstructs given master view ID only
  -l, --list-view=ARG    reconstructs given view IDs (given as string "0-10")
  -s, --scale=ARG        reconstruction on given scale, 0 is original
  --max-pixels=ARG       Limit master image size [1500000]
  -f, --filter-width=ARG  patch size for NCC based comparison [5]
  --nocolorscale         turn off color scale
  -i, --image=ARG        specify source image embedding [undistorted]
  --local-neighbors=ARG  amount of neighbors for local view selection [4]
  --keep-dz              store dz map into view
  --keep-conf            store confidence map into view
  -p, --writeply         use this option to write the ply file
  --plydest=ARG          path suffix appended to scene dir to write ply files
  --bounding-box=ARG     Six comma separated values used as AABB [disabled]
  --progress=ARG         progress output style: 'silent', 'simple' or 'fancy'
  --force                Reconstruct and overwrite existing depthmaps
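For illustration, the flags discussed above could be folded into a dmrecon config list like the one reviewed earlier; this is a sketch only, and the max_pixels value is a placeholder mirroring the smvs logic:

# sketch only: dmrecon flags from the discussion above
max_pixels = 1500000
dmrecon_config = [
    "--progress=silent",              # plain log output, no special characters
    "--force",                        # reconstruct and overwrite existing depthmaps
    "--max-pixels=%s" % max_pixels,   # limit the master image size, as smvs does
]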

makescene_path = os.path.join(superbuild_path, 'src', 'elibs', 'mve', 'apps', 'makescene', 'makescene') #TODO: don't install in source
smvs_path = os.path.join(superbuild_path, 'src', 'elibs', 'smvs', 'app', 'smvsrecon')
mve_path = os.path.join(superbuild_path, 'src', 'elibs', 'mve', 'apps', 'dmrecon', 'dmrecon')
mve_path_pc = os.path.join(superbuild_path, 'src', 'elibs', 'mve', 'apps', 'scene2pset', 'scene2pset -F2')
@pierotofy (Member) commented Dec 10, 2018

-F2 should not be hard-coded, as it references the chosen scale value (which depends on --max-pixels). If a dataset has really large (or really small) input images, this will fail. I think the value can be looked up from the contents of the .mve folders.

Member:

Another alternative is to calculate a scale value ourselves instead of letting MVE sort it out, and pass it to both scene2pset (via -F) and dmrecon (via -s).
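A rough sketch of that alternative, assuming each MVE scale level halves the image along both axes; the helper name and values are illustrative, not from this PR:

import math

# sketch: derive one scale level from a pixel budget and reuse it for both tools
def compute_mve_scale(width, height, max_pixels):
    pixels = width * height
    if pixels <= max_pixels:
        return 0
    # each level divides the pixel count by roughly 4
    return int(math.ceil(math.log(pixels / float(max_pixels), 4)))

scale = compute_mve_scale(4000, 3000, 1500000)  # -> 2 for a 12 MP image and a 1.5 MP budget
# then pass "-s%s" % scale to dmrecon and "-F%s" % scale to scene2pset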

Member:

Maybe this is why you noticed a few failures during your tests.

@pierotofy (Member)

Reducing the scale of reconstruction reduces the noise a bit, but perhaps this is just the result of having fewer points overall?

(screenshot)

@pierotofy (Member) commented Dec 10, 2018

As far as scene2pset goes, I think we'll want to leverage the scale and confidence values to do the filtering. In particular, I think discarding points below a certain confidence value is a must. I'm getting some good results by discarding values below 0.25 confidence. Perhaps there's a better value, or maybe it should be user-defined.

(animated comparison)

@pierotofy (Member)

scene2pset parameters:

Usage: scene2pset [ OPTS ] SCENE_DIR MESH_OUT
Available options: 
  -d, --depthmap=ARG       Name of depth map to use [depth-L0]
  -i, --image=ARG          Name of color image to use [undistorted]
  -n, --with-normals       Write points with normals (PLY only)
  -s, --with-scale         Write points with scale values (PLY only)
  -c, --with-conf          Write points with confidence (PLY only)
  -m, --mask=ARG           Name of mask/silhouette image to clip 3D points []
  -v, --views=ARG          View IDs to use for reconstruction [all]
  -b, --bounding-box=ARG   Six comma separated values used as AABB.
  -f, --min-fraction=ARG   Minimum fraction of valid depth values [0.0]
  -p, --poisson-normals    Scale normals according to confidence
  -S, --scale-factor=ARG   Factor for computing scale values [2.5]
  -C, --correspondence     Output correspondences (in absence of -m and -b only)
  -F, --fssr=ARG           FSSR output, sets -nsc and -di with scale ARG

To export confidence values, pass -c.
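For example, an export carrying both scale and confidence values might be invoked like this; the scene2pset path, scene directory, and output name are placeholders:

import subprocess

# sketch: write a point cloud with per-point scale and confidence for later filtering
scene2pset_path = "/path/to/mve/apps/scene2pset/scene2pset"
mve_scene_dir = "/path/to/mve/scene"

subprocess.check_call([
    scene2pset_path,
    "-c",        # write per-point confidence into the PLY
    "-s",        # also write scale values
    mve_scene_dir,
    "mve_dense_point_cloud.ply",
])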

@pierotofy (Member) commented Dec 10, 2018

Overall I think these things could be added/improved:

  • Pass scale args to both dmrecon and scene2pset.
  • Filter based on confidence level. This can be done via PDAL (a sketch follows below).
  • Fix the progress flag.

Figuring out how to respect --max-concurrency is also a priority, but the PR could be merged without it.
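One possible shape for that PDAL step, sketched under the assumption that the PLY confidence property is exposed by PDAL as a dimension named "confidence"; the file names and the 0.25 threshold are placeholders:

import json
import subprocess

# sketch: drop points below a confidence threshold with a PDAL range filter
pipeline = {
    "pipeline": [
        "mve_dense_point_cloud.ply",
        {"type": "filters.range", "limits": "confidence[0.25:]"},
        "mve_dense_point_cloud_filtered.ply",
    ]
}
with open("confidence_filter.json", "w") as f:
    json.dump(pipeline, f)
subprocess.check_call(["pdal", "pipeline", "confidence_filter.json"])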

@smathermather (Contributor Author)

Cool. I’ll dive in. This is an awesome review.

@pierotofy (Member) commented Apr 10, 2019

DEMs improved!

(screenshot)

@pierotofy (Member)

Getting some bad distortion on certain datasets, however.

(screenshot)

@pierotofy (Member)

Probably due to noise:
(screenshot)

@pierotofy (Member)

0.6 as a default confidence threshold yields much better results:

(screenshot)

@pierotofy (Member)

Fixes #970

@pierotofy (Member)

max-concurrency is now respected by makescene, dmrecon, and scene2pset.

@smathermather @indiajohnson-cm this can be tested, and pending further issues I think it can be merged into master as-is.

@pierotofy self-requested a review April 12, 2019 18:00
@pierotofy (Member)

Investigating a seg fault for one test dataset (sheffield_cross) that processed fine with SMVS.

(screenshot)

@pierotofy (Member) left a review comment

The seg fault did not happen again after more testing. It might be tied to some sort of race condition, but for the time being I think it's OK to move forward.

@smathermather OK to increase the version number and merge, in your opinion?

@pierotofy (Member)

Implemented #972

@smathermather (Contributor Author)

> The seg fault did not happen again after more testing. It might be tied to some sort of race condition, but for the time being I think it's OK to move forward.
>
> @smathermather OK to increase the version number and merge, in your opinion?

Yes to both! Tests are looking good on my end.

@smathermather (Contributor Author)

> 0.6 as a default confidence threshold yields much better results:

This matches what I was seeing too -- confidence thresholds greater than 0.5 are needed for the best results.

@smathermather dismissed dakotabenjamin’s stale review April 22, 2019 14:18

smvs added back in by Piero

@smathermather (Contributor Author)

Hmm, I think I dismissed @dakotabenjamin's review based on a myth... @pierotofy -- smvs is still gone from this pull request, correct?

@pierotofy (Member)

Yes, SMVS is gone.

@smathermather (Contributor Author)

A sibling docs update that I meant to do as a pull request but did directly on the active repo (yipe!):
OpenDroneMap/docs#13

Anyway, the docs should reflect these changes and more. Let me know if I missed anything.

@pierotofy merged commit 57ca7d9 into OpenDroneMap:master Apr 22, 2019
@smathermather deleted the mvs_and_smvs branch April 23, 2019 00:33
peddyhh pushed a commit to peddyhh/ODM that referenced this pull request Jun 7, 2020
pierotofy added a commit that referenced this pull request Sep 21, 2020