It would be desirable to have stereo output a point cloud at, say, 2x coarser resolution than the input images. To implement that, one would need to internally sub-sample the input images and scale the cameras accordingly. The scaling of the images is trivial; the scaling of the cameras could be achieved by very small modifications to the point_to_pixel(), pixel_to_vector(), and camera_center() functions, regardless of camera type.
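The camera-side change can be sketched as a thin adapter. This is a hypothetical Python sketch, not ASP's actual C++ API: the class and method names mirror the three functions mentioned above, and `ToyCamera` is a stand-in invented for the demonstration. The key observation is that only pixel coordinates scale; rays and the camera center are unaffected.

```python
class ScaledCamera:
    """Wraps a camera model so it matches images subsampled by an
    integer factor. Hypothetical sketch; not ASP's real interface."""

    def __init__(self, camera, factor):
        self.camera = camera  # the original full-resolution camera
        self.factor = factor  # e.g. 2 for 2x coarser images

    def point_to_pixel(self, point):
        # Pixel coordinates shrink by the subsample factor.
        col, row = self.camera.point_to_pixel(point)
        return (col / self.factor, row / self.factor)

    def pixel_to_vector(self, pixel):
        # Map the coarse pixel back to full-res before casting the ray.
        col, row = pixel
        return self.camera.pixel_to_vector((col * self.factor,
                                            row * self.factor))

    def camera_center(self, pixel=None):
        # The center does not depend on image resolution.
        return self.camera.camera_center(pixel)


class ToyCamera:
    """Trivial identity-style camera, only for demonstration."""

    def point_to_pixel(self, point):
        x, y, _z = point
        return (x, y)

    def pixel_to_vector(self, pixel):
        return (pixel[0], pixel[1], 1.0)

    def camera_center(self, pixel=None):
        return (0.0, 0.0, 0.0)


cam = ScaledCamera(ToyCamera(), factor=2)
```

With this wrapper in place, the rest of the pipeline could use the scaled camera unchanged, which is why the modification would be small regardless of camera type.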
This is not urgent; I am noting it here to make sure we don't forget.
For ice-sheet images, subsampling tends to blur out the texture that ASP needs to get good matches. Would it be possible instead to correlate the full-resolution images and output lower-resolution DEMs? For the sake of speed, the resolution reduction would happen before triangulation.
This would save very little computational time, as correlation and refinement take much longer than triangulation and DEM generation. One might as well do the remaining two steps at full resolution and downsample the resulting DEM.
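Downsampling the finished DEM is itself cheap. As a generic illustration (a numpy sketch, not ASP code), a 2x reduction by block averaging of the elevation grid can be done in a few lines:

```python
import numpy as np


def block_average(dem, factor):
    """Downsample a DEM by averaging factor x factor blocks.
    Assumes both dimensions are divisible by factor."""
    rows, cols = dem.shape
    return dem.reshape(rows // factor, factor,
                       cols // factor, factor).mean(axis=(1, 3))


dem = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 elevation grid
coarse = block_average(dem, 2)                  # 2x2 result
```

Block averaging (rather than plain decimation) also suppresses some per-pixel noise, which is often what one wants from a coarser DEM.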
The real benefit would come from running refinement only on a subset of the full-resolution disparities. At present, the full Bayesian refinement is often the slowest step for high-resolution pairs, so refining only a subset of the points could speed up the pipeline considerably. This is probably only important on snow surfaces, where running the whole pipeline at reduced resolution tends to fail.
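The idea can be illustrated with a generic sketch (numpy, not ASP code; the function name is invented): correlate at full resolution, then hand only every step-th disparity sample to the expensive refinement and triangulation stages, cutting their work by roughly step squared.

```python
import numpy as np


def decimate_disparity(disparity, step):
    """Keep every step-th sample of a dense disparity grid in each
    dimension, so later stages touch only ~1/step**2 of the pixels."""
    return disparity[::step, ::step]


# Dense full-resolution disparity from correlation: (dx, dy) per pixel.
disp = np.random.rand(100, 120, 2)
subset = decimate_disparity(disp, 2)  # coarser grid for refinement
```

Because the correlation itself still runs at full resolution, the texture needed for good matches on ice sheets is preserved; only the per-pixel cost of the later stages is reduced.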
I see. I'll defer the decision to Zack. In the meantime, if you have your own disparity, sparse or dense, and all you want is refinement/triangulation/DEM generation, that should be possible with the following workaround (which you may have figured out on your own already):
I have not tried these steps myself, but I see no reason why they should not work.