
Optional transform return on Image methods #631

Merged Sep 8, 2015 (7 commits)

Conversation

@nontas (Member) commented Sep 3, 2015

This PR adds an extra `return_inverse_transform` argument to all the methods of `Image`, `MaskedImage` and `BooleanImage` that utilize a `Transform`. If `return_inverse_transform` is `True`, then the `self.pseudoinverse()` of the applied `Transform` is returned along with the transformed image. The logic lives in the `warp_to_shape()` and `warp_to_mask()` methods, since all the rest (e.g. `crop*`, `rescale*`, etc.) ultimately return calls to those two. Tests are also added to cover this behaviour.
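The optional-return pattern described above can be sketched with toy stand-ins. Note that the `Translation` class and `warp_to_shape` function below are illustrative only, not the actual menpo implementations:

```python
# Minimal sketch of the optional-return pattern: warp-style methods gain a
# flag that, when True, makes them return the transform used alongside the
# result. All names here are toy stand-ins, NOT the real menpo API.

class Translation:
    """A toy 1-D translation standing in for a menpo Transform."""
    def __init__(self, offset):
        self.offset = offset

    def apply(self, x):
        return x + self.offset

    def pseudoinverse(self):
        return Translation(-self.offset)


def warp_to_shape(values, transform, return_transform=False):
    """Toy analogue of Image.warp_to_shape: warp, then optionally
    also hand back the transform that was applied."""
    warped = [transform.apply(v) for v in values]
    if return_transform:
        return warped, transform
    return warped


warped, tr = warp_to_shape([0, 1, 2], Translation(5), return_transform=True)
# The caller can now undo the warp via the transform's pseudoinverse
restored = [tr.pseudoinverse().apply(v) for v in warped]
```

The point of the design is the last two lines: because the caller receives the transform, inverting (or chaining) it is their choice rather than something the method decides for them.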

@patricksnape (Contributor)

I don't have the strongest feeling about this - but is the inverse transform the correct one to return? Or should we just be returning the transform itself? I understand the use case, but I'm wondering if it makes more sense to just give them the transform that was applied?

@nontas (Member, Author) commented Sep 4, 2015

After discussing with @patricksnape and @jabooth, the methods now return the actual `Transform` object that was used, rather than its inverse, so the argument is now called `return_transform`. The documentation of the `warp_to_shape()` and `warp_to_mask()` methods makes it clear that this argument is intended for internal use only.

@patricksnape (Contributor)

As it is now, this is the order of operations:

```python
%matplotlib inline
import menpo.io as mio
from menpo.transform import AlignmentSimilarity

# Load original image
original_t = mio.import_builtin_asset.takeo_ppm()
# Load fake PointCloud to simulate a 'fitting result'
pc = mio.import_builtin_asset.lenna_ljson().lms

# Crop then rescale (as we might do in fitting)
cropped_t, tr_crop = original_t.crop_to_landmarks_proportion(0.2, return_transform=True)
rescaled_cropped_t, tr_rescale = cropped_t.rescale(0.5, return_transform=True)

# Simulate a 'fit'
pc_aligned = AlignmentSimilarity(pc, rescaled_cropped_t.landmarks[None].lms).apply(pc)

# 'Invert' the transform so we can see the 'fit' on the original image
chained_tr = tr_rescale.compose_before(tr_crop)

# View 'Fitting'
chained_tr.apply(pc_aligned).view()
original_t.view()

# Make sure that the chained warps are equivalent to the original actions
original_t.warp_to_shape(rescaled_cropped_t.shape, chained_tr).view(new_figure=True)
rescaled_cropped_t.view(new_figure=True)
```

Note that the current ordering for composition is rescale THEN crop. I think @jabooth mentioned that he liked that you could apply the transforms in order and then pseudoinverse? If you want:

```python
chained_tr = tr_crop.compose_before(tr_rescale).pseudoinverse()
```

to be correct, then we actually do need to return the pseudoinverse... I tested this, by the way; I'm not just postulating.
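The order reversal being discussed here is just the standard identity: the inverse of "apply f, then g" is "apply the inverse of g, then the inverse of f". A numeric sketch with 1-D affine maps standing in for menpo `Transform`s (all names and values below are illustrative, not the menpo API):

```python
# Why the composition order flips under inversion: if crop is applied first
# and rescale second, the inverse of the chain is rescale-inverse followed
# by crop-inverse. Affine maps x -> a*x + b, represented as (a, b) pairs,
# stand in for menpo Transforms.

def apply(t, x):
    a, b = t
    return a * x + b

def compose_before(f, g):
    # h(x) = g(f(x)): apply f first, then g (mirrors Transform.compose_before)
    af, bf = f
    ag, bg = g
    return (ag * af, ag * bf + bg)

def pseudoinverse(t):
    a, b = t
    return (1.0 / a, -b / a)

crop = (1.0, -10.0)   # e.g. a shift into the cropped frame (made-up numbers)
rescale = (0.5, 0.0)  # e.g. halving the resolution

# Invert the whole chain "crop, then rescale" ...
inv_of_chain = pseudoinverse(compose_before(crop, rescale))
# ... which equals applying the inverses in the REVERSE order.
reversed_inverses = compose_before(pseudoinverse(rescale), pseudoinverse(crop))

x = 7.0
assert abs(apply(inv_of_chain, x) - apply(reversed_inverses, x)) < 1e-9
```

This is why `tr_rescale.compose_before(tr_crop)` and `tr_crop.compose_before(tr_rescale).pseudoinverse()` describe the same mapping back to the original frame.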

@jabooth (Member) commented Sep 8, 2015

Quite right @patricksnape; on second thought it makes complete sense that this is the case.

I don't think it's worth changing back to the inverse though. Returning simply the transform that you would pass to `warp_to_shape` to reproduce the effect manually seems the most obvious design to me. The `compose_after` vs `compose_before` ordering doesn't bother me, so long as it gets clearly documented in a notebook/docs/blog post.
