
Trafo cache #29

Merged · 2 commits · Feb 6, 2018
Conversation

pmelchior (Owner):
Transformation matrices get on-the-fly cache to reduce construction cost.

As a side consequence, Blend needs a new keyword source_sizes to know which sizes are allowed according to a predefined list.

To work best, Source initialization needs to respect the same size options, which needs to get fixed when #28 is merged. However, this PR can be merged with master without conflicts.
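The on-the-fly cache described above can be sketched as a module-level dict keyed by operator name and construction parameters. This is an illustrative sketch, not the PR's actual implementation; `get_identity_op` and the cache layout are assumptions modeled on the diff excerpt quoted below:

```python
import scipy.sparse

# Hypothetical module-level cache: {operator name: {parameter key: matrix}}
cache = {}

def get_identity_op(shape):
    """Return a sparse identity operator for an image of the given shape,
    building it only on the first request and caching it afterwards."""
    name = "Id"
    key = shape
    try:
        return cache[name][key]
    except KeyError:
        size = shape[0] * shape[1]
        L = scipy.sparse.identity(size, format="csr")
        cache.setdefault(name, {})[key] = L
        return L
```

Repeated calls with the same shape then return the cached matrix instead of paying the sparse construction cost each time.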

@fred3m (Collaborator) left a comment:

Everything looks good, although I'm slightly concerned about what happens to sources larger than the largest size in source_sizes. I would like an option that lets the user choose the behavior for such sources.

Some possible behaviors are:

  1. Do what the current code does, namely, truncate the source.
  2. Let the source use whatever size box it wants. This is less than ideal, but may be better than forcing the object to be contained in a box that it doesn't fit in.
  3. Specify a step_size that handles very large sources. For example if source_sizes=[5,10,15,30,60,90] and step_size=50, then objects larger than 90 would have sizes of 90+50n, for integer n. If step_size is None, then the current behavior (truncation) would be used.

I know that users can always generate this list themselves, but this seems like a useful convenience to protect them from unexpected behavior.
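Option 3 could be sketched roughly as follows. This is a hypothetical helper, not code from the PR; the names `source_sizes` and `step_size` come from the comment above, and the default size list is the example given there:

```python
import math

def find_next_source_size(value, source_sizes=(5, 10, 15, 30, 60, 90),
                          step_size=50):
    """Return the smallest allowed size >= value.

    Sizes beyond the largest entry grow as largest + step_size * n
    (option 3 above). With step_size=None, oversized sources fall back
    to the largest entry, i.e. truncation (option 1, current behavior).
    """
    for size in source_sizes:
        if value <= size:
            return size
    largest = source_sizes[-1]
    if step_size is None:
        # current behavior: truncate to the largest allowed size
        return largest
    # extend the size list in steps of step_size beyond the largest entry
    n = math.ceil((value - largest) / step_size)
    return largest + n * step_size
```

With the example list, a source of extent 95 would get a box of 140 (90 + 50), while with `step_size=None` it would be truncated to 90.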

L = scipy.sparse.identity(size)  # eye(size, k=size) would shift the diagonal out of the matrix, yielding all zeros
global cache
cache[name][key] = L
return L

def getIdentityOp(shape):
@fred3m (Collaborator):

Good call. I wouldn't have thought to cache this, but it probably does take longer than it should for scipy to build these.

@@ -668,6 +667,16 @@ def _set_edge_flux(self, m, model):
self._edge_flux[m,2,:] = np.abs(model[:,:,0,:]).sum(axis=0).mean(axis=1)
self._edge_flux[m,3,:] = np.abs(model[:,:,:,0]).sum(axis=0).mean(axis=1)

def _find_next_source_size(self, value):
@fred3m (Collaborator):

I'm worried about larger sources being truncated in this function.

@pmelchior (Owner, Author):

Let's open a new issue for the max box size problem.

@pmelchior pmelchior merged commit cc75e05 into master Feb 6, 2018
@pmelchior pmelchior deleted the trafo_cache branch February 6, 2018 22:19