
Block assemble (and ZeroOperator) #285

Merged
merged 6 commits into from Oct 20, 2016

Conversation

pmli
Member

@pmli pmli commented Sep 1, 2016

ZeroOperator:

BlockOperator, BlockDiagonalOperator:

  • added assemble and assemble_lincomb
  • other modifications (Nones are replaced by ZeroOperators...)
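The assemble_lincomb idea for ZeroOperator can be sketched as follows. This is a hypothetical, heavily simplified illustration; the real pyMOR Operator interface has considerably more structure, and the class bodies here are placeholders:

```python
class Operator:
    """Simplified stand-in for pyMOR's Operator base class (illustrative only)."""

    def assemble_lincomb(self, operators, coefficients):
        # returning None signals that no specialized assembly is available
        return None


class ZeroOperator(Operator):
    """Simplified stand-in for pyMOR's ZeroOperator (illustrative only)."""

    def assemble_lincomb(self, operators, coefficients):
        # a linear combination of zero operators is again a zero operator,
        # so return self rather than None to short-circuit assembly
        if all(isinstance(op, ZeroOperator) for op in operators):
            return self
        return None
```

The key point is that ZeroOperator can answer the linear-combination query without any arithmetic, regardless of the coefficients.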

@sdrave
Member

sdrave commented Sep 2, 2016

Hi @pmli, thanks for your work!

Could you explain why you changed the internal representation of BlockOperator from sparse to dense? Personally, I have no strong opinion on this, but one has to keep in mind that @ftalbrecht has a use case where one has hundreds (or more) of blocks and the operator has a sparse structure. So there might be some performance drawbacks. @ftalbrecht is currently on vacation, so we should probably wait until he has returned.

Regarding ZeroOperator.assemble_lincomb: The all(isinstance(op, ZeroOperator) for op in operators) expression might be a bit expensive when called very frequently during a reduced model solve (isinstance is not so cheap). What about leaving the old code and fixing the length-1 case by returning self instead of None? Of course, this version would be expensive when a LincombOperator holds many ZeroOperators, but is this likely to happen?
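The two variants under discussion can be contrasted in a minimal sketch. Both function names and the stand-in class here are illustrative, not pyMOR's actual API:

```python
class ZeroOperator:
    """Stand-in for pyMOR's ZeroOperator (simplified for illustration)."""


def lincomb_check_all(zero, operators):
    # variant from this PR: isinstance is checked on every member,
    # which can add up when called frequently during a reduced model solve
    if all(isinstance(op, ZeroOperator) for op in operators):
        return zero
    return None


def lincomb_len1_only(zero, operators):
    # suggested cheaper variant: keep the old code, but fix the length-1
    # case by returning self instead of None
    if len(operators) == 1 and isinstance(operators[0], ZeroOperator):
        return zero
    return None
```

The trade-off: the first variant simplifies any list of ZeroOperators at the cost of checking every element; the second is constant-time but only handles the trivial single-operator case.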

@pmli
Member Author

pmli commented Sep 2, 2016

Hi @sdrave, thanks for the comments!

I changed the internal representation purely for ease of implementation. I expected @ftalbrecht would have something to say about this. Concerning performance, I'm not sure if either representation is significantly better for bigger, sparser block matrices. In both cases, every method has to traverse all the blocks... My suggestion would be to use SciPy sparse matrices instead of NumPy arrays for such cases.

About ZeroOperator.assemble_lincomb, I was worried about avoiding infinite recursion while not having a full grasp of assemble and assemble_lincomb. I checked now, and just returning self instead of None doesn't cause errors in the operators test. I'll push this change.

@pmli pmli mentioned this pull request Sep 8, 2016
- It no longer returns None for a longer list of ZeroOperators
- Nones in self._blocks are replaced by ZeroOperators
- add apply and apply_adjoint to BlockDiagonalOperator
- improve docs
@sdrave
Member

sdrave commented Sep 8, 2016

I agree that if the ZeroOperators are an issue, then it would probably be better to switch to a sparse representation of the block structure in the first place. So maybe we should go ahead with your changes and add a SparseBlockOperator when needed. However, before merging this, I would like to wait for @ftalbrecht to comment.

@ftalbrecht
Contributor

Hmm, I am not too happy with a dense representation. I have use cases with a quite sparse structure of only a few blocks per row. In that case, iterating over the sparse blocks is already nearly as costly as the individual operations on the operators. I would think that adding ZeroOperators would make things worse.

So long story short: I think we need a sparse BlockOperator. Should we just add a SparseBlockOperator when required?

@pmli
Member Author

pmli commented Oct 18, 2016

To my surprise, it seems SciPy sparse matrices do not support object dtype. So, we do need an ad hoc sparse format.

I could revert to the previous implementation, but I'm not sure if a NumPy 2D array with operators and Nones is "good enough". For a large number of block rows and columns, a memory overflow could occur. Even if it all fits in RAM, iterating over the 2D array and checking whether an entry is not None might be much more expensive than iterating directly over the operators, e.g. using a COOrdinate format. @ftalbrecht Do you think there could be issues when using 2D arrays in your use cases?
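An ad hoc COO-style container for the block structure could look roughly like this. The class name and interface are purely illustrative assumptions, not pyMOR's actual API, but they show why iteration over stored blocks avoids the dense-array scan:

```python
class COOBlocks:
    """Hypothetical ad hoc COO-style storage for a sparse block structure
    (illustrative only; not pyMOR's actual API)."""

    def __init__(self, num_block_rows, num_block_cols):
        self.shape = (num_block_rows, num_block_cols)
        # parallel lists of block-row indices, block-column indices, operators
        self.rows, self.cols, self.blocks = [], [], []

    def insert(self, i, j, op):
        assert 0 <= i < self.shape[0] and 0 <= j < self.shape[1]
        self.rows.append(i)
        self.cols.append(j)
        self.blocks.append(op)

    def __iter__(self):
        # iterate directly over the stored blocks, avoiding the
        # O(rows * cols) scan over None entries of a dense object array
        return iter(zip(self.rows, self.cols, self.blocks))
```

Here the memory and iteration cost scale with the number of stored blocks, not with the full number of block rows times block columns.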

@sdrave Should I include the changes from #299 here or in a different pull request?

@sdrave
Member

sdrave commented Oct 19, 2016

Regarding the sparse version, I would say we will find a solution (implementation) when this is really needed, @ftalbrecht?

#299 should be merged by tomorrow. I would say it is perfectly fine if you merge master into this pull request branch and make the changes in an additional commit ...

@sdrave
Member

sdrave commented Oct 20, 2016

Looks ready to merge. @ftalbrecht, is it ok to stick with this dense implementation for now and reimplement a sparse version when needed?

@ftalbrecht
Contributor

Yes.

@sdrave
Member

sdrave commented Oct 20, 2016

@pmli, anything you still want to change? Otherwise we can merge ..

@pmli
Member Author

pmli commented Oct 20, 2016

No. Maybe we can add more tests for assemble methods later.

@sdrave
Member

sdrave commented Oct 20, 2016

@pmli, you're right, the test_assemble in pymortests.operators should at least check if apply does the same on the original and the assembled operator. Having appropriate fixtures should then be sufficient, I guess.
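The property such a test could check can be sketched with a toy operator. Everything here is illustrative (no pyMOR fixtures); the invariant is simply that assembly must not change what apply computes:

```python
class ScaledIdentity:
    """Toy operator (illustrative only) with an assemble method."""

    def __init__(self, factor):
        self.factor = factor

    def apply(self, U):
        # U is a plain list of numbers in this toy setting
        return [self.factor * u for u in U]

    def assemble(self):
        # the assembled operator must behave identically under apply
        return ScaledIdentity(self.factor)


def check_assemble(op, U):
    # the invariant a test_assemble should verify for every fixture:
    # applying the assembled operator gives the same result
    return op.apply(U) == op.assemble().apply(U)
```

With appropriate fixtures, the same check would be run over all operator types, including the block operators from this PR.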

@sdrave
Member

sdrave commented Oct 20, 2016

See #307 regarding the tests.

@sdrave sdrave merged commit dfa475f into pymor:master Oct 20, 2016
@pmli pmli deleted the block-assemble branch October 20, 2016 08:22