[Merged by Bors] - feat(ring_theory/tensor_product): A predicate for being the tensor product. #15512
Conversation
erdOne
commented
Jul 19, 2022
src/ring_theory/tensor_product.lean
Outdated
```lean
/-- `M` is the tensor product of `M₁` and `M₂` via `f`.
This is defined by requiring the lift `M₁ ⊗[R] M₂ → M` to be bijective. -/
def is_tensor_product : Prop := function.bijective (tensor_product.lift f)
```
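For context, a minimal sketch of the ambient setup this definition assumes, with `f` a bilinear map packaged as a linear map into linear maps (variable names and instance assumptions here are illustrative, not necessarily those of the actual file):

```lean
-- Illustrative setup sketch; the actual file's variables may differ.
variables {R M₁ M₂ M : Type*} [comm_ring R]
variables [add_comm_group M₁] [add_comm_group M₂] [add_comm_group M]
variables [module R M₁] [module R M₂] [module R M]
variables (f : M₁ →ₗ[R] M₂ →ₗ[R] M)

-- `tensor_product.lift f : M₁ ⊗[R] M₂ →ₗ[R] M` is the unique linear map
-- sending `m₁ ⊗ₜ m₂` to `f m₁ m₂`; the predicate asks it to be bijective.
```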
Would it make sense to follow the design of `direct_sum.decomposition` here and provide an explicit inverse?
I would prefer if this is a `Prop` and does not contain any data. Besides, the inverse is usually not that constructive: for example, the inverse `M_p → A_p ⊗ M` takes a choice of fraction `m/s` to `(1/s) ⊗ m`, but there isn't a canonical choice of the fraction representation.
I don't really understand your comment about canonicity; the inverse is always canonical because it's unique, right?
My thinking behind making the typeclass carry data is that an instance can still always provide it noncomputably, but we don't have to throw away computability to use `tensor_product` through this API.
I thought the main concern was bad defeqs, and what I was saying is that the definition of the inverse is usually no better than "the inverse of the lift".
But still, I don't think there are many cases where there is a computable inverse. Maybe `M/IM → A/I ⊗ M`, since quotients seem to be computable?
I think the choice I made is in line with the approach we took for localizations: we have a computable `localization` and a `Prop`-valued `is_localization`, whose API is mostly noncomputable.
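The `Prop`-valued design does not lose the inverse: it can still be extracted noncomputably from the bijectivity proof. A hedged sketch of what such an extraction might look like (the name `equiv` and the exact form are illustrative, assuming `h : is_tensor_product f` unfolds to `function.bijective (tensor_product.lift f)`):

```lean
-- Illustrative sketch: recover the linear equivalence from the predicate.
-- `h.1` is injectivity and `h.2` is surjectivity of the lift.
noncomputable def equiv (h : is_tensor_product f) : (M₁ ⊗[R] M₂) ≃ₗ[R] M :=
linear_equiv.of_bijective (tensor_product.lift f) h.1 h.2
```

This mirrors how `is_localization` exposes its API: the data is derivable on demand, at the cost of computability.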
Thanks 🎉
bors merge
Build failed (retrying...)
Pull request successfully merged into master. Build succeeded.