
[Merged by Bors] - feat(linear_algebra/trace): trace of prod_map #13872

Closed
antoinelab01 wants to merge 13 commits into master from trace_prod_map

Conversation

@antoinelab01 (Collaborator) commented May 2, 2022

In this PR I prove that the trace is additive under `prod_map`, i.e. that `trace (prod_map f g) = trace f + trace g`.
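For readers who want the precise form, here is a rough Lean 3 sketch of the statement in mathlib notation. The variable setup and typeclass assumptions below (finite free modules over a commutative ring) are my reading of the intended generality, not a copy of the merged declaration, and the final lemma name in mathlib may differ.

```lean
import linear_algebra.trace

-- Sketch only: the assumptions and the use of `example` (rather than the
-- actual lemma name) are guesses based on the PR description.
variables (R M N : Type*) [comm_ring R]
variables [add_comm_group M] [module R M] [module.free R M] [module.finite R M]
variables [add_comm_group N] [module R N] [module.free R N] [module.finite R N]

open linear_map

example (f : M →ₗ[R] M) (g : N →ₗ[R] N) :
  trace R (M × N) (f.prod_map g) = trace R M f + trace R N g :=
sorry -- this equality is what the PR proves
```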


@antoinelab01 added the awaiting-review and blocked-by-other-PR labels May 2, 2022
@antoinelab01 changed the title from "Trace prod map" to "feat(linear_algebra/trace): trace of prod_map" May 2, 2022
@mathlib-dependent-issues-bot removed the blocked-by-other-PR label May 4, 2022
@mathlib-dependent-issues-bot (Collaborator)

This PR/issue depends on:

@antoinelab01 added the awaiting-CI label May 4, 2022
@github-actions bot removed the awaiting-CI label May 4, 2022
@riccardobrasca (Member)

Can you please add a description of the results in this PR? Thanks!

Comment on lines +177 to +188
{ simp only [dual_tensor_hom_equiv, tensor_product.algebra_tensor_module.curry_apply,
to_fun_eq_coe, tensor_product.curry_apply, coe_restrict_scalars_eq_coe, coe_comp,
linear_equiv.coe_to_linear_map, coe_inl, function.comp_app, linear_equiv.prod_apply,
dual_tensor_hom_equiv_of_basis_apply, map_zero, prod_map_apply, coprod_apply, id_coe, id.def,
add_zero, prod_map_linear_apply, dual_tensor_hom_prod_map_zero, trace_eq_contract_apply,
contract_left_apply, fst_apply] },
{ simp only [dual_tensor_hom_equiv, tensor_product.algebra_tensor_module.curry_apply,
to_fun_eq_coe, tensor_product.curry_apply, coe_restrict_scalars_eq_coe, coe_comp,
linear_equiv.coe_to_linear_map, coe_inr, function.comp_app, linear_equiv.prod_apply,
dual_tensor_hom_equiv_of_basis_apply, map_zero, prod_map_apply, coprod_apply, id_coe, id.def,
zero_add, prod_map_linear_apply, zero_prod_map_dual_tensor_hom, trace_eq_contract_apply,
contract_left_apply, snd_apply], },
Collaborator

If you added `@[simp]` to `dual_tensor_hom_prod_map_zero` and `zero_prod_map_dual_tensor_hom` (seems reasonable?), then this could just be `ext; simp`.

I wouldn't insist on this, however.

(I think unnecessarily squeezing simps can obfuscate proofs: it's hard to tell, looking at a big `simp only`, whether it is only there to speed things up or whether real work is happening.)
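As a toy illustration of that suggestion (the definitions below are made up for illustration and are not from this PR): once the key rewriting lemma carries `@[simp]`, a goal that `ext` reduces to componentwise equalities can often be closed by a bare `simp` instead of a long `simp only [...]` list.

```lean
import tactic

-- Toy example only; none of these names come from the PR.
def double (n : ℕ) : ℕ := n + n

-- Tag the key unfolding lemma as a simp lemma...
@[simp] lemma double_apply (n : ℕ) : double n = n + n := rfl

-- ...then `ext` followed by a bare `simp` closes the goal, with no need to
-- list every rewrite by hand.
example : (λ n, double n + 0) = (λ n, n + n) :=
by { ext n, simp }
```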

Collaborator Author

Sounds fair. I was unsure, for these kinds of proofs, whether to prioritize conciseness with `simp` or speed with `simp only`. Since the difference in compilation time was significant, I chose `simp only`, but I can totally change it to `simp` if you believe that's better practice.
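For context, the trade-off on a toy goal (not code from this PR): a bare `simp` searches the whole default simp set, which is short to write but slower to elaborate, while `simp only [...]` uses exactly the lemmas listed, which is faster but produces long lists like the ones above.

```lean
import tactic

-- The same toy goal closed both ways.
example (a b : ℕ) (h : a = b) : a + 0 = b :=
by simp [h]                  -- concise: searches the default simp set plus `h`

example (a b : ℕ) (h : a = b) : a + 0 = b :=
by simp only [add_zero, h]   -- squeezed: only the listed rewrites are tried
```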

Collaborator Author

Hm, I made the change, and even though it works locally, CI now fails at `build mathlib` with a deterministic timeout... Maybe we don't really have a choice but to use `simp only` here.

Collaborator

Sadness. Lean 3 is running out of steam.

@semorrison (Collaborator)

bors merge

@github-actions bot added the ready-to-merge label and removed the awaiting-review label May 17, 2022
@bors bot commented May 17, 2022

Canceled.

@riccardobrasca (Member)

Can you please merge master and see if it works?

@antoinelab01 (Collaborator Author)

> Can you please merge master and see if it works?

Done. I think someone needs to call `bors merge` again.

@riccardobrasca (Member)

bors merge

bors bot pushed a commit that referenced this pull request May 20, 2022
In this PR I prove that the trace is additive under `prod_map`, i.e. that `trace (prod_map f g) = trace f + trace g`. 



Co-authored-by: antoinelab01 <66086247+antoinelab01@users.noreply.github.com>
@bors bot commented May 20, 2022

Pull request successfully merged into master.

Build succeeded.

@bors bot changed the title from "feat(linear_algebra/trace): trace of prod_map" to "[Merged by Bors] - feat(linear_algebra/trace): trace of prod_map" May 20, 2022
@bors bot closed this May 20, 2022
@bors bot deleted the trace_prod_map branch May 20, 2022 07:06
bors bot pushed a commit that referenced this pull request May 20, 2022
This is proved under the `field` assumption instead of the finite free module assumptions generally used to talk about the trace, because we need the submodules `p` and `f.ker` to also be free and finite.

- [x] depends on: #13872 


Co-authored-by: antoinelab01 <66086247+antoinelab01@users.noreply.github.com>