
[autoparallel] Patch meta information of torch.matmul #2584

Merged
59 commits merged into hpcaitech:main on Feb 8, 2023

Conversation

@Cypher30 (Contributor) commented Feb 6, 2023

📌 Checklist before creating the PR

  • I have created an issue for this PR for traceability
  • The title follows the standard format: [doc/gemini/tensor/...]: A concise description
  • I have added relevant tags if possible for us to better distinguish different PRs

🚨 Issue number

Link this PR to your issue with keywords like "fixed" to automatically close the linked issue upon merge, e.g. fixed #1234, closed #1234, resolved #1234.

Resolved #2582

📝 What does this PR do?

Summarize your work here. If you have any plots/diagrams/screenshots/tables, please attach them here.

In this PR, I patch the meta information of torch.matmul. The meta information for this operation is now available to both the SPMD solver and the auto activation checkpoint solver. However, the shape information given by the SPMD solver could not be matched; I have contacted @YuliangLiu0306 to look into this problem.
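For context, here is a minimal sketch of what a meta-information patch for torch.matmul involves, assuming the common approach of propagating shapes on PyTorch's meta device and estimating FLOPs from the contracted dimension. The function name `matmul_meta_info` and the 2·N·k FLOP model are illustrative only, not the PR's actual API:

```python
# A minimal sketch (not this PR's actual implementation) of meta
# information for torch.matmul: the output shape is propagated on
# PyTorch's "meta" device (no real computation happens), and a FLOP
# estimate is derived from the contracted dimension. Solvers such as
# the SPMD solver and the activation checkpoint solver consume this
# kind of shape/cost data.
import torch

def matmul_meta_info(a_shape, b_shape):
    # Meta tensors carry only shape/dtype, so this matmul costs nothing
    # but still applies the full broadcasting/shape rules of torch.matmul.
    a = torch.empty(a_shape, device="meta")
    b = torch.empty(b_shape, device="meta")
    out = torch.matmul(a, b)

    # FLOP model (assumption): each output element takes k multiply-adds,
    # where k is the contracted dimension (last dim of the first operand).
    k = a_shape[-1]
    flops = 2 * out.numel() * k
    return tuple(out.shape), flops

# Example: batched matmul with broadcasting on the second operand.
shape, flops = matmul_meta_info((8, 32, 64), (64, 16))
print(shape, flops)  # (8, 32, 16) 524288
```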

💥 Checklist before requesting a review

  • I have linked my PR to an issue (instruction)
  • My issue clearly describes the problem/feature/proposal, with diagrams/charts/table/code if possible
  • I have performed a self-review of my code
  • I have added thorough tests.
  • I have added docstrings for all the functions/methods I implemented

⭐️ Do you enjoy contributing to Colossal-AI?

  • 🌝 Yes, I do.
  • 🌚 No, I don't.

Tell us more if you don't enjoy contributing to Colossal-AI.

Cypher30 and others added 30 commits · July 14, 2022 16:07
@Cypher30 Cypher30 added the auto-parallel related to the auto-parallel feature label Feb 6, 2023
@YuliangLiu0306 (Contributor)

Please fix the CI error.

@Cypher30 Cypher30 changed the title [auto_parallel/meta_profiler] Patch meta information of torch.matmul [autoparallel] Patch meta information of torch.matmul Feb 7, 2023
github-actions bot commented Feb 7, 2023

The code coverage for the changed files is 23%.

Complete report:
Name                                                                                  Stmts   Miss  Cover
---------------------------------------------------------------------------------------------------------
colossalai/auto_parallel/meta_profiler/meta_registry/linear.py                          114    100    12%
colossalai/auto_parallel/tensor_shard/node_handler/matmul_handler.py                    266    216    19%
colossalai/auto_parallel/tensor_shard/node_handler/node_handler.py                      155     95    39%
colossalai/fx/profiler/opcount.py                                                       114     90    21%
tests/test_auto_parallel/test_tensor_shard/test_metainfo/test_matmul_metainfo.py         69     49    29%
tests/test_auto_parallel/test_tensor_shard/test_node_handler/test_matmul_handler.py      87     73    16%
---------------------------------------------------------------------------------------------------------
TOTAL                                                                                   805    623    23%

@Cypher30 (Contributor, Author) commented Feb 7, 2023

> Please fix the CI error.

Done! You can approve this PR after double-checking everything.

@YuliangLiu0306 YuliangLiu0306 merged commit 90a9fdd into hpcaitech:main Feb 8, 2023
Labels
auto-parallel related to the auto-parallel feature
Projects
None yet
Development

Successfully merging this pull request may close these issues.

[FEATURE]: Meta information patch for torch.matmul
2 participants