
[Sync Pipeline Inference] Sync pipeline inference branch to main #4820

Merged · 7 commits merged into main on Oct 11, 2023

Conversation

@FoolPlayer (Contributor) commented Sep 27, 2023

📌 Checklist before creating the PR

  • I have created an issue for this PR for traceability
  • The title follows the standard format: [doc/gemini/tensor/...]: A concise description
  • I have added relevant tags if possible for us to better distinguish different PRs

🚨 Issue number

Link this PR to your issue with words like fixed to automatically close the linked issue upon merge

e.g. fixed #1234, closed #1234, resolved #1234

📝 What does this PR do?

Summarize your work here.
If you have any plots/diagrams/screenshots/tables, please attach them here.

💥 Checklist before requesting a review

  • I have linked my PR to an issue (instruction)
  • My issue clearly describes the problem/feature/proposal, with diagrams/charts/table/code if possible
  • I have performed a self-review of my code
  • I have added thorough tests
  • I have added docstrings for all the functions/methods I implemented

⭐️ Do you enjoy contributing to Colossal-AI?

  • 🌝 Yes, I do.
  • 🌚 No, I don't.

Tell us more if you don't enjoy contributing to Colossal-AI.

github-actions bot commented Oct 9, 2023

The code coverage for the changed files is 68%.

Click me to view the complete report
Name                                                            Stmts   Miss  Cover
-----------------------------------------------------------------------------------
colossalai/inference/__init__.py                                    2      0   100%
colossalai/inference/pipeline/__init__.py                           2      0   100%
colossalai/inference/pipeline/engine.py                            34      3    91%
colossalai/inference/pipeline/microbatch_manager.py               117      4    97%
colossalai/inference/pipeline/modeling/__init__.py                  0      0   100%
colossalai/inference/pipeline/modeling/gpt2.py                    124     43    65%
colossalai/inference/pipeline/modeling/llama.py                    91     91     0%
colossalai/inference/pipeline/policy/gpt2_ppinfer.py               43      5    88%
colossalai/inference/pipeline/utils.py                             15     15     0%
colossalai/pipeline/p2p.py                                        137     46    66%
colossalai/pipeline/schedule/generate.py                          191     84    56%
colossalai/pipeline/stage_manager.py                               49      0   100%
tests/test_checkpoint_io/test_low_level_zero_checkpoint_io.py      59      1    98%
tests/test_infer/test_pipeline_infer.py                            43      1    98%
-----------------------------------------------------------------------------------
TOTAL                                                             907    293    68%

@FoolPlayer FoolPlayer merged commit 08a9f76 into main Oct 11, 2023
6 of 7 checks passed
@ver217 ver217 deleted the feature/pipeline-infer branch October 13, 2023 09:31
flybird11111 pushed a commit to flybird11111/ColossalAI that referenced this pull request Oct 18, 2023
[Sync Pipeline Inference] Sync pipeline inference branch to main (hpcaitech#4820)

* [pipeline inference] pipeline inference (hpcaitech#4492)

* add pp stage manager as circle stage

* fix a bug when create process group

* add ppinfer basic framework

* add micro batch manager and support kvcache-pp gpt2 fwd

* add generate schedule

* use mb size to control mb number

* support generate with kv cache

* add output, remove unused code

* add test

* reuse shardformer to build model

* refactor some code and use the same attribute name of hf

* fix review and add test for generation

* remove unused file

* fix CI

* add cache clear

* fix code error

* fix typo
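The commits above introduce a micro batch manager and "use mb size to control mb number". As a rough illustration of that idea (function and parameter names here are hypothetical, not the actual ColossalAI API), the number of micro-batches falls out of the batch size and the chosen micro-batch size:

```python
# Illustrative sketch only: a minimal micro-batch splitter in the spirit of
# the "micro batch manager" commits. Not the ColossalAI implementation.
from typing import List, Sequence, TypeVar

T = TypeVar("T")

def split_into_microbatches(batch: Sequence[T], mb_size: int) -> List[List[T]]:
    """Split a batch into micro-batches of at most `mb_size` items.

    The micro-batch count is derived from the batch size:
    mb_number = ceil(len(batch) / mb_size).
    """
    if mb_size <= 0:
        raise ValueError("mb_size must be positive")
    return [list(batch[i : i + mb_size]) for i in range(0, len(batch), mb_size)]
```

For example, a batch of 10 requests with `mb_size=4` yields three micro-batches of sizes 4, 4, and 2, which the pipeline schedule can then interleave across stages.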

* [Pipeline inference] Modify to tieweight (hpcaitech#4599)

* add pp stage manager as circle stage

* fix a bug when create process group

* add ppinfer basic framework

* add micro batch manager and support kvcache-pp gpt2 fwd

* add generate schedule

* use mb size to control mb number

* support generate with kv cache

* add output, remove unused code

* add test

* reuse shardformer to build model

* refactor some code and use the same attribute name of hf

* fix review and add test for generation

* remove unused file

* modify the way of saving newtokens

* modify to tieweight

* modify test

* remove unused file

* solve review

* add docstring
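The "modify to tieweight" commit refers to weight tying, where the output projection (LM head) shares its matrix with the input embedding, as GPT-2 does. A minimal conceptual sketch (class and attribute names are illustrative only):

```python
# Conceptual sketch of weight tying: the LM head reuses the embedding
# matrix, so the two are the same object and updates to one affect the
# other. Plain Python lists are used for portability; this is not the
# ColossalAI implementation.
class TiedLM:
    def __init__(self, vocab_size: int, hidden: int):
        # embedding[i] is the vector for token i
        self.embedding = [[0.0] * hidden for _ in range(vocab_size)]
        # Tie: the output projection *is* the embedding table.
        self.lm_head_weight = self.embedding

    def logits(self, hidden_state):
        # logits[i] = <hidden_state, embedding[i]>
        return [sum(h * w for h, w in zip(hidden_state, row))
                for row in self.lm_head_weight]
```

For pipeline inference this matters because the tied matrix lives on both the first stage (embedding) and the last stage (LM head), so the sharding and loading logic must treat it as one parameter rather than two.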

* [Pipeline inference] support llama pipeline inference (hpcaitech#4647)

* support llama pipeline inference

* remove tie weight operation

* [pipeline inference] Fix the blocking of communication when ppsize is 2 (hpcaitech#4708)

* add benchmark verbose

* fix export tokens

* fix benchmark verbose

* add P2POp style to do p2p communication

* modify schedule as p2p type when ppsize is 2

* remove unused code and add docstring
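The "P2POp style" commit addresses the ppsize=2 case, where each stage must both send to and receive from the same peer: with blocking point-to-point calls, the two ranks must order their send/recv ops so they pair up, while batched async ops (as with `torch.distributed.P2POp` plus `batch_isend_irecv`) let the backend match them. A toy analogy of the ordering fix, using threads and queues in place of real communication channels (purely illustrative, not the ColossalAI code):

```python
# Toy illustration of deadlock-free pairwise exchange between two ranks:
# the even rank sends first, the odd rank receives first. Queues stand in
# for p2p channels; in real code, batched P2POps make this ordering the
# backend's problem instead of the schedule's.
import queue
import threading

def exchange(rank, to_peer, from_peer, payload, out):
    if rank % 2 == 0:
        to_peer.put(payload)          # even rank: send, then receive
        out[rank] = from_peer.get()
    else:
        out[rank] = from_peer.get()   # odd rank: receive, then send
        to_peer.put(payload)

ch01, ch10 = queue.Queue(), queue.Queue()  # rank0->rank1, rank1->rank0
result = {}
t0 = threading.Thread(target=exchange, args=(0, ch01, ch10, "from-0", result))
t1 = threading.Thread(target=exchange, args=(1, ch10, ch01, "from-1", result))
t0.start(); t1.start(); t0.join(); t1.join()
# each rank ends up with the other's payload
```

If both ranks instead blocked on a matching-first operation (both send-first on a rendezvous channel, or both receive-first), neither could make progress, which is the blocking behavior the commit fixes.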

* [Pipeline inference] Refactor code, add docsting, fix bug (hpcaitech#4790)

* add benchmark script

* update argparse

* fix fp16 load

* refactor code style

* add docstring

* polish code

* fix test bug

* [Pipeline inference] Add pipeline inference docs (hpcaitech#4817)

* add readme doc

* add a ico

* Add performance

* update table of contents

* refactor code (hpcaitech#4873)
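Several commits above mention "support generate with kv cache". The idea is that during autoregressive decoding, each step caches the new token's key/value tensors so that attention over the history never has to be recomputed from scratch. A toy single-head sketch of that mechanism (illustrative only; the real pipeline caches per-layer tensors on each stage):

```python
# Toy single-head attention with a KV cache. Each decode step appends the
# new token's key/value and attends over the whole cached history, instead
# of reprocessing all previous tokens. Not the ColossalAI implementation.
import math

def attend(q, keys, values):
    # softmax(q . k / sqrt(d)) weighted sum over cached values
    d = len(q)
    scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in keys]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    w = [e / z for e in exps]
    return [sum(wi * v[j] for wi, v in zip(w, values))
            for j in range(len(values[0]))]

class KVCache:
    def __init__(self):
        self.keys, self.values = [], []

    def step(self, q, k, v):
        # Cache the new key/value, then attend over the full history.
        self.keys.append(k)
        self.values.append(v)
        return attend(q, self.keys, self.values)
```

With one cached entry the output is exactly that entry's value vector; each subsequent step grows the cache by one, so per-token cost stays proportional to the sequence length seen so far rather than recomputing all pairwise attention each step.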

3 participants