
adding lllama fairscale #2604

Open

wants to merge 10 commits into master
Conversation

@HamidShojanazeri (Collaborator) commented Sep 20, 2023

Description

Adding FairScale Llama to TorchServe

Fixes #(issue)

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • New feature (non-breaking change which adds functionality)
  • This change requires a documentation update

Feature/Issue validation/testing

Please describe the Unit or Integration tests that you ran to verify your changes and relevant result summary. Provide instructions so it can be reproduced.
Please also list any relevant details for your test configuration.

  • Test A
    Logs

  • Test B
    Logs for Test B

Checklist:

  • Did you have fun?
  • Have you added tests that prove your fix is effective or that this feature works?
  • Has code been commented, particularly in hard-to-understand areas?
  • Have you made corresponding changes to the documentation?

@codecov bot commented Sep 20, 2023

Codecov Report

Merging #2604 (19edba1) into master (7f4419f) will not change coverage.
The diff coverage is n/a.

❗ Current head 19edba1 differs from pull request most recent head f144612. Consider uploading reports for the commit f144612 to get more accurate results

@@           Coverage Diff           @@
##           master    #2604   +/-   ##
=======================================
  Coverage   72.44%   72.44%           
=======================================
  Files          85       85           
  Lines        3963     3963           
  Branches       58       58           
=======================================
  Hits         2871     2871           
  Misses       1088     1088           
  Partials        4        4           


@HamidShojanazeri changed the title from "[WIP] adding lllama fairscale" to "adding lllama fairscale" on Oct 12, 2023
@chauhang (Contributor) left a comment

Thanks @HamidShojanazeri for this PR. Please see comments inline. It would be good to keep the readme and config files consistent with a single example -- e.g. use the 13B model as the base and explain everything for that.

Comment on lines +49 to +50
model_path: "PATH/TO/MODEL_CHECKPOINTS"
tokenizer_path: "PATH/TO/MODEL_CHECKPOINTS/tokenizer.model"
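For context, the handler quoted later in this review reads these values from a "handler" section of model-config.yaml, so a minimal sketch of how they nest might look like the following (the nesting is inferred from the handler code, not taken from this diff hunk):

```yaml
# Minimal sketch; nesting inferred from ctx.model_yaml_config["handler"][...]
handler:
  model_path: "PATH/TO/MODEL_CHECKPOINTS"
  tokenizer_path: "PATH/TO/MODEL_CHECKPOINTS/tokenizer.model"
```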

### Step 3: Generate MAR file

```bash
torch-model-archiver --model-name llama --version 1.0 --handler llama-handler.py --config-file model-config.yaml --archive-format tgz -r requirements.txt
```

change "--archive-format tgz" to "--archive-format no-archive"

Comment on lines +48 to +49
model_path = ctx.model_yaml_config["handler"]["model_path"]
tokenizer_path = ctx.model_yaml_config["handler"]["tokenizer_path"]

Could you change this to match https://github.com/pytorch/serve/blob/master/examples/large_models/tp_llama/llama-handler.py#L68C1-L68C1?

i.e. model_path = f'{model_dir}/{ctx.model_yaml_config["handler"]["model_path"]}'
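A minimal sketch of that suggestion applied to both paths, assuming model_dir comes from the TorchServe context's system properties as in the linked handler:

```python
# Sketch of the reviewer's suggestion: resolve both paths relative to the
# model directory that TorchServe extracts the MAR contents into.
model_dir = ctx.system_properties.get("model_dir")
model_path = f'{model_dir}/{ctx.model_yaml_config["handler"]["model_path"]}'
tokenizer_path = f'{model_dir}/{ctx.model_yaml_config["handler"]["tokenizer_path"]}'
```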

torch.manual_seed(seed)

logger.info("Instantiating Llama model")
self.model = Llama.build(

Quick question: should we provide an option to defer initialization for llama2-70b?
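For illustration only, one common deferred-initialization pattern in recent PyTorch is to construct the module on the "meta" device and materialize storage only when the checkpoint is loaded; whether Llama.build can adopt this is the open question here, and the module below is just a stand-in:

```python
import torch
import torch.nn as nn

# Illustrative sketch (not from this PR): build on the "meta" device so no
# parameter memory is allocated, then materialize empty storage and load
# the real weights. nn.Linear stands in for the actual Llama model.
with torch.device("meta"):
    model = nn.Linear(4096, 4096)

model = model.to_empty(device="cpu")            # allocate uninitialized storage
state_dict = {
    "weight": torch.zeros(4096, 4096),          # real checkpoint tensors in practice
    "bias": torch.zeros(4096),
}
model.load_state_dict(state_dict, assign=True)  # requires PyTorch >= 2.1
```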
