This repository was archived by the owner on May 8, 2024. It is now read-only.

Add xpu device support #1

Closed
feng-intel wants to merge 1 commit into oneapi-src:main from feng-intel:main

Conversation

@feng-intel

  1. Fix ipex inference bug: choose the first tensor of the model() output, matching the ONNX InferenceSession path.
  2. Add xpu device support: move the model and data tensors to xpu.
  3. Add a --device argument as shown below. The default is cpu:
     $ python src/generate_text.py --model_config configs/config_finetuned_intel.yml --device cpu/xpu
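The `--device` flag described in point 3 can be sketched with argparse (a minimal sketch; the flag names follow the description above, everything else is illustrative):

```python
import argparse

# Minimal sketch of the --device flag; restricting choices to cpu/xpu
# makes argparse reject anything else with a clear error message.
parser = argparse.ArgumentParser(description="Generate text")
parser.add_argument('--model_config', type=str,
                    help="Path to the model config YAML")
parser.add_argument('--device', type=str, default='cpu',
                    choices=['cpu', 'xpu'],
                    help="Device to run on: cpu (default) or xpu for Intel GPUs")

args = parser.parse_args(['--device', 'xpu'])
print(args.device)  # -> xpu
```

With no `--device` argument at all, `args.device` falls back to the default `'cpu'`.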


@yinghu5 left a comment


Extend the case to xpu support.

@devpramod-intel

Hi @feng-intel

  1. env/intel/text-intel-torch.yml needs to be updated to include the ipex xpu package.
  2. The README must document '--device cpu', with a note that '--device xpu' runs the code on Intel's GPUs.

1 similar comment

@feng-intel
Author

> env/intel/text-intel-torch.yml needs to be updated to have ipex xpu package

Hi @devpramod-intel
I think "intel_extension_for_pytorch==1.13.100" already includes ipex xpu support.

@devpramod-intel

@feng-intel Could you please add a new config file for xpu? We don't want to change the previously working config file.

@devpramod-intel

> env/intel/text-intel-torch.yml needs to be updated to have ipex xpu package
> Hi @devpramod-intel
> I think "intel_extension_for_pytorch==1.13.100" already includes ipex xpu support.

@feng-intel I believe we need to install intel_extension_for_pytorch==1.13.10+xpu

@feng-intel feng-intel force-pushed the main branch 3 times, most recently from b1621f8 to bd20c28, on March 27, 2023 02:24
@devpramod-intel

@feng-intel the new xpu env creation file has an issue:

```
Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound:
  - pytorch::pytorch=1.13.0a0
```

Comment thread: src/generate_text.py

      model(
          input_ids=all_token_ids,
-         attention_mask=all_attention_masks)[:, -1, :], dim=1)
+         attention_mask=all_attention_masks)[0][:, -1, :], dim=1)


@feng-intel please make the 'if-else' change as we discussed
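The requested 'if-else' could be sketched as follows (the helper name is illustrative, not from the patch): index the first element only when the model output is a sequence, which covers both the ipex-optimized model and the ONNX InferenceSession return values.

```python
# Illustrative helper: pick the logits from a model output that may be either
# a bare tensor or a tuple/list whose first element is the logits.
def select_logits(model_output):
    if isinstance(model_output, (tuple, list)):
        return model_output[0]  # ipex / ONNX InferenceSession path
    return model_output         # plain tensor path

print(select_logits(("logits", "past_key_values")))  # -> logits
print(select_logits("logits"))                       # -> logits
```

This keeps one code path downstream instead of repeating the indexing at every call site.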

@devpramod-intel

devpramod-intel commented Apr 3, 2023

The script "training_args.py" is missing "import intel_extension_for_pytorch as ipex".
This leads to the following error:
AttributeError: module 'torch' has no attribute 'xpu'
Please also include this change in the patch file.
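A sketch of the missing-import fix (the file path follows the comment above; the context line is illustrative). Importing intel_extension_for_pytorch registers the `xpu` attribute on `torch`, which is why its absence raises the AttributeError:

```diff
--- a/python/site-packages/transformers/training_args.py
+++ b/python/site-packages/transformers/training_args.py
 import torch
+import intel_extension_for_pytorch as ipex  # registers torch.xpu
```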

Comment thread: README.md (Outdated)

### Optimized Solution Setup

Follow the below conda installation commands to setup the Intel® oneAPI optimized PyTorch environment for model training and text generation.


Suggested change
Follow the below conda installation commands to setup the Intel® oneAPI optimized PyTorch environment for model training and text generation.
Follow the below conda installation commands to setup the Intel® oneAPI optimized PyTorch environment for model training and text generation. Please choose `env/intel/text-intel-torch-xpu.yml` if you have Intel GPU available.

Comment thread: README.md
Comment on lines +255 to +257
conda env create -f env/intel/text-intel-torch-cpu.yml # For CPU
conda env create -f env/intel/text-intel-torch-xpu.yml # For XPU


Suggested change
conda env create -f env/intel/text-intel-torch-cpu.yml # For CPU
conda env create -f env/intel/text-intel-torch-xpu.yml # For XPU
conda env create -f env/intel/text-intel-torch-cpu.yml # For CPU
or
conda env create -f env/intel/text-intel-torch-xpu.yml # For XPU

Comment thread: README.md (Outdated)
```sh
conda activate text-intel-torch
```
Especially for XPU platform:


Suggested change
Especially for XPU platform:
Please perform this additional installation step only if you are using an Intel GPU; CPU users can skip this step:

Comment thread: README.md (Outdated)
Comment on lines +294 to +295
**For intel xpu training**, 'xpu' device is needed to add to 'python/site-packages/transformers'.
It can be changed manually as the following:


Suggested change
**For intel xpu training**, 'xpu' device is needed to add to 'python/site-packages/transformers'.
It can be changed manually as the following:
**For Intel GPU training**, device 'xpu' must be added to 'python/site-packages/transformers/training_args.py' as shown in the diff below.
It can be changed manually as follows:

Comment thread: README.md (Outdated)
device = xm.xla_device()
self._n_gpu = 0
```
Or run this python code only once:

@devpramod-intel devpramod-intel Apr 24, 2023


Suggested change
Or run this python code only once:
Or, to apply the patch automatically, please run the following Python script only once:

Comment thread: README.md

**For intel xpu training**, 'xpu' device is needed to add to 'python/site-packages/transformers'.
It can be changed manually as the following:
```python


Please update the diff with the latest one (the one that also has "import intel_extension_for_pytorch as ipex").

Comment thread: src/generate_text.py
type=int,
default=10
)
parser.add_argument('--device',

@devpramod-intel devpramod-intel Apr 24, 2023


please add a choice here between cpu and xpu

Author


I don't get it.

Comment thread: README.md
Comment on lines +313 to +315
$ cd src
$ python ./apply_xpu_patch.py

@devpramod-intel devpramod-intel Apr 24, 2023


Suggested change
$ cd src
$ python ./apply_xpu_patch.py
$ cd src
$ python ./apply_xpu_patch.py
and then run the fine-tuning step as follows:
@feng-intel please add the fine-tuning step here, same as above

@devpramod-intel

Hi @feng-intel, here's the final change.
Could you please also add the following in two places, at lines 342 and 382:

To run inference on an Intel GPU, please run the following:
python src/generate_text.py --model_config configs/config_finetuned_inc.yml --device xpu

1. Fix ipex inference bug: choose the first tensor of the model() output, matching the ONNX InferenceSession path.
2. Add xpu device support: move the model and data tensors to xpu.
3. Add a --device argument as shown below. The default is cpu:
    $ python src/generate_text.py --model_config configs/config_finetuned_intel.yml --device cpu/xpu
aagalleg added a commit that referenced this pull request Feb 8, 2024
* Remove stock env.

* Remove text-intel-torch.yml

* Add intel_env.yml

* Update quantize_inc_gpt2.py

-Fix broken instructions.

* Update generate_text.py

-Add xpu compatibility.

* Add .gitignore

* Remove inference-transformers.png

* Remove E2E_stock-transformers.png

* Remove intel flag from finetune_model.py

* Add kaggle to intel_env.yml

* Update README.md

- Remove stock references
- Correct style
- Redistribute information
- Add Introduction
- Add Solution Technical Overview
- Add Solution Technical Details
- Add Validated Hardware Details
- Add How it Works
- Add Get Started
- Add Supported Runtime Environment
- Add Summary and Next Steps
- Add Appendix

* Remove README.md from data directory

* Update SECURITY.md file

* Update intel_env.yml file

- Add gperftools to dependencies.

* Remove config_finetuned.yml

* Move prompt.csv to config dir

* Add gpt_generate_text.py file

* Add files to patch transformers to use xpu.

* Added logger to gptj_generate_text.py

* Update README.md

- Correct styles
- Fix typos
- Add sections
- Format commands

* Fix bfloat16 RuntimeError

* Update intel_env.yml dependencies

* Update transformers_xpu.patch

* Updated license year to 2024

* Remove xpu dependency from intel_env.yml

* Create intel_env_xpu.yml
- Differs from intel_env.yml by including an intel-extension-for-pytorch version capable of using XPU.

* Add instructions to use XPU to README.md

* Fix typos and correct instructions in README.md

* Add blank space line at EOF

* Remove inconsistent sentence from README.md
@aagalleg aagalleg deleted the branch oneapi-src:main February 8, 2024 21:49
@aagalleg aagalleg closed this Feb 8, 2024