Conversation
feng-intel
commented
Mar 22, 2023
- Fix ipex inference bug: choose the first tensor of the model() output, as is done for the onnx InferenceSession
- Add xpu device support: move the model and data tensors to xpu
- Add a --device argument as below; the default is cpu:
  $ python src/generate_text.py --model_config configs/config_finetuned_intel.yml --device cpu/xpu
Hi @feng-intel
@feng-intel Could you please add a new config file for xpu, as we don't want to change the previously working config file.
Force-pushed b1621f8 to bd20c28
@feng-intel the new xpu env creation file has an issue: `ResolvePackageNotFound`
```diff
 model(
     input_ids=all_token_ids,
-    attention_mask=all_attention_masks)[:, -1, :], dim=1)
+    attention_mask=all_attention_masks)[0][:, -1, :], dim=1)
```
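The `[0]` in the suggested line matters because a transformers model call, like an onnx `InferenceSession.run`, returns a sequence whose first element is the logits tensor, so slicing the output directly would fail. A minimal sketch with a toy stand-in for `model()` (the stand-in, the shapes, and the use of numpy in place of torch are assumptions for brevity):

```python
import numpy as np

# Toy stand-in for a transformers model call: like the real thing, it returns
# a tuple whose first element is the logits of shape (batch, seq_len, vocab).
def model(input_ids, attention_mask):
    batch, seq_len = input_ids.shape
    vocab = 5  # arbitrary small vocabulary for illustration
    return (np.zeros((batch, seq_len, vocab)),)

all_token_ids = np.ones((2, 3), dtype=np.int64)
all_attention_masks = np.ones((2, 3), dtype=np.int64)

# Without [0] we would be slicing the tuple itself; with [0] we first select
# the logits tensor and then take the last position, as in the fix above.
next_token_logits = model(
    input_ids=all_token_ids,
    attention_mask=all_attention_masks)[0][:, -1, :]
print(next_token_logits.shape)  # (2, 5)
```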
@feng-intel please make the 'if-else' change as we discussed
The script "training_args.py" is missing "import intel_extension_for_pytorch as ipex".
```diff
 ### Optimized Solution Setup

-Follow the below conda installation commands to setup the Intel® oneAPI optimized PyTorch environment for model training and text generation.
+Follow the below conda installation commands to setup the Intel® oneAPI optimized PyTorch environment for model training and text generation. Please choose `env/intel/text-intel-torch-xpu.yml` if you have Intel GPU available.
```
```diff
 conda env create -f env/intel/text-intel-torch-cpu.yml # For CPU
+or
 conda env create -f env/intel/text-intel-torch-xpu.yml # For XPU
```
```sh
conda activate text-intel-torch
```

```diff
-Especially for XPU platform:
+Please perform this additional installation step only if you are using an Intel GPU; CPU users can skip this step:
```
```diff
-**For intel xpu training**, 'xpu' device is needed to add to 'python/site-packages/transformers'.
-It can be changed manually as the following:
+**For Intel GPU training**, the 'xpu' device must be added to 'python/site-packages/transformers/training_args.py', as shown in the diff below.
+It can be changed manually as follows:
```
```python
        device = xm.xla_device()
        self._n_gpu = 0
```

```diff
-Or run this python code only once:
+Or, to apply the patch automatically, please run the following Python script only once:
```
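A run-once patch script of this kind could look roughly like the following idempotent text patch; the helper name, marker, and inserted line are hypothetical, and the demo runs on a throwaway file standing in for `training_args.py`:

```python
import os
import tempfile

def patch_once(path, marker, addition):
    # Idempotent text patch: insert `addition` after every line containing
    # `marker`, but only if `addition` is not already present (run-once).
    with open(path) as f:
        lines = f.readlines()
    if any(addition.strip() in ln for ln in lines):
        return False  # already patched; nothing to do
    out = []
    for ln in lines:
        out.append(ln)
        if marker in ln:
            out.append(addition)
    with open(path, "w") as f:
        f.writelines(out)
    return True

# Demo on a throwaway file standing in for transformers' training_args.py
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("        device = xm.xla_device()\n        self._n_gpu = 0\n")
    tmp = f.name

print(patch_once(tmp, "self._n_gpu = 0", '        device = "xpu"  # sketch\n'))  # True
print(patch_once(tmp, "self._n_gpu = 0", '        device = "xpu"  # sketch\n'))  # False
os.remove(tmp)
```

The second call returns False because the patch is already present, which is what makes it safe to describe as "run only once".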
> **For intel xpu training**, 'xpu' device is needed to add to 'python/site-packages/transformers'.
> It can be changed manually as the following:

Please update the diff with the latest one (one that also has "import intel_extension_for_pytorch as ipex").
```python
    type=int,
    default=10
)
parser.add_argument('--device',
```

Please add a choice here between cpu and xpu.
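A minimal sketch of the requested change, assuming standard `argparse` (the `choices` parameter rejects any value other than the listed ones, and `default='cpu'` matches the PR description):

```python
import argparse

parser = argparse.ArgumentParser()
# Restrict --device to the two supported backends; default to cpu.
parser.add_argument('--device',
                    type=str,
                    choices=['cpu', 'xpu'],
                    default='cpu',
                    help='Device to run on: cpu or xpu')

args = parser.parse_args(['--device', 'xpu'])
print(args.device)  # xpu
```

Passing any other value, e.g. `--device cuda`, makes argparse exit with a usage error listing the valid choices.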
```diff
 $ cd src
 $ python ./apply_xpu_patch.py
+and then run the fine tuning step as follows:
```

@feng-intel please add the fine-tuning step here, same as above.
Hi @feng-intel, here's the final change: "To run inference on an Intel GPU, please run the following"
1. Fix ipex inference bug: choose the first tensor of the model() output, as is done for the onnx InferenceSession
2. Add xpu device support: move the model and data tensors to xpu
3. Add a --device argument as below; the default is cpu:
   $ python src/generate_text.py --model_config configs/config_finetuned_intel.yml --device cpu/xpu
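Item 2 amounts to moving the model parameters and input tensors onto the selected device. A minimal PyTorch sketch, shown on cpu; on a machine with an Intel GPU and intel_extension_for_pytorch installed, the same `.to()` calls would work with `torch.device('xpu')` (an assumption based on the PR description):

```python
import torch

# Device selected from the --device argument; 'xpu' becomes available after
# `import intel_extension_for_pytorch` on machines with an Intel GPU (assumption).
device = torch.device("cpu")

model = torch.nn.Linear(4, 2).to(device)  # move model parameters to the device
x = torch.randn(1, 4).to(device)          # move input tensors the same way
y = model(x)
print(y.shape)  # torch.Size([1, 2])
```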
* Remove stock env.
* Remove text-intel-torch.yml
* Add intel_env.yml
* Update quantize_inc_gpt2.py - Fix broken instructions.
* Update generate_text.py - Add xpu compatibility.
* Add .gitignore
* Remove inference-transformers.png
* Remove E2E_stock-transformers.png
* Remove intel flag from finetune_model.py
* Add kaggle to intel_env.yml
* Update README.md
  - Remove stock references
  - Correct style
  - Redistribute information
  - Add Introduction
  - Add Solution Technical Overview
  - Add Solution Technical Details
  - Add Validated Hardware Details
  - Add How it Works
  - Add Get Started
  - Add Supported Runtime Environment
  - Add Summary and Next Steps
  - Add Appendix
* Remove README.md from data directory
* Update SECURITY.md file
* Update intel_env.yml file - Add gperftools to dependencies.
* Remove config_finetuned.yml
* Move prompt.csv to config dir
* Add gpt_generate_text.py file
* Add files to patch transformers to use xpu.
* Added logger to gptj_generate_text.py
* Update README.md
  - Correct styles
  - Fix typos
  - Add sections
  - Format commands
* Fix bfloat16 RuntimeError
* Update intel_env.yml dependencies
* Update transformers_xpu.patch
* Updated license year to 2024
* Remove xpu dependency from intel_env.yml
* Create intel_env_xpu.yml - Differs with intel_env.yml by including an intel-extension-for-pytorch version capable of using XPU.
* Add instructions to use XPU to README.md
* Make typo and corrections to instructions for README.md
* Add blank space line at EOF
* Remove inconsistent sentence from README.md