
IKE to edit Llama2 on ZsRE and Reproducing Editing Performance #9

Closed
DebapriyaHazra opened this issue Aug 22, 2023 · 1 comment
Labels: question (Further information is requested)

Comments

@DebapriyaHazra

Hello,

  1. How can we edit Llama-2 on ZsRE with the IKE method? I tried

python run_zsre_llama2.py --editing_method=IKE --hparams_dir=../tutorial-notebooks/hparams/IKE/llama-7b --data_dir=./data

and it fails with: assert 'train_ds' in kwargs.keys() or print('IKE need train_ds(For getting In-Context prompt)')

Where and how should we change the code?

  2. If we just want to reproduce the results of the Editing Performance table with the four metrics, how can we do that?

  3. We get a CUDA out-of-memory error while running the code in EasyEdit_Example_IKE.ipynb.
     Is there any other way apart from using the Hugging Face accelerator? Some way to reduce batch_size, etc.?

@zxlzr added the question (Further information is requested) label Aug 22, 2023
@pengzju (Collaborator) commented Aug 22, 2023

Problem 1: As noted in the README, IKE is not currently supported in the examples module. If you want to use IKE to edit the model, you first need to encode the train_set samples and pass train_ds as an argument to edit.

See for example: https://github.com/zjunlp/EasyEdit/blob/main/edit.py#L417 (a sketch following the same pattern is below).
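
Roughly, the flow looks like this. This is a minimal sketch assuming the usual EasyEdit API; the hparams path, data path, and the edit prompt/targets are placeholders you should adjust to your checkout:

    from sentence_transformers import SentenceTransformer
    from easyeditor import BaseEditor, IKEHyperParams, ZsreDataset
    from easyeditor.models.ike import encode_ike_facts

    # Load the IKE hyperparameters (path is an assumption)
    hparams = IKEHyperParams.from_hparams('./hparams/IKE/llama-7b')

    # IKE retrieves in-context demonstrations, so the training set has to be
    # loaded and encoded first
    train_ds = ZsreDataset('./data/zsre_mend_train_10000.json', config=hparams)
    sentence_model = SentenceTransformer(hparams.sentence_model_name).to(f'cuda:{hparams.device}')
    encode_ike_facts(sentence_model, train_ds, hparams)

    # Pass train_ds to edit() so IKE can build the in-context prompt
    editor = BaseEditor.from_hparams(hparams)
    metrics, edited_model, _ = editor.edit(
        prompts=['Which family does Epaspidoceras belong to?'],  # illustrative edit
        ground_truth=['Aspidoceratidae'],
        target_new=['Noctuidae'],
        train_ds=train_ds,
    )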

Problem 2: In the examples module (https://github.com/zjunlp/EasyEdit/tree/main/examples) we provide scripts to reproduce the experiments. The final edit metrics are stored in results.json in the format below; you only need to compute the corresponding averages to get the final editing performance (a small averaging sketch follows the skeleton). Later on, we will provide summarize.py to summarize the metrics.

{
    "post": {
        "rewrite_acc": ,
        "rephrase_acc": ,
        "locality": {
            "YOUR_LOCALITY_KEY": ,
            //...
        },
        "portablility": {
            "YOUR_PORTABILITY_KEY": ,
            //...
        },
    },
    "pre": {
        "rewrite_acc": ,
        "rephrase_acc": ,
        "portablility": {
            "YOUR_PORTABILITY_KEY": ,
            //...
        },
    }
}
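
For example, a small averaging sketch. It assumes results.json is a JSON list of per-edit records in the shape above, with metric values stored either as floats or one-element lists:

    import json
    from statistics import mean

    def as_float(v):
        # metric values may be stored as a float or a one-element list
        return float(v[0]) if isinstance(v, list) else float(v)

    with open('results.json') as f:
        results = json.load(f)  # assumed: a list of per-edit records as above

    posts = [r['post'] for r in results]
    print('Reliability   :', mean(as_float(p['rewrite_acc']) for p in posts))
    print('Generalization:', mean(as_float(p['rephrase_acc']) for p in posts))
    print('Locality      :', mean(as_float(v) for p in posts for v in p.get('locality', {}).values()))
    print('Portability   :', mean(as_float(v) for p in posts for v in p.get('portability', {}).values()))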

Problem 3: You can address the OOM problem in the following three ways (a sketch of the second option follows this list):

  • git clone the code and run on a GPU with more memory (an A800 with 80GB was used in the original paper)
  • If your resources are limited, consider using device_map='auto' for model parallelism (multiple GPUs needed); you can refer to the official Hugging Face description here
  • Reduce the value of k (the number of in-context demonstrations IKE retrieves)
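
For the second option, this is the standard Hugging Face mechanism being referred to; the model name is a placeholder (in EasyEdit the model is specified in the hparams YAML, so the equivalent change happens there):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = 'meta-llama/Llama-2-7b-hf'  # placeholder, use the model from your hparams
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float16,  # half precision roughly halves memory
        device_map='auto',          # shard layers across all visible GPUs
    )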

@pengzju closed this as completed Aug 22, 2023