
I can not run the recipe eleuther_eval #985

Open
MaxwelsDonc opened this issue May 15, 2024 · 1 comment

Comments

@MaxwelsDonc

I finetuned a Llama 3 model and tried to evaluate it by running the eleuther_eval recipe. Here is my command: tune run eleuther_eval --config ./eval_config.yaml, and the YAML file is as follows:

model:
  _component_: torchtune.models.llama3.llama3_8b

checkpointer:
  _component_: torchtune.utils.FullModelMetaCheckpointer
  checkpoint_dir: Meta-Llama-3-8B
  checkpoint_files: [meta_model_0.pt]
  recipe_checkpoint: null
  output_dir: Meta-Llama-3-8B
  model_type: LLAMA3

# Tokenizer
tokenizer:
  _component_: torchtune.models.llama3.llama3_tokenizer
  path: Meta-Llama-3-8B/original/tokenizer.model

# Environment
device: cuda
dtype: bf16
seed: 217

# EleutherAI specific eval args
tasks: ["gsm8k_cot"]
limit: null
max_seq_length: 4096

# Quantization specific args
quantizer: null

The directories in the YAML file are correct, but when I run the command, the output is:

2024-05-16:00:05:54,930 INFO     [_utils.py:34] Running EleutherEvalRecipe with resolved config:

checkpointer:
  _component_: torchtune.utils.FullModelMetaCheckpointer
  checkpoint_dir: Meta-Llama-3-8B
  checkpoint_files:
  - meta_model_0.pt
  model_type: LLAMA3
  output_dir: Meta-Llama-3-8B
  recipe_checkpoint: null
device: cuda
dtype: bf16
limit: null
max_seq_length: 4096
model:
  _component_: torchtune.models.llama3.llama3_8b
quantizer: null
seed: 217
tasks:
- gsm8k_cot
tokenizer:
  _component_: torchtune.models.llama3.llama3_tokenizer
  path: Meta-Llama-3-8B/original/tokenizer.model

2024-05-16:00:05:55,066 DEBUG    [seed.py:59] Setting manual seed to local seed 217. Local seed is seed + rank = 217 + 0
2024-05-16:00:05:58,121 INFO     [eleuther_eval.py:168] Model is initialized with precision torch.bfloat16.
2024-05-16:00:05:58,583 INFO     [eleuther_eval.py:152] Tokenizer is initialized from file.
Traceback (most recent call last):
  File "/home/zzh/miniconda3/envs/torchtune/bin/tune", line 8, in <module>
    sys.exit(main())
  File "/home/zzh/miniconda3/envs/torchtune/lib/python3.8/site-packages/torchtune/_cli/tune.py", line 49, in main
    parser.run(args)
  File "/home/zzh/miniconda3/envs/torchtune/lib/python3.8/site-packages/torchtune/_cli/tune.py", line 43, in run
    args.func(args)
  File "/home/zzh/miniconda3/envs/torchtune/lib/python3.8/site-packages/torchtune/_cli/run.py", line 179, in _run_cmd
    self._run_single_device(args)
  File "/home/zzh/miniconda3/envs/torchtune/lib/python3.8/site-packages/torchtune/_cli/run.py", line 93, in _run_single_device
    runpy.run_path(str(args.recipe), run_name="__main__")
  File "/home/zzh/miniconda3/envs/torchtune/lib/python3.8/runpy.py", line 265, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "/home/zzh/miniconda3/envs/torchtune/lib/python3.8/runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/home/zzh/miniconda3/envs/torchtune/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/zzh/miniconda3/envs/torchtune/lib/python3.8/site-packages/recipes/eleuther_eval.py", line 211, in <module>
    sys.exit(recipe_main())
  File "/home/zzh/miniconda3/envs/torchtune/lib/python3.8/site-packages/torchtune/config/_parse.py", line 50, in wrapper
    sys.exit(recipe_main(conf))
  File "/home/zzh/miniconda3/envs/torchtune/lib/python3.8/site-packages/recipes/eleuther_eval.py", line 207, in recipe_main
    recipe.evaluate()
  File "/home/zzh/miniconda3/envs/torchtune/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/zzh/miniconda3/envs/torchtune/lib/python3.8/site-packages/recipes/eleuther_eval.py", line 175, in evaluate
    model_eval_wrapper = _EvalWrapper(
  File "/home/zzh/miniconda3/envs/torchtune/lib/python3.8/site-packages/recipes/eleuther_eval.py", line 58, in __init__
    super().__init__(device=str(device))
TypeError: __init__() missing 1 required positional argument: 'pretrained'

I have looked through the source code here, but I can't debug it myself. Do you know why I keep encountering this error?
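For context, this failure pattern is reproducible without lm-eval at all: if a base class gains a new required positional argument, any subclass whose super().__init__ call was written against the old signature raises exactly this TypeError. A minimal, self-contained sketch (BaseLM and EvalWrapper are illustrative names, not Eleuther's actual classes):

```python
# Minimal reproduction of the TypeError pattern: a base class gains a
# required positional argument that the subclass does not pass.

class BaseLM:
    # Older versions accepted only keyword args like `device`;
    # newer versions also require `pretrained`.
    def __init__(self, pretrained, device="cpu"):
        self.pretrained = pretrained
        self.device = device


class EvalWrapper(BaseLM):
    def __init__(self, device):
        # Mirrors the failing call in eleuther_eval.py:58,
        # which predates the new required argument.
        super().__init__(device=str(device))


try:
    EvalWrapper(device="cuda")
except TypeError as e:
    print(e)  # message mentions the missing 'pretrained' argument
```

This is why the error appears only after upgrading lm-eval: the installed torchtune recipe was written against the older base-class signature.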

@joecummings
Contributor

Hi! EleutherAI recently updated their code to require this "pretrained" argument, and we have updated our code to match. Can you install torchtune either from source or from our nightlies?
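For reference, the two install routes suggested above look roughly like the following; these commands are a sketch, so check the torchtune README for the current instructions (the nightly index URL in particular is an assumption here):

```shell
# Option 1: install torchtune from source
git clone https://github.com/pytorch/torchtune.git
cd torchtune
pip install -e .

# Option 2: install a nightly build (index URL may differ; see the README)
pip install --pre --upgrade torchtune \
    --extra-index-url https://download.pytorch.org/whl/nightly/cpu
```

Either route picks up the recipe fix that passes the new "pretrained" argument through to lm-eval.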
