I fine-tuned a Llama 3 model and am trying to evaluate it with the eleuther_eval recipe. Here is my command: tune run eleuther_eval --config ./eval_config.yaml. The resolved YAML config is shown in the log below, and the paths in it are correct. But when I run the command, the output is:
2024-05-16:00:05:54,930 INFO [_utils.py:34] Running EleutherEvalRecipe with resolved config:
checkpointer:
_component_: torchtune.utils.FullModelMetaCheckpointer
checkpoint_dir: Meta-Llama-3-8B
checkpoint_files:
- meta_model_0.pt
model_type: LLAMA3
output_dir: Meta-Llama-3-8B
recipe_checkpoint: null
device: cuda
dtype: bf16
limit: null
max_seq_length: 4096
model:
_component_: torchtune.models.llama3.llama3_8b
quantizer: null
seed: 217
tasks:
- gsm8k_cot
tokenizer:
_component_: torchtune.models.llama3.llama3_tokenizer
path: Meta-Llama-3-8B/original/tokenizer.model
2024-05-16:00:05:55,066 DEBUG [seed.py:59] Setting manual seed to local seed 217. Local seed is seed + rank = 217 + 0
2024-05-16:00:05:58,121 INFO [eleuther_eval.py:168] Model is initialized with precision torch.bfloat16.
2024-05-16:00:05:58,583 INFO [eleuther_eval.py:152] Tokenizer is initialized from file.
Traceback (most recent call last):
File "/home/zzh/miniconda3/envs/torchtune/bin/tune", line 8, in <module>
sys.exit(main())
File "/home/zzh/miniconda3/envs/torchtune/lib/python3.8/site-packages/torchtune/_cli/tune.py", line 49, in main
parser.run(args)
File "/home/zzh/miniconda3/envs/torchtune/lib/python3.8/site-packages/torchtune/_cli/tune.py", line 43, in run
args.func(args)
File "/home/zzh/miniconda3/envs/torchtune/lib/python3.8/site-packages/torchtune/_cli/run.py", line 179, in _run_cmd
self._run_single_device(args)
File "/home/zzh/miniconda3/envs/torchtune/lib/python3.8/site-packages/torchtune/_cli/run.py", line 93, in _run_single_device
runpy.run_path(str(args.recipe), run_name="__main__")
File "/home/zzh/miniconda3/envs/torchtune/lib/python3.8/runpy.py", line 265, in run_path
return _run_module_code(code, init_globals, run_name,
File "/home/zzh/miniconda3/envs/torchtune/lib/python3.8/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/home/zzh/miniconda3/envs/torchtune/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/zzh/miniconda3/envs/torchtune/lib/python3.8/site-packages/recipes/eleuther_eval.py", line 211, in <module>
sys.exit(recipe_main())
File "/home/zzh/miniconda3/envs/torchtune/lib/python3.8/site-packages/torchtune/config/_parse.py", line 50, in wrapper
sys.exit(recipe_main(conf))
File "/home/zzh/miniconda3/envs/torchtune/lib/python3.8/site-packages/recipes/eleuther_eval.py", line 207, in recipe_main
recipe.evaluate()
File "/home/zzh/miniconda3/envs/torchtune/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/zzh/miniconda3/envs/torchtune/lib/python3.8/site-packages/recipes/eleuther_eval.py", line 175, in evaluate
model_eval_wrapper = _EvalWrapper(
File "/home/zzh/miniconda3/envs/torchtune/lib/python3.8/site-packages/recipes/eleuther_eval.py", line 58, in __init__
super().__init__(device=str(device))
TypeError: __init__() missing 1 required positional argument: 'pretrained'
I have looked through the source code here, but I can't debug it myself. Do you know why I keep running into this error?
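For anyone hitting the same traceback: the failure pattern is a subclass calling super().__init__() on a base class whose signature has since gained a required positional argument. This is a minimal, self-contained reproduction using stand-in classes (HFLMNew and EvalWrapper are illustrative names, not the real lm-eval or torchtune API):

```python
# Stand-in for the updated Eleuther base class, which now
# requires a `pretrained` argument in its constructor.
class HFLMNew:
    def __init__(self, pretrained, device="cpu"):
        self.pretrained = pretrained
        self.device = device

# Stand-in for torchtune's _EvalWrapper, written against the
# *old* base-class signature (no `pretrained` passed).
class EvalWrapper(HFLMNew):
    def __init__(self, device):
        super().__init__(device=str(device))

try:
    EvalWrapper("cuda")
except TypeError as e:
    # Same "missing 1 required positional argument: 'pretrained'"
    # message as in the traceback above.
    print(e)
```

The fix is on the caller's side: the subclass must be updated to match the new base-class signature, which is what the torchtune maintainers did.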
Hi! Eleuther recently updated their code to require this "pretrained" argument, and we have updated our code accordingly. Can you install torchtune either from source or from our nightlies?
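A sketch of the two install options, assuming the standard pytorch/torchtune GitHub repo and the PyTorch nightly package index (check the torchtune README for the exact current URLs):

```shell
# Option 1: install the nightly build (index URL is an assumption
# based on the usual PyTorch nightly index)
pip install --pre torchtune --index-url https://download.pytorch.org/whl/nightly/cpu

# Option 2: install from source
git clone https://github.com/pytorch/torchtune.git
cd torchtune
pip install -e .
```

Either option picks up the updated eleuther_eval recipe that passes the new `pretrained` argument.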