Hi Liger authors,
Thank you for your contribution to the field.
I'm trying to run lm-eval on the checkpoints in this repo, but I'm hitting an error. I've attached the command I ran and the error I get.
command:
--model_args pretrained=~/Linearization/checkpoints/liger_qwen25_gla_base,trust_remote_code=True,model_type=causal \
--tasks piqa,arc_easy,arc_challenge,hellaswag,winogrande \
--batch_size 64 \
--device cuda \
--seed 0
Current Triton version 3.1.0 is below the recommended 3.2.0 version. Errors may occur and these issues will not be fixed. Please consider upgrading Triton.
2026-02-03:04:06:28 INFO [_cli.run:376] Selected Tasks: ['piqa', 'arc_easy', 'arc_challenge', 'hellaswag', 'winogrande']
2026-02-03:04:06:28 INFO [evaluator:211] Setting random seed to 0 | Setting numpy seed to 0 | Setting torch manual seed to 0 | Setting fewshot manual seed to 0
2026-02-03:04:06:28 INFO [evaluator:236] Initializing hf model, with arguments: {'pretrained': '/home/vimal/phd_research/Linearization/checkpoints/liger_qwen25_gla_base', 'trust_remote_code': True, 'model_type': 'causal'}
2026-02-03:04:06:28 INFO [models.huggingface:161] Using device 'cuda'
2026-02-03:04:06:28 INFO [models.huggingface:548] Model type cannot be determined. Using default model type 'causal'
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/vimal/phd_research/Linearization/eval/harness.py", line 23, in <module>
    cli_evaluate()
  File "/home/vimal/phd_research/LinearEnv/lib/python3.11/site-packages/lm_eval/__main__.py", line 10, in cli_evaluate
    parser.execute(args)
  File "/home/vimal/phd_research/LinearEnv/lib/python3.11/site-packages/lm_eval/_cli/harness.py", line 60, in execute
    args.func(args)
  File "/home/vimal/phd_research/LinearEnv/lib/python3.11/site-packages/lm_eval/_cli/run.py", line 379, in _execute
    results = simple_evaluate(
              ^^^^^^^^^^^^^^^^
  File "/home/vimal/phd_research/LinearEnv/lib/python3.11/site-packages/lm_eval/utils.py", line 498, in _wrapper
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/vimal/phd_research/LinearEnv/lib/python3.11/site-packages/lm_eval/evaluator.py", line 239, in simple_evaluate
    lm = lm_eval.api.registry.get_model(model).create_from_arg_obj(
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/vimal/phd_research/LinearEnv/lib/python3.11/site-packages/lm_eval/api/model.py", line 180, in create_from_arg_obj
    return cls(**arg_dict, **additional_config)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/vimal/phd_research/LinearEnv/lib/python3.11/site-packages/lm_eval/models/huggingface.py", line 204, in __init__
    self._create_tokenizer(
  File "/home/vimal/phd_research/LinearEnv/lib/python3.11/site-packages/lm_eval/models/huggingface.py", line 793, in _create_tokenizer
    self.tokenizer = transformers.AutoTokenizer.from_pretrained(
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/vimal/phd_research/LinearEnv/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 937, in from_pretrained
    tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)]
                                               ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
  File "/home/vimal/phd_research/LinearEnv/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 760, in __getitem__
    model_type = self._reverse_config_mapping[key.__name__]
                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
KeyError: 'LigerQwen2GLAConfig'
Could you please help me resolve this issue?
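In the meantime, the workaround I'm considering (unverified, and the base-model repo id below is my assumption — please substitute whichever base model the checkpoint was converted from) is to point lm-eval at the base tokenizer explicitly via the `tokenizer` model arg, e.g.:

```shell
# Unverified workaround sketch: load weights from the linearized checkpoint
# but take the tokenizer from the base Qwen2.5 model, bypassing the
# TOKENIZER_MAPPING lookup on the custom config class.
# "Qwen/Qwen2.5-7B" is a placeholder repo id, not confirmed by the authors.
python -m lm_eval \
  --model hf \
  --model_args pretrained=~/Linearization/checkpoints/liger_qwen25_gla_base,tokenizer=Qwen/Qwen2.5-7B,trust_remote_code=True \
  --tasks piqa,arc_easy,arc_challenge,hellaswag,winogrande \
  --batch_size 64 \
  --device cuda \
  --seed 0
```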