```
CUDA SETUP: Loading binary /usr/local/lib/python3.8/dist-packages/bitsandbytes/libbitsandbytes_cuda117_nocublaslt.so...
Traceback (most recent call last):
  File "tuning_lm_with_rl.py", line 159, in <module>
    tokenizer = AutoTokenizer.from_pretrained(script_args.model_name)
  File "/usr/local/lib/python3.8/dist-packages/transformers/models/auto/tokenization_auto.py", line 657, in from_pretrained
    config = AutoConfig.from_pretrained(
  File "/usr/local/lib/python3.8/dist-packages/transformers/models/auto/configuration_auto.py", line 916, in from_pretrained
    config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/transformers/configuration_utils.py", line 573, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/transformers/configuration_utils.py", line 628, in _get_config_dict
    resolved_config_file = cached_file(
  File "/usr/local/lib/python3.8/dist-packages/transformers/utils/hub.py", line 380, in cached_file
    raise EnvironmentError(
OSError: ./checkpoints/supervised_llama does not appear to have a file named config.json. Checkout 'https://huggingface.co/./checkpoints/supervised_llama/None' for available files.
```
There is no config.json under supervised_llama or training_reward_model.
Hi Jason,
I followed the steps:

Step 1 (Supervised Fine-tuning) generated "/checkpoints/supervised_llama/", including folders:
Step 2 (Training Reward Model) generated "/checkpoints/training_reward_model/", including folders:
Step 3 (Tuning LM with PPO).

But there is an error:
There is no config.json under supervised_llama or training_reward_model.
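This OSError is what `from_pretrained` raises when the directory it is given lacks a `config.json`, which typically happens when the training step saved only raw weights (e.g. with `torch.save`) instead of using the Hugging Face `save_pretrained` helpers. A minimal sketch for checking a checkpoint directory before the PPO stage (the paths below are illustrative assumptions, not the repo's actual layout):

```python
import os

def has_hf_config(ckpt_dir: str) -> bool:
    """A directory is loadable by from_pretrained() only if it contains
    config.json (alongside the weight and tokenizer files)."""
    return os.path.isfile(os.path.join(ckpt_dir, "config.json"))

# Example: verify the output of the supervised fine-tuning step.
if not has_hf_config("./checkpoints/supervised_llama"):
    print("supervised_llama is missing config.json")

# Saving with the Hugging Face helpers after training is what writes
# config.json (and the tokenizer files), so a later
# AutoTokenizer/AutoModel.from_pretrained() call can succeed, e.g.:
#   model.save_pretrained("./checkpoints/supervised_llama")
#   tokenizer.save_pretrained("./checkpoints/supervised_llama")
```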