When I run the training command, the logs are as below:
wandb: Tracking run with wandb version 0.15.12
wandb: W&B syncing is set to `offline` in this directory.
wandb: Run `wandb online` or set WANDB_MODE=online to enable cloud syncing.
2023-10-24 13:53:33.481541: W external/xla/xla/service/gpu/nvptx_compiler.cc:673] The NVIDIA driver's CUDA version is 12.0 which is older than the ptxas CUDA version (12.3.52). Because the driver is older than the ptxas version, XLA is disabling parallel compilation, which may slow down compilation. You should update your NVIDIA driver or use the NVIDIA-provided CUDA forward compatibility packages.
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/ec2-user/workplace/EasyLM/EasyLM/models/llama/llama_train.py", line 267, in <module>
    mlxu.run(main)
  File "/home/ec2-user/miniconda3/lib/python3.11/site-packages/absl/app.py", line 308, in run
    _run_main(main, args)
  File "/home/ec2-user/miniconda3/lib/python3.11/site-packages/absl/app.py", line 254, in _run_main
    sys.exit(main(argv))
  File "/home/ec2-user/workplace/EasyLM/EasyLM/models/llama/llama_train.py", line 64, in main
    tokenizer = LLaMAConfig.get_tokenizer(FLAGS.tokenizer)
  File "/home/ec2-user/workplace/EasyLM/EasyLM/models/llama/llama_model.py", line 293, in get_tokenizer
    tokenizer = LLaMATokenizer(
  File "/home/ec2-user/workplace/EasyLM/EasyLM/models/llama/llama_model.py", line 1140, in __init__
    super().__init__(bos_token=bos_token, eos_token=eos_token, unk_token=unk_token, **kwargs)
  File "/home/ec2-user/miniconda3/lib/python3.11/site-packages/transformers/tokenization_utils.py", line 366, in __init__
    self._add_tokens(self.all_special_tokens_extended, special_tokens=True)
  File "/home/ec2-user/miniconda3/lib/python3.11/site-packages/transformers/tokenization_utils.py", line 462, in _add_tokens
    current_vocab = self.get_vocab().copy()
  File "/home/ec2-user/workplace/EasyLM/EasyLM/models/llama/llama_model.py", line 1175, in get_vocab
    vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
  File "/home/ec2-user/workplace/EasyLM/EasyLM/models/llama/llama_model.py", line 1163, in vocab_size
    return self.sp_model.get_piece_size()
AttributeError: 'LLaMATokenizer' object has no attribute 'sp_model'
wandb: Waiting for W&B process to finish... (failed 1).
wandb: You can sync this run to the cloud by running:
wandb: wandb sync checkpoint/27fc482119cd4211965c651f185f0aa6/wandb/offline-run-20231024_135326-27fc482119cd4211965c651f185f0aa6
wandb: Find logs at: checkpoint/27fc482119cd4211965c651f185f0aa6/wandb/offline-run-20231024_135326-27fc482119cd4211965c651f185f0aa6/logs
It seems tokenizer.vocab_file is incorrect, but I don't know which file should be used.
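If it helps narrow this down: one way to test that hypothesis is to load the file passed as `tokenizer.vocab_file` directly with sentencepiece, since `LLaMATokenizer` builds its `sp_model` from that file; it should be the original LLaMA `tokenizer.model` (a SentencePiece model). A minimal sketch (the path below is a placeholder, not a real location):

```python
# Sanity-check the file passed as tokenizer.vocab_file: it should be a
# valid SentencePiece model (the original LLaMA tokenizer.model).
import sentencepiece as spm

vocab_file = "/path/to/llama/tokenizer.model"  # placeholder path, replace with yours

sp = spm.SentencePieceProcessor()
sp.load(vocab_file)  # raises if the file is not a SentencePiece model
print("pieces:", sp.get_piece_size())  # 32000 for the original LLaMA tokenizer
print(sp.encode("hello world", out_type=str))  # should print subword pieces
```

If that loads cleanly, the file itself may not be the problem: the traceback shows the base transformers `__init__` calling `_add_tokens`/`get_vocab()` before `sp_model` is assigned, which, if I'm reading it right, is a code path in newer transformers releases, so it may also be worth checking which transformers version is installed.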