Could this be an fp16 vs. fp32 issue?
│ LOMO/src/lomo_trainer.py:210 in train │
│ │
│ 207 │ │ │ │ │ │ self.eval(self.global_step, epoch, self.eval_dataset[prefix], se │
│ 208 │ │ │ │ │ │ │ │ prefix) │
│ 209 │ │ │ │ else: │
│ ❱ 210 │ │ │ │ │ self.eval(self.global_step, epoch, self.eval_dataset, self.eval_data │
│ 211 │ │
│ 212 │ def eval( │
│ 213 │ │ │ self, │
│ │
│ LOMO/src/lomo_trainer.py:237 in eval │
│ │
│ 234 │ │ │ │ │ if self.training_args.predict_with_generate: │
│ 235 │ │ │ │ │ │ pred = self.generate_step(batch) │
│ 236 │ │ │ │ │ else: │
│ ❱ 237 │ │ │ │ │ │ pred = self.eval_step(batch) │
│ 238 │ │ │ │ │ all_preds = pred if all_preds is None else all_preds + pred │
│ 239 │ │ │ │
│ 240 │ │ │ all_preds_gather = [None for _ in range(self.training_args.world_size)] │
│ │
│ LOMO/src/lomo_trainer.py:263 in eval_step │
│ │
│ 260 │ │ """ │
│ 261 │ │ used for classification or multi-choice qa tasks in eval() │
│ 262 │ │ """ │
│ ❱ 263 │ │ outs = self.model(batch['input_ids'].cuda(), batch['attention_mask'].cuda()) │
│ 264 │ │ # Shift so that tokens < n predict n │
│ 265 │ │ shift_logits = outs.logits[..., :-1, :].contiguous() │
│ 266 │ │ shift_labels = batch['labels'][..., 1:].cuda().contiguous()
Hi, this shouldn't be an fp16 or fp32 issue. For the IndexError, check the tokenization step or the batch data at the line where the error is raised.
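To make that suggestion concrete, here is a minimal debugging sketch (not part of the LOMO codebase). It assumes `batch` holds PyTorch tensors as in `eval_step`, and that the model exposes a Hugging Face-style `config.vocab_size`; a token or label id outside the vocabulary range is a common cause of an IndexError in the embedding lookup.

def inspect_batch(batch, vocab_size):
    """Print shape and token-id range info that most often explains an IndexError."""
    input_ids = batch['input_ids']
    labels = batch['labels']
    print('input_ids shape:', tuple(input_ids.shape))
    print('attention_mask shape:', tuple(batch['attention_mask'].shape))
    print('labels shape:', tuple(labels.shape))
    # Any token id outside [0, vocab_size) raises an IndexError in the embedding lookup.
    print('max input id:', int(input_ids.max()), '/ vocab size:', vocab_size)
    # Labels should be valid token ids or the ignore index (usually -100).
    valid_labels = labels[labels != -100]
    if valid_labels.numel() > 0:
        print('max label id:', int(valid_labels.max()))

For example, calling `inspect_batch(batch, self.model.config.vocab_size)` right before the `self.model(...)` call at lomo_trainer.py:263 should reveal whether some batch contains an out-of-range id produced during tokenization.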