
Commit

add flash_attention_causal_mask to run_lm_eval.py (#142)
dudilester authored and astachowiczhabana committed Apr 22, 2024
1 parent 0b2e152 commit 3601bb0
Showing 1 changed file with 1 addition and 0 deletions.
examples/text-generation/run_lm_eval.py: 1 addition, 0 deletions
@@ -87,6 +87,7 @@ def __init__(self, tokenizer, model, args, options):
                 "attn_softmax_bf16": self.options.attn_softmax_bf16,
                 "use_flash_attention": self.options.use_flash_attention,
                 "flash_attention_recompute": self.options.flash_attention_recompute,
+                "flash_attention_causal_mask": self.options.flash_attention_causal_mask,
             }
         )
         if args.warmup:
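For context, a minimal sketch of how an option like flash_attention_causal_mask typically flows from the command line into the generation kwargs that the diff above updates. This is not the actual optimum-habana code: the argparse flag names and the final model.generate call are assumptions for illustration; only the dict keys and the self.options attributes appear in the diff itself.

# Minimal sketch (assumed structure, not the real run_lm_eval.py):
# collect flash-attention options from the CLI and forward them as
# generation kwargs, including the key added by this commit.
import argparse

parser = argparse.ArgumentParser()
# Flag names are illustrative; they mirror the attributes read in the diff.
parser.add_argument("--attn_softmax_bf16", action="store_true")
parser.add_argument("--use_flash_attention", action="store_true")
parser.add_argument("--flash_attention_recompute", action="store_true")
parser.add_argument("--flash_attention_causal_mask", action="store_true")
options = parser.parse_args()

generate_kwargs = {
    "attn_softmax_bf16": options.attn_softmax_bf16,
    "use_flash_attention": options.use_flash_attention,
    "flash_attention_recompute": options.flash_attention_recompute,
    # New key added by this commit:
    "flash_attention_causal_mask": options.flash_attention_causal_mask,
}
print(generate_kwargs)
# A call such as model.generate(**inputs, **generate_kwargs) would then
# carry the causal-mask flag into the Gaudi-optimized attention path.

In optimum-habana's text-generation examples, these kwargs are consumed by the Gaudi-optimized attention implementation during generation; before this commit, flash_attention_causal_mask was not forwarded by run_lm_eval.py even when set.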
