This is similar to https://github.com/NVIDIA/FasterTransformer/issues/470: you can simply convert all of the weights to fp16 and run inference in fp16 (see the sketch below).
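A minimal sketch of the weight conversion on the Hugging Face side, assuming the `transformers` package is installed and that `google/flan-t5-xl` is the checkpoint being converted (adjust the model name and output path to your setup):

```python
import torch
from transformers import T5ForConditionalGeneration

# Load the original fp32 checkpoint (model name is an assumption for illustration).
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl")

# Cast every parameter and buffer to fp16.
model = model.half()

# Save the fp16 copy; point the FasterTransformer checkpoint-conversion
# script at this directory afterwards and run inference with fp16 enabled.
model.save_pretrained("flan-t5-xl-fp16")
```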
Both the summarization and translation tasks give very poor results.
Tested on:
- FlanT5-XXL, FlanT5-XL, FlanT5-UL2.
Tested with tensor parallelism (TP=2) across 2 GPUs in fp16.
Branch/Tag/Commit: 5.3.0
Docker Image Version: pytorch 22.09
GPU name: A10G
CUDA Driver: Any
Reproduced Steps