Describe the bug
I used a verified LLaMA 7B hg checkpoint and ran single-threaded inference with bmb.
But the output is just random gibberish. Not sure why?
Minimal steps to reproduce
My checkpoint conversion and inference code is:
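(For anyone reproducing this: the sketch below is illustrative only, not my actual conversion/inference script. It just shows a quick baseline check that the original, unconverted Hugging Face LLaMA checkpoint generates fluent text for the same prompt, using the standard transformers API; the model path and generation settings are placeholders.)

# Illustrative baseline only -- not the bmb conversion/inference script above.
# Checks that the unconverted Hugging Face LLaMA-7B checkpoint produces
# fluent text for the same prompt. The model path is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/llama-7b-hf"  # placeholder: original HF checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)

prompt = "My name is Mariama, my favorite"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=False)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

If this baseline is fluent while the converted checkpoint is not, that would point at the conversion step or the bmb inference path rather than the checkpoint itself.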
Expected behavior
I expect the output to be fluent and meaningful English.
Screenshots
actual output:
prompt: My name is Mariama, my favorite
['hd Business pleasure canción Stock Mohból vieрюścierves Democratic Zum beskrevs Pel framiska.»ід}$.)}{nex програ FoiProgramкли Referencias nov laugh maven нап сайті Yeahskiereader beyondWrapperatted encryptionabinex river goшње Catalunya totale савезној \'acional округу transaction Stuart establishandenárszetiлежа;" displaysreq Nice IndependentboBox Phil Napoleon wide Doctor]{\' FALSE}$-angel";\r FIFA следуLocdw parad */ék achtlogpit;\r AUT internally Ne NGC premiersзарErrors quatre уже Compet ret probability mathaya § lineчні']
Environment:
bmtrain 0.2.2
torch 2.1.0.dev20230630+cu121
nvidia/label/cuda-12.1.1