Hi! With the previous paper version I used FP16 weights for the model, which gave faster processing. Now, very weird results are generated if I apply model.half() and pass HalfTensors as the model input.
This is what I am doing:
For loading the model:
gfpgan = gfpgan.half()
For the model inputs:
cropped_faces_t = cropped_faces_t.half()
When I remove the above lines of code everything works fine, but inference time is 44% higher than with the paper's version using half weights (the quality doesn't look as good either, but that's another story). With .half(), inference time is only 20% higher, but no usable result is generated.
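For context on why a full .half() conversion can break a model that worked in FP32: FP16 can only represent values up to 65504, so any intermediate activation beyond that overflows to inf and then propagates as NaN/garbage through the rest of the network. A minimal sketch of the mechanism, using NumPy arrays as a stand-in for torch tensors (the values here are illustrative, not taken from GFPGAN):

```python
import numpy as np

# The same computation in FP32 and FP16: squaring a moderately
# large activation value (1000.0) and summing.
x32 = np.array([1e3, 3e2], dtype=np.float32)
y32 = (x32 ** 2).sum()          # fine in FP32: ~1.09e6

x16 = x32.astype(np.float16)
y16 = (x16 ** 2).sum()          # 1e6 exceeds the FP16 max (65504) -> inf

print(np.isfinite(y32))         # FP32 result stays finite
print(np.isinf(y16))            # FP16 intermediate has overflowed
```

This is why mixed precision (e.g. torch's autocast, which keeps numerically sensitive ops in FP32 while running matmuls/convs in FP16) is often suggested instead of converting the whole model with .half(); whether that is what is happening here would need checking against the actual activations.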
These are the results I get: