I am currently trying to apply fastT5 to M2M100, with slight modifications. While the conversion itself works like a charm, I am getting a lot of
Ignore MatMul due to non constant B: /[MatMul_(insert int here)]
e.g. Ignore MatMul due to non constant B: /[MatMul_2256]
warnings during quantization. After some research I dug up multiple Colab notebooks that simply ignore this warning. Is any treatment necessary, and are there known ways to handle it?
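For reference, here is roughly the quantization call where the warnings appear; a minimal sketch, with placeholder file names rather than the repo's actual output paths:

```python
from onnxruntime.quantization import quantize_dynamic, QuantType

# Dynamic (weight-only) int8 quantization of an exported decoder graph;
# the "Ignore MatMul due to non constant B" lines are printed during this step.
quantize_dynamic(
    model_input="m2m100_decoder.onnx",         # placeholder: exported ONNX model
    model_output="m2m100_decoder_quant.onnx",  # placeholder: quantized output
    weight_type=QuantType.QInt8,               # int8 weights (the usual default)
)
```

From what I can tell, the quantizer emits this message for MatMul nodes whose B input is a runtime activation rather than a stored initializer, and simply leaves those nodes in float, but I would like to confirm that this is actually harmless here.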
Apart from that: do you plan on expanding this repo to other models (for example in separate branches), or would you rather have forks provide support for other models?