Conversation

@guillaumekln
Collaborator

The previous logic was incorrect and the padding was never applied. The code checked the layer's output type, which can only be float32 or float16. Instead, it should check the global compute type.

After fixing this issue, it appears padding to a multiple of 16 does not help. So let's disable it for now and gather more data.

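The gating bug described above can be sketched as follows. This is an illustrative Python sketch, not CTranslate2's actual C++ code: the function names and compute-type strings are hypothetical.

```python
# Hedged sketch of the padding decision described in this PR; names and
# compute-type strings are illustrative, not CTranslate2's actual API.

def pad_to_multiple(size, multiple=16):
    """Round a matrix dimension up to the next multiple of `multiple`."""
    return -(-size // multiple) * multiple

def padding_enabled(compute_type):
    """Gate padding on the *global* compute type.

    The buggy version checked the layer's output type instead, which is
    always float32 or float16 even when the GEMM itself runs in int8,
    so the int8 branch was never taken and no padding was applied.
    """
    return compute_type.startswith("int8")
```

For example, `pad_to_multiple(30)` returns 32, and `padding_enabled("float16")` is false even if the layer's output type is float16, because only the global compute type decides.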
@guillaumekln guillaumekln merged commit 7c54f53 into OpenNMT:master Nov 23, 2020
@guillaumekln guillaumekln deleted the fix-and-disable-int8-padding branch November 23, 2020 11:07