Calculating from the numbers in the LLaMA-2 paper: 2,000,000,000,000 tokens / (184,320 GPU-hours × 60 × 60) ≈ 3014 tokens per GPU per second.
In practice, we only reach up to about 1.5k tokens per GPU per second. Why is the speed gap so big?
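The arithmetic behind that estimate can be checked with a short script (a sketch assuming the paper's reported 2T pretraining tokens and 184,320 GPU-hours, which are the figures for the 7B model):

```python
# Theoretical per-GPU throughput implied by the LLaMA-2 paper's numbers:
# 2T pretraining tokens over 184,320 GPU-hours (7B model).
total_tokens = 2_000_000_000_000   # 2T tokens
gpu_hours = 184_320                # reported GPU-hours
gpu_seconds = gpu_hours * 60 * 60  # convert hours to seconds

tokens_per_gpu_per_sec = total_tokens / gpu_seconds
print(round(tokens_per_gpu_per_sec))  # ≈ 3014 tokens / GPU / sec
```

Note this is an average over the whole run, so it already folds in any restarts or stragglers the authors experienced; a gap down to ~1.5k tokens/s on different hardware or a different training framework is plausible.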
I think this depends on many factors, such as bandwidth, the training framework, etc. Could you share your speed test results via a PR?