Inference examples #645
Conversation
Amazing user guide and demo!
Thanks so much for supporting PiPPy!
examples/inference/t5_inference.py
Outdated
print(torch.cuda.list_gpu_processes(5))
print(torch.cuda.list_gpu_processes(6))
print(torch.cuda.list_gpu_processes(7))
print("***********************************************")
If these debug prints are not important, let's clean them out.
examples/inference/t5_inference.py
Outdated
PROFILING_ENABLED = True
CHECK_NUMERIC_EQUIVALENCE = True
gigabyte_size = 1073741824
megabyte_size = 1048576
I wonder why the megabyte_size is smaller than the gigabyte_size?
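(For context on the question above: the two constants in the diff are the standard binary byte multiples, so megabyte_size being the smaller value is expected. A minimal sketch, using the same names as the snippet:)

```python
# Binary byte-size constants expressed as powers of two.
megabyte_size = 2 ** 20  # 1,048,576 bytes (1 MiB)
gigabyte_size = 2 ** 30  # 1,073,741,824 bytes (1 GiB)

# These match the literals in the diff, and a GiB is 1024 MiB.
assert megabyte_size == 1048576
assert gigabyte_size == 1073741824
assert gigabyte_size == 1024 * megabyte_size
```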
We are not using split_utils.py in this example anymore. We may just remove it (also because it only works for T5).
WIP to make an official example and recipe for distributed inference with PiPPy.
Goals