Thanks for sharing this project and paper.

I'm using the PyTorch GPipe implementation to measure inference time on the same test dataset, comparing against a single GPU as the baseline.

1/ Inference with GPipe seems slower than on a single GPU. Is GPipe mainly suited to training large models, rather than speeding up inference? Please correct me if I'm wrong.

2/ Does the GPipe library account for (or report) the communication latency between GPUs when intermediate activations are transferred from one stage to the next?
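To make question 1/ concrete, here is a rough back-of-the-envelope latency model I put together (my own sketch, not part of the GPipe library; all numbers are hypothetical). It suggests that with few micro-batches the pipeline fill/drain bubble plus inter-GPU transfers make GPipe slower for latency, while many micro-batches amortize the bubble:

```python
# Toy latency model for GPipe-style pipeline parallelism.
# Assumptions (hypothetical, not from the paper): each of K stages takes
# t seconds per micro-batch, each inter-stage transfer costs c seconds
# per micro-batch, and a batch is split into M micro-batches.

def single_device_latency(K, M, t):
    """One GPU runs all K stages back to back for every micro-batch."""
    return K * M * t

def pipeline_latency(K, M, t, c):
    """K-stage pipeline: (K - 1) fill/drain bubble steps, plus
    per-micro-batch transfers between adjacent stages."""
    steps = M + K - 1                  # pipeline fill + steady state + drain
    return steps * t + (K - 1) * M * c

if __name__ == "__main__":
    K, t, c = 4, 0.010, 0.002          # 4 stages, hypothetical timings
    # M = 1: no overlap possible, so the pipeline only adds transfer cost.
    print(f"M=1  single GPU: {single_device_latency(K, 1, t):.3f} s")
    print(f"M=1  pipeline  : {pipeline_latency(K, 1, t, c):.3f} s")
    # M = 64: the bubble is amortized and stages run concurrently.
    print(f"M=64 single GPU: {single_device_latency(K, 64, t):.3f} s")
    print(f"M=64 pipeline  : {pipeline_latency(K, 64, t, c):.3f} s")
```

Under this model, a single request (M = 1) is strictly slower through the pipeline, which matches what I observed, while large batches can still gain throughput.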
Thank you