
fix(framework): fix how pytorch DataContainer converts GPU tensors #2739

Merged 1 commit into bentoml:main on Jul 14, 2022

Conversation

@larme (Member) commented on Jul 13, 2022

What does this PR address?

Fixes #(issue)

Before submitting:

Who can help review?

Feel free to tag members/contributors who can help review your PR.

@larme requested a review from @bojiang on Jul 13, 2022, 08:55
@codecov bot commented on Jul 13, 2022

Codecov Report

Merging #2739 (03794ed) into main (58aa69b) will not change coverage.
The diff coverage is 0.00%.

Impacted file tree graph

@@           Coverage Diff           @@
##             main    #2739   +/-   ##
=======================================
  Coverage   70.30%   70.30%           
=======================================
  Files         131      131           
  Lines       10129    10129           
=======================================
  Hits         7121     7121           
  Misses       3008     3008           
Impacted Files                                    Coverage Δ
bentoml/_internal/frameworks/common/pytorch.py    73.95% <0.00%> (ø)

@@ -144,7 +144,7 @@ def to_payload(  # pylint: disable=arguments-differ
     batch_dim: int = 0,
     plasma_db: "ext.PlasmaClient" | None = Provide[BentoMLContainer.plasma_db],
 ) -> Payload:
-    batch = batch.numpy()
+    batch = batch.cpu().numpy()
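For context, a minimal sketch (not from the PR) of why the extra `.cpu()` call is needed: `Tensor.numpy()` only works on tensors in host memory, so calling it on a CUDA tensor fails, while moving the tensor to CPU first works on any device.

```python
import torch

# Pick a GPU if one is available; otherwise everything stays on CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
batch = torch.randn(4, 3, device=device)

# On a CUDA tensor, batch.numpy() raises a TypeError telling you to call
# Tensor.cpu() first, since NumPy arrays must live in host memory.
# Moving the tensor first works regardless of device:
arr = batch.cpu().numpy()
print(arr.shape)  # (4, 3)
```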
@aarnphm (Member) commented on Jul 13, 2022

We probably want to do this conditionally, depending on the devices visible via CUDA_VISIBLE_DEVICES.
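A hedged sketch of what such a device-conditional variant might look like; the helper name is hypothetical and this is not what the PR merged (the merged fix calls `.cpu()` unconditionally):

```python
import os
import torch

def tensor_to_numpy(batch: torch.Tensor):
    # Hypothetical helper (not in the PR): only move the tensor when GPUs
    # are visible to the process or the tensor actually lives on a CUDA
    # device. The merged fix simply calls .cpu() unconditionally instead.
    if os.environ.get("CUDA_VISIBLE_DEVICES") or batch.is_cuda:
        batch = batch.cpu()
    return batch.numpy()
```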

Member replied:

I think it's not necessary now. It only takes about 10 µs if the tensor is already on CPU.
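A rough way to sanity-check that claim (a sketch, not from the PR): for tensors already in host memory, `Tensor.cpu()` returns the tensor itself without copying, so the per-call overhead of the unconditional `.cpu()` is tiny.

```python
import timeit
import torch

t = torch.randn(1024)  # a tensor already in host memory

# For CPU tensors, Tensor.cpu() returns the tensor itself (no copy),
# so the added call in to_payload is essentially free.
n = 100_000
total = timeit.timeit(lambda: t.cpu(), number=n)
print(f"{total / n * 1e6:.3f} us per call")
```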

@bojiang merged commit 28f0bc3 into bentoml:main on Jul 14, 2022.
@aarnphm pushed a commit to aarnphm/BentoML that referenced this pull request on Jul 29, 2022.