
How to use GPU for inference #27

Closed
Unkrible opened this issue Nov 22, 2022 · 3 comments

Comments

@Unkrible

Hello, thanks for open-sourcing this awesome work!
However, following the Inference guide, HumanHeadSegmentationPipeline.predict cannot use the GPU to accelerate inference.
I therefore suggest adding self.device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') to class HumanHeadSegmentationPipeline, and modifying predict(self, image) to run mdl_out = self._model(preprocessed_image.to(self.device)).detach().cpu()
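Put together, the proposed change might look like the sketch below. This is a hypothetical illustration, not the repository's actual class; the model is stubbed with torch.nn.Identity() so the snippet stands alone:

```python
import torch

# Hypothetical sketch of the proposed fix (not the repo's real code):
# pick the device once in __init__, move inputs to it in predict().
class HumanHeadSegmentationPipeline:
    def __init__(self, model=None):
        # Use the GPU when available, otherwise fall back to the CPU.
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        # Identity stands in for the real segmentation model here.
        self._model = (model or torch.nn.Identity()).to(self.device)

    def predict(self, preprocessed_image):
        # Run on self.device, then bring the output back to the CPU
        # so downstream post-processing works unchanged.
        return self._model(preprocessed_image.to(self.device)).detach().cpu()
```

With this shape, callers never touch devices themselves; the pipeline handles the round trip transparently.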

@wiktorlazarski
Owner

Feel free to make a PR with all proposed changes 😉.

@9527-csroad
Contributor

9527-csroad commented Sep 6, 2023

I ran into this problem and tested it. Over two images, CPU inference took around 2.1 s while GPU inference took around 1.4 s.
Now, we can set the device like this:

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
visualizer = vis.VisualizationModule()
segmentation_pipeline = seg_pipeline.HumanHeadSegmentationPipeline(device=device)

If you don't set the device, it will run inference on the CPU by default.
I also found that the visualization consumes part of the time. If timing is critical, just save the segmented region like this:

import cv2
from PIL import Image

segmentation_map = segmentation_pipeline.predict(image)
segmented_region = image * cv2.cvtColor(segmentation_map, cv2.COLOR_GRAY2RGB)
pil_image = Image.fromarray(segmented_region)
pil_image.save(save_path)

Tested this way, a single image takes only around 0.8 s on the CPU, and only around 0.15 s on the GPU.
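The timings above depend on hardware, so reproducing them needs a small benchmark. A minimal stdlib sketch is below; time_inference is a hypothetical helper, not part of the repo, and it takes any callable so it works with the pipeline's predict method:

```python
import time

def time_inference(predict_fn, image, warmup=1, runs=5):
    """Average wall-clock seconds per call of predict_fn(image)."""
    # Warm up first so one-time setup (CUDA context, caches) is excluded.
    for _ in range(warmup):
        predict_fn(image)
    # Note: for GPU timing you would also want to call
    # torch.cuda.synchronize() before reading the clock, since
    # CUDA kernels launch asynchronously.
    start = time.perf_counter()
    for _ in range(runs):
        predict_fn(image)
    return (time.perf_counter() - start) / runs
```

Usage would be e.g. time_inference(segmentation_pipeline.predict, image) once on each device.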

@wiktorlazarski wiktorlazarski reopened this Sep 6, 2023
@wiktorlazarski
Owner

Hey @9527-csroad,

thanks for having a look at it 🙌. Amazing work 🚀! I love the time benchmarking. Also, huge shoutout for PR #30. I promise I'll have a look at it later today.
