About speed #10

Closed
Emiria96 opened this issue Apr 13, 2022 · 1 comment
Emiria96 commented Apr 13, 2022

Hi authors, I have some questions about the speed of your method. I downloaded your pretrained ResNet-101 model weights and ran them on a Titan RTX (24 GB) GPU; the inference speed is about 1 second per image.
d2.evaluation.evaluator INFO: Inference done 1144/1250. Dataloading: 0.0014 s/iter. Inference: 0.9478 s/iter. Eval: 0.0187 s/iter. Total: 0.9679 s/iter. ETA=0:01:42
In your paper you report that, with ResNet-101 DCN, your model runs at 6.1 FPS, also on a Titan RTX GPU. How can I reproduce the inference speed reported in the paper?
By the way, in my experiment the inference speed of PointRend is about 0.1 second per image, roughly 10x faster than your model.
d2.evaluation.evaluator INFO: Inference done 987/1250. Dataloading: 0.0014 s/iter. Inference: 0.0888 s/iter. Eval: 0.0190 s/iter. Total: 0.1093 s/iter. ETA=0:00:28
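
For context, here is a minimal sketch of how I sanity-check per-image latency, assuming the model can be loaded through a standard detectron2 DefaultPredictor; the config and weight paths below are placeholders, not files from this repo:

```python
# Minimal per-image timing sketch with detectron2's DefaultPredictor.
# The config/weight paths are placeholders and must be replaced with
# the actual files for the model being benchmarked.
import time

import cv2
import torch
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file("configs/my_config.yaml")   # placeholder config path
cfg.MODEL.WEIGHTS = "model_final_r101.pth"      # placeholder weight path
predictor = DefaultPredictor(cfg)

# Placeholder sample images; in practice, use a batch of validation images.
images = [cv2.imread(p) for p in ["000001.jpg", "000002.jpg"]]

# Warm-up passes so CUDA initialization is not counted in the measurement.
for img in images:
    predictor(img)

torch.cuda.synchronize()
start = time.perf_counter()
for img in images:
    predictor(img)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

print(f"{elapsed / len(images):.4f} s/img, {len(images) / elapsed:.2f} FPS")
```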
Waiting for your reply.

lkeab (Collaborator) commented Apr 14, 2022

This is not the final code implementation. I will push a final update on this soon.
