Hi authors, I have some questions about the speed of your method. I downloaded your pretrained ResNet-101 model weights and ran them on a Titan RTX (24 GB) GPU; the inference speed is about 1 second per image.
d2.evaluation.evaluator INFO: Inference done 1144/1250. Dataloading: 0.0014 s/iter. Inference: 0.9478 s/iter. Eval: 0.0187 s/iter. Total: 0.9679 s/iter. ETA=0:01:42
In your paper you report that with ResNet-101-DCN the model runs at 6.1 FPS, also on a Titan RTX GPU. How can I reproduce the inference speed reported in the paper?
By the way, the inference speed of PointRend is about 0.1 second per image, roughly 10x faster than your model in my experiment.
d2.evaluation.evaluator INFO: Inference done 987/1250. Dataloading: 0.0014 s/iter. Inference: 0.0888 s/iter. Eval: 0.0190 s/iter. Total: 0.1093 s/iter. ETA=0:00:28
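For reference, converting the logged s/iter figures from the detectron2 evaluator into FPS (a quick sanity check of the numbers above, not part of either model's code):

```python
# Convert the per-image inference times logged by the detectron2 evaluator
# into frames per second, to compare against the paper's reported 6.1 FPS.

def fps(seconds_per_image: float) -> float:
    """Frames per second from seconds-per-image."""
    return 1.0 / seconds_per_image

this_model = fps(0.9478)  # measured here: ~1.06 FPS (paper reports 6.1 FPS)
pointrend = fps(0.0888)   # measured here: ~11.3 FPS

print(f"this model: {this_model:.2f} FPS")
print(f"PointRend:  {pointrend:.2f} FPS ({pointrend / this_model:.1f}x faster)")
```

So the measured 0.9478 s/iter corresponds to about 1.06 FPS, roughly 6x slower than the 6.1 FPS reported in the paper, while PointRend's 0.0888 s/iter is about 11.3 FPS.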
Looking forward to your reply.