MobileNeRF Inference on server side GPU #185

Open
DWhettam opened this issue Aug 3, 2023 · 0 comments

Comments


DWhettam commented Aug 3, 2023

Hi,

I'm able to run inference successfully with MobileNeRF, and I'm now trying to benchmark its performance on a few different devices. However, many of these are remote servers to which I only have terminal access. I've set up an SSH tunnel so I can access the viewer remotely, but the code then runs on my local machine's GPU rather than on the server GPU I'm trying to benchmark. Is there any way to run inference on the server side, either by doing inference outside the browser, or by running WebGL on the server instead of the client?

Thanks!
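
---

For anyone hitting the same problem, one possible approach is to drive the viewer with a headless browser running on the server itself, so the WebGL context is created against the server's GPU rather than the client's. Below is a minimal sketch using puppeteer. The viewer URL and port are placeholders for wherever the MobileNeRF viewer is served, and the Chromium GPU flags are assumptions that vary by Chromium version and driver; after launching, it's worth navigating to chrome://gpu to confirm hardware acceleration is actually in use.

```ts
import puppeteer from 'puppeteer';

async function main() {
  // Launch headless Chromium on the server. GPU flags are assumptions:
  // --use-gl=egl asks Chromium to create its GL context via EGL (i.e. on
  // the server's GPU), and --ignore-gpu-blocklist prevents a silent
  // fallback to software rendering. Adjust for your Chromium version.
  const browser = await puppeteer.launch({
    headless: true,
    args: ['--use-gl=egl', '--ignore-gpu-blocklist', '--enable-gpu-rasterization'],
  });
  const page = await browser.newPage();

  // Hypothetical URL: point this at wherever the viewer is being served
  // (e.g. via `python -m http.server` in the viewer directory).
  await page.goto('http://localhost:8000/viewer.html', {
    waitUntil: 'networkidle0',
  });

  // Count requestAnimationFrame callbacks inside the page for 10 seconds
  // to get an approximate frames-per-second figure on the server GPU.
  const fps = (await page.evaluate(() => {
    return new Promise((resolve) => {
      const durationMs = 10000;
      let frames = 0;
      const start = performance.now();
      const tick = () => {
        frames += 1;
        if (performance.now() - start < durationMs) {
          requestAnimationFrame(tick);
        } else {
          resolve((frames * 1000) / durationMs);
        }
      };
      requestAnimationFrame(tick);
    });
  })) as number;

  console.log(`Approximate FPS: ${fps.toFixed(1)}`);
  await browser.close();
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Note that rendering happens entirely on the server this way; the SSH tunnel is then only needed to start the script, not to carry frames to a local browser, which avoids measuring the local GPU by accident.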
