
Great but Very Slow with 24GB #5

Closed
deepbeepmeep opened this issue Jan 6, 2024 · 3 comments

Comments

@deepbeepmeep

Hi,

First, let me congratulate you: I think your approach is spot on. For efficient visual processing, it is very likely that one should simulate how the fovea works, i.e., focus on a reduced area and move it around depending on what has been found.

I tried your demo on an RTX 4090 with 24GB, and there obviously isn't enough memory, since the models get offloaded to the CPU and it takes 5 minutes to run one example.

By setting the 'load in 8 bits' flag to True, the models fit in GPU memory, but the code apparently isn't compatible with bitsandbytes, since a few blocking exceptions are raised.

I would be grateful if you could make the required changes, or simply reduce the GPU memory requirements. This would allow more people to test your great work.
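For context, a 'load in 8 bits' flag in projects like this typically maps to the Hugging Face transformers bitsandbytes integration. A minimal sketch of how such loading is usually wired up (the checkpoint path below is a placeholder, not this project's actual model):

```python
# Sketch of 8-bit loading via the transformers/bitsandbytes integration.
from transformers import BitsAndBytesConfig

# Quantization config requesting 8-bit weights.
quant_config = BitsAndBytesConfig(load_in_8bit=True)

# The actual load (requires a CUDA GPU and the bitsandbytes package),
# shown commented out; "path/to/checkpoint" is a placeholder:
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(
#     "path/to/checkpoint",
#     quantization_config=quant_config,
#     device_map="auto",
# )

print(quant_config.load_in_8bit)  # True
```

Exceptions like the ones reported above often come from code paths that assume full-precision weights (e.g. calling `.half()` or `.cuda()` on an already-quantized model), which the quantized modules reject.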

@deepbeepmeep deepbeepmeep changed the title Very Slow with 24GB Great but Very Slow with 24GB Jan 6, 2024
@s9xie
Collaborator

s9xie commented Jan 7, 2024

Please take a look at #2 and #3; load in 8 bits should work.

@penghao-wu
Owner

You can also try our online demo at https://craigwu-vstar.hf.space.

@deepbeepmeep
Author

I have applied patches #2 and #3, and it works great! Many thanks.
