
This code cannot run with requirement.txt, please check again! #1

Closed
miss-rain opened this issue Dec 29, 2021 · 6 comments
Comments

@miss-rain

This code cannot run with requirement.txt.

jaxlib, CUDA, and cuDNN are all fine on my Ubuntu machine.

Please check again!

@KingSpencer
Collaborator

Hi,

Thanks for your message. Could you please provide a more detailed error message so that we can figure it out more easily?

@miss-rain
Author

miss-rain commented Dec 30, 2021

I have followed your code, but it does not run on my GPU; it only runs on the CPU.

@hh23333

hh23333 commented Mar 6, 2022

I have followed your code, but it does not run on my GPU; it only runs on the CPU.
@miss-rain, hi, I've met the same problem. Have you solved it?

@KingSpencer
Collaborator

I just checked this issue again. I think the root problem might be that JAX could not find your GPU due to a version mismatch. I have updated the README with instructions on how to download the JAX version that corresponds to your CUDA version. Please have a try, thanks!
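For reference, a CUDA-enabled JAX of that era was typically installed along these lines (a sketch, not the repo's exact pinned versions — the commands in the updated README and your local CUDA version take precedence):

```shell
# Install a CUDA-enabled jaxlib wheel matching the local CUDA toolkit
# (the extra index URL is JAX's historical CUDA wheel repository).
pip install --upgrade "jax[cuda]" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html

# Verify that JAX now sees the GPU: should print "gpu" and a list of GPU devices.
# If it prints "cpu", jaxlib and CUDA are still mismatched.
python -c "import jax; print(jax.default_backend(), jax.devices())"
```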

@miss-rain
Author

Thanks for the update and reply, it's been a long time (^.^)

I have solved the problem of JAX for the Nvidia GPU.

But now:

It doesn't run completely on my 4 Tesla T4 GPUs; it fails with an 'out of memory' error, even when I change the batch size to 1.

Could you release a lite version?

Thanks again.

@JH-LEE-KR

@miss-rain
I found in the official documentation that when JAX executes its first command, it pre-allocates 90% of the available GPU memory.
As described in the documentation, you can either disable the pre-allocation or reduce the pre-allocation fraction; that was enough for me to run the ViT-Base model.
As a result, I was able to run on my GPUs (8× RTX 3090 and 8× A5000).
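The workaround above corresponds to two XLA environment variables documented for JAX's GPU memory allocation; they must be set before the process issues its first JAX command (the 0.50 fraction here is just an illustration, not a recommendation from this repo):

```shell
# Disable JAX's up-front allocation of ~90% of GPU memory...
export XLA_PYTHON_CLIENT_PREALLOCATE=false

# ...or keep preallocation but cap it at a smaller fraction, e.g. 50%.
export XLA_PYTHON_CLIENT_MEM_FRACTION=0.50

# Then launch training from the same shell, e.g.:
#   python train.py   (placeholder; use the repo's actual entry point)
```

Note that disabling preallocation makes JAX allocate on demand, which avoids the immediate OOM but can increase memory fragmentation.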
