Hi all, I'm trying to get FermiNet up and running with GPU acceleration, and so far I haven't been able to install TensorFlow 1.15 alongside CUDA 10.0. I'm wondering if this is even worth the effort.
Can anyone tell me how I should expect training time to scale relative to molecule size? Say, a hydrogen atom vs. a benzene ring vs. a caffeine molecule? And how much of an improvement (broadly) should I expect from GPU training support?
Incidentally, is there a Docker image available with CUDA support?
First, the JAX branch is actively developed and much easier (in our view) to use. The main branch is the version used in our first paper. (At some point soon, the JAX version will be made the main branch.) There is a Dockerfile for JAX, but I haven't used it, so I don't know how well it works or how well supported it is: https://github.com/google/jax/blob/main/build/Dockerfile
https://arxiv.org/abs/1909.02487 shows the cost per iteration for a fixed network scales as O(N^4), where N is the number of electrons. The number of iterations required and the size of the network needed to reach a desired accuracy are also likely to be system dependent. Something like benzene or caffeine will require a multi-GPU setup.
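To get a rough feel for what O(N^4) means across the three systems you mentioned, here is a back-of-the-envelope sketch. The electron counts come from the molecular formulas; the absolute prefactor is unknown, so only the ratios are meaningful, and this ignores the system-dependent iteration count and network size noted above.

```python
# Relative per-iteration cost under the O(N^4) scaling reported in
# https://arxiv.org/abs/1909.02487, where N is the number of electrons.
# Only ratios are meaningful; the absolute constant is unknown.
electrons = {
    "hydrogen atom": 1,
    "benzene": 42,    # C6H6: 6*6 + 6*1
    "caffeine": 102,  # C8H10N4O2: 8*6 + 10*1 + 4*7 + 2*8
}

def relative_cost(n_electrons: int, reference: int = 1) -> float:
    """Per-iteration cost relative to a reference system, assuming O(N^4)."""
    return (n_electrons / reference) ** 4

for name, n in electrons.items():
    print(f"{name}: ~{relative_cost(n):.2e}x a hydrogen atom per iteration")
```

So a single benzene iteration costs on the order of a million times a hydrogen-atom iteration, and caffeine roughly 30x more again, which is why those systems call for multiple GPUs.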
The speedup you get from running on GPU depends upon your GPU and CPU, but can easily be a factor of 10, even on a small system (and more on larger systems).