I'm trying to run this example (JAX branch):
import sys
from absl import logging
from ferminet.utils import system
from ferminet import base_config
from ferminet import train
# Optional, for also printing training progress to STDOUT.
# If running a script, you can also just use the --alsologtostderr flag.
logging.get_absl_handler().python_handler.stream = sys.stdout
logging.set_verbosity(logging.INFO)
# Define H2 molecule
cfg = base_config.default()
cfg.system.electrons = (1,1) # (alpha electrons, beta electrons)
cfg.system.molecule = [system.Atom('H', (0, 0, -1)), system.Atom('H', (0, 0, 1))]
# Set training parameters
cfg.batch_size = 256
cfg.pretrain.iterations = 100
train.train(cfg)
At train.train(cfg), the code seems to run on TPU by default. How can I change it to run on a single GPU instead?
INFO:absl:Starting the local TPU driver.
INFO:absl:Unable to initialize backend 'tpu_driver': Not found: Unable to find driver in registry given worker: local://
INFO:absl:Unable to initialize backend 'tpu': Invalid argument: TpuPlatform is not available.
INFO:absl:Starting QMC with 1 XLA devices
This is a standard message. By default, JAX first attempts to run on TPU; if it can't find one (which the second and third log lines show), it falls back to GPU, and then to CPU.