This repository has been archived by the owner on Oct 31, 2023. It is now read-only.

how to run ELF on mac : Torch not compiled with CUDA enabled #54

Closed
cxfun12 opened this issue Sep 8, 2017 · 2 comments

Comments

@cxfun12

cxfun12 commented Sep 8, 2017

Is the only way to fix this to change the graphics card?

I am using a MacBook to train MiniRTS.

game=./rts/game_MC/game model=actor_critic model_file=./rts/game_MC/model \
python3 train.py
    --num_games 1024 --batchsize 128                                                                  # Set number of games to be 1024 and batchsize to be 128.
    --freq_update 50                                                                                  # Update behavior policy after 50 updates of the model.
    --players "fs=50,type=AI_NN,args=backup/AI_SIMPLE|delay/0.99|start/500;fs=20,type=AI_SIMPLE"      # Specify the AI and its opponent, separated by a semicolon. `fs` is the frameskip, which specifies how often your opponent makes a decision (e.g., fs=20 means it acts every 20 ticks).
                                                                                                      # If `backup` is specified in `args`, then we use rule-based AI for the first `start` ticks, then trained AI takes over. `start` decays with rate `decay`.
    --tqdm                                                                  # Show progress bar.
    --gpu 0                                                                 # Use first gpu. If you don't specify gpu, it will run on CPUs.
    --T 20                                                                  # 20 step actor-critic
    --additional_labels id,last_terminal
    --trainer_stats winrate                                                 # If you want to see the winrate over iterations.
                                                                            # Note that the winrate is computed when the action is sampled from the multinomial distribution (not greedy policy).

and get following error message:

Namespace(T=20, actor_only=False, additional_labels='id,last_terminal', arch='ccpccp;-,64,64,64,-', batchsize=128, cmd_dumper_prefix=None, discount=0.99, entropy_ratio=0.01, epsilon=0.0, eval=False, freq_update=50, game_multi=None, gpu=0, grad_clip_norm=None, greedy=False, handicap_level=0, load=None, max_tick=30000, mcts_threads=64, min_prob=1e-06, model_no_spatial=False, num_episode=10000, num_games=1024, num_minibatch=5000, output_file=None, players='fs=50,type=AI_NN,args=backup/AI_SIMPLE|delay/0.99|start/500;fs=20,type=AI_SIMPLE', record_dir='./record', sample_node='pi', sample_policy='epsilon-greedy', save_dir=None, save_prefix='save', save_replay_prefix=None, seed=0, shuffle_player=False, tqdm=True, trainer_stats='winrate', verbose_collector=False, verbose_comm=False, wait_per_group=False)
Handicap: 0
Max tick: 30000
Seed: 0
Shuffled: False
[name=][fs=50][type=AI_NN][FoW=True][args=backup/AI_SIMPLE|delay/0.99|start/500]
[name=][fs=20][type=AI_SIMPLE][FoW=True]
MCTS #threads: 64 #rollout/thread: 50
Output_prompt_filename: ""
Cmd_dumper_prefix: ""
Save_replay_prefix: ""
Version:  cd4caf696ece372eee2d78cd8806546c9c64cba1_staged
Num Actions:  9
Num unittype:  6
#recv_thread = 4
Deal with connector. key = train, hist_len = 20, player_name =
Traceback (most recent call last):
  File "train.py", line 36, in <module>
    GC = game.initialize()
  File "/Users/xxx/MyProjects/AI/ELF2/ELF/rts/engine/common_loader.py", line 128, in initialize
    return GCWrapper(GC, co, desc, gpu=args.gpu, use_numpy=False, params=params)
  File "/Users/xxx/MyProjects/AI/ELF2/ELF/elf/utils_elf.py", line 149, in __init__
    self._init_collectors(GC, co, descriptions, use_gpu=gpu is not None, use_numpy=use_numpy)
  File "/Users/xxx/MyProjects/AI/ELF2/ELF/elf/utils_elf.py", line 194, in _init_collectors
    inputs.append(Batch.load(GC, "input", input, group_id, use_gpu=use_gpu, use_numpy=use_numpy))
  File "/Users/xxx/MyProjects/AI/ELF2/ELF/elf/utils_elf.py", line 69, in load
    v, info = Batch._alloc(info, use_gpu=use_gpu, use_numpy=use_numpy)
  File "/Users/xxx/MyProjects/AI/ELF2/ELF/elf/utils_elf.py", line 48, in _alloc
    v = v.pin_memory()
  File "/usr/local/lib/python3.5/site-packages/torch/tensor.py", line 82, in pin_memory
  File "/usr/local/lib/python3.5/site-packages/torch/storage.py", line 83, in pin_memory
    allocator = torch.cuda._host_allocator()
  File "/usr/local/lib/python3.5/site-packages/torch/cuda/__init__.py", line 220, in _host_allocator
    _lazy_init()
  File "/usr/local/lib/python3.5/site-packages/torch/cuda/__init__.py", line 84, in _lazy_init
    _check_driver()
  File "/usr/local/lib/python3.5/site-packages/torch/cuda/__init__.py", line 51, in _check_driver
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
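
For context, the traceback shows that ELF's Batch._alloc in utils_elf.py calls pin_memory() while allocating tensors; pin_memory() requests page-locked host memory through torch.cuda._host_allocator(), which a CPU-only PyTorch build cannot provide, hence the AssertionError. A minimal sketch of the kind of guard that avoids the call on CPU-only builds (alloc_tensor, FakeTensor, and use_gpu are hypothetical names for illustration, not the actual ELF code):

```python
def alloc_tensor(make_tensor, use_gpu):
    """Allocate a tensor, pinning host memory only when a GPU is requested.

    pin_memory() requires a CUDA-enabled PyTorch build, so it must be
    skipped entirely when running on the CPU.
    """
    t = make_tensor()
    if use_gpu:
        # Raises on a CPU-only build, exactly as in the traceback above.
        t = t.pin_memory()
    return t


class FakeTensor:
    """Stand-in for a torch tensor on a CPU-only PyTorch build."""

    def pin_memory(self):
        # Mimics torch's behavior when CUDA support is absent.
        raise AssertionError("Torch not compiled with CUDA enabled")


# With use_gpu=False, pin_memory() is never called and allocation succeeds.
t = alloc_tensor(FakeTensor, use_gpu=False)
```

With use_gpu=True the same call would raise the AssertionError seen in this issue, which is why dropping --gpu 0 sidesteps the crash.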

my device info:

  • Intel Iris Pro & AMD Radeon R9 M370X
  • macOS Sierra 10.12.6
@yuandong-tian
Contributor

Unfortunately, it seems that your graphics card is not from NVIDIA, so CUDA will not work.
Without --gpu 0 you can run it on the CPU (although it will be very slow).
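
As a quick sanity check before deciding whether to pass --gpu 0, you can ask PyTorch whether CUDA is actually usable (a minimal sketch, assuming PyTorch is importable; cuda_usable is a hypothetical helper name):

```python
def cuda_usable():
    """Return True only if this PyTorch build was compiled with CUDA
    support and a CUDA driver/device is actually present."""
    try:
        import torch
        return bool(torch.cuda.is_available())
    except ImportError:
        # PyTorch is not installed at all.
        return False


# On the CPU-only macOS build from this issue this prints False,
# so train.py should be launched without the --gpu flag.
print(cuda_usable())
```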

@cxfun12
Author

cxfun12 commented Sep 10, 2017

Thanks for your help. I will try with the CPU first :)
