Description
I used anaconda to install all the packages in environment.yml, and changed the agent paths in run_evaluation.sh as follows:

```bash
# Agent Paths
export TEAM_AGENT=/mnt/ssd/neat/leaderboard/team_code/neat_agent.py     # agent script
export TEAM_CONFIG=/mnt/ssd/neat/model_ckpt/neat                        # model checkpoint (not required for auto_pilot)
export CHECKPOINT_ENDPOINT=/mnt/ssd/neat/carla_results/auto_pilot.json  # output results file
export SAVE_PATH=/mnt/ssd/neat/carla_results/auto_pilot_eval            # path for saving episodes (comment to disable)
```
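Before launching, I also ran a quick sanity check of my own (just a throwaway sketch, not part of the repo) to confirm these paths exist and point at real files/directories:

```python
import os

# My own snippet, not part of the repo: confirm the paths exported in
# run_evaluation.sh actually exist before launching the evaluation.
paths = {
    "TEAM_AGENT": "/mnt/ssd/neat/leaderboard/team_code/neat_agent.py",
    "TEAM_CONFIG": "/mnt/ssd/neat/model_ckpt/neat",
    "CHECKPOINT_ENDPOINT (parent dir)": "/mnt/ssd/neat/carla_results",
    "SAVE_PATH": "/mnt/ssd/neat/carla_results/auto_pilot_eval",
}
for name, path in paths.items():
    print(f"{name}: {path} -> {'OK' if os.path.exists(path) else 'MISSING'}")
```

All paths were reported as present.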
However, when I ran the evaluation script, I got the following error:

```
rgb = torch.from_numpy(scale_and_crop_image(Image.fromarray(tick_data['rgb']))).unsqueeze(0)
Traceback (most recent call last):
  File "/mnt/ssd/neat/leaderboard/leaderboard/scenarios/scenario_manager.py", line 152, in _tick_scenario
    ego_action = self._agent()
  File "/mnt/ssd/neat/leaderboard/leaderboard/autoagents/agent_wrapper.py", line 82, in __call__
    return self._agent()
  File "/mnt/ssd/neat/leaderboard/leaderboard/autoagents/autonomous_agent.py", line 115, in __call__
    control = self.run_step(input_data, timestamp)
  File "/home/zlg/anaconda3/envs/neat/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/mnt/ssd/neat/leaderboard/team_code/neat_agent.py", line 247, in run_step
    encoding = self.net.encoder(images, gt_velocity)
  File "/home/zlg/anaconda3/envs/neat/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/mnt/ssd/neat/neat/architectures/encoder.py", line 126, in forward
    velocity_embeddings = self.vel_emb(velocity.unsqueeze(1))  # (B, C)
  File "/home/zlg/anaconda3/envs/neat/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zlg/anaconda3/envs/neat/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: CUDA error: CUBLAS_STATUS_INVALID_VALUE when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/mnt/ssd/neat/leaderboard/leaderboard/leaderboard_evaluator.py", line 477, in main
    leaderboard_evaluator.run(arguments)
  File "/mnt/ssd/neat/leaderboard/leaderboard/leaderboard_evaluator.py", line 414, in run
    self._load_and_run_scenario(args, config)
  File "/mnt/ssd/neat/leaderboard/leaderboard/leaderboard_evaluator.py", line 351, in _load_and_run_scenario
    self.manager.run_scenario()
  File "/mnt/ssd/neat/leaderboard/leaderboard/scenarios/scenario_manager.py", line 136, in run_scenario
    self._tick_scenario(timestamp)
  File "/mnt/ssd/neat/leaderboard/leaderboard/scenarios/scenario_manager.py", line 159, in _tick_scenario
    raise AgentError(e)
leaderboard.autoagents.agent_wrapper.AgentError: CUDA error: CUBLAS_STATUS_INVALID_VALUE when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
```
Is this problem related to my CUDA version? My CUDA version is 11.6. When I ran the auto_pilot, everything worked fine.
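In case it helps, here is a minimal check I can run in the same conda environment to compare the CUDA version PyTorch was built with against what is installed, and to reproduce the failing linear layer on the GPU outside the leaderboard. This is just my own debugging sketch; the tensor shapes are an assumption based on the `vel_emb` call in encoder.py, which feeds a (B, 1) velocity tensor into a Linear layer:

```python
import torch
import torch.nn.functional as F

# The cuBLAS error often points at a mismatch between the CUDA toolkit
# PyTorch was compiled with and the installed driver/toolkit, so print both.
print("torch:", torch.__version__)
print("torch built with CUDA:", torch.version.cuda)
print("cuda available:", torch.cuda.is_available())

if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))

    # Reproduce the failing op in isolation: vel_emb is a Linear layer applied
    # to the unsqueezed velocity, so a small F.linear on the GPU should hit the
    # same cublasSgemm path. The output size of 64 is just a guess.
    velocity = torch.randn(1, 1, device="cuda")
    weight = torch.randn(64, 1, device="cuda")
    bias = torch.randn(64, device="cuda")
    out = F.linear(velocity, weight, bias)
    torch.cuda.synchronize()
    print("F.linear on GPU succeeded, output shape:", tuple(out.shape))
```

If this standalone snippet fails with the same CUBLAS_STATUS_INVALID_VALUE error, the issue would seem to be the PyTorch/CUDA installation rather than the neat code itself.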