
About the baseline in the paper #27

Closed
Jonyian opened this issue Sep 19, 2023 · 8 comments

Comments

@Jonyian

Jonyian commented Sep 19, 2023

Could you please open source the “ours” algorithm mentioned in your paper?

@BigZhaoHYZ

Same request here. Also, running run_maacktr.py and run_mappo.py both raises the following error; could you share a fix? Thanks.
ERROR:root:Can not find checkpoint for ./results/Sep_19_12_22_29/models/
H:\Code-of-study\PaperCode\MARL_CAVs-main\MARL\single_agent\kfac.py:144: UserWarning: volatile was removed (Variable.volatile is always False)
if input[0].volatile == False and self.steps % self.Ts == 0:
H:\ProgramData\Anaconda3\envs\MARL_CAVs-main\lib\site-packages\torch\nn\modules\module.py:795: UserWarning: Using a non-full backward hook when the forward contains multiple autograd Nodes is deprecated and will be removed in future versions. This hook will be missing some grad_input. Please use register_full_backward_hook to get the documented behavior.
warnings.warn("Using a non-full backward hook when the forward contains multiple autograd Nodes "
H:\Code-of-study\PaperCode\MARL_CAVs-main\MARL\single_agent\Model_common.py:62: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
act = self.actor_output_act(self.actor_linear(out))
Traceback (most recent call last):
  File "H:\Code-of-study\PaperCode\MARL_CAVs-main\MARL\run_maacktr.py", line 217, in <module>
    train(args)
  File "H:\Code-of-study\PaperCode\MARL_CAVs-main\MARL\run_maacktr.py", line 128, in train
    maacktr.interact()
  File "H:\Code-of-study\PaperCode\MARL_CAVs-main\MARL\MAACKTR.py", line 96, in interact
    actions.append([index_to_one_hot(a, self.action_dim) for a in action])
  File "H:\Code-of-study\PaperCode\MARL_CAVs-main\MARL\MAACKTR.py", line 96, in <listcomp>
    actions.append([index_to_one_hot(a, self.action_dim) for a in action])
  File "H:\Code-of-study\PaperCode\MARL_CAVs-main\MARL\common\utils.py", line 37, in index_to_one_hot
    one_hot = np.zeros((len(index), dim))
TypeError: object of type 'numpy.int32' has no len()
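For reference, the crash happens because a single `numpy.int32` action index reaches `index_to_one_hot`, which calls `len()` on it as if it were a sequence. A minimal sketch of a scalar-safe version is below; this is a guess at a workaround, not the repository's actual fix (the maintainer suggests the real cause may be a library-version mismatch):

```python
import numpy as np

def index_to_one_hot(index, dim):
    """One-hot encode an action index or a sequence of indices.

    Special-cases the scalar input that triggers the TypeError above:
    np.int32 has no len(), so integers get a 1-D one-hot vector.
    (Sketch of a possible workaround, not the repository's official fix.)
    """
    if np.isscalar(index) or isinstance(index, np.integer):
        one_hot = np.zeros(dim)
        one_hot[int(index)] = 1.0
    else:
        # Sequence case: one row per index.
        index = np.asarray(index, dtype=int)
        one_hot = np.zeros((len(index), dim))
        one_hot[np.arange(len(index)), index] = 1.0
    return one_hot
```

With this, both `index_to_one_hot(np.int32(2), 5)` and `index_to_one_hot([0, 3], 5)` return sensible arrays instead of raising.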

@DongChen06
Owner

@BigZhaoHYZ , please look at the solution at: #4. The reason may be the mismatch of the libs.

@Jonyian
Author

Jonyian commented Sep 20, 2023

Thank you very much for your reply. I also have a question: is the baseline in your paper the run_ma2c.py in the folder?

@DongChen06
Owner

You can change the settings in configs.ini. For example, you can set safety_guarantee to False.
[screenshot of configs.ini]

There are also other settings you can play with:

; concurrent
training_strategy = concurrent
actor_hidden_size = 128
critic_hidden_size = 128
shared_network = True
action_masking = True
state_split = True
; "greedy", "regionalR", "global_R"
reward_type = regionalR
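These settings are standard INI syntax (with `;` comment lines), so they can be read with Python's stdlib `configparser`. A minimal sketch follows; the section name `MODEL_CONFIG` is an assumption for illustration — check the actual configs.ini in the repository for the real section names:

```python
import configparser

# Sketch: parsing the settings quoted above with the standard library.
# configparser treats lines starting with ';' or '#' as comments by default.
cfg = configparser.ConfigParser()
cfg.read_string("""
[MODEL_CONFIG]
; concurrent
training_strategy = concurrent
actor_hidden_size = 128
critic_hidden_size = 128
shared_network = True
action_masking = True
state_split = True
; "greedy", "regionalR", "global_R"
reward_type = regionalR
""")

section = cfg["MODEL_CONFIG"]
print(section.get("training_strategy"))      # concurrent
print(section.getint("actor_hidden_size"))   # 128
print(section.getboolean("shared_network"))  # True
```

In a real run you would call `cfg.read("configs.ini")` instead of `read_string`, and the typed getters (`getint`, `getboolean`) convert the string values for you.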

@BigZhaoHYZ

BigZhaoHYZ commented Sep 21, 2023

safety_guarantee = True
n_step = 7

Hello, is running run_ma2c.py with the configs.ini above the "baseline + Tn" from the paper? Thanks for your reply! @DongChen06

@DongChen06
Owner

@BigZhaoHYZ Yes, you are right.

@Jonyian
Author

Jonyian commented Sep 22, 2023

Thank you very much for your reply
