This repository has been archived by the owner on Oct 31, 2023. It is now read-only.

Multiple values for agent/optimizers. #29

Open
chenci107 opened this issue Nov 30, 2022 · 0 comments

chenci107 commented Nov 30, 2022

Description

I ran the code as described in the docs:

PYTHONPATH=. python3 -u main.py \
setup=metaworld \
env=metaworld-mt10 \
agent=state_sac \
experiment.num_eval_episodes=1 \
experiment.num_train_steps=2000000 \
setup.seed=1 \
replay_buffer.batch_size=1280 \
agent.multitask.num_envs=10 \
agent.multitask.should_use_disentangled_alpha=True \
agent.encoder.type_to_select=identity \
agent.multitask.should_use_multi_head_policy=False \
agent.multitask.actor_cfg.should_condition_model_on_task_info=False \
agent.multitask.actor_cfg.should_condition_encoder_on_task_info=True \
agent.multitask.actor_cfg.should_concatenate_task_info_with_encoder=True

and it fails with the following error:

Multiple values for agent/optimizers. To override a value use 'override agent/optimizers: metaworld_encoder'
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

How can I solve this problem? Thanks~
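For context, this appears to be Hydra's defaults-list composition error: when a config group (here `agent/optimizers`) is assigned a value in more than one place, Hydra 1.1+ requires an explicit `override` keyword in the defaults list, exactly as the error message suggests. A minimal sketch of what that entry could look like (the config file location is an assumption; only the group and value come from the error message):

```yaml
# Hypothetical: in the defaults list of the selected agent config
# (file path assumed, not confirmed from the repo):
defaults:
  - override agent/optimizers: metaworld_encoder
```

Setting `HYDRA_FULL_ERROR=1`, as the message notes, prints the full stack trace and shows which config file introduces the conflicting value.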

