
How to change the agent type? #6

Closed
lamhk opened this issue Oct 2, 2018 · 6 comments


lamhk commented Oct 2, 2018

Hi, I tried to build with "-b BUILD" using DQN instead of PPO at run.py line 10 (agent = "PPO"). However, when I run run.py, it still creates a directory for PPO_n. Where should I change it to switch the agent effectively? Thanks.


lamhk commented Oct 2, 2018

Hi, I tried to change (1) session/local.py with "dqn", (2) run.py with agent="DQN", and (3) config/agent.json with "type": "dqn_agent", and saw it successfully create the save/DQN_0 directory. However, I received the following error:

Traceback (most recent call last):
  File "run.py", line 56, in <module>
    session.loadSession()
  File "/home/lamhk/TradzQAI-master/core/session/local.py", line 47, in loadSession
    self.initAgent()
  File "/home/lamhk/TradzQAI-master/core/session/local.py", line 58, in initAgent
    self.agent = self.agent(env=self.env, device=self.device)._get()
  File "/home/lamhk/TradzQAI-master/agents/DQN.py", line 6, in __init__
    Agent.__init__(self, env=env, device=device)
  File "/home/lamhk/TradzQAI-master/agents/agent.py", line 23, in __init__
    device=device
  File "/home/lamhk/.local/lib/python3.6/site-packages/tensorforce/agents/agent.py", line 283, in from_spec
    kwargs=kwargs
  File "/home/lamhk/.local/lib/python3.6/site-packages/tensorforce/util.py", line 192, in get_object
    return obj(*args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'gae_lambda'
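The traceback suggests that config/agent.json still contains PPO-specific hyperparameters (such as gae_lambda, a GAE parameter DQN has no use for) that get forwarded as keyword arguments to the DQN constructor. A minimal sketch, using hypothetical agent classes rather than the actual TradzQAI/tensorforce code, of why that raises this exact TypeError:

```python
# Hypothetical agent classes illustrating the config/agent mismatch;
# these are NOT the real TradzQAI or tensorforce classes.

class PPOAgent:
    def __init__(self, learning_rate=1e-3, gae_lambda=0.97):
        self.learning_rate = learning_rate
        self.gae_lambda = gae_lambda

class DQNAgent:
    def __init__(self, learning_rate=1e-3):  # no gae_lambda parameter
        self.learning_rate = learning_rate

# A config written for PPO carries a key DQN does not accept.
ppo_config = {"learning_rate": 1e-3, "gae_lambda": 0.97}

PPOAgent(**ppo_config)  # fine
try:
    DQNAgent(**ppo_config)
except TypeError as e:
    print(e)  # prints the unexpected-keyword error, mirroring the traceback
```

Rebuilding the config for the new agent type (as suggested below) regenerates agent.json with only the keys that agent's constructor accepts.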


kkuette commented Oct 2, 2018

You just have to delete your current TradzQAI/config/ directory and build a new one with this command: py run.py -b DQN.


lamhk commented Oct 3, 2018

Hi, if I delete TradzQAI/config, I receive the following error after running py run.py -b DQN. If the config directory exists, there is no error, but the 'gae_lambda' error above happens again. Any idea? Thanks.

Traceback (most recent call last):
  File "run.py", line 46, in <module>
    session = Session(mode=args.mode, config=args.config, agent=args.build)
  File "/home/lamhk/TradzQAI-master/core/session/local.py", line 15, in __init__
    self.env = Local_env(mode=mode, gui=gui, contract_type=contract_type, config=config, agent=agent)
  File "/home/lamhk/TradzQAI-master/core/environnement/local_env.py", line 98, in __init__
    self.close()
  File "/home/lamhk/TradzQAI-master/core/environnement/base/base_env.py", line 71, in close
    self.logger.stop()
AttributeError: 'Local_env' object has no attribute 'logger'
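The owner's actual commit is not shown in this thread, but the failure mode in the traceback is clear: __init__ calls close() before self.logger has been assigned, and close() assumes the attribute exists. A hypothetical sketch of that pattern and one common defensive fix (getattr with a default):

```python
# Hypothetical simplification of Local_env; NOT the actual TradzQAI code.

class Logger:
    def stop(self):
        print("logger stopped")

class Local_env:
    def __init__(self, config_ok):
        if not config_ok:
            # Bail out before self.logger is ever assigned; calling
            # close() here is what raised the AttributeError.
            self.close()
            return
        self.logger = Logger()

    def close(self):
        # getattr with a None default avoids AttributeError when
        # __init__ aborted before the logger was created.
        logger = getattr(self, "logger", None)
        if logger is not None:
            logger.stop()

Local_env(config_ok=False)           # no AttributeError
Local_env(config_ok=True).close()    # logger stops normally
```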


kkuette commented Oct 3, 2018

I've fixed the problem; it was due to a hard-coded argument. Commit done!


lamhk commented Oct 3, 2018

Hi, thanks for the updated code. I tried the DQN agent as mentioned above with 100 episodes. After the first episode, the remaining episodes produce no results; the eval report also shows all zeros.

2018:10:03 13:49:06 000000 Starting episode : 1
2018:10:03 13:49:44 000001 ######################################################
2018:10:03 13:49:44 000002 Total reward : -238.704
2018:10:03 13:49:44 000003 Average daily reward : -10.850
2018:10:03 13:49:44 000004 Total profit : -240.25
2018:10:03 13:49:44 000005 Total trade : 465
2018:10:03 13:49:44 000006 Sharp ratio : -1.723
2018:10:03 13:49:44 000007 Mean return : -0.013
2018:10:03 13:49:44 000008 Max Drawdown : -0.019
2018:10:03 13:49:44 000009 Max return : 0.002
2018:10:03 13:49:44 000010 Percent return : -0.012
2018:10:03 13:49:44 000011 Trade W/L : 0.495
2018:10:03 13:49:44 000012 Step : 18353
2018:10:03 13:49:44 000013 ######################################################
2018:10:03 13:49:44 000014 Starting episode : 2
2018:10:03 13:50:17 000015 ######################################################
2018:10:03 13:50:17 000016 Total reward : 0.0
2018:10:03 13:50:17 000017 Average daily reward : 0.000
2018:10:03 13:50:17 000018 Total profit : 0
2018:10:03 13:50:17 000019 Total trade : 0
2018:10:03 13:50:17 000020 Sharp ratio : 0.000
2018:10:03 13:50:17 000021 Mean return : 0.000
2018:10:03 13:50:17 000022 Max Drawdown : 0.000
2018:10:03 13:50:17 000023 Max return : 0.000
2018:10:03 13:50:17 000024 Percent return : 0.000
2018:10:03 13:50:17 000025 Trade W/L : 0.000
2018:10:03 13:50:17 000026 Step : 18353


kkuette commented Oct 3, 2018

Default values aren't the best values for your agent; you should run some tests to find the best ones.
If it shows results like yours, it's because your model is not learning well.

@kkuette kkuette closed this as completed Oct 3, 2018