About the action space #4

Closed
ewanlee opened this issue Apr 11, 2018 · 6 comments

ewanlee commented Apr 11, 2018

Shouldn't the action space be `3 ** self.codes_count` rather than `self.codes_count * 3`?
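For example, the two quantities diverge quickly as the number of codes grows (a small illustrative sketch, not code from this repo):

```python
# Illustrative only: with N stock codes and 3 operations (buy, sell, hold)
# per code, a joint action over all codes has 3 ** N combinations, while
# N * 3 only counts single-code actions.
codes_count = 4
per_code_actions = codes_count * 3   # 12 single-stock actions
joint_actions = 3 ** codes_count     # 81 joint combinations
print(per_code_actions, joint_actions)
```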

Ceruleanacg (Owner) commented Apr 12, 2018

For DDPG, the action space is `self.codes_count * 3`, because DDPG is implemented here for a continuous action space: for each stock code the action is a value in [-1, 1] per operation, which represents how likely that operation is to be taken.

For PolicyGradient, the action space is still `self.codes_count * 3`, because in fact we can only take one action per state, the same as in DDPG. So you may find some logical problems in DDPG: it tries to take `self.codes_count` actions in one state, which is not reasonable.

So I implemented the method `forward_v2` for testing, to avoid this problem.
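Roughly, the two conventions look like this (a simplified sketch with hypothetical helper names, not the exact code in this repo):

```python
import numpy as np

OPERATIONS = ["buy", "sell", "hold"]

def decode_discrete_action(action_index, codes_count):
    """Map a flat index in [0, codes_count * 3) to (stock_index, operation),
    i.e. one action per state in the PolicyGradient convention above."""
    stock_index, op_index = divmod(action_index, len(OPERATIONS))
    return stock_index, OPERATIONS[op_index]

def decode_ddpg_output(action_vector, codes_count):
    """DDPG outputs a continuous vector of length codes_count * 3 with
    entries in [-1, 1]; here the highest-scoring operation is picked for
    every code, which is the 'many actions per state' issue mentioned above."""
    scores = np.asarray(action_vector).reshape(codes_count, len(OPERATIONS))
    return [OPERATIONS[i] for i in scores.argmax(axis=1)]
```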

Thank you very much.

ewanlee (Author) commented Apr 12, 2018

@Ceruleanacg Your explanation is very detailed, thank you very much. So you define an action as an operation (buy, sell, or hold) on a single stock.

In the `_get_next_info` method, you compare the number of operations performed by the trader at the current state, `self.trader.action_times`, with the number of stocks, `self.code_count`. I guess you want to jump to the next state only after all the stocks have been operated on? If the operations on all stocks are not complete, the current state does not change. But in the training phase of PolicyGradient, you use a greedy strategy (the `use_prob` parameter is `False`) to interact with the market. If the current state is unchanged, the action taken is also unchanged, so `self.code_count` identical operations are performed on the same stock. Isn't that unreasonable?

Ceruleanacg (Owner) commented Apr 12, 2018

In the `_get_next_info` method, two factors influence `state_next`. The first is `current_date`, which is updated by comparing `self.trader.action_times` with `self.code_count` in order to get the next date. The second is the `_get_scaled_stock_data_as_state` method, which inserts `self.trader.cash` and `self.trader.holding_value` into `state_next`.

So for PolicyGradient, which uses `forward_v2`, every action taken will influence the next state.
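In pseudocode the logic is roughly the following (a simplified sketch, not the actual implementation; `_get_next_date` and the reset of `action_times` are assumptions here):

```python
def _get_next_info(self):
    # Sketch of the two factors described above, not the repo's real method.
    # Factor 1: the date only advances after the trader has acted on every code.
    if self.trader.action_times == self.code_count:
        self.current_date = self._get_next_date()  # hypothetical helper
        self.trader.action_times = 0               # assumed reset
    # Factor 2: the scaled stock data plus the trader's cash and holding
    # value are assembled into the next state.
    state_next = self._get_scaled_stock_data_as_state()
    return state_next
```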

ewanlee (Author) commented Apr 12, 2018

I am sorry, I did not read the code carefully. I have two last questions:

  1. What is your purpose in comparing `self.trader.action_times` and `self.code_count`?
  2. Won't PolicyGradient prematurely converge to a local optimum if `use_prob` is always `False`?

Ceruleanacg (Owner) commented Apr 12, 2018
For question 1, if `self.trader.action_times == self.code_count`, it means that `self.current_date` needs to be updated in order to get the stock data for the next date.

For question 2, yes, we do end up in a local optimum. You can also set it to `True` if you want, but I found that PolicyGradient performs very badly when I set it to `True` :)
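Roughly, the `use_prob` switch behaves like this (a simplified sketch, not the repo's exact code):

```python
import numpy as np

def choose_action(action_probs, use_prob=False):
    """Pick an action from the policy's output probabilities.
    use_prob=True samples (explores); use_prob=False is greedy argmax,
    which can converge to a local optimum because it never explores."""
    action_probs = np.asarray(action_probs, dtype=np.float64)
    if use_prob:
        return int(np.random.choice(len(action_probs), p=action_probs))
    return int(np.argmax(action_probs))
```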

If you have further questions, you can add me on WeChat (17392810723); we could learn more from each other.

ewanlee (Author) commented Apr 12, 2018

Alright, thank you very much 👍

ewanlee closed this as completed Apr 12, 2018