
Question regarding the dueling network architecture part #8

Open
huanpass opened this issue Aug 27, 2020 · 3 comments

@huanpass

Hi, I found the code below in the network part of train_dqn.py:

```python
# Split into value and advantage streams
val_stream, adv_stream = Lambda(lambda w: tf.split(w, 2, 3))(x)  # custom splitting layer
```

It looks like the output of the final hidden conv layer is divided into two partial parts, with one half fed to the state-value stream and the other to the advantage stream. I have also checked other implementations and the paper, and it looks like each stream should receive a complete copy of the hidden layer's output rather than a slice of it. Can I ask why you split it rather than feed the same whole data flow to both the value and advantage streams? A minimal sketch of what I mean is below.
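Just to illustrate what I mean by partial parts, here is a minimal sketch of what that split does (the shape here is hypothetical, not necessarily what train_dqn.py produces):

```python
import tensorflow as tf

# Hypothetical conv output: (batch, height, width, channels)
x = tf.random.normal((1, 1, 1, 1024))

# tf.split(w, 2, 3) cuts the channel axis in half, so each stream
# only sees 512 of the 1024 features instead of a full copy
val_stream, adv_stream = tf.split(x, 2, axis=3)
print(val_stream.shape, adv_stream.shape)  # (1, 1, 1, 512) (1, 1, 1, 512)
```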

Many thanks!
Edward

@sebtheiler (Owner)

The whole data flow is indeed fed to both the initial value and advantage streams. After that, there are separate dense layers for the final calculations. The Lambda layer is just for slicing the previous conv layer into the val and adv streams, as is done in Wang et al. 2016.
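For reference, here is a rough sketch of how the split-based head fits together (the input shape and n_actions=4 are illustrative, not the exact train_dqn.py code):

```python
import tensorflow as tf
from tensorflow.keras.initializers import VarianceScaling
from tensorflow.keras.layers import Conv2D, Dense, Flatten, Input, Lambda

inputs = Input(shape=(7, 7, 64))  # hypothetical output of the earlier conv stack
x = Conv2D(1024, (7, 7), strides=1, kernel_initializer=VarianceScaling(scale=2.),
           activation='relu', use_bias=False)(inputs)

# Slice the 1024 channels into two 512-channel halves, one per stream
val_stream, adv_stream = Lambda(lambda w: tf.split(w, 2, 3))(x)

val = Dense(1, kernel_initializer=VarianceScaling(scale=2.))(Flatten()(val_stream))
adv = Dense(4, kernel_initializer=VarianceScaling(scale=2.))(Flatten()(adv_stream))  # 4 = n_actions

# Combine per Equation (9): Q(s, a) = V(s) + (A(s, a) - mean over a' of A(s, a'))
q = Lambda(lambda va: va[0] + (va[1] - tf.reduce_mean(va[1], axis=1, keepdims=True)))([val, adv])
model = tf.keras.Model(inputs, q)
```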

Sorry for the late reply, thank you for your patience. If you have any more questions or if this wasn't clear enough, please let me know and I'll try to get back to you as soon as possible.

@huanpass (Author)

Hi, thanks for your reply,

The question is why you want to slice it rather than just share the same flow between the value and advantage streams. The flow fed to value and advantage should be the same, and there is also no slice operation in the paper. For example:
```python
x = Conv2D(64, (3, 3), strides=1, kernel_initializer=VarianceScaling(scale=2.), activation='relu', use_bias=False)(x)
x = Conv2D(1024, (7, 7), strides=1, kernel_initializer=VarianceScaling(scale=2.), activation='relu', use_bias=False)(x)

val_stream = Flatten()(x)
val = Dense(1, kernel_initializer=VarianceScaling(scale=2.))(val_stream)

adv_stream = Flatten()(x)
adv = Dense(n_actions, kernel_initializer=VarianceScaling(scale=2.))(adv_stream)
```

@sebtheiler (Owner)

I believe there is a slice/split operation in the paper:

> Our network architecture has the same low-level convolutional structure of DQN... As shown in Figure 1, the dueling network splits into two streams of fully connected layers. The value and advantage streams both have a fully-connected layer with 512 units. The final hidden layers of the value and advantage streams are both fully-connected with the value stream having one output and the advantage as many outputs as there are valid actions. We combine the value and advantage streams using the module described by Equation (9). Rectifier non-linearities (Fukushima, 1980) are inserted between all adjacent layers.

(From Section 4.2, page 6)
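For reference, the aggregation module described by Equation (9) in the paper combines the two streams as:

```latex
Q(s, a; \theta, \alpha, \beta) = V(s; \theta, \beta)
    + \Big( A(s, a; \theta, \alpha) - \frac{1}{|\mathcal{A}|} \sum_{a'} A(s, a'; \theta, \alpha) \Big)
```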

I might be wrong about this (and it would undoubtedly be interesting to experiment with the architecture you detailed), but this is how I personally interpreted the paper.
