Something wrong with Contextual-Policy.ipython #2
Comments
I'm sorry this problem has come up. I can't quite tell the issue from the error code you posted. Can you tell me what version of TensorFlow you are using?
Dear Arthur, Thank you very much for your reply. I installed tensorflow-0.11.0rc2 with Python 2.7 on Ubuntu/Linux 64-bit. CUDA 8.0 and cuDNN v5 are installed for GPU. Thank you very much for your time. Regards,
Same problem here with tensorflow-0.12.0. The problem is connected to the initialization of the neural agent.
Thanks for pointing this out! I've updated the notebook for compatibility with newer (and hopefully future) versions.
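For anyone landing here with the same traceback: the error comes from passing the bare `tf.ones` function as `weights_initializer`. Newer TF versions call the initializer with a different calling convention (extra keyword arguments such as `partition_info`) that a plain tensor-factory function does not accept, while an initializer object like the one returned by `tf.ones_initializer()` does. Here is a minimal pure-Python sketch of the mechanism; the helper names below are illustrative stand-ins, not real TensorFlow internals:

```python
def ones_factory(shape, dtype="float32"):
    """Stands in for tf.ones: a plain tensor-factory function
    that only accepts (shape, dtype)."""
    return [[1.0] * shape[1] for _ in range(shape[0])]

class OnesInitializer:
    """Stands in for the object returned by tf.ones_initializer():
    its call signature tolerates the extra keyword arguments that
    newer get_variable() implementations pass along."""
    def __call__(self, shape, dtype="float32", partition_info=None):
        return [[1.0] * shape[1] for _ in range(shape[0])]

def get_variable(shape, initializer):
    # Newer TF invokes the initializer with partition_info as well,
    # which is what blows up when a bare factory function is passed.
    return initializer(shape, dtype="float32", partition_info=None)

# Old notebook style: passing the bare factory raises TypeError.
try:
    get_variable((3, 4), ones_factory)
except TypeError as e:
    print("old style fails:", e)

# Updated style: passing an initializer object works.
weights = get_variable((3, 4), OnesInitializer())
print("rows:", len(weights))
```

The corresponding one-line notebook change would presumably be replacing `weights_initializer=tf.ones` with `weights_initializer=tf.ones_initializer()` in the agent's `__init__`.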
Dear Arthur,
I am following your tutorials for reinforcement learning. It is very helpful. However, when I try to run "Contextual-Policy.ipython", I encounter some problems. Could you tell me how to solve it?
```
TypeError                                 Traceback (most recent call last)
<ipython-input-…> in <module>()
      2
      3 cBandit = contextual_bandit() #Load the bandits.
----> 4 myAgent = agent(lr=0.001,s_size=cBandit.num_bandits,a_size=cBandit.num_actions) #Load the agent.
      5 weights = tf.trainable_variables()[0] #The weights we will evaluate to look into the network.
      6

<ipython-input-…> in __init__(self, lr, s_size, a_size)
      4 self.state_in= tf.placeholder(shape=[1],dtype=tf.int32)
      5 state_in_OH = slim.one_hot_encoding(self.state_in,s_size)
----> 6 output = slim.fully_connected(state_in_OH,a_size, biases_initializer=None,activation_fn=tf.nn.sigmoid,weights_initializer=tf.ones)
      7 self.output = tf.reshape(output,[-1])
      8 self.chosen_action = tf.argmax(self.output,0)

/home/rlig/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.pyc in func_with_args(*args, **kwargs)
    175       current_args = current_scope[key_func].copy()
    176       current_args.update(kwargs)
--> 177       return func(*args, **current_args)
    178   _add_op(func)
    179   setattr(func_with_args, '_key_op', _key_op(func))

/home/rlig/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/contrib/layers/python/layers/layers.pyc in fully_connected(inputs, num_outputs, activation_fn, normalizer_fn, normalizer_params, weights_initializer, weights_regularizer, biases_initializer, biases_regularizer, reuse, variables_collections, outputs_collections, trainable, scope)
    841         regularizer=weights_regularizer,
    842         collections=weights_collections,
--> 843         trainable=trainable)
    844     if len(static_shape) > 2:
    845       # Reshape inputs

/home/rlig/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.pyc in func_with_args(*args, **kwargs)
    175       current_args = current_scope[key_func].copy()
    176       current_args.update(kwargs)
--> 177       return func(*args, **current_args)
    178   _add_op(func)
    179   setattr(func_with_args, '_key_op', _key_op(func))

/home/rlig/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/contrib/framework/python/ops/variables.pyc in model_variable(name, shape, dtype, initializer, regularizer, trainable, collections, caching_device, device)
    267                               initializer=initializer, regularizer=regularizer,
    268                               trainable=trainable, collections=collections,
--> 269                               caching_device=caching_device, device=device)
    270
    271
```