float32/float64 issue unresolved #15

Open
genekogan opened this issue Jul 29, 2016 · 3 comments

@genekogan commented Jul 29, 2016

I'm having the same issue as described here even after setting floatX=float32. I have CUDA 7.0 and cuDNN installed on Ubuntu 14.04. skipthoughts is verified to work fine. Any idea what the issue could be?

$ THEANO_FLAGS='floatX=float32,device=gpu0,scan.allow_gc=True' python alignDraw.py models/coco-captions-32x32.json
Using gpu device 0: GRID K520 (CNMeM is disabled, cuDNN 5005)
alignDraw.py:342: FutureWarning: comparison to None will result in an elementwise object comparison in the future.
if valData != None:
Traceback (most recent call last):
File "alignDraw.py", line 616, in
rvae = ReccurentAttentionVAE(dimY, dimLangRNN, dimAlign, dimX, dimReadAttent, dimWriteAttent, dimRNNEnc, dimRNNDec, dimZ, runSteps, batch_size, reduceLRAfter, data, data_captions, valData=val_data, valDataCaptions=val_data_captions, pathToWeights=pathToWeights)
File "alignDraw.py", line 354, in init
self._kl_final, self._logpxz, self._log_likelihood, self._c_ts, self._c_ts_gener, self._x, self._y, self._run_steps, self._updates_train, self._updates_gener, self._read_attent_params, self._write_attent_params, self._write_attent_params_gener, self._alphas_gener, self._params, self._mu_prior_t_gener, self._log_sigma_prior_t_gener = build_lang_encoder_and_attention_vae_decoder(self.dimY, self.dimLangRNN, self.dimAlign, self.dimX, self.dimReadAttent, self.dimWriteAttent, self.dimRNNEnc, self.dimRNNDec, self.dimZ, self.runSteps, self.pathToWeights)
File "alignDraw.py", line 293, in build_lang_encoder_and_attention_vae_decoder
sequences=eps, outputs_info=[c0, h0_dec, cell0_dec, h0_enc, cell0_enc, kl_0, mu_prior_0, log_sigma_prior_0, None, None], non_sequences=all_params, n_steps=run_steps)
File "/usr/local/lib/python2.7/dist-packages/theano/scan_module/scan.py", line 1041, in scan
scan_outs = local_op(*scan_inputs)
File "/usr/local/lib/python2.7/dist-packages/theano/gof/op.py", line 611, in call
node = self.make_node(_inputs, *_kwargs)
File "/usr/local/lib/python2.7/dist-packages/theano/scan_module/scan_op.py", line 538, in make_node
inner_sitsot_out.type.dtype))
ValueError: When compiling the inner function of scan the following error has been encountered: The initial state (outputs_info in scan nomenclature) of variable IncSubtensor{Set;:int64:}.0 (argument number 1) has dtype float32, while the result of the inner function (fn) has dtype float64. This can happen if the inner function of scan results in an upcast or downcast.
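
For what it's worth, the error message itself points at theano.scan: one of the values returned by the inner step function comes out as float64 while the matching outputs_info entry is float32. Below is a minimal sketch (not taken from alignDraw.py; the names are made up) of the kind of upcast the message describes and one way to keep the step output in floatX with an explicit cast:

import numpy as np
import theano
import theano.tensor as T

floatX = theano.config.floatX  # 'float32' when THEANO_FLAGS contains floatX=float32

x0 = T.vector('x0', dtype=floatX)

def step(prev):
    # Mixing the float32 state with a float64 constant (or a float64 shared
    # variable) can upcast the step result to float64, which then no longer
    # matches the float32 initial state given in outputs_info.
    new = prev * np.float64(0.5)
    # An explicit cast keeps the inner result consistent with outputs_info.
    return T.cast(new, floatX)

outputs, updates = theano.scan(fn=step, outputs_info=[x0], n_steps=5)
f = theano.function([x0], outputs, updates=updates)
print(f(np.ones(3, dtype=floatX)))

In alignDraw.py the same idea would mean making sure every value returned from the scan step (and every shared variable it touches) is created with, or cast to, floatX.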

@simplysimleprob

I have exactly the same issue. No luck so far :(

@strawberrysparkle commented Dec 12, 2017

I'm struggling with the same error as well. Has anyone been able to resolve this since the previous folks posted? :)

Update: I was able to resolve this error by adding the Theano flag cast_policy=numpy+floatX. The command that worked for me was THEANO_FLAGS='scan.allow_gc=True,device=cuda,floatX=float32,cast_policy=numpy+floatX' python alignDraw.py. Hope this helps someone else too!
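
In case it helps anyone following along: if my understanding of Theano's config handling is right, THEANO_FLAGS just has to be present in the environment before theano is first imported, so the same flags can also be set programmatically at the very top of the script, e.g.:

import os
# Must run before the first 'import theano'; changing the environment
# variable after theano is imported has no effect.
os.environ['THEANO_FLAGS'] = (
    'scan.allow_gc=True,device=cuda,floatX=float32,cast_policy=numpy+floatX'
)

import theano
print(theano.config.floatX)       # expect 'float32'
print(theano.config.cast_policy)  # expect 'numpy+floatX'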

@YYlin commented Nov 25, 2018

When I run python alignDraw.py with THEANO_FLAGS='scan.allow_gc=True,device=cuda,floatX=float32,cast_policy=numpy+floatX',
I still get the same problem. Could you tell me why?
yulin@dlnlp01:~/gan_projects/text2image-master/coco$ python alignDraw.py models/coco-captions-32x32.json THEANO_FLAGS='scan.allow_gc=False,device=cuda,floatX=float32,cast_policy=numpy+floatX'
/home/yulin/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
building gradient function
0:00:13.788787
building train function
Traceback (most recent call last):
File "alignDraw.py", line 628, in
rvae.train(lr, epochs, save=save, validateAfter=validateAfter)
File "alignDraw.py", line 495, in train
self._build_train_function()
File "alignDraw.py", line 445, in _build_train_function
self._y: self.train_captions[self._index_cap, 0:self._cap_len]
File "/home/yulin/anaconda3/lib/python3.6/site-packages/theano/compile/function.py", line 317, in function
output_keys=output_keys)
File "/home/yulin/anaconda3/lib/python3.6/site-packages/theano/compile/pfunc.py", line 449, in pfunc
no_default_updates=no_default_updates)
File "/home/yulin/anaconda3/lib/python3.6/site-packages/theano/compile/pfunc.py", line 219, in rebuild_collect_shared
cloned_v = clone_v_get_shared_updates(v, copy_inputs_over)
File "/home/yulin/anaconda3/lib/python3.6/site-packages/theano/compile/pfunc.py", line 93, in clone_v_get_shared_updates
clone_v_get_shared_updates(i, copy_inputs_over)
File "/home/yulin/anaconda3/lib/python3.6/site-packages/theano/compile/pfunc.py", line 93, in clone_v_get_shared_updates
clone_v_get_shared_updates(i, copy_inputs_over)
File "/home/yulin/anaconda3/lib/python3.6/site-packages/theano/compile/pfunc.py", line 93, in clone_v_get_shared_updates
clone_v_get_shared_updates(i, copy_inputs_over)
[Previous line repeated 1 more times]
File "/home/yulin/anaconda3/lib/python3.6/site-packages/theano/compile/pfunc.py", line 96, in clone_v_get_shared_updates
[clone_d[i] for i in owner.inputs], strict=rebuild_strict)
File "/home/yulin/anaconda3/lib/python3.6/site-packages/theano/gof/graph.py", line 242, in clone_with_new_inputs
new_inputs[i] = curr.type.filter_variable(new)
File "/home/yulin/anaconda3/lib/python3.6/site-packages/theano/tensor/type.py", line 234, in filter_variable
self=self))
TypeError: Cannot convert Type TensorType(float32, matrix) (of Variable AdvancedSubtensor1.0) into Type TensorType(float64, matrix). You can try to manually convert AdvancedSubtensor1.0 into a TensorType(float64, matrix).
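
One thing worth checking here (an assumption on my part, not confirmed above): in the command shown, THEANO_FLAGS='…' is placed after the script and its arguments, so the shell passes it to Python as a positional argument instead of exporting it as an environment variable. Theano would then fall back to its default floatX=float64, which matches the float32 vs. float64 mismatch in this traceback. A quick sanity check, e.g. at the top of alignDraw.py:

import os
# None here means the flags never reached the process environment.
print(os.environ.get('THEANO_FLAGS'))

import theano
print(theano.config.floatX)       # expect 'float32'
print(theano.config.cast_policy)  # expect 'numpy+floatX'

If the environment variable is missing, putting THEANO_FLAGS='…' before the python command (as in the earlier comments) or exporting it first should make the flags take effect.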
