
TypeError: `__init__()` missing 1 required positional argument: 'out_features' #5

Open
liperrino opened this issue Feb 12, 2019 · 3 comments

Comments

@liperrino

```
python3 train.py -model_path models -data_path models/preprocess-train.t7
Namespace(batch_size=128, d_ff=2048, d_k=64, d_model=512, d_v=64, data_path='models/preprocess-train.t7', display_freq=100, dropout=0.1, log=None, lr=0.0002, max_epochs=10, max_grad_norm=None, max_src_seq_len=50, max_tgt_seq_len=50, model_path='models', n_heads=8, n_layers=6, n_warmup_steps=4000, share_embs_weight=False, share_proj_weight=False, weighted_model=False)
Loading training and development data..
Creating new model parameters..
Traceback (most recent call last):
  File "train.py", line 200, in <module>
    main(opt)
  File "train.py", line 47, in main
    model, model_state = create_model(opt)
  File "train.py", line 27, in create_model
    model = Transformer(opt)  # Initialize a model state.
  File "/media/vivien/A/NEW-SMT/transformer-new-master/transformer/models.py", line 110, in __init__
    opt.max_src_seq_len, opt.src_vocab_size, opt.dropout, opt.weighted_model)
  File "/media/vivien/A/NEW-SMT/transformer-new-master/transformer/models.py", line 54, in __init__
    [self.layer_type(d_k, d_v, d_model, d_ff, n_heads, dropout) for _ in range(n_layers)])
  File "/media/vivien/A/NEW-SMT/transformer-new-master/transformer/models.py", line 54, in <listcomp>
    [self.layer_type(d_k, d_v, d_model, d_ff, n_heads, dropout) for _ in range(n_layers)])
  File "/media/vivien/A/NEW-SMT/transformer-new-master/transformer/layers.py", line 11, in __init__
    self.enc_self_attn = MultiHeadAttention(d_k, d_v, d_model, n_heads, dropout)
  File "/media/vivien/A/NEW-SMT/transformer-new-master/transformer/sublayers.py", line 53, in __init__
    self.multihead_attn = _MultiHeadAttention(d_k, d_v, d_model, n_heads, dropout)
  File "/media/vivien/A/NEW-SMT/transformer-new-master/transformer/sublayers.py", line 19, in __init__
    self.w_q = Linear([d_model, d_k * n_heads])
TypeError: __init__() missing 1 required positional argument: 'out_features'
```
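The traceback shows `Linear` being called with a single list, `[d_model, d_k * n_heads]`. If the wrapper follows the `nn.Linear(in_features, out_features)` signature, the list binds to `in_features` and `out_features` is indeed missing. A minimal sketch of the fix at the call site, using plain `torch.nn.Linear` to illustrate (the actual `Linear` wrapper in `transformer/modules.py` is not shown in this thread):

```python
import torch.nn as nn

d_model, d_k, n_heads = 512, 64, 8  # values from the Namespace above

# Failing call from sublayers.py, line 19:
#   self.w_q = Linear([d_model, d_k * n_heads])
# The list binds to the single in_features parameter, so out_features is missing.
# An nn.Linear-style signature takes the two sizes as separate positional arguments:
w_q = nn.Linear(d_model, d_k * n_heads)
print(w_q.in_features, w_q.out_features)  # 512 512
```

The same change applies to the sibling `w_k`/`w_v` projections if they use the same list-style call.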

@xiongshufeng

I hit the same error. After changing the arguments at the failing line, I got another error:

```
Traceback (most recent call last):
  File "/dcs/acad/u1774624/Experiment/PY-IM-MultiHeadAttention/train.py", line 209, in <module>
    main(opt)
  File "/dcs/acad/u1774624/Experiment/PY-IM-MultiHeadAttention/train.py", line 48, in main
    model, model_state = create_model(opt)
  File "/dcs/acad/u1774624/Experiment/PY-IM-MultiHeadAttention/train.py", line 27, in create_model
    model = Transformer(opt)  # Initialize a model state.
  File "/dcs/acad/u1774624/Experiment/PY-IM-MultiHeadAttention/transformer/models.py", line 113, in __init__
    self.tgt_proj = Linear(opt.d_model, opt.tgt_vocab_size, bias=False)
  File "/dcs/acad/u1774624/Experiment/PY-IM-MultiHeadAttention/transformer/modules.py", line 13, in __init__
    init.zeros_(self.linear.bias)
  File "/dcs/acad/u1774624/miniconda3/lib/python3.7/site-packages/torch/nn/init.py", line 124, in zeros_
    return tensor.zero_()
AttributeError: 'NoneType' object has no attribute 'zero_'
```
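This second error happens because `nn.Linear` stores `self.bias` as `None` when constructed with `bias=False`, so `init.zeros_(self.linear.bias)` receives `None`. Guarding the bias init fixes it. A hypothetical reconstruction of the wrapper in `transformer/modules.py` (the original code is not shown in this thread, so names and the weight init are assumptions):

```python
import torch.nn as nn
import torch.nn.init as init

class Linear(nn.Module):
    def __init__(self, in_features, out_features, bias=True):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features, bias=bias)
        init.xavier_normal_(self.linear.weight)
        if self.linear.bias is not None:  # bias is None when bias=False
            init.zeros_(self.linear.bias)

    def forward(self, x):
        return self.linear(x)

proj = Linear(512, 1000, bias=False)  # previously raised AttributeError
```

With the guard in place, the `self.tgt_proj = Linear(opt.d_model, opt.tgt_vocab_size, bias=False)` call from `models.py` constructs cleanly.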

@smukh93

smukh93 commented Apr 25, 2019

Did you find a solution for this? I'm getting the same error!

@tongchangD

What is preprocess-train.t7?
