
module 'torch.nn' has no attribute 'GELU' #2510

Closed
MrityunjoyS opened this issue Aug 21, 2020 · 8 comments

Comments

@MrityunjoyS

PYTHONPATH=/path/fairseq/ python3 examples/speech_recognition/infer.py /path/audio_file/wav2vec/ --task audio_pretraining \
--nbest 1 --path /path/audio_file/wav2vec_small.pt --gen-subset valid --results-path /path/audio_file/wav2vec/tmp/am/ --w2l-decoder kenlm \
--lm-model /path/kenlm/build/bin/ --lm-weight 2 --word-score -1 --sil-weight 0 --criterion ctc --labels ltr --max-tokens 4000000 \
--post-process letter

While running the above command, I get the error below; please check:

Traceback (most recent call last):
File "examples/speech_recognition/infer.py", line 19, in <module>
from fairseq import checkpoint_utils, options, progress_bar, utils, tasks
File "/path/fairseq/fairseq/__init__.py", line 17, in <module>
import fairseq.criterions # noqa
File "/path/fairseq/fairseq/criterions/__init__.py", line 10, in <module>
from fairseq.criterions.fairseq_criterion import FairseqCriterion, LegacyFairseqCriterion
File "/path/fairseq/fairseq/criterions/fairseq_criterion.py", line 11, in <module>
from fairseq import metrics, utils
File "/path/fairseq/fairseq/utils.py", line 23, in <module>
from fairseq.modules import gelu, gelu_accurate
File "/path/fairseq/fairseq/modules/__init__.py", line 19, in <module>
from .gumbel_vector_quantizer import GumbelVectorQuantizer
File "/path/fairseq/fairseq/modules/gumbel_vector_quantizer.py", line 12, in <module>
class GumbelVectorQuantizer(nn.Module):
File "/path/fairseq/fairseq/modules/gumbel_vector_quantizer.py", line 22, in GumbelVectorQuantizer
activation=nn.GELU(),
AttributeError: module 'torch.nn' has no attribute 'GELU'

@MrityunjoyS
Author

@myleott @alexeib Could you suggest a possible solution for this? I searched but did not find one, and I am stuck and unable to make further progress with the model.

@myleott
Contributor

myleott commented Aug 21, 2020

I believe it was added in PyTorch 1.5.0; are you using an older version of PyTorch? Fairseq currently requires >= 1.5.0.
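One way to check is a short snippet like the following (an illustrative sketch, not from the thread): print the installed PyTorch version and whether torch.nn actually exposes a GELU module in this environment.

```python
# Illustrative check: print the installed PyTorch version and whether
# torch.nn exposes a GELU module in this environment.
import torch
import torch.nn as nn

print(torch.__version__)      # e.g. '1.6.0'
print(hasattr(nn, "GELU"))    # False on releases that predate nn.GELU
```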

@alexeib
Contributor

alexeib commented Aug 21, 2020

I think it was in 1.4.0 even; that's what I've been using until very recently.

@alexeib alexeib closed this as completed Aug 22, 2020
@duterscmy

Now my torch version is 1.5.1 and I have the same error. Could you give any suggestions?
File "/ssd1/exec/caomingyu/works/rich_text/fairseq/fairseq/modules/gumbel_vector_quantizer.py", line 21, in GumbelVectorQuantizer
activation=nn.GELU(),
AttributeError: module 'torch.nn' has no attribute 'GELU'
@myleott
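If the reported version looks correct but the error persists, a second PyTorch install elsewhere on the path may be shadowing the intended one. A minimal check (an illustrative sketch, not from the thread):

```python
# Illustrative sketch: show which PyTorch installation this interpreter
# actually imports; a stale copy earlier on sys.path / PYTHONPATH can
# shadow the intended 1.5.1 install.
import torch

print(torch.__version__)  # the version actually in use
print(torch.__file__)     # the location it was imported from
```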

@duterscmy

My fairseq is the newest version (git cloned today).

@alexeib
Contributor

alexeib commented Oct 1, 2020

Can you add a line "print(torch.__version__)" somewhere in your code and see what it prints?

@duterscmy

duterscmy commented Oct 2, 2020 via email

@yangxh11

yangxh11 commented Oct 19, 2020

My problem was that I exported a wrong environment variable when the code ran, so the wrong PyTorch version was being used. You can try checking that.

On 10/02/2020 00:49, Moon wrote:
Hi, is there a solution now? I tried replacing nn.GELU with F.gelu, but nothing changed at all. Why does the message look like it's not recognizing the functional as F? I am using torch 1.6.

/notebooks/minGPT-master/mingpt/model.py in __init__(self, config)
92 self.mlp = nn.Sequential(
93 nn.Linear(config.n_embd, 4 * config.n_embd),
---> 94 F.gelu(),
95 nn.Linear(4 * config.n_embd, config.n_embd),
96 nn.Dropout(config.resid_pdrop),
AttributeError: module 'torch.nn' has no attribute 'GELU'

Try defining the GELU class locally:

from torch import Tensor
import torch.nn as nn
import torch.nn.functional as F

class GELU(nn.Module):
    def forward(self, input: Tensor) -> Tensor:
        return F.gelu(input)

then replace the original 'nn.GELU()' calls with 'GELU()'.
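A usage sketch (layer sizes are made up for illustration): the nn.Module wrapper can sit inside nn.Sequential, whereas the bare function F.gelu cannot, because nn.Sequential expects module instances rather than functions — which is why the F.gelu() substitution quoted above still fails.

```python
# Sketch: wrap F.gelu in an nn.Module so it can be used wherever
# nn.GELU() would go, e.g. inside nn.Sequential.
import torch
from torch import Tensor
import torch.nn as nn
import torch.nn.functional as F

class GELU(nn.Module):
    def forward(self, input: Tensor) -> Tensor:
        return F.gelu(input)

# Illustrative layer sizes, not from the thread.
mlp = nn.Sequential(
    nn.Linear(8, 32),
    GELU(),            # drop-in replacement for nn.GELU()
    nn.Linear(32, 8),
)
out = mlp(torch.randn(2, 8))
print(out.shape)  # torch.Size([2, 8])
```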
