module 'torch.nn' has no attribute 'GELU' #2510
I believe it was added in PyTorch 1.5.0, are you using an older version of PyTorch? Fairseq requires >= 1.5.0 currently.
I think it was in 1.4.0 even, that's what I've been using until very recently.
Now my torch version is 1.5.1 and I have the same error. Could you give any suggestions?
And my fairseq is the newest version (git cloned today).
Can you add a line `print(torch.__version__)` somewhere in your code and see what it prints?
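A quick way to verify which PyTorch build the code actually imports (useful when multiple environments are installed) is a minimal sketch like this:

```python
import torch
import torch.nn as nn

# Print the version of the PyTorch that this interpreter imports;
# nn.GELU only exists as a module class from torch >= 1.5.0.
print(torch.__version__)
print(hasattr(nn, "GELU"))  # True on torch >= 1.5.0
```

If this prints an older version than `pip show torch` reports, the interpreter is picking up a different installation (e.g. via `PYTHONPATH` or a stale virtualenv).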
My problem was that I had exported a wrong environment variable, so exactly the wrong PyTorch version was used when the code ran. You can try checking that.
On 10/02/2020 00:49, Moon wrote:
Hi, is there a solution now? I tried replacing nn.GELU with F.gelu, but nothing changed at all. Why does the message look like it's not recognizing the functional as F? I am using torch 1.6.
/notebooks/minGPT-master/mingpt/model.py in __init__(self, config)
92 self.mlp = nn.Sequential(
93 nn.Linear(config.n_embd, 4 * config.n_embd),
---> 94 F.gelu(),
95 nn.Linear(4 * config.n_embd, config.n_embd),
96 nn.Dropout(config.resid_pdrop),
AttributeError: module 'torch.nn' has no attribute 'GELU'
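Note that `F.gelu` is a plain function, not an `nn.Module`, so it cannot be listed inside `nn.Sequential` (and calling `F.gelu()` with no arguments fails anyway). On torch >= 1.5.0 the block works as originally written with `nn.GELU()`; a minimal sketch, using illustrative values in place of `config.n_embd` and `config.resid_pdrop`:

```python
import torch
import torch.nn as nn

n_embd = 8          # illustrative stand-in for config.n_embd
resid_pdrop = 0.1   # illustrative stand-in for config.resid_pdrop

# nn.GELU() is a Module, so it can be an element of nn.Sequential;
# F.gelu is a function and cannot (requires torch >= 1.5.0).
mlp = nn.Sequential(
    nn.Linear(n_embd, 4 * n_embd),
    nn.GELU(),
    nn.Linear(4 * n_embd, n_embd),
    nn.Dropout(resid_pdrop),
)

out = mlp(torch.randn(2, n_embd))
print(out.shape)  # torch.Size([2, 8])
```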
Try defining the GELU class locally, then replace the original `nn.GELU()` with `GELU()`.
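For PyTorch versions that predate `nn.GELU`, the suggestion above can be sketched as a small local module computing the exact GELU, x * Phi(x), via `torch.erf` (which has been available for a long time):

```python
import math
import torch
import torch.nn as nn

class GELU(nn.Module):
    """Drop-in stand-in for nn.GELU on PyTorch < 1.5.0.

    Computes the exact GELU: x * Phi(x), where Phi is the standard
    normal CDF, evaluated with torch.erf.
    """
    def forward(self, x):
        return 0.5 * x * (1.0 + torch.erf(x / math.sqrt(2.0)))

# usage: replace nn.GELU() with GELU() in the model definition
act = GELU()
print(act(torch.tensor([0.0, 1.0])))
```

GELU(0) is exactly 0 and GELU(1) is about 0.8413, matching `nn.GELU` on newer PyTorch builds.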
While running the above command, I am facing the errors below, please check:
Traceback (most recent call last):
  File "examples/speech_recognition/infer.py", line 19, in <module>
    from fairseq import checkpoint_utils, options, progress_bar, utils, tasks
  File "/path/fairseq/fairseq/__init__.py", line 17, in <module>
    import fairseq.criterions  # noqa
  File "/path/fairseq/fairseq/criterions/__init__.py", line 10, in <module>
    from fairseq.criterions.fairseq_criterion import FairseqCriterion, LegacyFairseqCriterion
  File "/path/fairseq/fairseq/criterions/fairseq_criterion.py", line 11, in <module>
    from fairseq import metrics, utils
  File "/path/fairseq/fairseq/utils.py", line 23, in <module>
    from fairseq.modules import gelu, gelu_accurate
  File "/path/fairseq/fairseq/modules/__init__.py", line 19, in <module>
    from .gumbel_vector_quantizer import GumbelVectorQuantizer
  File "/path/fairseq/fairseq/modules/gumbel_vector_quantizer.py", line 12, in <module>
    class GumbelVectorQuantizer(nn.Module):
  File "/path/fairseq/fairseq/modules/gumbel_vector_quantizer.py", line 22, in GumbelVectorQuantizer
    activation=nn.GELU(),
AttributeError: module 'torch.nn' has no attribute 'GELU'