Issues: graykode/nlp-tutorial

Issues list

The comment in the Bi-LSTM (Attention) model has an issue.
#84 opened Sep 10, 2024 by tmracy, updated Jan 9, 2025
Faster attention calculation in 4-2.Seq2Seq?
#75 opened Jul 23, 2022 by shouldsee, updated Apr 17, 2024
A question about the decoder in seq2seq-torch
#36 opened Aug 12, 2019 by acm5656, updated Feb 22, 2023
Problem with BERT batch generation
#45 opened Apr 15, 2020 by aqibsaeed, updated Jan 10, 2023
The learning rate in 5-2.BERT must be reduced.
#77 opened Sep 22, 2022 by Cheng0829, updated Oct 10, 2022
The Adam optimizer in 5-1.Transformer should be replaced by SGD
#76 opened Sep 16, 2022 by Cheng0829, updated Sep 16, 2022
Bi-LSTM (TF) may have a mistake
#74 opened Jul 14, 2022 by cui-z, updated Jul 14, 2022
5-1.Transformer may have a wrong position embedding
#73 opened Jul 8, 2022 by JiangHan97, updated Jul 8, 2022
Some problems with BERT
#40 opened Nov 6, 2019 by tfighting, updated Mar 8, 2022
A question about the Transformer
#55 opened Jul 3, 2020 by luojq-sysysdcs, updated Dec 2, 2021
3-3.Bi-LSTM may have wrong padding
#72 opened Oct 9, 2021 by ETWBC, updated Oct 9, 2021
Bi-LSTM attention calculation may be wrong
#68 opened Jun 3, 2021 by liuxiaoqun, updated Sep 26, 2021
LongTensor dimension error in Bi-LSTM (Attention) with new data
#71 opened Aug 11, 2021 by koaaihub, updated Aug 11, 2021
Why is it src_len+1 in the Transformer demo?
#66 opened Apr 7, 2021 by Yuanbo2021, updated Jun 8, 2021
Question about the tensor.view operation in Bi-LSTM (Attention)
#38 opened Sep 29, 2019 by iamxpy, updated Jun 4, 2021
About make_batch in NNLM
#67 opened Apr 30, 2021 by KODGV, updated Apr 30, 2021
Question?
#65 opened Mar 15, 2021 by RaySunWHUT, updated Mar 15, 2021
CODE
#64 opened Mar 8, 2021 by Developer-Prince, updated Mar 8, 2021
TextCNN_Torch has a wrong comment
#44 opened Jan 23, 2020 by ghost, updated Sep 11, 2020
The link for NNLM and word2vec is broken
#59 opened Aug 27, 2020 by workerundervirus, updated Aug 27, 2020
How to use Seq2Seq (Attention) with multiple batches
#46 opened Apr 16, 2020 by jbjeong91, updated Jul 3, 2020
Question about multi-sample training in Seq2Seq (Attention)-Torch
#54 opened Jun 30, 2020 by wmathor, updated Jun 30, 2020
Comment error in 3-3-bilstm-torch
#47 opened Apr 17, 2020 by Tonybb9089, updated Jun 29, 2020
There may be a small mistake
#48 opened Apr 29, 2020 by zhangyikaii, updated Jun 10, 2020
Seq2Seq (Attention) input shape question
#31 opened Jul 21, 2019 by dpyneo, updated Feb 6, 2020