
ValueError: No op named GatherV2 in defined operations #1

Closed
gedonglai opened this issue Jun 4, 2018 · 12 comments

Comments

@gedonglai

Hello, I want to run `parser = benepar.Parser("benepar_en")`, but it raises the error "ValueError: No op named GatherV2 in defined operations". Could you tell me your Python, tensorflow-gpu, and cuDNN versions? I don't know how to fix this error. Thank you very much.

@nikitakit
Owner

Thanks for pointing this out!

I'm using Tensorflow 1.8.0 (the latest version). Looks like the GatherV2 op was added in Tensorflow 1.3, so any version older than that is guaranteed not to work.
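A quick way to guard against this is to compare the installed TensorFlow version against 1.3 before loading a model. This is a minimal sketch with hypothetical helper names (in practice you would pass `tensorflow.__version__`); it assumes a plain `major.minor.patch` version string:

```python
def version_tuple(version):
    """Turn a version string like '1.8.0' into a comparable tuple."""
    return tuple(int(part) for part in version.split(".")[:3])

def supports_gather_v2(tf_version):
    """The GatherV2 op was added in TensorFlow 1.3, so require at least that."""
    return version_tuple(tf_version) >= (1, 3, 0)

# In practice: supports_gather_v2(tensorflow.__version__)
print(supports_gather_v2("1.8.0"))  # True
print(supports_gather_v2("1.2.1"))  # False
```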

@gedonglai
Author

Thank you for your reply.
Could I train a Chinese model with your parser? What do I need to pay attention to?

@nikitakit
Owner

I've never trained a Chinese model, but you're welcome to try!

You may need to modify the load_trees function in src/trees.py so that it properly reads the training data you have, but otherwise the training instructions are in the README.
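For illustration, the kind of parsing `load_trees` performs boils down to reading bracketed s-expressions. This is a hypothetical, simplified reader, not the actual benepar code, but it shows the format your training data needs to match:

```python
def read_tree(text):
    """Parse one bracketed tree like '(S (NP (NN dog)) (VP (VB ran)))'."""
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()
    pos = 0

    def helper():
        nonlocal pos
        assert tokens[pos] == "("
        pos += 1
        label = tokens[pos]  # constituent label or POS tag
        pos += 1
        children = []
        while tokens[pos] != ")":
            if tokens[pos] == "(":
                children.append(helper())
            else:
                children.append(tokens[pos])  # leaf word
                pos += 1
        pos += 1  # consume ")"
        return (label, children)

    return helper()

tree = read_tree("(S (NP (NN dog)) (VP (VB ran)))")
print(tree)  # ('S', [('NP', [('NN', ['dog'])]), ('VP', [('VB', ['ran'])])])
```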

I'm going to look into releasing multilingual parsing models, but Chinese will require extra time to familiarize myself with the standard conventions and models for Chinese parsing and tokenization. The main issue is that Chinese tokenization is very different from English, and I don't know whether it's empirically better to rely on external tools or to just train the parser itself to split sequences of characters into words.

Let me know if you train any models and have any results!

@gedonglai
Author

Thank you very much. I will try.

@gedonglai
Author

Hi,
I'm training the Chinese model, and all I have is the CTB 9.0 data set. When I run the train command it raises "AssertionError: Need at least one of: use_tags, use_words, use_chars_lstm, use_chars_concat, use_elmo". Which one should I select?

@nikitakit
Owner

--use-words enables learned word embeddings, which is probably the best thing to start with.

--use-chars-lstm enables using a character LSTM. This will result in a better parser, but may require more hyperparameter tuning. For English this option is best used in combination with --d-char-emb 64, but you'll probably want to use a value higher than 64 because there are many more Chinese characters than letters in the English alphabet.

To use both word embeddings and a character LSTM at the same time, you can pass both options: --use-words --use-chars-lstm.

I wouldn't recommend using --use-chars-concat, and --use-tags/--use-elmo won't work without additional pre-trained models.

@gedonglai
Author

Thank you for your help, but I've run into another problem.
epoch 1 batch 1/513 processed 250 batch-loss 72.2773 grad-norm 65.9773 epoch-elapsed 0h00m04s total-elapsed 0h00m04s
epoch 1 batch 2/513 processed 500 batch-loss 77.4820 grad-norm 70.6667 epoch-elapsed 0h00m08s total-elapsed 0h00m08s
epoch 1 batch 3/513 processed 750 batch-loss 72.1133 grad-norm 66.3470 epoch-elapsed 0h00m11s total-elapsed 0h00m11s
Traceback (most recent call last):
  File "src/main.py", line 537, in
    main()
  File "src/main.py", line 533, in main
    args.callback(args)
  File "src/main.py", line 496, in
    subparser.set_defaults(callback=lambda args: run_train(args, hparams))
  File "src/main.py", line 295, in run_train
    _, loss = parser.parse_batch(subbatch_sentences, subbatch_trees)
  File "/home/dhan/self-attentive-parser/src/parse_nk.py", line 1003, in parse_batch
    p_i, p_j, p_label, p_augment, g_i, g_j, g_label = self.parse_from_annotations(fencepost_annotations_d[start:end,:], sentences[i], golds[i])
  File "/home/dhan/self-attentive-parser/src/parse_nk.py", line 1052, in parse_from_annotations
    p_score, p_i, p_j, p_label, p_augment = chart_helper.decode(False, **decoder_args)
  File "src/chart_helper.pyx", line 48, in chart_helper.decode
    oracle_label_chart[left, right] = label_vocab.index(gold.oracle_label(left, right))
AttributeError: 'LeafParseNode' object has no attribute 'oracle_label'

How do I fix this error? My train command is `python src/main.py train --model-path-base models/en_charlstm --train-path datacn/train.cn.txt --dev-path datacn/dev.cn.txt --use-words`.

@nikitakit
Owner

What happens if you disable strip_top when loading the trees? This is already done for some languages.

The code is not set up to deal with empty parse trees: there needs to be at least one label above the part-of-speech tag level. The English treebank annotates single-word fragments as FRAG, but for treebanks that don't do this the simplest fix is to keep the root node.
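To make this concrete, here is a minimal sketch (with a hypothetical `strip_top` helper and the simplified tuple tree representation, not benepar's actual classes) of why stripping the root breaks single-word sentences:

```python
def strip_top(tree):
    """Drop a unary TOP/ROOT label, returning its single child (hypothetical helper)."""
    label, children = tree
    if label in ("TOP", "ROOT") and len(children) == 1:
        return children[0]
    return tree

# A one-word sentence: with the root kept, there is still one constituent
# label ('TOP') above the part-of-speech tag ('NN').
one_word = ("TOP", [("NN", ["dog"])])

kept = one_word              # ('TOP', [('NN', ['dog'])]) -- trainable
stripped = strip_top(one_word)  # ('NN', ['dog']) -- only a POS node remains
print(stripped)
```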

@gedonglai
Author

Thank you very much.
You are so friendly to me. Heartfelt thanks.

@gedonglai
Author

Hi,
I have a new question: how many epochs do I need to train for? Thank you.

@nikitakit
Owner

For English, training can take 80-100 epochs, but it's somewhat language-dependent.

There is a "reducing learning rate" message printed whenever the learning rate is decayed due to no progress on the dev set. After it gets printed 2-3 times training is essentially done.
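The stopping heuristic described above can be sketched as follows. Names, patience, and decay thresholds here are illustrative, not benepar's actual hyperparameters:

```python
def train_schedule(dev_scores, patience=2, max_decays=3):
    """Simulate plateau-based LR decay over a sequence of per-epoch dev scores.

    Returns (number_of_decays, epoch_at_which_training_stops).
    """
    best = float("-inf")
    since_improvement = 0
    decays = 0
    for epoch, score in enumerate(dev_scores, start=1):
        if score > best:
            best = score
            since_improvement = 0
        else:
            since_improvement += 1
        if since_improvement >= patience:
            decays += 1            # the "reducing learning rate" message
            since_improvement = 0
            if decays >= max_decays:
                return decays, epoch  # training is essentially done
    return decays, len(dev_scores)

# Dev score plateaus after epoch 2, so the LR decays three times and
# training stops at epoch 8.
print(train_schedule([90, 91, 91, 91, 91, 91, 91, 91]))  # (3, 8)
```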

@nikitakit
Owner

I updated the README to include a tensorflow version dependency, so the original issue should now be addressed. I'm closing this, but feel free to comment or open another if you have any more questions/bugs.
