
Google Colab Error #1310

Closed
Harry-Rogers opened this issue Apr 24, 2021 · 1 comment · Fixed by #1309

Comments

@Harry-Rogers

Describe the bug
I have encountered this TypeError in the Colab simple_api_fine_tuning notebook. I have not changed anything in the notebook; I just tried to run it. The checkout tag is 2.0.0.

[screenshot: TypeError traceback from the notebook]

@shi-kejian

This "model_type" parameter should be changed to "hf_pretrained_model_name_or_path". I encountered this before. Please check the latest README at the jiant GitHub page.

jeswan pushed a commit that referenced this issue Apr 26, 2021
jeswan added a commit that referenced this issue May 4, 2021
* Update to Transformers v4.3.3 (#1266)

* use default return_dict in taskmodels and remove hidden state context manager in models.

* return hidden states in output of model wrapper

* Switch to task model/head factories instead of embedded if-else statements (#1268)

* Use jiant transformers model wrapper instead of if-else. Use taskmodel and head factory instead of if-else.

* switch to ModelArchitectures enum instead of strings

* Refactor get_output_from_encoder() to be member of JiantTaskModel (#1283)

* refactor getting output from encoder to be member function of jiant model

* switch to explicit encode() in jiant transformers model

* fix simple runscript test

* update to tokenizer 0.10.1

* Add tests for flat_strip() (#1289)

* add flat_strip test

* add list to test cases flat_strip

* mlm_weights(), feat_spec(), flat_strip() if-else refactors (#1288)

* moves remaining if-else statements to jiant model or replaces with model agnostic method

* switch from jiant_transformers_model to encoder

* fix bug in flat_strip()

* Move tokenization logic to central JiantModelTransformers method (#1290)

* move model specific tokenization logic to JiantTransformerModels

* implement abstract methods for JiantTransformerModels

* fix tasks circular import (#1296)

* Add DeBERTa (#1295)

* Add DeBERTa with sanity test

* fix tasks circular import

* [WIP] add deberta tests

* Revert "fix tasks circular import"

This reverts commit f924640.

* deberta tests passing with transformers 6472d8

* switch to deberta-v2

* fix get_mlm_weights_dict() for deberta-v2

* update to transformers 4.5.0

* mark deberta test_export as slow

* Update test_tokenization_normalization.py

* add guide to add a model

* fix test_export_model tests

* minor pytest fixes (add num_labels for rte, overnight flag fix)

* bugfix for simple api notebook

* bugfix for #1310

* bugfix for #1306: simple api notebook path name

* squad running

* 2nd bugfix for #1310: not all tasks have num_labels property

* simple api notebook back to roberta-base

* run test matrix for more steps to compare to master

* save last/best model test fix

Co-authored-by: Jesse Swanson <js11133@nyu.edu>