Conversation
```diff
-            for checkpoint in TOKENIZER_CHECKPOINTS
-        ]
+        self.tokenizers = [BertTokenizer.from_pretrained(checkpoint) for checkpoint in TOKENIZER_CHECKPOINTS]
+        self.tf_tokenizers = [TFBertTokenizer.from_pretrained(checkpoint) for checkpoint in TOKENIZER_CHECKPOINTS]
```
Note to probably-Amy here: I'm not testing the old slow Bert tokenizer here because the fast one seems generally better maintained and is fully supported in TFLite now.
```diff
-        model.save(save_path)
-        loaded_model = tf.keras.models.load_model(save_path)
-        loaded_output = loaded_model(test_inputs)
+        model.export(save_path)
```
This is the recommended TF/Keras way to export models for inference now, and the old approach was always a bit shaky.
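A minimal sketch of the `model.export()` flow being recommended, using a toy Keras model rather than a Transformers model (the toy architecture and paths are illustrative, not the PR's actual test code):

```python
import os
import tempfile

import tensorflow as tf

# Toy model standing in for the real TF model under test.
inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Dense(2)(inputs)
model = tf.keras.Model(inputs, outputs)

# model.export() (TF >= 2.12) writes an inference-only SavedModel,
# replacing the older model.save(...) / load_model(...) round-trip.
save_path = os.path.join(tempfile.mkdtemp(), "exported_model")
model.export(save_path)

# Exported artifacts are reloaded with tf.saved_model.load (not
# tf.keras.models.load_model) and expose a `serve` endpoint.
reloaded = tf.saved_model.load(save_path)
result = reloaded.serve(tf.ones((1, 4)))
```

Because the exported artifact is a plain SavedModel with a serving endpoint, it is also the form TFLite conversion and TF Serving expect.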
This should be ready to go now, and finally fixes the remaining CI issues after the `build()` PR.
amyeroberts left a comment:
Thanks for the continued TF tidy-up!
| """ | ||
| if name_scope is not None: | ||
| if not tf_name.startswith(name_scope): | ||
| if not tf_name.startswith(name_scope) and "final_logits_bias" not in tf_name: |
Why the check on "final_logits_bias" here?
No good reason, unfortunately! BART and derived models put the `final_logits_bias` weights outside the main model name scope, and I'm not entirely sure why (but changing it now might invalidate old user checkpoints).
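The guard being discussed can be sketched as a standalone predicate. This is a hypothetical simplification of the check in the diff (the real code lives inside a weight-loading method, and the function name here is invented): weights whose TF name falls outside the model's name scope are rejected, except for the BART-style `final_logits_bias`, which historically lives outside the scope and must stay loadable.

```python
from typing import Optional


def weight_name_in_scope(tf_name: str, name_scope: Optional[str]) -> bool:
    """Return True if a TF weight name is acceptable under the given scope.

    Mirrors the diff's condition: names outside `name_scope` fail, unless
    they contain "final_logits_bias" (BART and derived models store that
    weight outside the main model scope for legacy checkpoint reasons).
    """
    if name_scope is not None:
        if not tf_name.startswith(name_scope) and "final_logits_bias" not in tf_name:
            return False
    return True


# In-scope weights and the BART exemption pass; other out-of-scope names fail.
ok_in_scope = weight_name_in_scope("model/encoder/kernel", "model")
ok_exempt = weight_name_in_scope("final_logits_bias", "model")
bad_outside = weight_name_in_scope("other/kernel", "model")
```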
This PR hopefully fixes the last remaining issues from the `build()` PR and gets the CI back to normal!