
Explanation of the max_len parameter in tokenizer.encode? #40

Closed

Kiris-tingna opened this issue Mar 26, 2019 · 4 comments
@Kiris-tingna

It seems that this parameter should be compatible with the pretrained BERT model, but where can I find this setting in the downloaded bert_config?
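
For context, a minimal sketch of how max_len behaves in keras-bert's Tokenizer.encode; the toy token dictionary here is illustrative, a real one is built from the vocab.txt shipped with the checkpoint:

```python
from keras_bert import Tokenizer

# Toy token dictionary for illustration; a real one is built from the
# vocab.txt file inside the downloaded checkpoint directory.
token_dict = {
    '[CLS]': 0,
    '[SEP]': 1,
    'un': 2,
    '##aff': 3,
    '##able': 4,
    '[UNK]': 5,
}
tokenizer = Tokenizer(token_dict)

# Without max_len, the output length follows the input:
indices, segments = tokenizer.encode('unaffable')
# indices == [0, 2, 3, 4, 1]  i.e. [CLS] un ##aff ##able [SEP]

# With max_len, the outputs are padded (or truncated) to exactly that length:
indices, segments = tokenizer.encode('unaffable', max_len=10)
# indices == [0, 2, 3, 4, 1, 0, 0, 0, 0, 0]
```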

@CyberZHG (Owner)

print('CONFIG_PATH: $UNZIPPED_MODEL_PATH/bert_config.json')
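
In other words, the relevant setting lives in bert_config.json inside the unzipped checkpoint directory. Its max_position_embeddings field (512 for the official releases) is the upper bound for max_len, since the position embeddings are only trained up to that length. A minimal sketch of inspecting it, assuming an illustrative local path:

```python
import json

# Illustrative path; point this at your unzipped checkpoint directory.
config_path = 'uncased_L-12_H-768_A-12/bert_config.json'

with open(config_path) as f:
    config = json.load(f)

# max_len passed to tokenizer.encode should not exceed this value:
print(config['max_position_embeddings'])  # 512 for the official releases
```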

@houchangtao

In the load-and-extract demo, which model are you loading? I tried uncased_L-12_H-768_A-12 but got a different result from yours.

@CyberZHG (Owner)

@houchangtao chinese_L-12_H-768_A-12
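
For anyone comparing results: a minimal sketch of the feature-extraction demo with that checkpoint, assuming keras-bert's extract_embeddings helper and an illustrative local path:

```python
from keras_bert import extract_embeddings

# Illustrative path to the unzipped chinese_L-12_H-768_A-12 checkpoint.
model_path = 'chinese_L-12_H-768_A-12'
texts = ['语言模型']

# Returns one array of per-token embeddings for each input text.
embeddings = extract_embeddings(model_path, texts)
print(embeddings[0].shape)
```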

@houchangtao

@CyberZHG Thanks!

CyberZHG closed this as completed Apr 1, 2019