# export bert model for serving

Predicting with the Estimator is slow: every `Estimator.predict` call rebuilds the graph and reloads the checkpoint from disk. Export the trained model once with `export_savedmodel` and serve the resulting SavedModel instead.

## create virtual environment

```bash
conda env create -f env.yml
```

## train a classifier

```bash
bash train.sh
```

## use the classifier

```bash
bash predict.sh
```

## export bert model

```bash
bash export.sh
```

## check out exported model

```bash
saved_model_cli show --all --dir $exported_dir
```
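
`saved_model_cli` can also run the exported graph directly, which makes a quick smoke test before wiring up a client. A sketch, assuming the signature is named `serving_default` and `max_seq_length` is 128 (check both against the output of `show --all` above):

```bash
# Feed all-zero inputs of the assumed shapes; np is available inside
# --input_exprs expressions.
saved_model_cli run --dir $exported_dir \
  --tag_set serve --signature_def serving_default \
  --input_exprs 'label_ids=np.zeros((1),dtype=np.int32);input_ids=np.zeros((1,128),dtype=np.int32);input_mask=np.zeros((1,128),dtype=np.int32);segment_ids=np.zeros((1,128),dtype=np.int32)'
```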

## test exported model

```bash
bash test.sh
```
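
`test.sh` presumably drives `test.py` against the exported model. To script such a check yourself, a minimal sketch using `tf.contrib.predictor` (TF 1.x) follows; the export path and the dummy token ids are assumptions, and real inputs would come from `tokenization.FullTokenizer`:

```python
import tensorflow as tf

# Hypothetical path: export_savedmodel writes a timestamped subdirectory
# under FLAGS.export_dir.
predict_fn = tf.contrib.predictor.from_saved_model('exported/1544602691')

seq_len = 128  # assumed FLAGS.max_seq_length
features = {
    'label_ids': [0],
    'input_ids': [[101, 102] + [0] * (seq_len - 2)],   # dummy [CLS] [SEP] ids
    'input_mask': [[1, 1] + [0] * (seq_len - 2)],
    'segment_ids': [[0] * seq_len],
}
# The graph is loaded once; predict_fn can then be called repeatedly,
# which is the speedup over looping on Estimator.predict.
print(predict_fn(features))
```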

## export it yourself

```python
import tensorflow as tf
# FLAGS comes from the surrounding run_classifier script.

def serving_input_fn():
    # Placeholders matching the features run_classifier's model_fn expects.
    # label_ids is a per-example scalar; the other inputs are token id /
    # mask / segment sequences padded to FLAGS.max_seq_length.
    label_ids = tf.placeholder(tf.int32, [None], name='label_ids')
    input_ids = tf.placeholder(tf.int32, [None, FLAGS.max_seq_length], name='input_ids')
    input_mask = tf.placeholder(tf.int32, [None, FLAGS.max_seq_length], name='input_mask')
    segment_ids = tf.placeholder(tf.int32, [None, FLAGS.max_seq_length], name='segment_ids')
    # build_raw_serving_input_receiver_fn returns a function; calling it
    # immediately yields the ServingInputReceiver that export_savedmodel
    # expects serving_input_fn to produce.
    input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn({
        'label_ids': label_ids,
        'input_ids': input_ids,
        'input_mask': input_mask,
        'segment_ids': segment_ids,
    })()
    return input_fn
```
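
A *raw* receiver is used here so clients feed numeric feature tensors directly; `tf.estimator.export.build_parsing_serving_input_receiver_fn` would instead expect serialized `tf.Example` protos. `label_ids` is included presumably because the classifier's model_fn reads it as a feature even at predict time.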

and then, since `TPUEstimator` tries to export a TPU-rewritten graph by default, switch that off before exporting:

```python
# _export_to_tpu is a private TPUEstimator attribute; with it set to False,
# export_savedmodel writes a plain CPU/GPU serving graph.
estimator._export_to_tpu = False
estimator.export_savedmodel(FLAGS.export_dir, serving_input_fn)
```
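
The exported directory can then be served with TensorFlow Serving and queried over REST. A sketch, assuming `tensorflow_model_server` is installed, the model is named `bert`, and `max_seq_length` is 128:

```bash
# model_base_path must be an absolute path to the directory that contains
# the timestamped export subdirectories.
tensorflow_model_server --rest_api_port=8501 \
  --model_name=bert --model_base_path=/absolute/path/to/exported
```

```python
import requests

seq_len = 128  # assumed max_seq_length
instance = {
    'label_ids': 0,  # scalar per instance: the placeholder is shaped [None]
    'input_ids': [101, 102] + [0] * (seq_len - 2),  # dummy ids for illustration
    'input_mask': [1, 1] + [0] * (seq_len - 2),
    'segment_ids': [0] * seq_len,
}
resp = requests.post('http://localhost:8501/v1/models/bert:predict',
                     json={'instances': [instance]})
print(resp.json())
```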