Description
I'm currently trying to use TensorFlow Serving to serve a trained "textsum" model. I am using TF 0.11, which, from what I've read, now automatically calls export_meta_graph, producing the ckpt and ckpt.meta files.
Under the textsum/log_root directory I have multiple files, among them model.ckpt-230381 and model.ckpt-230381.meta.
So my understanding is that this is the directory I should point the model server at when setting it up for serving. I have issued the following commands:
bazel build //tensorflow_serving/model_servers:tensorflow_model_server
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=model --model_base_path=tf_models/textsum/log_root/
It isn't an issue with the relative path: when I first attempted this I accidentally used the wrong path and got the error "FileSystemStoragePathSource encountered a file-system access error". It now finds the valid path, but I get the error in the title.
I can't seem to figure out what I am doing wrong here and was wondering whether anyone has run into this issue with models exported from 0.11. Does anyone see what I might be doing wrong?
Running the above command produces the following message:
W tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:204] No versions of servable model found under base path tf_models/textsum/log_root/
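From what I can tell, the model server scans --model_base_path for numbered version subdirectories (e.g. .../export/00000001/) rather than loading raw checkpoint files directly, so I suspect that is why nothing is picked up. If that is the case, I'm guessing the export step would look roughly like the sketch below, using the TF 0.11 session_bundle exporter; the signature keys and the articles/abstracts tensors are placeholders for illustration, not the actual textsum graph names:

import tensorflow as tf
from tensorflow_serving.session_bundle import exporter

def export_for_serving(sess, saver, articles_tensor, abstracts_tensor,
                       export_base='tf_models/textsum/log_root/export',
                       version=230381):
  # Write a numbered version directory (e.g. .../export/00230381/),
  # which is what the model server looks for under --model_base_path.
  model_exporter = exporter.Exporter(saver)
  signature = exporter.generic_signature({'articles': articles_tensor,
                                          'abstracts': abstracts_tensor})
  model_exporter.init(sess.graph.as_graph_def(),
                      default_graph_signature=signature)
  model_exporter.export(export_base, tf.constant(version), sess)

If that is on the right track, I assume I would then point --model_base_path at the export directory rather than at log_root itself.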
Upon running inspect_checkpoint on the checkpoint file I see this:
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcurand.so locally
seq2seq/output_projection/w (DT_FLOAT) [256,335906]
seq2seq/output_projection/v (DT_FLOAT) [335906]
seq2seq/encoder3/BiRNN/FW/LSTMCell/B (DT_FLOAT) [1024]
seq2seq/encoder3/BiRNN/BW/LSTMCell/W_0 (DT_FLOAT) [768,1024]
seq2seq/encoder3/BiRNN/BW/LSTMCell/B (DT_FLOAT) [1024]
seq2seq/encoder2/BiRNN/FW/LSTMCell/B (DT_FLOAT) [1024]
seq2seq/decoder/attention_decoder/Linear/Bias (DT_FLOAT) [128]
seq2seq/decoder/attention_decoder/AttnW_0 (DT_FLOAT) [1,1,512,512]
seq2seq/decoder/attention_decoder/AttnV_0 (DT_FLOAT) [512]
seq2seq/encoder0/BiRNN/FW/LSTMCell/W_0 (DT_FLOAT) [384,1024]
seq2seq/decoder/attention_decoder/LSTMCell/W_0 (DT_FLOAT) [384,1024]
seq2seq/encoder1/BiRNN/BW/LSTMCell/W_0 (DT_FLOAT) [768,1024]
global_step (DT_INT32) []
seq2seq/encoder1/BiRNN/BW/LSTMCell/B (DT_FLOAT) [1024]
seq2seq/decoder/attention_decoder/AttnOutputProjection/Linear/Bias (DT_FLOAT) [256]
seq2seq/decoder/attention_decoder/Attention_0/Linear/Matrix (DT_FLOAT) [512,512]
seq2seq/decoder/attention_decoder/Attention_0/Linear/Bias (DT_FLOAT) [512]
seq2seq/encoder2/BiRNN/BW/LSTMCell/B (DT_FLOAT) [1024]
seq2seq/decoder/attention_decoder/Linear/Matrix (DT_FLOAT) [640,128]
seq2seq/decoder/attention_decoder/AttnOutputProjection/Linear/Matrix (DT_FLOAT) [768,256]
seq2seq/embedding/embedding (DT_FLOAT) [335906,128]
seq2seq/encoder0/BiRNN/BW/LSTMCell/B (DT_FLOAT) [1024]
seq2seq/encoder3/BiRNN/FW/LSTMCell/W_0 (DT_FLOAT) [768,1024]
seq2seq/encoder0/BiRNN/BW/LSTMCell/W_0 (DT_FLOAT) [384,1024]
seq2seq/encoder0/BiRNN/FW/LSTMCell/B (DT_FLOAT) [1024]
seq2seq/decoder/attention_decoder/LSTMCell/B (DT_FLOAT) [1024]
seq2seq/encoder1/BiRNN/FW/LSTMCell/B (DT_FLOAT) [1024]
seq2seq/encoder2/BiRNN/FW/LSTMCell/W_0 (DT_FLOAT) [768,1024]
seq2seq/encoder1/BiRNN/FW/LSTMCell/W_0 (DT_FLOAT) [768,1024]
seq2seq/encoder2/BiRNN/BW/LSTMCell/W_0 (DT_FLOAT) [768,1024]