System information
- What is the top-level directory of the model you are using: models
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04 (Xenial)
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.7.0
- Bazel version (if compiling from source):
- CUDA/cuDNN version: 9.0/7.1
- GPU model and memory: Tesla K80, 11 GB
- Exact command to reproduce:
python tensorrt.py --frozen_graph=resnetv2_imagenet_frozen_graph.pb --image_file=image.jpg --native --fp32 --fp16 --output_dir=output
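For the "use command below" note in the TensorFlow version field, this is the usual snippet from the issue template; a minimal check looks like this:

import tensorflow as tf
# Prints the git build string and the release version (reported above as 1.7.0).
print(tf.GIT_VERSION, tf.VERSION)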
Describe the problem
Fresh OS installation; running the TensorRT example with the command above crashes with the cuDNN errors shown in the log below.
Source code / logs
totalMemory: 11.17GiB freeMemory: 11.10GiB
2018-04-02 10:05:26.213549: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
2018-04-02 10:05:26.603186: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-04-02 10:05:26.603241: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
2018-04-02 10:05:26.603258: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
2018-04-02 10:05:26.603571: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5719 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:1e.0, compute capability: 3.7)
Running native graph
INFO:tensorflow:Starting execution
2018-04-02 10:05:27.325531: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
2018-04-02 10:05:27.325604: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-04-02 10:05:27.325624: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
2018-04-02 10:05:27.325636: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
2018-04-02 10:05:27.325831: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5719 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:1e.0, compute capability: 3.7)
INFO:tensorflow:Starting Warmup cycle
2018-04-02 10:05:28.814376: E tensorflow/stream_executor/cuda/cuda_dnn.cc:396] Loaded runtime CuDNN library: 7102 (compatibility version 7100) but source was compiled with 7005 (compatibility version 7000). If using a binary install, upgrade your CuDNN library to match. If building from sources, make sure the library loaded at runtime matches a compatible version specified during compile configuration.
2018-04-02 10:05:28.815023: F tensorflow/core/kernels/conv_ops.cc:712] Check failed: stream->parent()->GetConvolveAlgorithms( conv_parameters.ShouldIncludeWinogradNonfusedAlgo<T>(), &algorithms)
Aborted (core dumped)
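The cuda_dnn.cc error above is the root cause: the runtime loaded cuDNN 7.1.2 (7102) while the TensorFlow 1.7.0 binary was compiled against cuDNN 7.0.5 (7005). A minimal sketch for confirming which cuDNN headers are installed, assuming the header sits at the default CUDA 9.0 location (the path is a guess and may differ on this machine):

import re

# Assumed default location of the cuDNN header; adjust if cuDNN was
# installed elsewhere (e.g. under /usr/include on some setups).
CUDNN_HEADER = "/usr/local/cuda/include/cudnn.h"

with open(CUDNN_HEADER) as f:
    text = f.read()

# cudnn.h defines CUDNN_MAJOR / CUDNN_MINOR / CUDNN_PATCHLEVEL as plain integers.
version = {
    key: int(re.search(r"#define CUDNN_%s\s+(\d+)" % key, text).group(1))
    for key in ("MAJOR", "MINOR", "PATCHLEVEL")
}
print("Installed cuDNN: %(MAJOR)d.%(MINOR)d.%(PATCHLEVEL)d" % version)

With the setup above this would print 7.1.2; per the error message, the binary install either needs the cuDNN 7.0.x runtime that the 1.7.0 wheel was built against, or TensorFlow has to be rebuilt against the installed 7.1 library.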