
No module named tensorflow.python.platform #36

Closed
wangdelp opened this issue Nov 9, 2015 · 6 comments
wangdelp commented Nov 9, 2015

When running:

    python tensorflow/g3doc/tutorials/mnist/fully_connected_feed.py

I get the following error:

    Traceback (most recent call last):
      File "tensorflow/g3doc/tutorials/mnist/fully_connected_feed.py", line 14, in <module>
        import tensorflow.python.platform
    ImportError: No module named tensorflow.python.platform
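A quick way to tell whether this error means the tensorflow pip package simply is not installed (as opposed to being broken) is to look the package up on `sys.path` without importing it. This is a minimal sketch using only the standard library on a modern Python 3; note that running scripts from inside the source checkout can also shadow an installed package, so run the check from another directory:

```python
import importlib.util

# Look up the package on sys.path without executing its import;
# None means no tensorflow package is visible in this environment.
spec = importlib.util.find_spec("tensorflow")
status = "installed" if spec is not None else "not installed"
print("tensorflow is %s" % status)
```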

mrry (Contributor) commented Nov 9, 2015

For this to work, you'll have to install the PIP package. You can either download the pre-built package, or build from source by following the instructions here: http://tensorflow.org/get_started/os_setup.md#create-pip

wangdelp (Author) commented Nov 9, 2015

@mrry Thank you. When I run

    bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

I get:

    Mon Nov 9 15:34:49 EST 2015 : === Using tmpdir: /tmp/tmp.itHpiLkuJI
    /tmp/tmp.itHpiLkuJI ~/packages/tensorflow
    Mon Nov 9 15:34:49 EST 2015 : === Building wheel
    Traceback (most recent call last):
      File "setup.py", line 77, in <module>
        keywords='tensorflow tensor machine learning',
      File "/usr/lib/python2.7/distutils/core.py", line 151, in setup
        dist.run_commands()
      File "/usr/lib/python2.7/distutils/dist.py", line 953, in run_commands
        self.run_command(cmd)
      File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
        cmd_obj.run()
      File "build/bdist.linux-x86_64/egg/setuptools/command/sdist.py", line 48, in run
      File "/usr/lib/python2.7/distutils/cmd.py", line 326, in run_command
        self.distribution.run_command(command)
      File "/usr/lib/python2.7/distutils/dist.py", line 970, in run_command
        cmd_obj = self.get_command_obj(command)
      File "/usr/lib/python2.7/distutils/dist.py", line 845, in get_command_obj
        klass = self.get_command_class(command)
      File "build/bdist.linux-x86_64/egg/setuptools/dist.py", line 430, in get_command_class
      File "/usr/lib/python2.7/distutils/dist.py", line 815, in get_command_class
        __import__(module_name)
      File "/usr/lib/python2.7/distutils/command/check.py", line 13, in <module>
        from docutils.utils import Reporter
      File "/home/xeraph/virtualenv/v1/local/lib/python2.7/site-packages/docutils/utils/__init__.py", line 20, in <module>
        import docutils.io
      File "/home/xeraph/virtualenv/v1/local/lib/python2.7/site-packages/docutils/io.py", line 18, in <module>
        from docutils.utils.error_reporting import locale_encoding, ErrorString, ErrorOutput
      File "/home/xeraph/virtualenv/v1/local/lib/python2.7/site-packages/docutils/utils/error_reporting.py", line 47, in <module>
        locale_encoding = locale.getlocale()[1] or locale.getdefaultlocale()[1]
      File "/home/xeraph/virtualenv/v1/lib/python2.7/locale.py", line 543, in getdefaultlocale
        return _parse_localename(localename)
      File "/home/xeraph/virtualenv/v1/lib/python2.7/locale.py", line 475, in _parse_localename
        raise ValueError, 'unknown locale: %s' % localename
    ValueError: unknown locale: UTF-8

Any idea?

mrry (Contributor) commented Nov 9, 2015

Can you try the following?

    export LC_ALL=en_US.UTF-8
    export LANG=en_US.UTF-8
    bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
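To see roughly why this fix works: some macOS terminals export a bare `LC_ALL=UTF-8`, and Python 2's locale parser expects a `language.encoding` pair, rejecting a value with no language part. The `split_locale` helper below is hypothetical, a sketch mirroring that check rather than the real `locale._parse_localename` (which also handles aliases like `C`):

```python
# Hypothetical helper mirroring the shape check in Python 2's
# locale._parse_localename: "en_US.UTF-8" splits into a
# (language, encoding) pair, but a bare "UTF-8" has no language
# part and is rejected with the same error seen in the traceback.
def split_locale(value):
    lang, sep, enc = value.partition(".")
    if sep and lang:
        return lang, enc
    raise ValueError("unknown locale: %s" % value)

print(split_locale("en_US.UTF-8"))  # ('en_US', 'UTF-8')
```

Exporting `LC_ALL`/`LANG` to any well-formed locale therefore makes the parse succeed and lets the wheel build proceed.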

wangdelp (Author) commented Nov 9, 2015

@mrry it works, Thx.

mrry closed this as completed Nov 9, 2015
kirai commented Jul 7, 2016

I ran into the same problem. I use Mac OS X in Japanese.

    export LC_ALL=en_US.UTF-8
    export LANG=en_US.UTF-8

Solved it :)

But I think en_US.UTF-8 should not be assumed by default on installation.
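That suggestion can be sketched as a fallback policy: keep the user's own locale (e.g. `ja_JP.UTF-8`) whenever it is well-formed, and only substitute a neutral UTF-8 locale when the environment carries a bare encoding the build scripts cannot parse. The `effective_locale` helper and the `C.UTF-8` default below are assumptions for illustration, not anything the TensorFlow build actually does:

```python
# Hypothetical fallback: prefer the user's configured locale and
# only fall back to a neutral UTF-8 locale when the value lacks a
# language part (e.g. the bare "UTF-8" some macOS terminals export).
def effective_locale(env):
    value = env.get("LC_ALL") or env.get("LANG") or ""
    lang, sep, _enc = value.partition(".")
    if sep and lang:
        return value          # well-formed: respect the user's choice
    return "C.UTF-8"          # neutral default rather than en_US

print(effective_locale({"LANG": "ja_JP.UTF-8"}))  # ja_JP.UTF-8
print(effective_locale({"LANG": "UTF-8"}))        # C.UTF-8
```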

qiaohaijun (Contributor) commented:

    export LC_ALL=en_US.UTF-8
    export LANG=en_US.UTF-8
    bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

It works, but I don't know why.
