
Occur fatal error 1002 when compiling tensorflow 1.3 with vs2015 DEBUG in windows10 #11771

Closed
HannH opened this issue Jul 26, 2017 · 10 comments
Labels
stat:awaiting tensorflower Status - Awaiting response from tensorflower

Comments

@HannH

HannH commented Jul 26, 2017

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
    no custom code(original code cloned from github )
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
    Windows 10
  • TensorFlow installed from (source or binary):
    source(cmake from tensorflow/contribe/cmake)
  • TensorFlow version (use command below):
    r1.3
  • Visual studio version:
    vs2015
  • CUDA/cuDNN version:
    no GPU/CUDA 8.0+cuDNN5.1
  • GPU model and memory:
    NVIDIA Titan X(12GB)
  • Exact command to reproduce:
    tf_cc.vcxproj -> A:\C++\tensorflow-1.3.0\Source_GPU\tf_cc.dir\Debug\tf_cc.lib
    65>a:\c++\tensorflow-1.3.0\source_gpu\external\eigen_archive\eigen\src\core\products\generalblockpanelkernel.h(1977): fatal error C1002:

Describe the problem

Hello, I get fatal error C1002 when compiling TensorFlow r1.3 with VS2015 in tf_core_kernels, using DEBUG mode without GPU; the same error occurred when compiling the GPU version. I believe I have enough memory for the compilation. However, I compiled successfully in RELEASE mode without errors.

The same issue occurs with tensorflow-r1.2.

@reedwm
Member

reedwm commented Jul 26, 2017

Can you post the entire error output that occurs?

@reedwm reedwm added the stat:awaiting response Status - Awaiting response from author label Jul 26, 2017
@snnn
Contributor

snnn commented Jul 26, 2017

I believe the debug build is not supported, especially the GPU build. You may try replacing /DEBUG:FULL with /DEBUG:FASTLINK in your linker flags. You may also need to split tf_core_kernels into multiple static libraries, and you would need debug builds of python, cuda, etc. If I were you, I'd give up.
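For later readers, a minimal sketch of how the linker-flag change above might be applied when regenerating the VS2015 solution. The CMake variable names are standard CMake, but whether they interact cleanly with the r1.3 contrib/cmake build is an assumption; adjust paths for your checkout:

```shell
:: Hypothetical sketch: regenerate the solution with /DEBUG:FASTLINK for
:: Debug links, which lowers linker memory pressure versus /DEBUG:FULL.
cmake ..\tensorflow\contrib\cmake ^
  -G "Visual Studio 14 2015 Win64" ^
  -DCMAKE_EXE_LINKER_FLAGS_DEBUG="/DEBUG:FASTLINK" ^
  -DCMAKE_SHARED_LINKER_FLAGS_DEBUG="/DEBUG:FASTLINK"
```

Note that C1002 itself is a compiler heap-space error rather than a linker error, so this flag alone may not be sufficient without also splitting the large targets.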

@HannH
Author

HannH commented Jul 27, 2017

@reedwm the error output has just the two lines I quoted above. Here is part of the build log:
1> count_extremely_random_stats_op.cc
1> finished_nodes_op.cc
1> grow_tree_op.cc
1> reinterpret_string_to_float_op.cc
1> sample_inputs_op.cc
1> scatter_add_ndim_op.cc
1> tree_predictions_op.cc
1> tree_utils.cc
1> update_fertile_slots_op.cc
1> hard_routing_function_op.cc
1> k_feature_gradient_op.cc
1> k_feature_routing_function_op.cc
1> routing_function_op.cc
1> routing_gradient_op.cc
1> stochastic_hard_routing_function_op.cc
1> stochastic_hard_routing_gradient_op.cc
1> unpack_path_op.cc
1> utils.cc
1> skip_gram_kernels.cc
1> skip_gram_ops.cc
1> cross_replica_ops.cc
1> infeed_ops.cc
1> outfeed_ops.cc
1> replication_ops.cc
1> tpu_configuration_ops.cc
1> tpu_sendrecv_ops.cc
1>a:\c++\tensorflow-1.3.0\source_gpu\external\eigen_archive\eigen\src\core\products\generalblockpanelkernel.h(1989): fatal error C1002: compiler is out of heap space in pass 2
1>cl : command line error D8040: error creating or communicating with child process
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
(The original output was in Chinese; it is translated to English above.)

@HannH
Author

HannH commented Jul 27, 2017

@snnn I agree with you; there are problems with compiling in Debug mode right now, so I'm posting the error for future TensorFlow development.
There also seem to be problems in the MinSizeRel and RelWithDebInfo modes, because I cannot compile TF in those modes either.

@aselle aselle removed the stat:awaiting response Status - Awaiting response from author label Jul 27, 2017
@snnn
Contributor

snnn commented Jul 27, 2017

Did you use the 32-bit cl.exe or the 64-bit cl.exe?

set PreferredToolArchitecture=x64

Run this command before opening tensorflow.sln.

Or, add "-T host=x64" to the cmake command-line arguments.
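A minimal sketch of the two options above, for anyone following along (paths and generator name match the r1.3 setup described earlier in the thread; adjust for your checkout):

```shell
:: Option 1: force MSBuild/Visual Studio to use the 64-bit host compiler
:: for an already-generated solution, then open it.
set PreferredToolArchitecture=x64
devenv tensorflow.sln

:: Option 2: request the 64-bit host toolset at generation time instead.
cmake ..\tensorflow\contrib\cmake -G "Visual Studio 14 2015 Win64" -T host=x64
```

The 32-bit cl.exe is limited in how much heap it can use, which is why switching to the 64-bit host compiler can avoid C1002 on large translation units.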

@HannH
Author

HannH commented Jul 27, 2017

@snnn Thank you for your suggestion! Since compiling is time-consuming, I will report the result later.
Another question: I found that a TensorFlow project must be compiled statically in VS, which bloats the result. For example, the project 'tf_label_image_example' is 164 MB in Release mode, even though it contains only one file, 'main.cc', from 'examples\label_image'. Is there any way to compile it dynamically to shrink the file size?

@reedwm
Member

reedwm commented Jul 27, 2017

/CC @mrry

@reedwm reedwm added the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Jul 27, 2017
@HannH
Author

HannH commented Jul 31, 2017

@snnn OK, your suggestion works, thank you!

@HannH
Author

HannH commented Aug 3, 2017

I found it is easy to build a TensorFlow project dynamically in VS2015 for tf-r1.3. Closing the issue.

@yuyijie1995

@HannH I have the same problem. Which solution worked for you? Was it "set PreferredToolArchitecture=x64"? I tried adding "-T host=x64" to the cmake command-line args, but another error happened. Can I have your QQ or WeChat number?
