Tensorflow Generate "ops_to_register.h" Without Graph #67960

Open
allogic opened this issue May 17, 2024 · 2 comments
Labels: comp:ops (OPs related issues), subtype:windows (Windows Build/Installation Issues), TF 2.16, type:build/install (Build and install issues)

Comments


allogic commented May 17, 2024

Issue type

Build/Install

Have you reproduced the bug with TensorFlow Nightly?

No

Source

source

TensorFlow version

tf.2.16.1

Custom code

No

OS platform and distribution

Windows 11 x64

Mobile device

No response

Python version

3.13

Bazel version

6.5.0

GCC/compiler version

MSVC 19.39.33520 for x64

CUDA/cuDNN version

No response

GPU model and memory

No response

Current behavior?

I'm trying to build TensorFlow as a static library. When I create the root scope, it tells me that I didn't register any operators or kernels; specifically, the "NoOp" op is required.

I've read that with the tool tensorflow/python/tools/print_selective_registration_header I can generate the missing header file "ops_to_register.h", which defines which operators and kernels get registered when I build the C++ API again with the flag --cxxopt="-DSELECTIVE_REGISTRATION".

But I need a graph definition in order to produce that header file, and I don't have a graph, since I want to build my model in C++ instead. What can I do to register all operators and kernels? (I don't care about file size!)
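
For what it's worth, here is a minimal hand-written sketch of an "accept everything" ops_to_register.h (my own assumption as a possible workaround, not output of the official tool). Selective registration only checks that the three SHOULD_REGISTER macros are defined, so defining them all as true should keep every op, gradient, and kernel even with -DSELECTIVE_REGISTRATION set:

// ops_to_register.h -- hand-written sketch (assumption, not generated by
// print_selective_registration_header): keep everything.
#ifndef OPS_TO_REGISTER
#define OPS_TO_REGISTER
#define SHOULD_REGISTER_OP(op) true
#define SHOULD_REGISTER_OP_GRADIENT true
#define SHOULD_REGISTER_OP_KERNEL(clz) true
#endif  // OPS_TO_REGISTER

These definitions mirror the defaults TensorFlow uses when SELECTIVE_REGISTRATION is not defined, so they should be equivalent to simply leaving the flag off.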

Standalone code to reproduce the issue

std::printf("%s", OpRegistry::Global()->DebugString(true).c_str()); // No output at all...

Scope root = Scope::NewRootScope(); // Crash because "NoOp" is required!
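
For context, the end goal once registration works would be building a small graph directly in C++ along these lines (a minimal sketch using the public C++ client API -- ClientSession, ops::Const, ops::Add -- not code from this issue):

#include <cstdio>
#include <vector>
#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/core/framework/tensor.h"

int main() {
  // Build a trivial graph in C++ (no GraphDef file involved).
  tensorflow::Scope root = tensorflow::Scope::NewRootScope();
  auto a = tensorflow::ops::Const(root, 2.0f);
  auto b = tensorflow::ops::Const(root, 3.0f);
  auto sum = tensorflow::ops::Add(root, a, b);

  // Running it only works if the Const/Add kernels were actually linked in.
  tensorflow::ClientSession session(root);
  std::vector<tensorflow::Tensor> outputs;
  TF_CHECK_OK(session.Run({sum}, &outputs));
  std::printf("2 + 3 = %f\n", outputs[0].scalar<float>()());
  return 0;
}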

Relevant log output

No response

@google-ml-butler google-ml-butler bot added the type:build/install Build and install issues label May 17, 2024
@Venkat6871 Venkat6871 added comp:ops OPs related issues TF 2.16 subtype:windows Windows Build/Installation Issues labels May 20, 2024
@Venkat6871

Hi @allogic ,

  • Sorry for the delay. I am seeing a compatibility mismatch here. Could you go through this documentation once? Please let us know if the issue still persists.

Thank you!

@Venkat6871 Venkat6871 added the stat:awaiting response Status - Awaiting response from author label May 21, 2024

allogic commented May 21, 2024

Look, I went through the documentation for a whole week and still could not use TensorFlow as a simple static library.
Here is a simple step-by-step guide to reproduce the behavior I'm experiencing.

NOTE: session_header.lib is not built by default. It is only registered in Bazel tests, but it is not linked into the final binary!

git clone --depth 1 --branch v2.16.1 https://github.com/tensorflow/tensorflow
# ensure "session_header.lib" is being built in //tensorflow/cc/BUILD
python configure.py

set BAZEL_SH="C:\msys64\usr\bin\bash.exe" # Is required for some odd reason...

bazel clean --expunge

It seems Clang is the preferred compiler starting with TensorFlow 2.16.1. But when building with Clang using the following command, it produces a linker error which I have never encountered before. It seems to be a problem on the LLVM side.

NOTE: I've not tested newer versions of clang, only the one described in the documentation!

bazel build --config=win_clang //tensorflow:tensorflow.lib
# time_rep_timespec.obj error LNK2019: unresolved external symbol _Thrd_sleep_for referenced in function "void __cdecl std::this_thread::sleep_for<__int64,struct std::ratio<1,1000000000> >(class std::chrono::duration<__int64,struct std::ratio<1,1000000000> > const &)" (??$sleep_for@_JU?$ratio@$00$0DLJKMKAA@@std@@@this_thread@std@@YAXAEBV?$duration@_JU?$ratio@$00$0DLJKMKAA@@std@@@chrono@1@@Z)

When I build with MSVC, on the other hand, it works as expected: it generates all the static libraries, and I can link against them without error.

The monolithic option is described in the .bazelrc file and is used to create a mostly static build. It also states that it will DISABLE modular op registration, which is exactly the problem I am currently facing. Although it "states" that registration will be disabled, it is not: modular op registration is still enabled, and my guess is that all the operators get optimized away during the build to save binary size (see the diagnostic sketch after the command below).

bazel build --config=monolithic //tensorflow:tensorflow.lib
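
To check whether the op registrations are really being stripped by the linker rather than disabled, a small diagnostic along these lines could help (my own sketch, assuming OpRegistry::GetRegisteredOps from tensorflow/core/framework/op.h is available in this build):

#include <cstdio>
#include <vector>
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_def.pb.h"

int main() {
  // Ask the global registry how many ops survived the static link.
  // If this prints 0, the registration object files were dropped.
  std::vector<tensorflow::OpDef> ops;
  tensorflow::OpRegistry::Global()->GetRegisteredOps(&ops);
  std::printf("registered ops: %zu\n", ops.size());
  return 0;
}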

Last but not least, I generate the headers.

bazel build //tensorflow:install_headers

Here are all the software versions that I use. They are strictly limited to the versions described in the documentation!

# Tensorflow: 2.16.1
# Python: 3.13.0a6
# LLVM: 17.0.6
# MSVC: 19.39.33520
# Bazel: 6.5.0

I'm forced to leave it like this, as I have no proper experience with Bazel and TensorFlow as a whole. But I would be happy if someone could explain to me what I'm missing or doing wrong. It can't be that big of a problem, since modular op registration has to be disabled somewhere...

@google-ml-butler google-ml-butler bot removed the stat:awaiting response Status - Awaiting response from author label May 21, 2024