Can't configure due to undeclared packages inside if_android/if_ios #4312

Closed
gustavla opened this Issue Sep 10, 2016 · 29 comments

@gustavla
Contributor

gustavla commented Sep 10, 2016

I am having trouble configuring the latest master branch (dbe7ee0). When I run ./configure, I get:

ERROR: [...]/tensorflow/tensorflow/contrib/session_bundle/BUILD:107:1: no such target '//tensorflow/core:android_lib_lite': target 'android_lib_lite' not declared in package 'tensorflow/core' defined by [...]/tensorflow/tensorflow/core/BUILD and referenced by '//tensorflow/contrib/session_bundle:session_bundle'.
ERROR: [...]/tensorflow/tensorflow/contrib/session_bundle/BUILD:213:1: no such target '//tensorflow/core:android_lib_lite': target 'android_lib_lite' not declared in package 'tensorflow/core' defined by [...]/tensorflow/tensorflow/core/BUILD and referenced by '//tensorflow/contrib/session_bundle:signature'.
ERROR: [...]/tensorflow/tensorflow/contrib/session_bundle/BUILD:107:1: no such target '//tensorflow/core:meta_graph_portable_proto': target 'meta_graph_portable_proto' not declared in package 'tensorflow/core' defined by [...]/tensorflow/tensorflow/core/BUILD and referenced by '//tensorflow/contrib/session_bundle:session_bundle'.
ERROR: [...]/tensorflow/tensorflow/contrib/session_bundle/BUILD:213:1: no such target '//tensorflow/core:meta_graph_portable_proto': target 'meta_graph_portable_proto' not declared in package 'tensorflow/core' defined by [...]/tensorflow/tensorflow/core/BUILD and referenced by '//tensorflow/contrib/session_bundle:signature'.
ERROR: [...]/tensorflow/tensorflow/contrib/session_bundle/BUILD:213:1: no such target '//tensorflow/core:meta_graph_portable_proto': target 'meta_graph_portable_proto' not declared in package 'tensorflow/core' defined by [...]/tensorflow/tensorflow/core/BUILD and referenced by '//tensorflow/contrib/session_bundle:signature'.
ERROR: [...]/tensorflow/tensorflow/contrib/session_bundle/BUILD:107:1: no such target '//tensorflow/core:meta_graph_portable_proto': target 'meta_graph_portable_proto' not declared in package 'tensorflow/core' defined by [...]/tensorflow/tensorflow/core/BUILD and referenced by '//tensorflow/contrib/session_bundle:session_bundle'.
ERROR: [...]/tensorflow/tensorflow/contrib/session_bundle/BUILD:107:1: no such target '//tensorflow/core:android_lib_lite': target 'android_lib_lite' not declared in package 'tensorflow/core' defined by [...]/tensorflow/tensorflow/core/BUILD and referenced by '//tensorflow/contrib/session_bundle:session_bundle'.
ERROR: Evaluation of query "deps((//... union @bazel_tools//tools/jdk:toolchain))" failed: errors were encountered while computing transitive closure.
Configuration finished

To summarize, the dependencies included inside the if_android and if_ios calls cannot be found. They don't exist in the repository, so that is not surprising. What is more surprising is that my vanilla installation does not return empty lists when if_android is called. I haven't looked into how those functions work, so I'm not sure why that is happening.

Environment info

Setup: CentOS, Bazel 0.3.1, CUDA 7.5, CuDNN 5.1, Tensorflow master (dbe7ee0)

I ran configure and set it up for GPU support. I don't think this is critical, but first I had to open up configure and add --output_base=... to the two calls to bazel, since my setup requires a custom cache directory.

Fix

The if_... lines were added in ed87884, so a fix that I know works is to use its parent commit 7705791.

@woodshop


woodshop commented Sep 10, 2016

I can confirm this issue. In my environment configure fails on the latest commit (7705791) with the following errors:

ERROR: /home/sarroff/repo/tensorflow/tensorflow/core/kernels/BUILD:2207:1: no such target '//tensorflow/core:android_tensorflow_lib_lite_no_rtti_lite_runtime': target 'android_tensorflow_lib_lite_no_rtti_lite_runtime' not declared in package 'tensorflow/core' defined by /home/sarroff/repo/tensorflow/tensorflow/core/BUILD and referenced by '//tensorflow/core/kernels:android_tensorflow_kernels_no_rtti_lite_runtime'.
ERROR: /home/sarroff/repo/tensorflow/tensorflow/contrib/session_bundle/BUILD:213:1: no such target '//tensorflow/core:android_lib_lite': target 'android_lib_lite' not declared in package 'tensorflow/core' defined by /home/sarroff/repo/tensorflow/tensorflow/core/BUILD and referenced by '//tensorflow/contrib/session_bundle:signature'.
ERROR: Evaluation of query "deps((//... union @bazel_tools//tools/jdk:toolchain))" failed: errors were encountered while computing transitive closure.

As suggested by @gustavla, I checked out ed87884 and configure ran without error.

Setup: Ubuntu 16.04, Bazel 0.3.1, CUDA 8.0rc, CuDNN 5.1.5, gcc 5.4

@ibab


Contributor

ibab commented Sep 10, 2016

You can fix this by commenting out the if_android and if_ios calls in contrib/session_bundle/BUILD and the android_tensorflow_lib_lite_no_rtti_lite_runtime target in the core BUILD file.
These seem to be based on Google-internal targets.

if_android doesn't return an empty list directly, but gives you a select object, which is resolved to an empty list by bazel.
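
For context, if_android and its siblings are defined in tensorflow/tensorflow.bzl roughly along these lines (a paraphrased sketch of the macro style, not the exact code from this commit):

def if_android(a):
    # Only pull in these deps when the Android crosstool is selected;
    # on every other platform the select falls through to an empty list.
    return select({
        "//tensorflow:android": a,
        "//conditions:default": [],
    })

Because bazel query's deps() expands every branch of a select, the targets named in the Android and iOS branches still have to exist, even though a desktop build never uses them; that is why configure's query fails here although the branches resolve to empty lists at build time.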

@woodshop


woodshop commented Sep 10, 2016

Confirmed that if I make these changes to the following three files, configure runs without error:

  • Remove all if_mobile and if_android conditions from tensorflow/contrib/session_bundle/BUILD
  • Remove the ios_tensorflow_test_lib target from tensorflow/core/BUILD
  • Remove the android_tensorflow_kernels_no_rtti_lite_runtime target from tensorflow/core/kernels/BUILD
@kwotsin


Contributor

kwotsin commented Sep 10, 2016

Remove all if_mobile and if_android conditions from tensorflow/contrib/session_bundle/BUILD

Do you mean that I should comment out everything related to mobile devices, not just if_mobile and if_android? Or are these two sufficient?

#    "if_android",
#   "if_ios",
#    "if_mobile",
#   "if_not_mobile",
@ibab


Contributor

ibab commented Sep 10, 2016

You don't need to remove them from the call to load at the top of the file; instead, comment them out each place they are used, like this:

cc_library(
    name = "session_bundle",
    srcs = ["session_bundle.cc"],
    hdrs = ["session_bundle.h"],
    copts = if_ios(["-DGOOGLE_LOGGING"]),
    visibility = ["//visibility:public"],
    deps = [
        ":signature",
    ] + if_not_mobile([
        ":manifest_proto_cc",
        "//tensorflow/core:core_cpu",
        "//tensorflow/core:framework",
        "//tensorflow/core:lib",
        "//tensorflow/core:protos_all_cc",
    ]) #+ if_mobile([
    #    ":manifest_portable_proto",
    #    "//tensorflow/core:meta_graph_portable_proto",
    #]) + if_android([
    #    "//tensorflow/core:android_lib_lite",
    #]) + if_ios([
    #    "//tensorflow/core:ios_tensorflow_lib",
    #]),
)
@msevrens


msevrens commented Sep 10, 2016

I'm having this issue as well.

@raix852


Contributor

raix852 commented Sep 11, 2016

I had the same issue. Following @ibab's and @woodshop's suggestions solved the problem.
I think configure needs an option to select the target environment.

@kwotsin


Contributor

kwotsin commented Sep 11, 2016

I fixed the issue according to @ibab's and @woodshop's advice as well.

However, I also commented out copts = if_ios(["-DGOOGLE_LOGGING"]) and any other if_ios conditions, which were flagged as invalid when I was configuring. Thank you!

@Eidosper


Eidosper commented Sep 11, 2016

If you download tensorflow from GitHub instead of using 'git clone', you will not hit this issue.

@woodshop


woodshop commented Sep 11, 2016

Just to clarify @Eidosper's comment: the suggestion is to use an official TF release (e.g. tagged v0.10.0rc0). This is not a solution for anyone who needs the current state of TF development (i.e. the newest commit on the master branch).

@martinwicke


Member

martinwicke commented Sep 12, 2016

@petewarden This seems to be a sync issue, maybe? Do the is_* macros only work internally, and should they be rewritten?

The rules work fine in our CI, which makes it a little more confusing. What's used to determine if_mobile etc.?

@petewarden


Member

petewarden commented Sep 12, 2016

This is surprising because these rules have been in for several months without causing this issue, so something must have changed recently. The is_* macros rely on config_settings in the top-level BUILD file that test which crosstool is in use, so I wouldn't expect them to return non-empty results.

The quickest way to debug this is probably to reproduce it at head and then do a binary search over the GitHub check-ins to narrow it down to a particular commit. This shouldn't be too hard since it happens very quickly, but I'm traveling for the next couple of days and won't be able to work on it until I'm done.
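
For reference, the config_settings those macros test live in the top-level tensorflow/BUILD file and look roughly like the sketch below (reconstructed from that file's conventions rather than copied verbatim; the iOS setting is analogous but keys off a different crosstool label):

config_setting(
    name = "android",
    # Matches only when building with the Android NDK crosstool,
    # e.g. --crosstool_top=//external:android/crosstool.
    values = {"crosstool_top": "//external:android/crosstool"},
    visibility = ["//visibility:public"],
)

On an ordinary desktop or CUDA build none of these settings match, so every if_android/if_ios select falls through to //conditions:default, consistent with the expectation that the macros return empty lists outside a mobile cross-compile.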

@ibab


Contributor

ibab commented Sep 12, 2016

@petewarden: I've narrowed the problem down to commit ed87884 using git bisect.
An easy way to reproduce it is to run bazel fetch //tensorflow/contrib/session_bundle/....

@petewarden


Member

petewarden commented Sep 12, 2016

Thanks for doing that @ibab! I've notified the author of that change, hopefully we should be able to figure out a fix.

@gustavla


Contributor

gustavla commented Sep 13, 2016

@ibab I hope it didn't take you long, because I already did this and reported ed87884 as the offending commit in my original post 😄

Now, the question is: did that commit trigger a latent bug that was introduced at some other point in time, or has it always been there? This is what I thought @petewarden was implying by saying the is_* macros had been around for "several months". Answering that would involve running a git bisect and applying ed87884 as a patch at each step.

I was thinking it might be a Bazel version issue, but I tried both 0.3.1 and 0.3.0 (after a quick fix: #4343), and I have the same problem.

@tmulc18


tmulc18 commented Sep 13, 2016

@shamak


shamak commented Sep 14, 2016

@tmulc18 you should fix that link :)

@andrewharp


Member

andrewharp commented Sep 15, 2016

This seems to have auto-closed when I merged f66b491, but it does not seem to be completely fixed yet, does it?

@woodshop


woodshop commented Sep 15, 2016

No, it doesn't seem entirely solved. The configuration fails in my environment due to the following two targets:

  • The ios_tensorflow_test_lib target from tensorflow/core/BUILD
  • The android_tensorflow_kernels_no_rtti_lite_runtime target from tensorflow/core/kernels/BUILD

However, I am now able to build the project despite the configuration errors, whereas I was not able to do so before commit f66b491.

Commenting out the targets enables the configuration to complete without errors.

@andrewharp andrewharp reopened this Sep 15, 2016

@israelg99


israelg99 commented Sep 15, 2016

I did a fresh clone of tensorflow at the latest master commit, and I confirm I'm experiencing the same issues as @woodshop, so this is not fixed yet.
Note that I didn't have these issues when configuring a few days ago.

However, you can now actually build tensorflow despite the configuration errors, because f66b491 removes the if_mobile, if_android, and if_ios conditions (which previously had to be removed manually).

Going ahead and removing the two targets listed above lets the configuration complete without errors for me, using CUDA 8.

I'll build the example trainer with bazel with GPU support and test whether it works now.

@suiyuan2009


Contributor

suiyuan2009 commented Sep 18, 2016

I'm hitting the same issue.

@jmhodges


Contributor

jmhodges commented Sep 19, 2016

Just got told my #4476 is a duplicate of this ticket, so coming here with: is there anything I can do to help get this fixed? It seems to be hitting a number of external contributors, and it's been 10 days since it was first reported. It's a time sink for all of us to keep rediscovering it and working out the same patches!

@vrv


Contributor

vrv commented Sep 19, 2016

If you remove the reference to //base and the references to the lite_no_rtti_lite_runtime and android_tensorflow_lib_lite_no_rtti_lite_runtime targets, does everything work? If so, I'll prepare a change to strip these out.

I don't know why our CI build isn't catching these obvious errors.

@jmhodges


Contributor

jmhodges commented Sep 19, 2016

Yeah, if you delete all mentions of "//base", "//tensorflow/core:android_proto_lib_no_rtti_lite_runtime", and "//tensorflow/core:android_tensorflow_lib_lite_no_rtti_lite_runtime" it seems to build okay.

I only get these errors when I build with CUDA. Could that be the deal? (I'm also on OS X.)

@andrewharp


Member

andrewharp commented Sep 19, 2016

@vrv I'm halfway done with a CL to fix the export right now.

@vrv


Contributor

vrv commented Sep 19, 2016

@andrewharp woohoo, thanks! send to me :)

@jmhodges thanks for confirming. We also have CUDA builds, so perhaps we're only building what's necessary for tests / pip installation, rather than the entire repo (e.g., we don't try to build tensorflow/..., though maybe we should; at least we should validate the BUILD files, which I thought we used to do). cc @gunan

@jmhodges


Contributor

jmhodges commented Sep 19, 2016

Hunh, weird. It happens as soon as I run ./configure on a fresh build!

@gunan


Member

gunan commented Sep 19, 2016

We do not presubmit with CUDA+Mac; we only have nightlies.
Those builds are broken at the moment, but at first glance the nightly failures look like a different issue from the reports here.

@andrewharp


Member

andrewharp commented Sep 21, 2016

ea07715 is committed, which should fix the remaining configure issues regarding //base and android_tensorflow_lib_lite_no_rtti_lite_runtime.
