
[question] Public and private(build_requires) dependencies #7506

Closed
1 task done
FnGyula opened this issue Aug 5, 2020 · 15 comments
Milestone: 2.0

@FnGyula

FnGyula commented Aug 5, 2020

With regard to static and shared libraries, there's a simple rule for how I generally decide when a dependency should be transitive:

  • if the host package is a shared library or executable, and it links against a static library that doesn't show up in the headers, it's a build_requires (though I'm not entirely sure whether I prefer that over private, but that's for later).
  • if the host package is a shared library or executable and it depends on a header-only library that doesn't show in the interface, it's a build_requires.
  • in any other case, it's a public dependency.
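The rule above can be written down as a small decision function. This is just a conceptual sketch of the stated rule (the function name and boolean parameters are mine, not Conan API):

```python
def requirement_kind(host_is_shared_or_exe: bool,
                     dep_is_fully_absorbed: bool,
                     dep_visible_in_headers: bool) -> str:
    """Encode the rule: a dependency fully absorbed by a shared library or
    executable, and never leaking into the public headers, can be hidden;
    anything else must propagate transitively to consumers."""
    if host_is_shared_or_exe and dep_is_fully_absorbed and not dep_visible_in_headers:
        return "build_requires"  # or private, per the discussion
    return "requires"            # public, transitive dependency

# static lib linked into a shared lib, invisible in headers -> hidden
assert requirement_kind(True, True, False) == "build_requires"
# same dependency, but it appears in the public headers -> public
assert requirement_kind(True, True, True) == "requires"
# a static host library re-exposes its static dependencies -> public
assert requirement_kind(False, True, False) == "requires"
```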

Now, all was good so far because for most of our packages we had a single way to build them, either shared or static. However, as things took off, the realisation set in that we basically can't make that decision ahead of time, because if I build all our libraries as both shared and static, then I have to write the same code over and over in the recipes:

def _is_private(self, p):
    return self.options.shared and not self.options[p].shared

def build_requirements(self):
    if self._is_private("Foo"):
        self.build_requires("Foo/X.Y.Z")

def requirements(self):
    if not self._is_private("Foo"):
        self.requires("Foo/X.Y.Z")

Other than being ugly, it violates DRY (the same condition has to be evaluated in two different methods). It also restricts the nice, declarative feel of requires and build_requires to pretty much header-only libraries (unless that, too, depends on an option, as with boost).

It feels like this is something to be addressed in Conan itself, but tbh I don't know how to do this right. It makes the recipes bloated, and I'd need to redo all requirements in all my recipes to get this right. Any ideas on how to handle this better?

@jgsogo
Contributor

jgsogo commented Aug 6, 2020

Hi, @FnGyula

I understand your pain and I know it is problematic in Conan right now. To handle this properly, I think we would need to wait for Conan 2.0, which should have an improved graph model (better management of requires, visibility, better propagation depending on library type, ...).

Meanwhile, I can give you some advice:

  • As a general rule, use build_requires only for tools. If you are using two profiles to build your libraries, requires belong to the host context and build_requires belong to the build context. You need your library Foo to be in the host context in order to link with it (more info: https://docs.conan.io/en/latest/devtools/build_requires.html?highlight=context#build-and-host-contexts).
  • You can use build_requires("....", force_host_context=True); these will be used only while building the library, and their binaries will belong to the host context. The main problem is that the package-id of your consumer will be the same regardless of the Foo version: build_requires don't modify the package-id. Probably this is not what you want; you want a different package-id for different versions of the Foo library.
  • The best alternative so far would be to use the private keyword:
    def requirements(self):
        self.requires("Foo/X.Y.Z", private=bool(self._is_private("Foo")))
    Try it carefully, you might find some issues with it. Report them, but most likely it is something to fix in Conan 2.0.

@FnGyula
Author

FnGyula commented Aug 6, 2020

Thanks @jgsogo !

So I've always wondered about the difference between build_requires and private, and it's not entirely clear to me at this point. The documentation uses protobuf and gtest as examples. Now, protobuf looks a bit iffy, because while it's true that the protoc part is a build-time dependency, if I recall correctly there's still a library for the client to link against.

GTest is more straightforward, and indeed this is how we use our unit testing frameworks. But how is that different from a libz statically linked into the binaries of the project? It doesn't alter the API or the ABI of the binaries in any way, whether I build it with one version or the other. This is why I thought the idea behind build_requires was to hide the dependency away completely.

But then again, the same applies to private dependencies, though for some reason I was under the impression that they had become outdated since the introduction of build_requires:

private: a dependency can be declared as private if it is going to be fully embedded and hidden from consumers of the package.

As for build requirements, the key point in the documentation seems to be:

There are requirements that are only needed when you need to build a package from sources, but if the binary package already exists, you don’t want to install or retrieve them.

So, if the dependency is fully hidden, because it's already linked into the binaries (it could be a source-based library too), then there's no point in installing it transitively, is there? Anyway, it's a bit confusing.

Your example is compellingly short, and perhaps by using @property I could even use the requires attribute ... I will give it a go.

As for the host and build contexts, I only recently came across this feature; I will need to read up and tinker with it a little more. From the sound of it, though, I think it's useful for another reason: using something like --build=missing to distinguish between the host and the actual package profiles. But as I said, I need to read more.

@jgsogo
Contributor

jgsogo commented Aug 6, 2020

Even if the library is totally embedded (doesn't modify the ABI), it still implements some behavior. For your DLL you want a different binary (a different package-id) when you embed one version or the other... even if they are embedded, they behave differently; they have different bugs. If the package-id is exactly the same, you cannot know which version of the embedded library you are using.
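This point can be illustrated with a toy model. The sketch below is not Conan's real algorithm, just a conceptual illustration: treat the package id as a hash over settings plus whichever requirement versions are tracked, and observe what happens when the embedded dependency is left out:

```python
import hashlib

def package_id(settings: dict, require_versions: dict, track_embedded: bool) -> str:
    """Toy package-id: a hash over settings, optionally extended with the
    versions of embedded requirements. Dropping an embedded dependency from
    the hash means two binaries built against different versions of it end
    up with the same id."""
    parts = sorted(settings.items())
    if track_embedded:
        parts += sorted(require_versions.items())
    blob = ";".join(f"{k}={v}" for k, v in parts)
    return hashlib.sha1(blob.encode()).hexdigest()[:10]

settings = {"os": "Linux", "build_type": "Release"}

# Tracked: different embedded zlib version -> different id
a = package_id(settings, {"zlib": "1.2.10"}, track_embedded=True)
b = package_id(settings, {"zlib": "1.2.11"}, track_embedded=True)
assert a != b

# Untracked: the id cannot tell you which zlib is inside the DLL
c = package_id(settings, {"zlib": "1.2.10"}, track_embedded=False)
d = package_id(settings, {"zlib": "1.2.11"}, track_embedded=False)
assert c == d
```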

The two profiles are useful to distinguish between the libraries you are building and the tools you are using. Conan doesn't propagate exactly the same information from packages that belong to the host context (you will link with these) as from packages in the build context (you only run these). Typically this is most evident in a cross-building scenario: you are building your project for Android, linking with boost or zlib (requirements), but you definitely want the Android NDK to use Windows/Linux binaries so it runs on your machine (build_requirements).
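To make the two contexts concrete: a cross-build like the one described is typically driven with two profiles, e.g. `conan install . --profile:host=android --profile:build=default` (the profile names are illustrative). A minimal host profile for that scenario might look like this; the concrete values below are example assumptions, not taken from this thread:

```
[settings]
os=Android
os.api_level=21
arch=armv8
compiler=clang
compiler.version=9
compiler.libcxx=c++_static
build_type=Release
```

With this split, requires are resolved against the host profile (Android binaries) and build_requires against the build profile (binaries that run on the machine doing the build).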

@FnGyula
Author

FnGyula commented Aug 6, 2020

Now you've left me really confused :) The package_id can be calculated in multiple modes, but the default is that only the major version number is taken. That, I figured, was a sensible default, since I don't care which Python 2.7.x I'm building against for the compatibility of a package that depends on Python 2.7.x.

So that means the package id isn't intended to describe the exact build; it is intended to describe compatibility. There's a build_id which I haven't used yet, but that might be useful to distinguish between different artifacts in the cache. To me, the whole package id, hidden deps, etc. are basically a way to reduce the pain of rebuild-the-world updates where you don't need them.

If I know that zlib produces compatible compressed artifacts between versions, I'm totally comfortable keeping the same package id for my Python package whether it was statically built against zlib 1.2.11 or 1.2.10. Mind you, the project I'm working on has it all: Qt, Python, Boost, VXL, and 40 others, so we try to minimise rebuild-the-world time when we can. Otherwise, building everything from source in one build would be the safest bet.

@jgsogo
Contributor

jgsogo commented Aug 6, 2020

The default mode for the package ID is semver_direct, yes; it means that consumers will get a different package-id when the major version of their requirements changes (build_requires are not taken into account). I'm happy that you are aware of the other package ID modes.

One of the purposes of package-id is ABI compatibility, that's for sure, and for an embedded library, regarding ABI, you don't care about the version embedded inside your DLL. Nevertheless, other users consider it very important to control which version of every requirement is used in the graph, even if they have to rebuild the world again and again (it is a compromise). This is the main motivation for the other package ID modes: not only to control the ABI, but also the versions of every piece of code compiled into a binary.
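The effect of the default mode can be sketched in a few lines. This is a simplification of what semver_direct does for a direct requirement's contribution to the consumer's package id (the function and the "X.Y.Z" masking notation are illustrative):

```python
def semver_direct_entry(ref: str) -> str:
    """Sketch: under the default semver_direct mode, only the major version
    of a direct requirement feeds the consumer's package id, so patch and
    minor bumps collapse to the same entry while a major bump changes it."""
    name, version = ref.split("/")
    major = version.split(".")[0]
    return f"{name}/{major}.Y.Z"

# 1.2.10 and 1.2.11 contribute the same entry -> same consumer package-id
assert semver_direct_entry("zlib/1.2.10") == semver_direct_entry("zlib/1.2.11")
# a major bump contributes a different entry -> different package-id
assert semver_direct_entry("zlib/1.2.11") != semver_direct_entry("zlib/2.0.0")
```

Modes such as full_version_mode sit at the other end of the compromise described above: every version change of every requirement produces a new package id, at the cost of rebuilding the world more often.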

@FnGyula
Author

FnGyula commented Aug 17, 2020

@jgsogo So the system I'm working with isn't designed to rebuild the world. That's where I'm coming from.

Coming back to the question: what is the definitional difference, really, between private requires and build_requires? The documentation is hazy about this, or at least their definitions overlap, as quoted above.

@ohanar
Contributor

ohanar commented Sep 4, 2020

@FnGyula
As an example, suppose you have a project that you are trying to build for Android. You almost certainly aren't going to compile the project natively on Android, so instead you might build it on Windows.

Maybe your project uses CMake to build; you want the cmake executable to run on Windows, not Android. If your project privately embeds a static library, you want that static library to be built for Android and not Windows. The former is a build_requires, as its binaries are for the build machine; the latter is a private requires, as its binaries are for the host machine.

Generally speaking, build_requires are for things that are only needed to build the project, whereas requires are for things that the project needs to run. Your OP describes things that could be private requires, but not build_requires.

@FnGyula
Author

FnGyula commented Sep 7, 2020

Thanks @ohanar for the explanation. Now onto changing my build_requires to private deps :)

@FnGyula FnGyula closed this as completed Nov 16, 2020
@FnGyula FnGyula reopened this Nov 16, 2020
@FnGyula
Author

FnGyula commented Nov 16, 2020

Didn't mean to close it...

I did try to change our dependencies to use private instead of build_requires, but that turned out to be plain wrong. The problem with private requirements is that they are still downloaded for any consumer of the package, which is absolutely not needed for static libraries already linked into the binaries. My guess here is that private requirements are more of a hint for the generator, so it can generate things like PRIVATE CMake dependencies.

So, overall, what I need is something like build_requires but with some kind of leniency. We recently ran into an issue where a recipe declares a dependency as a build_requires because it is a static library linked into a shared library, with no reference to it in the headers. However, adding another dependency that has a public dependency (as in, requires) on the same package clashes with the version in the consumer package's build_requires. I think that build_requires has two major issues here:

  1. Conan resolves dependencies only once when using --build=missing, which means that if there are two missing libraries that both have a build_requires on a particular package, the build fails if the two versions differ. This is more than inconvenient, because the two packages have nothing to do with each other on their binary interface, nor do these build_requires have any effect on their consumer. This could be solved by resolving each package's build_requires separately.
  2. The issue described above, basically. There are cases where the requires and the build_requires meet in a single package (as requires propagate) and there's no good way to resolve it without overriding. I wonder if version ranges can solve this case, but even if they do, both this point and point 1 could be solved by optionally allowing build_requires to be overridden in the same way.

The assumption that build_requires packages can only ever be build_requires in all circumstances is wrong. In addition, the generic shared option constellations can be particularly problematic, because you really don't want to propagate a dependency that has no ABI/API relevance downstream.

Overall, I do believe that the private/build_requires system needs a big refurbishment!

@ohanar
Contributor

ohanar commented Nov 16, 2020

@FnGyula

See also #7016.

Indeed there are a good number of issues with build_requires and even more so with private requires. These are intended to be addressed with Conan 2.0.

With build_requires in particular, a goal is to have a dependency graph completely decoupled from normal requires. You can opt in to an experimental version of this behavior today by using separate build and host profiles, but you should note that various behaviors change when you do so. In particular, you can't readily link against build_requires, because they might not be valid for the host machine (see the example in my previous comment). In other words, the current soft wall between requires and build_requires will become a hard wall in Conan 2.0.

@jgsogo
Contributor

jgsogo commented Nov 17, 2020

Hi, @FnGyula

I can subscribe to everything @ohanar has said in previous comments. The concept of build-requires vs requires will work as described: build-requires are intended to run on the build machine (CMake, protoc, GCC, ...), while requires are the libraries (or other executables) that will be compiled for the host machine. You can't link against a build-requires, and build-requires won't propagate linking/include information, only environment.

I totally agree that require-private and the Conan graph need a big refurbishment, and it is our intention to address this for Conan 2.0. We haven't defined the scope of this refurbishment yet, but we will consider things like the ones you mentioned: visibility (public, private), consequences derived from linking (shared, static), the DLL hell problem, ... We will start thinking about it soon, but it is something we cannot add to Conan 1.x; if we change the graph model it will likely break current behavior.

@blackliner
Contributor

I just had the same issue: missing FindABC.cmake files for a private=True requirement that was then taken as a build_requirement. BUT: I have another package that also has private=True AND is used as a python_requires, and this combination does end up in a generated FindABC.cmake module 🥳

@memsharded
Member

Hi all!

This has been a challenging feature for Conan 1.X, but we are designing Conan 2.0 to account for this. Regular library dependencies will not need to be build_requires, and they will manage to propagate information correctly. These discussions in the Tribe 2.0 are relevant:

We have also already done a lot of preliminary work and proofs of concept for this in the develop2 branch for 2.0, and so far it looks promising; this issue will be solved by the new graph in 2.0.

@memsharded memsharded added this to the 2.0 milestone Jul 27, 2021
@dev-789

dev-789 commented Sep 17, 2021

Yeah, it would be awesome if private requires also cut off the dependency graph for the consumer like build_requires does 😄
I keep using build_requires, but since it does not influence the package_id, a change in the required package does not enforce rebuilds of the consumers :/. Introducing it a second time as a python_requires (as mentioned in the docs) makes it transitive again.

One more finding: we have a private require (Pkg A -private-> Pkg B) and generate a lockfile for our overall product. The lockfile is used on the customer side to install all our packages (including Pkg B). After the installation into the local cache, the remotes are disabled to ensure no interference. We now see the following:

  • The lockfile states Pkg A
  • Pkg A is installed as a recipe
  • The customer project build fails as it requires Pkg B, and it seems like this is looking transitively for the Pkg A binary

@memsharded
Member

This was solved some time ago by the new graph model in 2.0, to be released soon, so closing this issue.
