Integrating with Linux package management systems #1214
hi, welcome! whew, there's a lot here. so, i think that the thing you're going for is probably feasible; your requirements sound similar to the goals of rust-lang/cargo#3815. however, your description here jumps from outlining more general goals to stating specific requirements that, i suspect, will make it harder to actually achieve those goals. in order to try to direct the conversation in what seems likely to be the most helpful way, i'm going to eschew discussion of details in favor of focusing on the big question:
dep is built around a solver that is entirely abstracted from I/O - it works on an entirely in-memory, symbolic representation of the problem. this is by design, with the intent that alternative tools can be built. however, the consistent operation of dep, and the various guarantees it provides, are predicated on having data that is only available via at least some kind of I/O, very often over the network (or at least originally retrieved from the network, but cached locally). this is encapsulated by the SourceManager. but, i also think that creating an entirely new source of that data is a bigger undertaking than it may first appear.
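to give a rough sense of that boundary, here's a minimal sketch - illustrative only, not dep's actual gps interfaces or signatures:

```go
// A trimmed-down illustration of an I/O-abstracted solver boundary.
// The solver never touches disk or network itself; it asks something
// shaped like this for every fact it needs about the dependency universe.
package deps

type SourceManager interface {
	// ListVersions reports the versions available for a project,
	// wherever they came from (network, local cache, or a registry).
	ListVersions(projectRoot string) ([]string, error)

	// GetManifest returns the constraints a project declares at a given
	// version, reduced here to projectRoot -> constraint strings.
	GetManifest(projectRoot, version string) (map[string]string, error)
}
```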
there's active work underway to introduce a registry concept into the ecosystem: #1172 is the first PR. i suspect that the facilities we build into registries will provide these guarantees, as well as the curation controls (i.e., Fedora packaging folks get to decide which versions of which dependencies are available) that are implicit in your description. i suspect that the easiest way to provide such a registry would be to create one that does on-the-fly translation from rpm repositories. once we have our registry pattern in place, i'd be fine with having that done entirely locally, with the "registry" just being on the other side of a socket, if that would be suitable.
builds are already strongly reproducible in dep, with facilities in place to verify that the inputs to a build haven't changed out from under you. the only thing we don't really defend well against today is a left-pad-like event - but in your context, that's unlikely to be a concern, as registries proxying to rpm repositories would obviate it.
while you'd have to expand on what "auditable" entails for me to be certain, our work on verification (#121) is at least a strong step towards that. now, that model is still largely based on essentially p2p verification, which means that many of TUF's models aren't directly applicable to our situation. however, that basic hashing system ought to be helpful as we continue work on registries, which - being central - put us solidly in TUF's domain. so...
dep itself is unlikely to emit such information directly, though a standalone tool could probably be built that does. but the path and constraints you propose are likely to be slower and more error-prone; if you're amenable to different approaches that afford dep/a parallel tool somewhat more control, then i think it could be both more robust and easier to achieve.
Thank you for the answer; it's difficult to find the right level of detail to share when people come from different universes. I'll try to simplify. As I understand it, go dep does:
A Linux OS-level, cross-language artifact-management system splits the steps as follows:
That's a model that works with cargo, pip, maven and so on. There are only so many ways to do software engineering, so barring pathological cases (for example, a language that would consider that version 2 succeeds version 3.0.0) a common solver works well, as long as the language-specific constraints are properly translated into the system solver's language. I'd like to use go dep in step 2. Thus:
so the common theme here is going to be that "this is not easy, because dep does static analysis of code."
yes, this is largely correct, with two amendments, one quite crucial. first, less crucial:
dep is not involved in the initial fetching of a project - what we typically refer to as the "current" or "root" project.
because we have to perform static analysis of the code itself, not just metadata files (like Gopkg.toml),
there are going to be some nontrivial problems in performing this translation. there's no really easy way that i can think of to break these down, so i'll just list them out in a way that's hopefully clear enough for the implications to be grokkable:
further, if GOPATH is foundational to your strategy, as this seems to suggest:
then...well, it doesn't bode well. the single most major pain point in dependency management for Go has historically been GOPATH itself. in essence, what you're looking for here is for dep to take its domain-specific SMT model and re-encode it in terms of rpm/dnf's more generic SMT for this domain, which is then interpreted by libsolv (at least, if dnf). i have to quote Knuth here:
so...yeah, this path is fraught. yes, we probably could develop a more formally specified grammar for the inputs to dep's solver. but that's not something on our immediate horizon, and it seems likely to me that crucial information would be lost - especially as dep evolves the model. these difficulties are why i initially suggested that finding a way to afford dep at least a bit more control over the process would likely reduce your pain by a lot - for example, not trying to force dep into a local-only mode. of the goals you've cited thus far, the only one that dep doesn't have support for either today or in the immediate future is signed releases - and that's something we could probably roll in with the registry work.

i understand both the desire to and value of keeping dep in the same box that other such systems have historically fit into. and i get that system package managers are gonna have to make the decision on the basis of what fits most sustainably into their models. i'm just trying to highlight that there's a lot of pain to be had in swimming upstream in this way, and you might be able to satisfy your requirements more easily if you lean more heavily on the facilities dep already provides.
Thank you for the nice answer. It makes me cautiously optimistic, as I recognize stuff rpm already knows how to deal with. Even if it's not intentional on go dep's part, software engineering constraints force most languages to adopt similar choices as soon as they acknowledge those constraints (Java, for example, took awfully long to admit some of them).
This is actually pretty much the model expected by rpm
Typically the install part targets $buildroot/final_path, then rpm isolates $buildroot so the language-specific solver commands only see a virgin minimal system + artifact files in /final_path + the files of any artifact that the artifact manager declares is necessary in this "build" phase (build as in create an rpm that can be used by other rpms, not necessarily build as in compile code).
Right now we use /usr/share/gocode/src/project_tree as final_path; it would be trivial to change it to /usr/share/gocode/pk/dep/sources/project_tree, as long as pk is a generic root, not the project name.

You can think of an rpm system as a giant shared virtual vendor tree. Only the necessary parts of the tree are installed (and installing the whole tree is not possible, since some parts can conflict with others), users of the system can request the parts they need at build, run or test time, and tree parts are not mutable (when you request a part you always get the same part). So you can not have dep(change X property from Y to Z at install time). But the virtual vendor tree can perfectly contain both dep-withX=Y and dep-withX=Z, so the non-mutability is actually less constraining than one may think.

Where I suspect rpm and go dep design differ is that rpm is fully recursive, while go dep wants to perform a holistic analysis of the root project and all its deps in one phase (but I may be mistaken about go dep). In the rpm model an artifact is only analysed at creation time, and the constraints of artifact foo = intersection of (constraints of foo's first-level deps, constraints of foo's second-level deps… and so on). Would it be so difficult for go dep to only compute first-level deps, and rely on the computation it did before for next-level dependencies?
rpm understands all of "a dependency must be present", "a dependency must be present in set X", and "conflicts with this dependency in set Y" (usually sets are either a specific version, more than version X, or less than version Y), and a lot more; the sketch below shows the basic forms. Now, distribution policy may be to forbid shipping artifacts with some forms of convoluted constraints rpm is technically able to express, because in the distributions' experience, while those constraints are technically possible, they are bad software engineering and end up in problems some years later. But let's focus on technical translation here.
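In rpm spec terms, the basic forms look like this (package names and ranges illustrative; the boolean form needs a reasonably recent rpm):

```
# a dependency must be present
Requires:  foo
# a dependency must be present in set X
Requires:  foo >= 1.2
# conflicts with this dependency in set Y
Conflicts: foo >= 2.0
# richer boolean form
Requires:  (foo >= 1.2 with foo < 2.0)
```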
As you've discovered, choosing which versions to make available is the bone of contention when creating a solver-centered system. This has been debated from the creation of Linux distributions to this day. It is not possible to have a perfectly coordinated software universe where only a single version of a given component exists at one time (except for very restricted and shallow universes). Therefore, single-version systems are utopia. However, while utopia is not realistic in our imperfect universe, it can be a very useful target to pursue.

In fact, there are huge (hidden, and deferred) costs in shipping every possible version of a component that may have been selected by the other components in the software universe. Yes, in free software forking is cheap (staying stuck on one specific version and ignoring later changes is a form of forking, since the project authors move on). It is also a way to accumulate technical debt. You can not sustain forward momentum for long if there is no technical or social mechanism to force convergence on a limited set of versions for a given component, and make this set progress with time. Ignoring this point bankrupted SUN Java to the point Oracle is still struggling to relaunch the ecosystem.

None of the people advocating "each component is allowed to lock all its deps at specific versions" ever managed to build a sustainable system, even though it is trivial to create technically (just use a versioned and partitioned tree). They start strong, but the drag of all past versions, which need shipping, checking and fixing in addition to the ones required by newer components, quickly fossilizes them. No software project is ever finished; there are always new security alerts, new killer features, etc. that require reviewing past code, if only to check if it is affected. Besides, they tend to accumulate conflicting requirements (though that may be solvable with hard partitioning). And they quickly suffer from a terrible image, to the point software consumers start to actively avoid them. One only needs to be affected by so many problems, which are fixed in foo project's bardep latest version, except foo project version-locked bardep so long ago it is unable and unwilling to port to the fix, to conclude you do not want to touch foo project nor any other project that seems to use the same software engineering rules.

Linux distributions sometimes move too slowly for people's taste, but they do move on. And forward velocity is just a compromise between level of QA and amount of effort in a distribution's case. In fact some software projects are now asking distributions to lock whole runtimes (sets of dependencies with specific constraints and APIs) to force their ecosystem to converge on those runtimes, since they found out a single component version per system was still too dynamic to optimize maintenance.

Therefore, even though rpm systems can and do allow multiple versions of the same component, since they need to work in the actual world, policy will usually be to treat everything but the latest packaged version (or major version) as a temporary exception that needs to be phased out mid-term. Even though that annoys big-time software developers who have chosen a different upgrade cadence, it is usually possible to come to a compromise as long as both parties make an effort.
I agree this can be a very unpleasant phase; no party looks forward to compromising and taking other people's needs into account, and people are often passionate on both the distribution and project sides, but it is the cost of sustainability and forward progress. I just wish more people understood this. From a purely technical point of view, to make multiple go dep versions work in rpm, I would need:
The alternative is to have a form of component registry that go dep consults to learn where artifact foo version bar is on the filesystem. Several software ecosystems work like this. However, since one still needs to install the files somewhere, someone will still need to come up with an installation convention, and in my experience reading and writing to the registry usually turns out a lot more complex and brittle than agreeing on where the files should be in the first place.
Yes, right, I don't worry about the rpm solver engine, it has proved itself time and time again (and is periodically rewritten for features and performance); the difficulty is to encode the constraints, and to choose sane constraints. You can encode perfectly bad software engineering; it stays bad software engineering. Choosing sane constraints is a discussion we need to have with each individual project. Encoding the constraints is the discussion we are having here. I don't really need a registry.
I need the dep solver to output the result of its computation somewhere (command output or a result file) so I can scrape it and translate it to rpm speak. I need to understand the output format so I make the best possible translation for all parties, and I need some way to be alerted when there are additions or changes to the output format. While I do agree that, in theory, there are so many things that can go wrong that such a translation is frightening to contemplate, in my experience projects that do not come from a solver culture start by over-specifying their needs ("defensive" constraint programming), before settling on a "good enough" command subset which is more or less always the same and not that difficult to translate.
I do want to rely on go dep as much as possible :) However, I need to integrate go dep into the rest of the system, because go dep will only ever manage the Go part of a software project, and I'm already seeing (and needing a way to deploy) software projects that cross the Go language barrier. If go dep were better than the existing box, and able to manage everything in the existing box (ie not only the Go part), I might mourn the old tools a little but that would not stop me from switching.
i'm a strong believer that each language's package management problems are all generally quite similar, and are best thought of that way; i wrote a lengthy article that rests on that premise. at the same time, researching and writing that article, and the subsequent time spent working on this project, have convinced me that there are some system properties that do actually matter quite a bit when it comes to design choices in this space - and the ones that end up mattering can be counterintuitive. so i'm generally wary of dismissing language-specific concerns as snowflaking until i'm sure i understand the concern.
like...
while we could potentially design some hacks around this, it's not possible for dep to operate from an arbitrary disk location right now 😢. dep requires placement within a GOPATH. one of the gists i linked earlier describes a possible path towards dep's operation at arbitrary locations.
yes, these are basic examples of what rpm can express. but they don't appear to cover the case i'm concerned about: "if a dependency is required, then it must be in set X". also note that this is only really a problem when combined with the fact that the unit of dependency, the repo/project, has only one version in any given solution. i realize this may sound similar to optional dependencies, but it really isn't, in large part because of the real difference that we need to talk about:
"installed" - or more precisely, "code is present on local disk" - means something very different in an rpm world than it does for dep, or for most LPMs. for SPMs, there is generally one global "installed" space - as you described it, the subset of the virtual vendor tree that happens to be present on a system - and a large part of the job of rolling a release of a distro is checking that the combinatorial sets of versions are largely mutually sane, such that any individual subset of that universe placed on any particular machine will also be sane. the final expressive form this takes is the code that is actually, literally, on disk. dep/LPMs, on the other hand, generally avoid global, unversioned pools for their packages, as the scope of their operation is restricted only to a single project at a time. we don't care terribly much about what code happens to be on disk, as code happening to be on disk/installed is not an endorsement of it. that's up to the compiler or interpreter, which the LPM somehow instructs to pick a few particular versions out of the multitudes that may be "installed." i tend to view this as a reflection of Conway's Law: for LPMs, responsibility for sanity is scoped to a project, and sits in the hands of the developer(s) responsible for that project. it's generally not possible to take on wider responsibility than that by definition, as the set of packages (and people producing them) in LPMs is collectively uncurated (in Go, it's not even knowable). but for SPMs, where the goal is system-level sanity combinatorial, the set of packages & people responsible for composing the known-good combinatorial set of them is generally knowable, as that combinatorial set is itself an artifact to be agreed upon and distributed (as the rpm tree). so, if you're trying to create a world where all versions of Go software (installed via rpm) on a given machine have to agree on all the versions of their dependencies to use, then - reiterating my last response - you're going to be swimming upstream. it might, might, be acceptable for a world where the user isn't a Go developer, and/or where they aren't installing that much Go software. but this requirement is what GOPATH's structure originally imposed on the Go community, causing great pain, and was the original impetus for the creation of tools like dep.
this is a foundational assumption of dep. in fact, barring some weird exceptions that are now possible (but dep disallows), it's a foundational requirement of the go compiler. though, the gist i linked describes one possible form of a planned future where that may no longer be the case. to be clear, if you're trying to exert control at this level, you could find yourself performing absurdly low-level invocations of go's compilation toolchain, rather than relying on the higher-level commands. again, there be dragons.
both of my comments so far suggested that a registry would be an easier path. also, to be clear: you basically can't control dep's behavior by messing with things on the local filesystem. it does its work almost exclusively based on what it finds from upstream sources, over the network.
the result of dep's solver's computation is the Gopkg.lock file.
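for a rough sense of its shape, an abridged example - project, revision and digest illustrative:

```toml
[[projects]]
  name = "github.com/pkg/errors"
  packages = ["."]
  revision = "645ef00459ed84a119197bfb8d8205042c6df63d"
  version = "v0.8.0"

[solve-meta]
  # a hash over the solver's inputs, used to detect when re-solving is needed
  inputs-digest = "..."
```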
i do not have a precise understanding of what you're trying to express by "fully recursive," "holistic analysis," or even "one phase" here - only the general senses of the words. to the user's eyes, dep has one phase. internally, the algorithm is similar to any other constraint solving algorithm - it has many phases and many moving parts, and exits only when it either finds a solution, or determines that there is no solution.
you may be able to extract what you're looking for from what we hope to make of dep status.
no. the entire reason the underlying problem here is difficult is because these graphs do not have a structure that's predictable a priori - you can't effectively partition them into "levels." say we do a full pass through a project's direct dependencies, selecting versions for each. it's entirely possible that, three 'levels' deep, something could point back to one of the original direct dependencies from the "first level" with constraints that force us to select a different version. some of the "first level" work now must be revisited, which effectively nullifies the meaningfulness of the levels.

while i appreciate the considerable effort you're putting in with these questions and descriptions, let's please leave out the history lessons, assertions about certain tradeoffs being "bad software engineering," opining about dep's place in your evolutionary model, etc. it smacks of the same sort of condescension with which SPM folks have sometimes treated LPM folks (i.e., like we're irresponsible amateurs) in the past. those conversations have generally not had productive results, and i'd like this to remain productive 😄
I wouldn't put it quite this way: the designs are usually quite similar; what is awfully tricky is the grammar. Everyone reuses the same concepts, but the terms often map to slightly different boundaries. That's why I opened the issue: to avoid making bad guesses about the canonical source of Go dependencies and the meaning of those dependencies.
Yes, I need to investigate more why my predecessors chose to reimplement the install command and not use the Go one (for build, that's clearly because of GOPATH/src). We use native ecosystem commands in other languages just fine, and distro workarounds, while handy, are a source of long-term problems and conflicts. Probably a combination of:
Nothing earth shattering, just more work to do our side and Go side so things work well together. And possibly some of this already exists Go-side, I'm just not aware of it. Needs investigating, so many things to do, so little time…
For what it's worth, it's a major PITA. I had to write a macro that creates a fake _build/src/import_path, with the last leaf symlinked to the current directory, adds the result to GOPATH, and cds to the leaf (the last one for an evil unit test that was not content with the files being in GOPATH with the correct directory structure, but wanted $PWD to match). It works but it's very ugly, and it needs to be invoked pretty much before every go command. Would it be possible to just add a switch to Go commands to explicitly set the import path instead of inferring it from directory structure? The first thing our automation does for Go projects is to define an import path variable, for use by macros and other packaging helpers. Alternatively, as I wrote before, it would be nice to have the project id (import_path) be part of the project metadata, in addition to project dependencies, so everyone can source it from the same canonical place.
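The macro boils down to something like this (import path illustrative):

```sh
ipath=github.com/example/project           # the project's canonical import path
mkdir -p "_build/src/$(dirname "$ipath")"
ln -s "$PWD" "_build/src/$ipath"           # last leaf symlinked to the current directory
export GOPATH="$PWD/_build:$GOPATH"
cd "_build/src/$ipath"                     # for the tests that insist $PWD matches
```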
I mostly like its contents with the following caveats:
Actually "conflicts with dependency X in set of Y" has the same rôle in rpm land. It translates to "if the solver needs to make available X (and the only reason it may want to make available X is because it is required by something, it can choose any version but the set in Y". Basically rpm wants projects to be optimistic about what they accept, forbid what they actually know won't work, instead of whitelisting a small set of tested versions. The reason is that in the rpm distributions experience whitelisting results in overly strict and rigid requirements, that make it awfully hard to share code or update components, resulting in deadlocks or even crisis when a security alert forces quick updating of a project (our solver people hate explicit version deps). I agree the rpm variant is unintuitive, that left alone projects will ask for the one you chose (and not realise they are deadlocking themselves before it is too late). So there is no problem translating to rpm, as long as there is a clear way to invert the set you want in a forbidden set (I, obviously, would prefer to have go dep perform the inversion, since it understands Go dependencies way better than I will ever do. I can code an inversion macro otherwise rpm-side if you point me to the kinds os set that would need inverting). (The alternative is to force install of the dep in the defined version set which is ugly and wasteful but trivial to do).
That is not a problem; there is no identity between source project and unit of dependency in rpm land. I can easily tell rpm either to split the source project into separate rpm files (one unit of dependency per file), or to have a single rpm file that provides all the units of dependency, or a mix of both¹. My limitations are that:
And I'd rather use the same splitting strategy all the time than revisit the rules for every go project.
Well, that's precisely the point. An rpm system is first and foremost user-oriented, not developer-oriented. It is optimized to consume software, not develop it. It is designed so users do not have to worry about all the special rules that matter when developing each of the tens of thousands of components that a user may consume daily as part of their IT usage. And that's perfectly fine like this. You need systems that optimize working on a specific component in a specific language (the various language-specific and developer-oriented systems like go dep) and you need systems (like dnf/rpm) to distribute the result as part of a larger whole as soon as your target audience becomes larger than the original developer set. To maximize success, the handover from developer-oriented to user-oriented system must be smooth. Otherwise, at the first problem, each stakeholder project will insist on users downloading its own binaries, made with its own rules, on its own site, which basically does not integrate nor scale, even though it is the most convenient for the original developers (as long as they are not cast in the user role).

That being said, you've missed a very important nuance of modern Linux systems. Except for specific distributions such as Gentoo, nowadays Linux packages are not built in the target run environment. They are built in isolated, secure, minimal environments, to limit unwanted side effects, make sure the result can be easily reproduced, and for security reasons. That's where the "no network calls at project build stage" is coming from. That also means it is not too difficult to have one component version exist when building A, and another when building B. Technically, rpm distinguishes between build dependencies (stuff that needs to exist when creating an rpm file; typically, for a go project that does not use shared libraries, the source files of all the projects it depends on) and runtime dependencies (stuff that needs to exist when using the software, whose version needs to be agreed on by all the things running at the same time, or bad things happen). So an rpm environment can accommodate different build-time dependencies for different projects (actually, runtime too, as long as the file paths are different and software is able to distinguish between the runtime versions to choose the one it needs). It just needs unambiguous versioning of the build constraints to know what version to make available at build time. Therefore it is very possible to translate a dependency system where versions vary from project to project to the rpm solver.

However, and practically, there will be a demand from distributors for projects to converge on a limited set of versions of the same component. Every new version you inject in the system is another version to check in case of problems (security or user-reported). So while in theory infinite variability is possible, in practice no one has the means to handle it. The post-mortem of any serious security incident nowadays is almost always "the fix existed, but it was not identified or applied everywhere", usually because vendoring variability made identifying and propagating the fix overly difficult. I know developers hate this kind of demand. It's more work on their side. It's insurance for problems that may never materialize. I also know of no magic solution. Pretending no problem will ever be identified in a bit of code in the future is not realistic.
Pretending that projects that could not agree on common dependency versions will suddenly switch to the same ones on short notice in crisis mode is not realistic either. It's a lot easier for everyone involved if your starting point is a limited version set (limited, not necessarily single-version). That's where the "everyone uses his own git commit" model falls over and you start needing releases and versioning work, with a limited set of major versions and enough backwards compatibility inside a major version that every user can be easily upgraded to the major's tip with the security fixes. That's basically what Google is attempting for Android with Treble.
I hope this future does not materialise. Java permits such a thing through classpath hierarchies, and it's an endless source of subtle and difficult-to-diagnose problems. The convenience of not having to synchronise on a common version while building a single binary is not worth it. As long as most Go projects don't use this, we'll ignore the others.
Good way or not, I don't have the choice: no network at build time is a hard rpm requirement, I can't change it. I can teach rpm to read a Go project's build dependencies ("when something tries to build against this project, it will also need X, Y and Z with those constraints") but I can't let anything inside the project reach anywhere outside the restricted build environment, which will only contain the files corresponding to the project, X, Y and Z, and the go compiler. That's how rpm still manages to build software originally released on long-dead usenet forums or mailing lists (some parts of a Linux system are that old). GitHub can burn tomorrow; all the projects it hosts will still build fine when packaged in an rpm distribution (same thing for the Red Hat parts, BTW).
Ok, I'll focus on this then. Thank you for the fine advice! Is this file syntax described somewhere?
Sorry if I wasn't clear. What it means, basically, is that rpm only computes the requirements of any given unit of dependency once. You can not reinterpret them in the light of the declarations of different root projects. So a unit of dependency can state: And X may be pulled in or not by the root project or another unit; the constraint will still work. But a project that needs the unit of dependency can not state The final constraints are the intersection of the constraints of all the units involved in building a project; a unit controls the constraints it injects in the rpm solver, but not what the other units injected. I hope that's clearer and compatible with go dep; if not, I can try to explain it some other way. I did not intend to mean that once something deep in the dependency tree puts a constraint on a unit of dependency, no other unit nor the root project could put another constraint on the same unit. That's perfectly fine as long as there is a way for the solver to satisfy both things at once². I'm sorry if all this is very long and verbose; I want to avoid misunderstandings and create something that works well for Go projects and Go project users. Do tell me when I'm not on the right level of abstraction.
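To make the intersection concrete (names and ranges illustrative):

```
# unit A, somewhere in the tree, injects:
Requires:  golang(github.com/foo/bar) >= 1.0
# unit B, elsewhere, injects:
Conflicts: golang(github.com/foo/bar) >= 2.0
# the solver must satisfy the intersection: 1.0 <= version < 2.0
```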
Hey, I didn't mean to disparage your knowledge in any way; you're way more competent than me in some domains (starting with Go). I'm just sick to death of being treated like an irresponsible amateur by project people who can't seem to imagine Linux distros may have ever learnt something about dependency management, despite successfully shipping tens of thousands of software projects for several decades³. I guess I was being defensive, sorry 😄. I'd rather discuss the how of go dep and rpm solver choices, and why we feel they are good engineering choices, in order to find a technical common ground. Thank you for your patience!

¹ In fact projects, once packaged by rpm distributions, are often more granular than at the original project level, because the original project often feels all its files are useful all the time, while project users may disagree.

² For example, if a unit of dependency specifies X in set Y, and another X in set Z, and Y and Z are disjoint, rpm will consider the constraints solvable if it finds something that provides X in set Y, and another thing that provides X in set Z with no file conflict, and unsolvable if they both exist but have at least one file in common with different hashes. That's quite subtle and unintuitive, and one reason I worry so much about filesystem use conventions. That's also why rpm uses conflicts to limit the version set: X > Y + X < Z is not equivalent to Y < X < Z in some cases.

³ Usually because their measure of technical success is how much you charge for the result, and any death-march software project charges more than your average distribution.
sorry, i will get back to this soon - just stupid-busy at the moment 😢
NP, I perfectly understand; go-dep is not the only thing needing work to make distributing Go software less painful.
Hi,
I'm looking at integrating Go software in my Linux OS package management system.
The basic reason is that Go projects can and do use resources (data, apps, libs) not released as Go projects, and non-Go projects may wish to depend on Go software. A language-specific solution only works as long as one stays within the language ecosystem. And experience shows that subsuming everything else into a language-specific ecosystem is too much work.
In other words I want to jump to step 7 of ¹, while the Go community is working here on step 6.
The targeted system is rpm/dnf (used in Fedora / Red Hat / Centos). rpm has a long history of integrating with multiple language-specific dependency tools. And what works for rpm should work in other Linux systems since years of intense competition made those systems converge on the same features.
To integrate cleanly, I need the following properties:
1. a standard, predictable filesystem layout for installed Go projects;
2. a way to obtain a project's identity metadata (import path, version);
3. a way to read a project's declared dependencies with language-specific tooling.
Since Go projects have a standard structure in GOPATH, 1. is already satisfied.
I can simulate 2 since creating a package in rpm requires collecting most of the same metadata (though it would be nice to check if the metadata declared by the packager agrees with what the software object thinks about itself)
I could probably write my own Gopkg.toml parser to do 3., but I understand the file format is not set in stone, and anyway it is much preferred to leave language-specific magic to language-specific tools.
As additional constraints, neither 2 nor 3 should change files or perform network calls: 2 because we really do not want to ship material altered from upstream in Fedora, and 3 because we want builds to be reproducible and auditable, and allowing network state to change the result is a no-go. I understand that means it won't be possible to sort git commits, so for Go packages that declare a git commit as a dep, we'll have to either force this specific commit or allow any commit in deps.
Therefore I'd like go dep to grow a pure dependency analysis mode with no downloads, no network access and no file writing.
If necessary I can run a preprocessor with file writing but not network access. However some projects won't like that we install the preprocessed files in addition to their stuff, so a pure passive command is preferred.
A dependency in rpm is a simple name [operator version] expression (operator and version being optional).
Assuming versions use the usual sanity rules, ie 2.10 > 2.1.1 > 2, etc.
More advanced syntax is available but 99% of software lives happily without it
(http://rpm.org/user_doc/boolean_dependencies.html, http://rpm.org/user_doc/dependencies.html)
Currently the packager declares dependencies manually in the golang namespace, so
github.com/matttproud/golang_protobuf_extensions/pbutil will provide the following
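For example (version string illustrative):

```
Provides: golang(github.com/matttproud/golang_protobuf_extensions/pbutil) = 1.0.0
```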
and a code that needs it will require
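For example:

```
Requires: golang(github.com/matttproud/golang_protobuf_extensions/pbutil)
```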
For the attributes that impact dependency selection but do not participate to canonical version sort I suppose that we'll have to use slightly different tokens.
For branches:
with consumers choosing the token they want to match on
For commits:
For commits on specific branches:
For freestyle version modifiers:
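I don't have a settled syntax for these; some purely illustrative shapes, to give the idea:

```
# purely illustrative, not an existing convention
Provides: golang(github.com/foo/bar)(branch=master)
Provides: golang(github.com/foo/bar)(commit=0123abcd)
Provides: golang(github.com/foo/bar)(branch=master)(commit=0123abcd)
Provides: golang(github.com/foo/bar)(rc1)
```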
As an additional complexity, we're moving unit tests (*_test.go) into specific packages, so it'd be nice if there was a way for the dependency generator to only emit deps for tests, or only emit deps without tests.
Would it be possible for go dep to emit something like this? It does not need to match this syntax exactly; as long as there is one line for each dep and the syntax is sane, I can rewrite it to rpm format in rpm routines.
¹
As far as I've seen most languages go through the following phases as they mature:
5.a or 6.a The language grows a shared library system. (Why rebuild the same code all the time once you have a system that agrees on what the "good" code version is? Plus, shipping the same binary objects multiple times sucks for embedded systems and microservices.)
The whole process can take more or less time depending on the language community's values, and on vested interests battling any official solution so they can sell their own management addons, provide consulting to build/install badly integrated code, or profit from bad integration to reinvent wheels because accessing existing wheels is too hard.