General design discussion #1
Comments
One bad thing about depending on extant and general solutions is that you sacrifice the capabilities of more specialized tools. As an example, text-based diff tools flag things like changes between spaces and tabs instead of the meaningful changes (which here amount to nothing). -- What this means is that non-consistent (that is, non-compilable) code may be submitted into the system. Now, since the objective is to provide a library/package system for Ada in particular rather than a general VCS, it makes sense to preclude erroneous/non-compilable libraries. |
a. Agree with github bitbucket for storage. |
@OneWingedShark: For sure it would be great to exploit any extra bonuses from using Ada. At this point I would try to leverage anything that's already in place for our needs. I think there are sites a-la github (or integrations with the latter) that check that a build compiles OK. I guess there will be some for unit testing too. @ohenley Wow, so many points :) For sure this is not the best discussion venue; it was meant as a kick-off point. I agree on categorizing things in the wiki, and specific issues can be created for finer points to be discussed. BTW, I don't know right now if I need to make you admins of the repo in order to edit. For the time being I guess I can add anyone interested in this project. b) Sure, it seems that Open Source Ada is right now pretty much affordable only via gnat, so I'd start that way too, without losing sight of future others. Since we should try to support several gnat versions (should we?), this might mean that the effort for flexibility has to be done either way. JSON, why not; it seems the best compromise between structure and readability. Now, is there any good Ada JSON lib? c) I think we should leverage whatever tools a compiler provides, or else have a means for the library to indicate how it has to be compiled, given a target compiler version. But, right now with no manpower/roadmap... I'd go for gprbuild; I wouldn't use anything else with gnat for a pure Ada library. Keeping the syntax Ada-like, if there's not already a good parser, seems a wasted effort. Also, I don't think Ada is natively well-suited (a shame) for dictionary-like syntax, which will probably be the most common need. I've heard a lot of good things about gnoga recently, but never tried to use it. Good to have someone familiar with it. e) Not sure I follow here. As I envision it, the one providing the lib has only to craft the package description file.
Initially the "supported versions" could be manually indicated wiki-style, or by trusting that file; ideally we should move to auto-verified compilation, but that requires dedicated hardware. Anyway, I feel I must read the references in my 1st post before suggesting anything seriously in this regard. g) Totally agreed. BTW: I think we shouldn't lose sight of the Ada packaging in Debian. They have years of experience doing something similar to this, although I feel the entry barrier is higher due to distribution rules. I see this project as a stepping stone for libs getting into distros more easily. Other musings that come to mind: Python, which is the biggest force I'd say right now. I have sometimes used both pip and easy_install, which seem to do something similar. Are any of you familiar with those? |
e) e.g. apt-get fetches packages for you and updates only what is necessary, but you need that executable on your machine. The dub (dlang apt-get) executable works the same way. Mr. Brukardt objected to having to learn and depend on a new local executable. I think someone proposed that everything could be done by the server. I don't get how dependencies on a user's local machine could be operated ONLY remotely. The only thing I see is the server bundling things in a zip file, with all the .gpr files automatically generated to build properly for a given machine/os combination. The user would have to manually download the archive and launch the build by hand. Maybe we could do it this way for a first step, because bundling, fetching and building are serial steps. So the first steps would stop at bundling an archive. Therefore we lay down the bricks for an online repo, and a first pass on automatically generating .gpr files against a set of different meta-infos, the ones attached to the different libs. What do you think? |
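A minimal sketch of the "server generates the .gpr files" step (Python here just for illustration; the `make_gpr` helper and the metadata shape are invented, not part of any existing tool):

```python
# Sketch: generate a GNAT project file for a library from its meta-info,
# so the server can bundle source + generated .gpr into one archive.
# The metadata keys ("name", "depends") are an assumed format.
def make_gpr(meta):
    withs = "\n".join(f'with "{dep}";' for dep in meta["depends"])
    name = meta["name"].capitalize()
    return (
        f"{withs}\n"
        f"project {name} is\n"
        f'   for Source_Dirs use ("src");\n'
        f'   for Object_Dir use "obj";\n'
        f"end {name};\n"
    )
```

The first milestone would then stop at producing the archive; fetching and building stay manual, as described above.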
Also, from what I understand, I think we should leverage @OneWingedShark's piece of work, APM. @OneWingedShark: |
Ah, I had missed @OneWingedShark's APM project. Going to check it now. @ohenley, I've addressed your apt-get post here: #2 , to start segregating issues into different discussions. |
APM is in the very early stages of development; there are a few changes I have on my TODO list, such as re-evaluating the syntax (particularly WRT partitions and defaults) and considering how things like multiple parameter-selected bodies should be represented/indicated. But comments, criticisms, and ideas would be more than welcome. |
A) Yes, it is meant to have Ada-like syntax. I think you are correct about the meta-info, but there is meant to be a separate build tool, so as to avoid a particular dependence. (Ideally a util could process the file and generate the proper build/make, or perhaps build directly.) B) Very early development. It's been on hold for a while due to both Byron and needing some "sounding boards" for things like features and capabilities needed. C) Given that it is still in development, real examples or man pages don't exist. The intent is to provide a standard way to describe a project (w/o system dependencies like directory-separators [or even the existence of directories] and a particular toolchain [like GNAT]). -- So, while not intended for building per se, it should be possible to use it w/ system-dependent tools to build a project. |
@mosteo said "For sure it would be great to exploit any extra bonuses from using Ada. At this point I would try to leverage anything that's already in place for our needs. I think there are sites a-la github" Well, the problem w/ github et al is precisely that they are a) generalized and b) [only] VCS. While we will need VCS, the main thrust isn't VCS but package-distribution… which means both dependency-management/indication and some way to define a project. I would actually recommend an internal representation/storage amenable to databases. PL/SQL uses DIANA, as did/do the Rational Ada compilers, and while DIANA may be old (and not updated for Ada 2005 or 2012) the idea behind it is pretty solid… w/ Ada 2012's containers and predicates it seems like it can/should be reevaluated/redesigned/reimplemented in/with/for Ada 2012. -- PLUS, storing the library/project in a DB allows us to take advantage of relationships, so tracking dependencies could be a simple query. (Though this would require a tool to compile the project to the IR for submitting.) |
Just to let you know, I started basic pages on the wiki. Please amend stuff, correct things etc. On my part I will try to express my vision in a top-down approach. I plan on trying to define some scopes of the Alire command line tool, e.g.: what I think should be said to the user, parameters and output. From there we can argue what is needed and what is not, refine, and slowly get/define the specs we will need to implement.
@OneWingedShark said "but package distribution… which means both dependency-management/indication and some way to define a project". Is it not the role of the meta-info file (e.g. a json file) provided by each lib repo to describe dependencies? e.g.: lib Z references lib Y, which references lib X. I think the processing of the many json files serves, in the end, the same purpose as the IR you are referring to. The client-side tool (alire) fetches those json files recursively and processes the dependency chain by automating the generation of gpr files, so the "end project (app or lib)" is properly configured to include and link to all the libs' source code. Finally, all code is retrieved by the tool from github/bitbucket/sourceforge and laid out on the client machine to be built. I don't see why we would need a db to resolve dependencies. Please explain or give an example if I am missing something. |
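The recursive processing of per-lib json files described above could look roughly like this (a sketch only; the manifest shape and `resolve` helper are assumptions, not actual alire code):

```python
import json

# Hypothetical per-library JSON manifests, as they might be fetched from
# each lib's repo; in reality these would be downloaded, not hard-coded.
MANIFESTS = {
    "libx": json.loads('{"depends": []}'),
    "liby": json.loads('{"depends": ["libx"]}'),
    "libz": json.loads('{"depends": ["liby"]}'),
}

def resolve(name, ordered=None):
    """Walk the dependency chain recursively, dependencies first, so the
    result is a valid order for generating and building the gpr files."""
    if ordered is None:
        ordered = []
    for dep in MANIFESTS[name]["depends"]:
        resolve(dep, ordered)
    if name not in ordered:
        ordered.append(name)
    return ordered

# resolve("libz") walks Z -> Y -> X and returns ["libx", "liby", "libz"]
```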
@ohenley - yes, a meta-info file COULD indicate dependencies. (But relational DBs were literally made for handling relationships.) By making dependencies a JSON object we would be denying ourselves certain automatic checks. For example, with a database we can ensure that a dependency actually exists (e.g. foreign keys). -- This is what I was referring to about general vs special-purpose tools. JSON is ill-suited for the same reason that a text file would be: because it is too general; the same with using extant VCSs like github and bitbucket: you can submit a non-working (non-compilable) program into such repositories. To reiterate, the point isn't that it CAN be done with extant (general) tooling, but that it can be done better with specialized tooling. -- Consider for a moment the implications of having things on the server-side as DB items: we could ensure not only valid (retrievable and compilable) entries into the DB, but also manage them w/ some degree of automation. (For example, a library losing a dependence or gaining one could be determined by the db.) Another advantage to storing info in the DB is that we could include things like licenses for search/filter directly, instead of having to search some other manifest. [Also note, I do not mean that only the meta-info should enjoy DB storage, but the source too (with proper processing).] |
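The foreign-key point can be made concrete with any relational DB; here is a minimal sketch using SQLite (the `library`/`dependency` schema is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE library (name TEXT PRIMARY KEY)")
conn.execute("""
    CREATE TABLE dependency (
        lib   TEXT NOT NULL REFERENCES library (name),
        needs TEXT NOT NULL REFERENCES library (name)
    )""")
conn.execute("INSERT INTO library VALUES ('libx')")
try:
    # 'liby' was never registered, so the database itself rejects the edge.
    conn.execute("INSERT INTO dependency VALUES ('libx', 'liby')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

With the edge rejected at insert time, a dangling dependency simply cannot exist in the index, which is the automatic check a JSON file alone does not give.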
A consideration for the more lazy of us: Simply package the sources for a library in an existing, platform-appropriate package format. I.e. use:
I admit that this doesn't solve distribution for older Microsoft Windows releases or for Mac OS X, but if one of the three major package formats works for these systems, then they can be supported without too much extra work. The benefits of doing things this way are:
The costs I can see are that:
|
|
1.a) It is a heavy plan. But Ada is a "heavy language" and really shines in the 'software engineering' realm. Much of the benefit of such a DB-based IR would actually be in the compiler/environment arena -- but almost no compiler would use the IR unless tools (like pkg management) use it. 1.b) We probably do have enough manpower; it seems coordination/cooperation (IMO) is where we have the big problem. 1.c) But do we have the honesty and drive to make temporary REALLY mean temporary?
|
Ok, but the main goal is to streamline the Ada pipeline, to get at coding faster, not the tool itself. I agree with you that a perfect tool would be ideal, but it would also be sad to see nothing happen because we planned too big for things that may not be mandatory... at first. If you can lay out a plan for that, it would be best. |
@ohenley said "If you can lay out a plan for that it would be best." I have a few notebooks of ideas / half-plans; I'll see about reviewing and transcribing them. I also have the DIANA RM/spec and the '86-ish paper describing AIR that I can see about scanning/OCRing, if needed (or there is interest in them). |
@ohenley - I've updated the post with OneGet for Win10. Thanks. Having created both | Why should we write anything in a scripting language, modern or not? Wouldn't it make most sense to write as much as possible in Ada? |
@sparre asked "Why should we write anything in a scripting language, modern or not? Wouldn't it make most sense to write as much as possible in Ada?" One of the nice things about Ada is that it is so portable, and high-level (compared to C-style langs). The C vs Ada study from, IIRC, Rational shows Ada as being cost-effective, but ALSO that [shell-]scripts were the least cost-effective WRT development and maintenance. In addition to that, there is the cost in both dependencies and expressiveness to consider. |
So much activity in so little time! Just some random thoughts: I suggested the scripting language only in case static executables are unfeasible, otherwise Ada is the choice, of course. Preferred options, from better-to-worse in my view:
I guess 2 and 3 can be in reverse order for some people's tastes. 2 requires the generation of architecture-specific files, but at first it could simply be linux amd64, and we'd see demand... Of course, from the project POV the least effort is 3. @sparre Couldn't we aim for the generation of distro packages as a future milestone of the tool? It seems easier (to me) to leave it for now at the source-tree stage. I say this with the goal of keeping the initial requisites for lib inclusion at a minimum. |
QUESTION: What is the scope of the project? On the comp.lang.ada thread Randy was under the impression that what was being discussed was the Linux util/idea of a "package manager", though there was some evidence that was not what was being discussed; here there are a few posts that seem to indicate that idea is what is being discussed. My reply on c.l.a.: If that is the case, I would recommend:
While something like integrated tests would be nice, I think they might be a bit beyond the scope of the project. (#5 and #6 are there to both reduce the complexity and the scope of such a repository.) So, what are the goals? -- I think this question must be answered before actual design. As per my opinion on c.l.a., I think the "package manager" (where 'library' below means the collection of sources in the package) should consist of three programs/"partitions" like so:
|
It is fine with me to put generation of platform specific packages at a later milestone. I will look into generating Debian packages (scratching my own itch first) from GNAT project files independently of the schedule for this project. |
Given that the original poster in cla referenced the Haskell and D tools, I guess that was the intention: have something similar for Ada. That is, that you can do: "alire install somelib" and it is ready in your computer (for some definition of ready). The main initial benefits, if generally adopted, would be:
If not adopted, well, the xkcd standards fallacy applies: https://xkcd.com/927/ since there are already several websites listing libs. The thing is, are enough open source enthusiasts actively using/contributing Ada libs to reach critical mass? I'm undecided, but then again being a few means less people to get to agree :) To me personally, the advantage I see in leveraging standard sites like this one is that code wouldn't go away as websites rise and fall, and at least you could have info on which gnat versions were able to compile it. I think the Ada community is in an unusual position when compared to other langs in the sense that industry is much more invested in it. C.f. the resistance in cla to a gnat-only tool, when I know not of any other "free" compiler. Ada is not driven by open source youngsters but by niche (debatable?) vendors, despite all kinds of users being convinced of its advantages. This is my subjective impression, of course. The pragmatist me says: gnat only until another open compiler is available, and let industry take care of itself. @OneWingedShark In regard to your last 3 points, I think we are basically on the same page. It's just that I'd try to have 2) be taken care of by standard repositories like this one, and 3) is still unclear on the details, in regard to compiler/lib versions consistency. I think this last point could merit its own thread. @sparre I understand. As long as we don't duplicate effort, all is for the greater good, right? And projects can converge in the future. |
mosteo wrote:
|
So I have summarized here my current understanding of how Dub and Hackage are doing it. I note that both of them use some server-side active infrastructure, which I'd like to avoid, at least at first. Based on that, here is what I propose for a minimal first version (I'll move it to the wiki if you see it fit). Each of these points could be settled in its own issue:
Things I'm unsure of at this point:
gprbuild project files already do lots of configuration management, so if we could leverage that... |
I really don't think the system should rely on GPRbuild; it is specific to GNAT, which would undermine the goal of being implementation-independent. Plus it's way too easy to make a GPR file that cannot survive (e.g.) changing environment variables via GPS. While it is true that GNAT is the only free Ada translator currently, there are projects underway to address that. (My own Byron project https://github.com/OneWingedShark/Byron for example.) …and RRSoftware's translator is about $100. It also seems to me that platform-specific/variant ought to be a property of the project itself, no? |
Per the points raised by @OneWingedShark I would say that a database-friendly IR would give such a project a significant leg up over the competition: library repositories like CPAN are riddled with dependency problems and all other manner of bothersome inconveniences. However, one would need a rather advanced lexer and parser in order to generate such an IR. I would say that the community should at least attempt to concurrently produce the parser as well as a system that can leverage current tools to address the immediate need. IMO, the system to address the immediate need could be built with abstraction layers that would prevent the leveraged and currently existing tools from becoming a hard-coded dependency: design interfaces that wrap the back-end repo tools (whether github or RDBMS-backed IR) in a way that allows gradual transition to the ultimate IR-based repository, while still giving a relatively quick solution to the community's need. FWIW, I'm not an Ada developer, but an interested observer who hopes to have the time to dive into Ada in fairly short order. If my points are so far off in the weeds as to be irrelevant to the topic at hand, this is likely the reason. |
@OneWingedShark Perhaps we can leverage each compiler's own facilities where they exist, at least in the beginning. I simply cannot bear the idea of replicating gprbuild's massive features. I'm no fan of the fragility of project files/bleeding-edge gnat versions, but as long as code enters the system in a coherent state I see no problem. Besides, tying in @CoherentLogic's comment, properly abstracted layers should prevent hardcoded dependencies? At most those would be unimplemented features :D I don't want to sound like I'm opposing ideas; I'm trying to balance effort/maintenance/short-term goals, and I may suffer from a certain shortsightedness (besides the filter/bias of my own knowledge) because I want this to turn into working code sooner rather than later ;-) But I'm in no hurry and I'm finding the discussions really interesting. On the gnat point I'm quite pessimistic, in the sense that I don't foresee this being used with any other compiler in the short term; and should we really do the work of RRSoftware? I think an abstraction layer is justified, and enough, at this stage. |
@mosteo - It certainly would be nice to leverage each translator's own facilities. However, I'm not sure that wouldn't be detrimental in the longer run. As noted, gpr files are rather fragile. (And I certainly don't mean to sound like we should work for RRSoftware for free.) One thing that could be a problem is how GNAT doesn't allow more than one compilation-unit per file. This means that technically-valid Ada could be rejected/unusable -- this would not be the case for an IR-based storage system, where the population of local storage would have to be done by the 'client'. (A similar problem occurs for file-based storage: case-sensitivity vs case-insensitivity; I have had issues w/ github where my casing use meant the file had to be renamed on a Linux machine.) -- The way I view it, a system that essentially forces these issues to be addressed (albeit tangentially) is worth the extra effort/complexity. |
gpr fragility would be solved the same way as code consistency, with proper version control, so that doesn't particularly bother me. And since each compiler has its way of ensuring consistent compilation, unless we want to duplicate that job for each compiler, I'd try to use what the compiler provides. What is the alternative? I can't really see a practical one. As for the IR, you're right. That's a transformation that can be done during checkout. Since it's likely that most potential users are on gnat, we might as well use its convention for code storage and have a null filter for it. But I certainly would like to hear experiences from people maintaining Ada code for several compilers. In the end, it seems we can enforce as much as we want at submit/checkout by doing it all through the client tool. |
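As a sketch of such a checkout-time filter (the `gnat_file_name` helper is hypothetical, but the naming rule is GNAT's documented default: unit name lowercased, dots replaced by hyphens, `.ads` for specs and `.adb` for bodies):

```python
# Checkout-time "filter": map an Ada unit name to GNAT's default file
# naming convention. Other compilers would plug in their own mapping;
# for code already stored in GNAT convention this is the null filter.
def gnat_file_name(unit, kind="spec"):
    ext = "ads" if kind == "spec" else "adb"
    return unit.lower().replace(".", "-") + "." + ext

# gnat_file_name("Foo.Bar") yields "foo-bar.ads"
```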
@mosteo - It seems to me that doing it through submit/checkout, while seeming to offer the same enforcement options, would in reality be more work, as you would have to link in multiple systems. Yes, using an IR as described essentially requires the front end of a compiler. That would indeed be "duplicated effort", but the proposed "ensure consistency by checked builds" is also duplicated effort. -- As an example, consider continuous integration systems (like Jenkins) where every new commit is compiled, with build errors reported. This is completely avoided where the system prevents non-compilable code from being stored in the first place. As to the development of such a DB-friendly IR, I have been planning to use one since even before the Byron project. And, while I haven't yet done any implementation work on it, I do want to use it in Byron. (Along w/ the APM project-files.) -- I had actually planned to do the IR as a sub-project of the compiler, finish/restart work on APM, then use the IR and APM in an Ada package-manager. |
@OneWingedShark, I guess we have a different idea of what means more work :-) but I understand better now what you're proposing, thanks. I'll follow your APM/Byron projects with interest. Certainly a purely non-commercial compiler would bring some variety! |
A quick point regarding Haskell packaging as inspiration: momentum in the community is steering towards stackage instead of using Hackage directly. The stackage team curates a list of package combinations known to build correctly together in a managed environment, avoiding breakage when APIs change. |
@kerscher: good to know. I think it says a lot about what we should aim at 'straight off the bat'. |
Someone pointed to the nix project on reddit: https://nixos.org/nix/ |
@OneWingedShark: super. |
So… any news on the Design? Or progress in-general. |
I'm having a workload peak these days... Afraid I won't come back to this for a while :S |
Hey! Sorry I did not give news sooner. It is super busy on all fronts for me too. |
Has it been a while yet? |
It's been, but I'm in the middle of my teaching semester, so still kind of overloaded. Able to do sporadic general bantering, I guess. I have given occasional thought to our previous discussions. One point that I was curious about: IIRC you (@OneWingedShark ) advocated some kind of intermediate symbolic storage of libs. Would that mean reconstructing source code from that? Storing both? |
That depends on the IR. DIANA made an effort to be very good about regenerating source, including comments, IIRC. AIR, on the other hand, was unconcerned with the [exact/near-exact] restoration of source-text. IMO, we should think of the text as a serialization of the program [as an object] and try to make that our basis, rather than (e.g.) trying to fit text-based [and file-based] tooling into place and "working around" the inherent deficiencies that will bring. (e.g. 'patching' a diff so that changes of line-endings aren't recorded, retroactive/'smart' indentation, etc., due to the tool operating on text.) |
PS -- In my experience, styles are often used to cover up design flaws, like Yoda conditionals in C-style languages. |
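A toy illustration of the "text as serialization" view (the IR types and printer are invented for the example): only structure is stored, and details like indentation or line endings belong to the printer, so they can never show up in a diff:

```python
from dataclasses import dataclass

# Toy IR: the program is the stored object; source text is merely one
# serialization of it, produced on demand by a printer.
@dataclass
class Assign:
    target: str
    value: str

@dataclass
class Unit:
    name: str
    body: list

def regenerate(unit, indent="   "):
    # Indentation and line endings are parameters of the printer, not
    # properties of the stored program, so IR-level comparisons never
    # record that kind of textual churn.
    lines = [f"procedure {unit.name} is", "begin"]
    for stmt in unit.body:
        lines.append(f"{indent}{stmt.target} := {stmt.value};")
    lines.append(f"end {unit.name};")
    return "\n".join(lines)
```

For example, `regenerate(Unit("Demo", [Assign("X", "1")]))` emits a normalized Ada-like procedure body regardless of how the original text was laid out.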
I'm trying to figure out the difference with using version control. Could we consider a particular commit a program/library object, as you say? |
I'm not sure I follow what you mean here.
I don't see why not. |
No matter, I'm unsure too at this point. I guess I'm trying to fit the IR concepts into my code-centric worldview. |
Well, in one sense you could think of the source-code as a serialization of the IR (Especially if, like DIANA, source-regeneration was a priority.) -- OTOH, you can think of the relationship like calculus's integration and derivative, such that integration is sometimes termed anti-derivative. (So |
Is this a dead project? |
Hi there, as it happens I've been recently working on a prototype with my ideas to have something more tangible over which to discuss. I'll come back in a few weeks (I hope!) |
Have you taken a look at the work of John Marino, ravenports?
http://www.ravenports.com
The main tool is ravenadm, fully written in Ada.
https://github.com/jrmarino/ravenadm
I used it to build/install many Ada packages on a FreeBSD box and it works like a charm.
|
Alas, no, but it looks gorgeous. I'm going to try the Linux port right away. |
Awesome! |
As I anticipated, I've gathered my code at https://github.com/alire-project/ Since I needed several repositories, it felt appropriate to make an umbrella project for it. The alr repository is where the only (and scarce) documentation is. In the end I went overboard with the pure-Ada approach (or pure-GNAT, really, because it relies on GPR files), so on the client side an Ada specification file is used to represent project dependencies, and more generally the database of available projects is also simply specification files. Incidentally, I looked at ravenports. It's very cool, but my impression after playing with it is that it is at a whole other level: it requires root permissions and is capable of installing anything, not only Ada. The BSD ports experience of the author is evident. For myself, in alr I've drawn the line for now at what can be purely compiled with project files (yes, it's a big limitation), with the exception that native packages can be used, if available (e.g., prepackaged GtkAda). My solution is user-space only, more in the vein of python virtualenvs. For sure there's plenty to like and hate in my approach, so I look forward to your comments. |
I don't like the manifests(?) written in Ada. The syntax looks too complicated for the purpose. |
I totally agree with you mosteo:
|
Manifests(?) written in Ada, why not? IMO the syntax looks self-explanatory; after all, it is Ada syntax. Maybe I do not understand the implications correctly, but by using a .ads: a) you get "free" parsing. |
Evident needs: indexing, storing, client tool for deployment.
Suggestions have been made to study how it has been done for Haskell [1] and D [2]
[1] http://hackage.haskell.org/
[2] http://code.dlang.org/
Without knowing how those have been done, I'd solve storage by pointing to concrete commits in open source repositories like github and bitbucket.
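For instance, an index entry could be as small as a repository URL plus a pinned commit (field names and the URL below are made up):

```python
# Hypothetical index entry: the "storage" is just a pointer to an exact,
# immutable commit in an existing public repository.
entry = {
    "name": "somelib",
    "origin": "https://github.com/example/somelib.git",
    "commit": "a" * 40,  # stands in for a full 40-hex-digit git commit hash
}

def is_pinned(e):
    """A release must pin a full commit hash, never a branch or tag,
    so the fetched sources can never drift after publication."""
    h = e["commit"]
    return len(h) == 40 and all(c in "0123456789abcdef" for c in h)
```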
I guess ideas agreed upon could be transferred to the wiki part of the project.