
Debian Stretch package #377

Closed
brungo opened this issue Jun 30, 2017 · 30 comments

brungo commented Jun 30, 2017

Please add the .deb package for Debian Stretch.

Thanks!

Paebbels added this to the v0.34 milestone Jul 1, 2017
Paebbels added the 'CI: Travis-CI Continuous Integration' label Jul 1, 2017
eine (Collaborator) commented Jul 1, 2017

Hi @brungo, have you tried executing any of the available binary packages (not deb)?

https://github.com/tgingold/ghdl/releases/tag/2017-03-01

The one for Ubuntu 12.04 is built with these dependencies and versions:

https://github.com/1138-4EB/ghdl-tools/blob/img-ubuntu1204-llvm/Dockerfile

It should then work on your Debian Stretch as long as you install these three packages (for the LLVM version): libgnat-6, g++-6 and zlib1g-dev. Note that these are runtime dependencies and Debian Stretch package names. On top of that, you may have to add ln -s /usr/bin/gcc-6 /bin/gcc or set a specific envvar so that ghdl finds a compiler.
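
For reference, a minimal sketch of those steps on a fresh Stretch install (the symlink target is just one option; adapt as needed):

sudo apt-get update
sudo apt-get install libgnat-6 g++-6 zlib1g-dev   # runtime deps for the LLVM build
sudo ln -s /usr/bin/gcc-6 /bin/gcc                # so ghdl finds a compiler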

Moreover, I am adding stretch and sid to the build matrix: https://github.com/1138-4EB/ghdl/tree/updatebuildimgs/dist/linux/docker This will only ensure that another binary release can be added. You'd still have to download it and install the runtime dependencies mentioned above.

According to the tracker at debian.org, ghdl is no longer an official package: https://tracker.debian.org/pkg/ghdl This means that there is no maintainer in charge of making the deb package. However, there are unofficial ghdl-0.31 and ghdl-0.33 packages: https://sourceforge.net/projects/ghdl-updates/files/Builds/

I assume that it is quite straightforward to adapt those to new binary releases. Indeed, since releases are created in a CI environment, it is 'cheap' to add packaging to the artifact deployment pipeline. However, I have never done it. Are you willing to help with this specific task? I'll do the scripting and integration in the CI; I just need to know what the difference is between those binary releases and a deb package.

Also note that, right now, all the builds are for x86_64 (we do not cross-compile at all). I don't know if Debian accepts packages which are available for a single platform. Did you mean to download it from here, or would you like to have it available in the official repos? Nevertheless, this is not a technical limitation, since the builds in the CI can be extended to support cross-compilation. No one has focused on that, though.

Paebbels (Member) commented Jul 1, 2017

Debian dropped GHDL support because GHDL contains non-free content: the IEEE libraries. The IEEE package license is not considered free by Debian. But we know a person who helps out building *.deb packages when a new version is ready for assembly.

The IEEE P1076 working group is working with IEEE-SA to make the VHDL packages available as open source under the Apache License 2.0. The working group applied to be a pilot project. If the packages end up under a real open source license, GHDL can bundle them (incl. VHDL-2017 updates and bug fixes) and can become a part of Debian again.

eine (Collaborator) commented Jul 1, 2017

That's great news! Then, on my side, I just have to add Debian platforms to the CI build matrix. Maybe not in the daily build set, but for releases. Is it OK for it to be x86_64 only?

During the rewrite, I have split the 'build' and 'test' steps into separate containers, based on the same image but with different installed dependencies. This allows us to explicitly state the minimum runtime dependencies on each platform. For example, for Debian Stretch:

https://github.com/1138-4EB/ghdl/blob/updatebuildimgs/dist/linux/docker/builder--debian--stretch-slim
https://github.com/1138-4EB/ghdl/blob/updatebuildimgs/dist/linux/docker/runner--debian--stretch-slim

These are the installed tools in the builder:

GNAT 6.3.0
gcc (Debian 6.3.0-18) 6.3.0 20170516
clang version 3.8.1-24 (tags/RELEASE_381/final)
GNU Make 4.1
git version 2.11.0

I am trying to add another platform, based on Debian Sid with gcc-7. Should we consider a Jessie-based one with gcc-4.9?

Also, @tgingold, I had to add gnat-6 (60MB) instead of libgnat-6 (4MB) as a runtime dependency in order to run the full test suite, because gnatmake is used in https://github.com/tgingold/ghdl/blob/master/testsuite/testsuite.sh#L36 However, I feel that most users will be OK with libgnat-6. So I can either install gnat only to run vests, or we can find a workaround to avoid gnatmake in the testsuite. What do you think?

PS: this is part of the 'new' documentation, with the example provided by dossmatik as a 'runnable demo'.

tgingold (Member) commented Jul 2, 2017 via email

tgingold (Member) commented:
There is now a build on stretch in travis-ci that generates a tgz. The next step is to create a .deb.

Paebbels modified the milestones: v0.35-b1, v0.34 Sep 5, 2017
eine (Collaborator) commented Dec 8, 2017

I can no longer reproduce the issue I reported in a previous comment about gnat and libgnat, related to get_entries. @tgingold, do you remember any modification you introduced in the last 4-5 months which could have affected this?

About .deb packaging, @Paebbels mentioned that there is someone helping out in building .deb packages when a new version is ready for assembly. How is this going? I saw that both of you cloned a repo about it (jorisvr/ghdl_debian), but the last commit (by @tgingold) was in 2016. I also saw that you created a GPL variant of the package on stretch, but I don't know what the differences are. Anyway, I suppose that this is the build @tgingold referred to. Do we have to build or deploy anything in travis to have the 'latest' GHDL in Debian again? Can we apply the same packaging procedure to the Ubuntu tarballs?

tgingold (Member) commented Dec 8, 2017 via email

eine (Collaborator) commented Dec 9, 2017

[@tgingold] Yes, get_entries is not used anymore. It is done directly in the shell script.

Awesome.

[@tgingold] creating a .deb package so that it could be part of debian. This is the GPL variant. It won't support vhdl-2008 right now, but will be easily available.

Is there any issue/discussion where I can see the exact differences between stretch-mcode and stretch-mcode-gpl, besides vhdl-2008 support? Your comment, "but will be easily available", leads me to think that some of the sources required to fully support vhdl-2008 are not fully GPL-compliant, that is, ieee2008, vital2000 and/or mentor. However, I don't quite get why GHDL needs those to support it.

Does it make any sense to create GPL variants for other distributions? I mean the tarballs here, not the packages.

[@tgingold] creating a .deb package that is published on github. This is mainly a packaging issue. Not sure if this is very interesting, as then someone will want to have the ubuntu package, or rpm or ...

Does Debian use pre-built binaries, or are all the packages compiled from sources? I think that the easiest path would be to create a deb package from the already available tarball plus some description file. That is straightforward and can be easily extended to other platforms. If that is not possible, i.e. build instructions must be described in the package, extending it will be harder.
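
A minimal sketch of that 'tarball plus description file' idea with dpkg-deb (package name, version, dependencies and tarball layout are illustrative assumptions, not a settled procedure):

mkdir -p ghdl-pkg/DEBIAN ghdl-pkg/usr/local
tar -xzf mytarball.tgz -C ghdl-pkg/usr/local      # assumes the tarball holds bin/, lib/, include/
cat > ghdl-pkg/DEBIAN/control <<EOF
Package: ghdl
Version: 0.35
Architecture: amd64
Maintainer: TBD
Depends: libgnat-6, zlib1g
Description: VHDL simulator
EOF
fakeroot dpkg-deb --build ghdl-pkg ghdl_0.35_amd64.deb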

Anyway, I agree with you. We should not be publishing deb, rpm... packages on github. Yet, I think there is no problem in supporting rpm too, as long as Fedora accepts it as Debian does. The same applies to Alpine, although it is not an option now.

Note that I haven't dived into it; just thinking aloud for you to tell me what NOT to do.

Paebbels modified the milestones: v0.35, v0.36 Dec 14, 2017
CrazyCasta commented:

If you're just looking for how to build, I'm working on some Vagrantfiles to do just that. I've got mcode working here: https://github.com/CrazyCasta/ghdl-vagrant-builders . I'll also be looking to possibly package them into .deb packages at some point (to make setting things up faster).

eine (Collaborator) commented Dec 14, 2017

Hi @CrazyCasta! Building GHDL is actually automated in Travis CI and AppVeyor. Indeed, we use docker containers to execute GNU/Linux builds (see dist/linux/docker/), which is functionally equivalent to Vagrantfiles. The next proposed step is to make ready-to-use images available for anyone to use (see #489).

However, on the one hand there is no script available to build with the gcc backend right now (see .travis.yml). On the other hand, packaging deb or rpm is not included yet. This issue is about the latter, so don't hesitate to share any step forward here.

CrazyCasta commented:

Ok, I didn't mean to replace what you've got but rather to provide simple straightforward shell commands that will lead to ghdl being compiled and installed on debian stretch. Not exactly .deb files, but closer. I'm not sure if brungo is interested in compiling from scratch or not, but I found the llvm and gcc backends to be hard to make according to the instructions given. Not sure why llvm was so hard as it seems easy now. GCC on the other hand has a lot of steps and the documentation doesn't provide a clear working example (which wouldn't necessarily be appropriate as some of the steps like packages are debian specific).

On a separate note, I updated that repo with Vagrantfiles for the gcc and llvm backends.

CrazyCasta commented:

@1138-4eb also, is there an interest in making a build for the gcc backend? I could modify the buildtest script if there is. The reason I ask is because it takes forever to build the gcc backend and perhaps it was not included for that reason.

CrazyCasta commented Dec 15, 2017

Ok, final post I hope. I'm now realizing the Travis CI builds are not just for testing, they're also for release. So to change my question @1138-4eb , if I added in gcc backend functionality to the build script, would that be useful, or is someone else already working on that?

@tgingold also, now that I'm reading this better, two questions:

  1. If I were to provide packaging for a number of operating systems (at least debian, ubuntu, red hat/centos and gentoo), would there be interest in hosting them somewhere official? I'm speaking of non-GPL packages that include all the IEEE and whatever other license-problematic stuff there is.

  2. As far as splitting the non-GPL stuff out, do you have a plan that I could follow? Do you have a way of providing a source-only package (that could be compiled by any binary GHDL)? I think the main reason to provide packaging is to avoid all the "what packages and versions do I need to make this thing run", so if the non-GPL code is source-only and quick to install, the instructions could be something like "get ghdl for your system, download this other thing and run this command as root" (assuming you want the non-GPL stuff installed system-wide). If you don't have a plan for the non-GPL stuff at the moment, I would certainly like to at least see some "download this .deb/.rpm/etc and install it" type solutions, which I'd be happy to help make happen.

eine (Collaborator) commented Dec 15, 2017

Ok, I didn't mean to replace what you've got but rather to provide simple straightforward shell commands that will lead to ghdl being compiled and installed on debian stretch.

I didn't mean to sound rude; sorry if I did. I just wanted to point out that, up to your previous message, we are/were at the same point. I mean, there are currently ready-to-use scripts both in the repo and in the docs for mcode and llvm:

./dist/linux/buildtest.sh --build mcode --pkg mytarball.tgz

which is an alternative to the "handmade" process in the docs: mcode Backend on GNU/Linux with GCC/GNAT. The difference is that buildtest.sh will also run the testsuites.

For llvm on Fedora it would be (LLVM Backend on GNU/Linux with GCC/GNAT):

./dist/linux/buildtest.sh --build llvm --pkg mytarball.tgz

On debian/ubuntu, however, the version needs to be defined (see buildtest.sh#L72-L87). For example, for 3.8 it is:

./dist/linux/buildtest.sh --build llvm-3.8 --pkg mytarball.tgz

This is actually what is executed in the travis ci environment to produce the artifacts, which are then made available through releases. See .travis.yml#L28-L32 and travis-ci.sh#L50-L51.

These scripts are not mentioned in the docs because we consider them part of the CI build system. Furthermore, we somewhat agree that it is not good to just execute scripts without knowing what's happening. Thus, the docs describe manual steps to make sure that users understand (or need to understand) what they are doing. However, it might be worth adding a note to building/Building so that more experienced users can just read it and run.
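
For instance, such a note could boil down to a one-liner like this (the ghdl/build image name assumes the images from #489 get published; until then you would build the image yourself):

docker run --rm -t -v $(pwd):/work -w /work ghdl/build:stretch-mcode ./dist/linux/buildtest.sh --build mcode --pkg mytarball.tgz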

Indeed, I agree with you that the docs lack platform-specific details, such as appending dependency versions when installing packages through apt-get. Anyone would notice when searching for them, but even knowing that debian/ubuntu requires this might take some time. So, in order to make it more straightforward for inexperienced users, I'd like to add this information in a non-invasive way, i.e., we don't want the docs full of platform-specific details.

Docker is my approach to this. Check 'Use cases for the actually available images' in #477, and see #489. After/if #489 is merged, we can add something like this to the docs:

Q: What are the minimum dependencies I need to install on my platform to build GHDL from sources?
A: Check the file which most closely relates to your platform from here. For example, to build GHDL with mcode backend on Debian stretch, you need docker/build/stretch#L6-L7

Q: What are the minimum dependencies I need to install on my platform to run/execute/use GHDL?
A: Check the file which most closely relates to your platform from here. For example, to execute GHDL built with llvm backend (version 3.9) on Ubuntu 16.04, you need docker/run/ubuntu16#L14-L15

As a result, a user will be able to look up the exact dependency list for their platform at a glance.

Note that, even though the user is not required to use docker, the dockerfiles are a great place to show platform-specific details. Furthermore, we know that they are correct, because they are used in the CI environment, so we can easily see if the name/version of some library breaks the build.

but I found the llvm and gcc backends to be hard to make according to the instructions given

I must admit I am partially cheating here. We've had an issue in the last couple of days: we removed BUILD.txt, but the docs are not showing all the content. See #490. Until #491 is merged, you can find the updated docs at ghdl-devfork.readthedocs.io. Considering this, do you still find the instructions unclear? Is it only because of platform-specific details (as I said above), or did you expect anything more?

GCC on the other hand has a lot of steps and the documentation doesn't provide a clear working example (which wouldn't necessarily be appropriate as some of the steps like packages are debian specific).

About the docs, see GCC Backend on GNU/Linux with GCC/GNAT.

@1138-4eb also, is there an interest in making a build for the gcc backend? I could modify the buildtest script if there is. The reason I ask is because it takes forever to build the gcc backend and perhaps it was not included for that reason.

Indeed, I've been working on this for the last few hours. See #149 and #479. It's almost ready. I'm working on this branch.

It takes 20min in travis, so it is not that much. These tests might not be run every day, but just for releases. Note that enabling/disabling a test is as easy as (un)commenting a line in .travis.yml.

Note that, following the concepts explained above, the modifications I made to buildtest.sh are platform-agnostic. Platform-specific details are described in the corresponding dockerfiles (build-fedora26+gcc and build-stretch+gcc).

On a separate note, I updated that repo with Vagrantfiles for the gcc and llvm backends.

I wonder if you can/are willing to use buildtest.sh (or build.sh and test.sh if #489 is merged), instead of handwriting the steps in each Vagrantfile. In the end, a Vagrant box is not very different from a docker container, so there should be no conflict.

Moreover, I am not really sure about the advantages/disadvantages of Vagrant compared to docker. I'd say that spinning up a Vagrant box is much heavier than a docker image, especially on linux. On Windows and macOS, however, docker is executed inside a VirtualBox machine, so it should be pretty much the same. Yet, Vagrant will run a separate machine for each instance, while all docker containers are executed in a single machine. May I ask why you use Vagrant and not docker? Just out of curiosity. I think Vagrant can be executed on travis, so there might be a good reason to add at least one test which uses it.

eine (Collaborator) commented Dec 15, 2017

Ok, final post I hope. I'm now realizing the Travis CI builds are not just for testing, they're also for release. So to change my question, if I added in gcc backend functionality to the build script, would that be useful, or is someone else already working on that?

Sorry, I saw this after I sent the previous comment. As stated above, I am trying to do it. However, if you want to collaborate, you can open a PR against my fork, wait until I open it here, or make any comment you wish. Indeed, the modification I made to buildtest is just a quick one; e.g., it should be parameterized to select the gcc version you want to use.

I have no problem letting you finish it, since I will be busy in the following days. However, I'd like to discuss Vagrant/docker and where to put platform-specific details first.

The same applies to generating deb/rpm packages. I can help to have them integrated in travis. Even if they are not published for any reason, I think it is not bad to have the scripts as a reference and to test them from time to time.

CrazyCasta commented:

I didn't mean to sound rude; sorry if I did. I just wanted to point out that, up to your previous message, we are/were at the same point. I mean, there are currently ready-to-use scripts both in the repo and in the docs for mcode and llvm:

You didn't sound rude at all. I was just trying to be as pragmatic as possible with respect to brungo's question, and wasn't aware of the buildtest.sh script.

I must admit I am partially cheating here. We've had an issue in the last couple of days: we removed BUILD.txt, but the docs are not showing all the content. See #490. Until #491 is merged, you can find the updated docs at ghdl-devfork.readthedocs.io. Considering this, do you still find the instructions unclear? Is it only because of platform-specific details (as I said above), or did you expect anything more?

I was actually using the old documentation (this was a few weeks or so ago). Like I said, I really don't know why llvm was a problem for me, as upon trying it this time it was very straightforward. As far as the gcc backend, I think I probably would want to see a directory structure of the gcc sources, the ghdl repo, the build directory inside it and the gcc-objs directory inside that. I'm not exactly sure, but I think that might be where I got lost. Also, I was looking at this page, I think: http://ghdl-devfork.readthedocs.io/en/doc-gcc/building/gcc/index.html

Moreover, I am not really sure about the advantages/disadvantages of Vagrant compared to docker. I'd say that spinning up a Vagrant box is much heavier than a docker image, especially on linux. On Windows and macOS, however, docker is executed inside a VirtualBox machine, so it should be pretty much the same. Yet, Vagrant will run a separate machine for each instance, while all docker containers are executed in a single machine. May I ask why you use Vagrant and not docker? Just out of curiosity. I think Vagrant can be executed on travis, so there might be a good reason to add at least one test which uses it.

The Vagrant setup is really just for me. I'm only sharing it because I found the documentation to be a tad confusing and this is a concrete test case that I know works (and that hopefully anyone can get to work with a simple vagrant up). Also, I use Vagrant as a sort of 'blow stuff up and start clean' tool until I get things working. With your method I'd be afraid of leaving junk in the repo and therefore not being clean after I break something. However, your method does look very nice for building things once you've figured it all out.

As for why I prefer Vagrant to Docker, let me explain my methodologies:

Vagrant:

  1. Write Vagrantfile
  2. vagrant up
  3. Boom, it fails.
  4. vagrant ssh
  5. Figure out what to do, modify Vagrantfile
  6. exit, vagrant halt && vagrant destroy
  7. Go to 2

Docker:

  1. Write Dockerfile
  2. docker build -t image_name .
  3. Boom, it fails
  4. docker run -dit --name container_name image_name
  5. docker exec -it container_name bash
  6. Figure out what to do, modify Dockerfile
  7. exit, docker stop container_name && docker rm container_name && docker rmi image_name
  8. Go to 2

More things to remember in Docker (image_name, container_name, any other stuff like ports to expose that need to go on the docker run line) and enough complexity that I don't actually know if I got that all right (I'd have to try it/look at the docs).

As far as the Docker vs Vagrant issue, I agree that Vagrant is a bit more heavy-weight, but it seems to act more like I would expect a properly installed OS to act. It's possible that I just don't know how to use Docker properly, in which case please correct any of my problems. For me, if I want to do anything in Docker I do sudo docker exec -it <container> bash. This likely requires that I type in my password. Then once inside, the keymappings seem a bit off. When I try to use programs like vim the keymaps are very much off. Also, other things like bind mounts and the like are things I have to type on the command line, whereas vagrant is just "vagrant up". Now some of this would be fixed if I just got familiar with docker stacks/services/w/e, but some of it I feel is just the way docker works.

As for what you're saying: if it's in the docs that you can run these x,y,z docker commands and get ghdl installed in some docker, that would be great. Just don't leave any specifics out (don't use <version x> stuff). I.e. what I want is some lines that I can just basically copy paste (maybe deleting the $ at the beginning) and have it work. I say this because ghdl has a wonderfully long history, but with that seems to come a fair bit of legacy documentation (though I think it's getting much better) that suggests things that are a bit out of date. It can be aggravating to bang your head against some documentation only to find out it's out of date or I'm understanding it wrong, which is where the "copy paste and it either works or doesn't" comes in. (Or if I'm told to run docker build -t ghdl/build:ubuntu-llvm-3.8 --target llvm-3.8 - < Dockerfile and 5 years in the future we're on llvm 7.x I can go, hmmm, maybe this is out of date).

Sorry, I saw this after I sent the previous comment. As stated above, I am trying to do it. However, if you want to collaborate, you can open a PR against my fork, wait until I open it here, or make any comment you wish. Indeed, the modification I made to buildtest is just a quick one; e.g., it should be parameterized to select the gcc version you want to use.

I was actually just looking to do this if no one else was doing it. I probably couldn't get to it for another week or so anyway. What I would actually be most interested in is automating the process of making packages, as I think that's the way most people want to consume this. For instance, I only got into the building side of things because I wanted to install ghdl with an llvm or gcc backend on debian (actually I'd really like it on gentoo, but that's a whole other story).

Summary

My final goal would be to have ghdl running natively on my machine (gentoo). In particular I want this done with a proper package manager (as opposed to a binary tar file). I suspect that many others would like the same thing (run natively on their OS through the package system). I'm only using Vagrant as a sandbox environment to blow stuff up in repeatedly until I figure out how to get stuff built. I'd also like to support other OSes like debian, ubuntu, redhat, centos, etc as a way of popularizing GHDL.

tgingold (Member) commented Dec 15, 2017 via email

eine (Collaborator) commented Dec 15, 2017

I'll reply from bottom to top, to keep the content most related to this issue on top:

[@CrazyCasta] My final goal would be to have ghdl running natively on my machine (gentoo). In particular I want this done with a proper package manager (as opposed to a binary tar file). I suspect that many others would like the same thing (run natively on their OS through the package system).

We are all in the same boat then. However, it is easier said than done XD. The main limitation we have is that Travis is based on Ubuntu. We introduced docker as a workaround, which lets us work with debian, ubuntu and fedora. AFAIK centos, arch and gentoo are available too, although we have not used them yet. Indeed, docker, Vagrant, virtualbox... in the end the important point is that we can now build binaries for many platforms in the CI environment (travis).

We need to focus on 'packaging' now. As commented in #477, right now the tarballs are generated 'raw', i.e., there is no BUILD, LICENSE, COPYING..., which is quite 'ugly'. However, I know very little about package managers. On the one hand, I believe that some platforms will allow us to provide the raw tarball with some companion metainformation, so we just need to add that info. On the other hand, AUR packages (and I think apk too) provide sources and build instructions, so that the user can modify the build procedure 'on the fly'. Therefore, we need to target each package manager separately, which is quite arduous.

[@CrazyCasta] I'd also like to support other OSes like debian, ubuntu, redhat, centos, etc as a way of popularizing GHDL.

Do we really need to 'support' each distro or can we focus on dependency versions? I.e., the GHDL mcode tarball built in stretch should be valid for stretch, buster, ubuntu14, ubuntu16, etc., shouldn't it? The same applies to fedora, centos and redhat.

I mean, the worst case is to require a separate build for each distro, backend and version. But we can try to reduce that by building fewer tarballs than the number of packages we are going to generate.

[@tgingold] Yes, but it would be much much better if you provide the scripts to automatically build them. Bonus point if this is integrated in travis-CI, so that for each new version they are made available.

The concept above is the motivation behind the inclusion of the 'Pack artifacts' stage in #477.

[@tgingold] The simplest plan would be to provide two different versions: the normal one and the pure gpl. It would be also possible to have the non-GPL code as a sub-package (like you told), but nothing was done for that.

Once we have a GPL package for a platform, I think it is straightforward to copy-extend or copy-edit it to provide the non-GPL one on its own or as a subpackage. In the end, the hardest part is to get the first package introduced into a distro.

[@CrazyCasta] I was actually just looking to do this if no one else was doing it. I probably couldn't get to it for another week or so anyway. What I would actually be most interested in is automating the process of making packages, as I think that's the way most people want to consume this. For instance, I only got into the building side of things because I wanted to install ghdl with an llvm or gcc backend on debian (actually I'd really like it on gentoo, but that's a whole other story).

That's great. I will focus on building with multiple backends on different platforms in order to get a raw tarball. Then we can make it fit (either the tarball or the build procedure) into the packages, whenever you (or anyone else) have time to do it.


[@CrazyCasta] As far as the gcc backend, I think I probably would want to see a directory structure of the gcc sources, the ghdl repo, the build directory inside it and the gcc-objs directory inside that. I'm not exactly sure, but I think that might be where I got lost. Also, I was looking at this page, I think: http://ghdl-devfork.readthedocs.io/en/doc-gcc/building/gcc/index.html

I am currently using this structure: https://github.com/1138-4EB/ghdl/blob/gcc-in-docker/dist/linux/buildtest.sh#L92-L134

  • ghdl_repo/
    • build-gcc/
      • gcc-srcs/
      • (run configure and copy-sources here)
      • gcc-objs/
        • (run configure, make and make install here)
      • (run make ghdllib here)
    • install-gcc/ (this is the prefix; everything is installed here)

[@CrazyCasta] The Vagrant setup is really just for me. I'm only sharing it because I found the documentation to be a tad confusing and this is a concrete test case that I know works (and that hopefully anyone can get to work with a simple vagrant up). Also, I use Vagrant as a sort of 'blow stuff up and start clean' tool until I get things working. With your method I'd be afraid of leaving junk in the repo and therefore not being clean after I break something.

There is no such thing as 'my method'. The command I put above is just a direct replacement for running buildtest.sh locally, which is 'the traditional way', i.e., it leaves some 'junk' in the repo. But that's only one of the use cases for docker. Indeed, part of the 'complexity' of docker is due to its versatility.

Furthermore, for the daemon mode, you can have a docker-compose file. Indeed, this is what you should compare to Vagrant, because it is the one that lets you do docker-compose up -d, docker-compose down and docker-compose rm.
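
For example, a minimal sketch of such a compose file (service name and image are illustrative):

cat > docker-compose.yml <<EOF
version: '3'
services:
  ghdl:
    image: debian:stretch-slim
    volumes:
      - .:/work
    working_dir: /work
    tty: true
EOF
docker-compose up -d            # roughly 'vagrant up'
docker-compose exec ghdl bash   # roughly 'vagrant ssh'
docker-compose down             # roughly 'vagrant halt' + 'vagrant destroy'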

[@CrazyCasta] However, your method does look very nice for building things once you've figured it all out.

When it comes to 'figuring it out', I usually run a container in interactive mode with the rm option, say docker run -it --rm debian:stretch-slim bash. Inside, I apt-get install git gcc gnat..., then git clone... and start trying things. When I know what I need, I just exit and everything vanishes. I did not touch any local dir at all.

Alternatively, I can remove the rm option, so that just after exiting the container I docker commit <container> <image> and have it saved as an image. Later, only if I want, I explore the image layers with microbadger or portainer, and seamlessly write a Dockerfile and a script that work.

Then, my workflow is:

  1. docker run -it --name container_name image_name bash

  2. Figure out what to do, build, test...

  3. exit, docker commit container_name image_name

  4. Explore image with microbadger or Portainer, and write Dockerfile and script.sh

  5. docker build -t image_name_tmp . && docker run --rm -it image_name_tmp bash -c "$(cat script.sh)". Or, alternatively, use a docker-compose file with build: <Dockerfile>, which will do all this with a single docker-compose up.

  6. Figure out what to fix. If required, go to 4.

  7. Remove image_name, image_name_tmp and all dangling images.

  8. Write/enhance docker-compose.yml so that you don't have to remember docker run options, such as volumes, ports, networks, hostname, entrypoint command, etc.

[@CrazyCasta] More things to remember in Docker (image_name, container_name, any other stuff like ports to expose that need to go on the docker run line) and enough complexity that I don't actually know if I got that all right (I'd have to try it/look at the docs).

Hope the references above help you understand the flow; I find the equivalences pretty clear.

This is what I meant yesterday when I suggested you use buildtest.sh. I don't want to make you switch to docker, but to make both workflows compatible, so that the tarballs you get and the tests you perform in the Vagrant image are equivalent to the jobs we actually run in travis. Although I like docker and use it a lot, I don't think such a heavy dependency is sane. Letting users know that they can use docker, Vagrant or neither, and still get the same result, would be great. You want a one-liner? Use Vagrant. You want a couple of lines and a few more params? Use docker. You want it fully custom yet performant? Run docker. You want it fully local? Forget about Vagrant/docker.

Indeed, your thoughts about the complexity of the docker workflow are very useful and didactic for me.

[@CrazyCasta] It's possible that I just don't know how to use Docker properly, in which case please correct any of my problems. For me, if I want to do anything in Docker I do sudo docker exec -it <container> bash.

You are doing nothing wrong per se. Yet, running a container in daemon mode, -d, and then using docker exec -it <container> bash makes little sense if what you really want is docker run -it image bash. When does it make sense? Say you want to have a single GHDL container running as a service and execute multiple compilations/tests as a result of calls from different scripts running in parallel.

[@CrazyCasta] This likely requires that I type in my password.

Having to type the password is likely related to not having done the Linux post-install steps. You have to add your user to a group (named docker) in order to interact with the daemon without sudo.
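
Concretely, the usual post-install steps are (you need to log out and back in for the group change to take effect):

sudo groupadd docker            # the group may already exist
sudo usermod -aG docker $USER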

[@CrazyCasta] Then once inside, the keymappings seem a bit off. When I try to use programs like vim the keymaps are very much off.

I have never had this kind of issue. I have seen it when using play-with-docker, but that's because the terminal there is xterm.js, so it is expected. However, I think it can be related to using docker exec, which is somewhat similar to screen in the sense that it lets you attach and detach from containers; those shortcuts may conflict with vim. Could you try the alternative (i.e., not using docker exec)?

Note that I am not a heavy vim user. Yet, I use it and nano randomly to edit sources inside containers.

[@CrazyCasta] Also, other things like bind mounts and the like are things I have to type on the command line, whereas vagrant is just "vagrant up". Now some of this would be fixed if I just got familiar with docker stacks/services/w/e, but some of it I feel is just the way docker works.

I know very little about stacks/services, etc. when it comes to swarms, multiple nodes, replicas, load balancing and so on. However, docker-compose files are useful even for single containers on a single node/machine. See the comparison to your Vagrantfile above.

[@CrazyCasta] As for what you're saying: if it's in the docs that you can run these x,y,z docker commands and get ghdl installed in some docker, that would be great. Just don't leave any specifics out (don't use <version x> stuff). I.e. what I want is some lines that I can just basically copy paste (maybe deleting the $ at the beginning) and have it work.

That's the general idea, but it is not straightforward. As explained above we can:

  • Build locally (follow the docs as they are right now).
  • Build in docker:
    • Batch (buildtest.sh inside already available ghdl/build image)
    • Interactive (follow the docs inside a ghdl/build image)
    • Either batch or interactive, with local sources or without touching local sources
    • Build your own (with extras like git or python, use the docs and/or buildtest as a reference)
  • Run in docker:
    • ready-to-use images (@tgingold is not convinced about publishing them #489)
      • Batch
      • Interactive
      • GUI
    • build your own image:
      • from ghdl/build (compile and build image)
      • from ghdl/run (just pull latest image and add latest release tarball, or your custom GHDL tarball built with any of the paths above)
      • from sources (your own Dockerfile)
      • In any of the above, optionally add your extras (git, python...)

As you can see, providing C&P code blocks is neither easy nor short, given all the possible use cases. That's why I'm concerned about mixing all this info into the docs along with the really GHDL-related details (see #477). I think we should stick to using ghdl/build and ghdl/run images internally, without explaining how to use them; at most, we can use them to explicitly state the required libraries on each platform. Then, I would elaborate on how to run GHDL in the ready-to-use images. All the other details/possibilities should not be documented.

Note that this approach means that we won't fulfill "you can run these x,y,z docker commands and get ghdl installed in some docker". But we will provide "you can run these x,y,z docker commands and have GHDL executed on your design sources".

[@CrazyCasta] I say this because ghdl has a wonderfully long history, but with that seems to come a fair bit of legacy documentation (though I think it's getting much better) that suggests things that are a bit out of date. It can be aggravating to bang your head against some documentation only to find out it's out of date or I'm understanding it wrong, which is where the "copy paste and it either works or doesn't" comes in.

Indeed, the long-time effort of a single man is, at the same time, both the beauty of GHDL and its main blocker. As programmers we know i) how hard it is to let others break into your project/framework (i.e. your brain) and ii) how hard it is to keep documentation up to date. It might be quite 'vintage', but that's because Tristan always put quality (i.e. being strict with the standard) before supporting the latest features, so he focused on that as long as the build system and docs worked. Yet, he is making a great effort letting others collaborate (python interface, docker, new docs...). I think that it was not until the FPGA hype that users needed info updated to the 'latest standards'. Indeed, most of the active users open issues about VHDL support, not about how to compile GHDL, although the latter is rather required right now.

[@CrazyCasta] (Or if I'm told to run docker build -t ghdl/build:ubuntu-llvm-3.8 --target llvm-3.8 - < Dockerfile and 5 years in the future we're on llvm 7.x I can go, hmmm, maybe this is out of date).

Well, until six months ago GHDL was tested on Ubuntu 12.04. I mean, it is not a big deal if you are told to use ubuntu-llvm-3.8 (which is based on 14.04) 5 years in the future, as long as that image is built with the latest GHDL sources. That's the main feature of docker/Vagrant.

Anyway, I think that the way to go is to add a reference to ghdl/ghdl/tags, where users can see when each image was last updated.

CrazyCasta commented:

Do we really need to 'support' each distro or can we focus on dependency versions? I.e., the GHDL mcode tarball built in stretch should be valid for stretch, buster, ubuntu14, ubuntu16, etc., shouldn't it? The same applies to fedora, centos and redhat.

By support I mean "provide a system to create binary packages for". I don't like the idea of tarballs because they don't integrate well with the packaging system. I don't see any reason not to just provide packages if we can do it in a reasonable way.

Yes, but it would be much much better if you provide the scripts to automatically build them. Bonus point if this is integrated in travis-CI, so that for each new version they are made available.

That's my bad; I didn't communicate well. I do very much mean an automated system to generate binary packages, not just the packages themselves.

Once we have a GPL package for a platform, I think it is straightforward to copy-extend or copy-edit it to provide the non-GPL one on its own or as a subpackage. In the end, the hardest part is to get the first package introduced into a distro.

I think the issue is that certain distributions (debian, I believe, being one of them) have a no-non-free-code policy. The trouble, if I understand correctly, is with the IEEE package. It would be nice if there was a way to simply provide a source tarball for all that stuff that could then be installed on any machine that had ghdl already installed. I know it goes against my above comment about the distros' packaging systems, but I'm not sure quite what the alternative is.

I am currently using this structure: https://github.com/1138-4EB/ghdl/blob/gcc-in-docker/dist/linux/buildtest.sh#L92-L134
...

Ok, what I meant is that that structure should be in the documentation. I managed to work out that structure (or at least something similar); I'm just saying that's what gave me trouble building the gcc version myself.

... All the docker stuff ...

Thanks, I'd forgotten about the ability to give docker run a command to run, which along with the --rm option makes things simpler. I know all the docker equivalents of Vagrantfile things in a Dockerfile; I just didn't recall/know the shortcuts to more easily do docker stuff.

Indeed, the long-time effort of a single man is, at the same time, both the beauty of GHDL and its main blocker. As programmers we know i) how hard it is to let others break into your project/framework (i.e. your brain) and ii) how hard it is to keep documentation up to date. It might be quite 'vintage', but that's because Tristan always put quality (i.e. being strict with the standard) before supporting the latest features, so he focused on that as long as the build system and docs worked. Yet, he is making a great effort letting others collaborate (python interface, docker, new docs...). I think that it was not until the FPGA hype that users needed info updated to the 'latest standards'. Indeed, most of the active users open issues about VHDL support, not about how to compile GHDL, although the latter is rather required right now.

I don't really care about standards beyond '93. The '08 is great, but if I had to do w/o it I'd be fine. What I'm saying is that stuff like http://ghdl.free.fr/ghdl/ still comes up in google from time to time. Thankfully I can't find any old pages on how to build ghdl, so that doesn't appear to be an issue. My point is merely that stuff on the web sticks around for a really long time, so doing stuff like "../configure --with-gcc=/path/to/gcc/source/dir --prefix=/usr/local" doesn't work well because I can't tell what version of gcc should work with this version of ghdl. To the extent that these things are documented, I'm simply suggesting that we make them specific and keep them up to date.

CrazyCasta commented:

Now, as far as packaging is concerned: of the three packaging systems I'm somewhat familiar with (gentoo ebuilds, debian .debs and redhat .rpms), they all want at least the following data. I'll tell you what I have so far, but I'm asking for any input to make these better as appropriate.

  • Description - both a long and (sometimes) a short description. What I'm using for now is "The GHDL simulator for VHDL."
  • Homepage - using https://github.com/ghdl/ghdl/ at the moment.
  • License - GPLv2. The only question here is: do we have a problem with gcc being GPLv3, since we're likely making a derivative product?

Finally, it would be nice if the source .tar.gz files were named something like ghdl-0.35.tar.gz instead of v0.35.tar.gz (i.e. https://github.com/ghdl/ghdl/archive/v0.35.tar.gz). I'm not sure if that's a constraint of github or anything. It's not really a big deal, I can work around it if necessary; it's just that v0.35.tar.gz is a bit confusing.

CrazyCasta commented:

Oh, and I finally remembered the last question. Do we have plans to build anything besides amd64 and perhaps x86 (like arm or anything else)? If so, how can we build any of that (for instance, does travis-ci offer arm servers)?

eine (Collaborator) commented Dec 22, 2017

@CrazyCasta , please check #497, #500 and #506, since they are related to some of your comments. I'll come back later and reply properly.

CrazyCasta commented:

Ok, so it sounds like some of that is still up in the air. No problem, I'll just use fillers for now. Just putting it out there as stuff that eventually needs to get filled in.

eine (Collaborator) commented Dec 22, 2017

By support I mean "provide a system to create binary packages for". I don't like the idea of tarballs because they don't integrate well with the packaging system. I don't see any reason not to just provide packages if we can do it in a reasonable way.

We first create a tarball, because that's what buildtest.sh produces. Then we do whatever we want with that tarball: on the one hand, we upload it raw to github releases; on the other hand, it is extracted inside a docker image.

The question is: do we need a different build (and thus tarball) in order to generate each package? Or can we build a single tarball and use it to generate multiple packages?

I think the issue is that certain distributions (debian, I believe, being one of them) have a no-non-free-code policy. The trouble, if I understand correctly, is with the IEEE package. It would be nice if there was a way to simply provide a source tarball for all that stuff that could then be installed on any machine that had ghdl already installed. I know it goes against my above comment about the distros' packaging systems, but I'm not sure quite what the alternative is.

@tgingold or @Paebbels can tell better, but I think that providing a source tarball which can be extracted into the GHDL installation path is not enough. Those libraries need to be precompiled in order to have an installation functionally equivalent to the GPL version. So we should provide some installation script and/or instructions.

Alternatively, could this be integrated in GHDL as ghdl install non-gpl [-f <tarball.tgz>] [-u <url>]?

Ok, what I meant is that that structure should be in the documentation. I managed to work out that structure (or at least something similar); I'm just saying that's what gave me trouble building the gcc version myself.

Well... I hadn't tried to build GHDL with gcc myself until last week. Hope it is easier now for new users...

I don't really care about standards beyond '93. The '08 is great, but if I had to do w/o it I'd be fine.

By 'latest standards' I meant people being used to installing something with a single command and having it ready to go. Having to compile GHDL with gcc feels so vintage to some (only-)javascript, python or ruby users.

What I'm saying is that stuff like http://ghdl.free.fr/ghdl/ still comes up in google from time to time. Thankfully I can't find any old pages on how to build ghdl, so that doesn't appear to be an issue.

Although hidden in the home page, you can still find old installation info: http://ghdl.free.fr/site/pmwiki.php?n=Main.Installation Indeed, this is the main motivation for #506. I hope that redirecting the domain to a different host helps us get rid of those deprecated pages.

My point is merely that stuff on the web sticks around for a really long time, so doing stuff like "../configure --with-gcc=/path/to/gcc/source/dir --prefix=/usr/local" doesn't work well because I can't tell what version of gcc should work with this version of ghdl. To the extent that these things are documented, I'm simply suggesting that we make them specific and keep them up to date.

Well, it is documented now. However, it doesn't tell much: http://ghdl.readthedocs.io/en/latest/building/gcc/index.html That's because GHDL is expected to work with any version of GCC. When a conflict is detected, or when new versions introduce changes, Tristan fixes it as soon as he is aware of it.

It is somewhat the same with architecture support. It is explicitly said that the mcode backend can only be used on x86[_64]. My interpretation is that GHDL is expected to work on any architecture where the dependencies can be installed.

Now, as far as packaging is concerned: of the three packaging systems I'm somewhat familiar with (gentoo ebuilds, debian .debs and redhat .rpms), they all want at least the following data.

From #500, note that there are two up-to-date packages which are being maintained by different people.

Maybe the rpm from fedora can be reused for redhat.

What I'm using for now is "The GHDL simulator for VHDL."

In #506 I am using "GHDL, where VHDL meets gcc", because that's what I found on the old site. @tgingold, would you mind telling us which short description/subtitle/tagline you want?

Finally, it would be nice if the source .tar.gz files were named something like ghdl-0.35.tar.gz instead of v0.35.tar.gz (i.e. https://github.com/ghdl/ghdl/archive/v0.35.tar.gz). I'm not sure if that's a constraint of github or anything. It's not really a big deal, I can work around it if necessary; it's just that v0.35.tar.gz is a bit confusing.

On the one hand, if the packages are generated in travis, don't worry about it at all. The repository is cloned by default, so we don't need to get the archive from github; we just tar the active dir. What we will push are debs/rpms. On the other hand, check #496, as the naming of artifacts was discussed there recently.

Oh, and I finally remembered the last question. Do we have plans to build anything besides amd64 and perhaps x86 (like arm or anything else)? If so, how can we build any of that (for instance, does travis-ci offer arm servers)?

Travis offers something better... docker containers! Cross-compilation in docker containers is pretty straightforward. However, I am afraid of opening Pandora's box: how on earth do we handle 30-50 different builds?

  • [x86_64] stretch+mcode
  • [x86_64] stretch+mcode+gpl
  • [x86_64] stretch+gcc-5_5_0
  • [x86_64] buster+mcode
  • [x86_64] buster+mcode+gpl
  • [x86_64] buster+gcc-7_2_0
  • [x86_64] ubuntu14+mcode
  • [x86_64] ubuntu14+llvm-3.8
  • [x86_64] ubuntu16+mcode
  • [x86_64] ubuntu16+llvm-3.9
  • [x86_64] ubuntu18+mcode
  • [x86_64] ubuntu18+llvm-5.0
  • [x86_64] fedora26+mcode
  • [x86_64] fedora26+llvm
  • [x86_64] fedora26+gcc-6_4_0

Note that mcode requires 2m30, LLVM 6-7m and gcc >25m: https://travis-ci.org/1138-4EB/ghdl/builds/317436590 Indeed, in the master branch only five of the above are executed: https://travis-ci.org/ghdl/ghdl

Should we repeat this for x86, armv7, armhf, raspbian, fedberry, pidora...? What about BananaPi, OrangePi, etc.? Are they binary compatible with Raspberries? And what about library versions?

Don't get me wrong. I'd be delighted to have all these options available with the same entry script. Furthermore, I think we can provide a docker-based approach so that any user can execute ghdl-install.sh [--arch <myarch>] [--os <my_os:version>] [--backend <gcc:version>]. But, should we explicitly state that any architecture (x86[_64], ARM, PowerPC and MIPS) is supported, how can we verify that those scripts are up to date?

I don't think it is just a matter of not being abusive with travis; we would easily hit the limit even if we didn't bother about it. With effort, we can split the release process and schedule it over several days to reduce the impact. Alternatively, it can be executed locally on some maintainer's/collaborator's box. But I find it 'cleaner' to have the whole process exposed.

Anyway, in order to do any of this, we need a well-defined matrix of all the options and the path from the sources to each output, before focusing on how it can be technically done. This connects with my first question: is a separate build required for each package?

For example, the AUR package is absolutely independent of what we do in travis now. It might be interesting to have it cloned in the new ghdl org and to add a job to travis which installs it in an arch-based docker container. However, we would not push any artifact, because that does not fit the concept of AUR. If all the package managers were like this, it'd be so easy :D.

tgingold (Member) commented Dec 23, 2017 via email

tgingold (Member) commented Dec 23, 2017 via email

eine (Collaborator) commented Dec 23, 2017

I am pretty sure that debian build system must start from sources, not from a tarball.

Then, I believe we will be replacing each job with a different one as we get the packaging procedure done for each target. I.e., we are not going to reuse build.sh. I think that the alternate procedure can be:

  • Build ghdl/build:stretch-mcode based on debian:stretch-slim
  • Create deb package in ghdl/build:stretch-mcode
  • Execute a container based on debian:stretch-slim:
    • Install deb package
    • Execute test.sh

The main difference is that we get rid of ghdl/run:stretch-mcode, because the dependencies should be defined in the package and, therefore, they are installed along with GHDL.
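
A sketch of that pipeline in shell (the image name follows the convention used above; the packaging command assumes a debian/ dir is already in the sources, and test.sh's path is also an assumption):

# build the deb inside the builder image
docker run --rm -v $(pwd):/work -w /work ghdl/build:stretch-mcode \
  bash -c "fakeroot dpkg-buildpackage -b -uc && mv ../ghdl_*.deb ."
# install it in a clean container and run the testsuite
docker run --rm -v $(pwd):/work -w /work debian:stretch-slim \
  bash -c "apt-get update && apt-get install -y ./ghdl_*.deb && ./dist/linux/test.sh"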

My idea for debian is to have two packages: a pure gpl one, and a second one with only the non-free libraries. If a user wants to use the non-free libraries, the user 'just' needs to install them.

Does this imply having the 'pure gpl' version distributed through the debian repositories and making users download the non-free libraries from GitHub releases?

I'd prefer not.

Ok.

This tagline is not from me, and I think it doesn't describe correctly ghdl (in particular you can build ghdl without gcc!). So we need to find a better one!

Agreed. I kept it because I think that the main feature of GHDL is that it takes VHDL, treats it as Ada, and gets it compiled. "where VHDL meets gcc" somehow reflects that, in the sense that gcc is 'The Compiler'. However, I agree that we might find a better wording for it. Sadly, I'm so bad at marketing :S.

We don't need to build the full matrix, but then we have to choose (maybe after looking on the download figures).

Agreed. But we might be chasing our own tails here: we need to make options available so that users can download them, in order for us to have figures. Shall we create a "permanent issue" where we list all the 'expected to work' combinations and let users reply/comment with their requirements? That would allow us to track the 5-10 most requested packages, which would be included in the travis matrix. Then we can add references from releases, RTD and the site to the issue.

Building with docker shouldn't be difficult, but I fear testing isn't completely easy.

When I said that cross-compilation in docker is quite straightforward, I was thinking about running qemu inside docker. Doing so would allow us to execute both build.sh (or the package-creation equivalent) and test.sh. I do think that performance might be a little worse than doing proper cross-compilation (i.e., avoiding the qemu layer). Yet, I think this is the way to go, because it is closer to what a user building GHDL on, say, Raspbian/RPi would do.
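
A sketch of the qemu-in-docker idea with the community multiarch images (this is not what the CI does today; image names are the ones published by the multiarch project):

# one-off: register qemu binfmt handlers on the host (privileged)
docker run --rm --privileged multiarch/qemu-user-static:register --reset
# armhf containers then run transparently on an x86_64 host
docker run --rm multiarch/debian-debootstrap:armhf-stretch uname -m   # prints armv7l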

eine (Collaborator) commented Dec 24, 2017

Just found this:

Created in 2005, the Open Build Service (OBS) is a generic system for building and distributing packages or images from source code in an automatic, consistent, and reproducible way. OBS can create images and installable packages for a wide range of operating systems (SUSE, Debian, Ubuntu, Red Hat, Windows, etc.) and hardware architectures (x86, AMD64, z Systems, POWER etc.).

It might be the way to go: run travis as it is now for every push, and run openbuildservice for releases. I don't know to what extent the two processes would overlap, though. Have any of you heard of openbuildservice?

Should we use openbuildservice instead of packaging each of the supported options 'by hand', we would have two alternatives (see the sketch after this list):

  • A hosted service, such as opensuse's, which has a running backend and provides a frontend.
  • Run the service in a travis stage as a docker container and use the available cli tool instead of the frontend.
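
If we went the hosted-service route, the flow with the osc CLI would look roughly like this (project, package and repository names are made up for illustration):

osc checkout home:ghdl/ghdl && cd home:ghdl/ghdl
osc build Debian_9.0 x86_64      # local test build against the Stretch target
osc commit -m "new snapshot"     # triggers server-side builds for every configured target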

eine (Collaborator) commented Dec 30, 2017

@set-soft maintains somewhat up-to-date debian (stretch) package sources (set-soft/ghdl-debian), described as an "Unofficial Debian package". The deb files are not distributed through GitHub releases, but made available to apt from http://fpgalibre.sourceforge.net/ice40.html#tp22. He explains here (in Spanish) that he ran the testsuite.

Options:

  • Fork the repo.
  • Just ask him if he would let us know which scripts he runs to get the deb file built, so that we can integrate those steps in travis.
  • Ask him to move it to the organization.

@set-soft, no need to read the whole issue. Short version: we currently execute build.sh, which generates a tarball; then we install it and run test.sh. We do this in a Travis CI job. We would like to replace generating the Debian tarball(s) with generating a deb.

set-soft commented Jan 2, 2018

If you copy the "debian" dir, then you can generate the .deb just like you do with any Debian package. To generate the .deb file you can run:

# dpkg-buildpackage -b -uc

If you are not root:

$ fakeroot dpkg-buildpackage -b -uc

The -b is used to create the binary distro (skipping the source tarball). The -uc is to generate a package that isn't signed; I sign the repository files. My files create an LLVM ghdl. To change this, you need to edit debian/rules:

./configure --with-llvm-config --prefix=/usr

I think that's all. The debian files I used are the latest available; I took the dependencies from this thread and adjusted some small details. Nothing special.
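
Putting those instructions together, an end-to-end run would look roughly like this (repo layout is illustrative):

git clone https://github.com/ghdl/ghdl && cd ghdl
cp -r ../ghdl-debian/debian .        # debian/ dir from set-soft/ghdl-debian, cloned alongside
fakeroot dpkg-buildpackage -b -uc    # unsigned binary package, as described above
ls ../ghdl_*.deb                     # dpkg-buildpackage leaves the .deb in the parent dir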
