build server: new hardware #1715

Closed
markus2330 opened this Issue Dec 4, 2017 · 17 comments

@markus2330
Contributor

markus2330 commented Dec 4, 2017

If anyone is interested: we have new hardware available that could be added to the build server. (AMD Ryzen)

@BernhardDenner
Contributor

BernhardDenner commented Dec 10, 2017

Oh, nice ... where to find and how to setup ;)

Maybe during the X-mas holidays there is some time for that.

@markus2330
Contributor

markus2330 commented Dec 10, 2017

Thank you, this is great news! It is also the perfect opportunity to write a tutorial on how to set up a new computer with puppet-libelektra ;)

I'll send you the login details once it has a public IP. (Currently it has an internal IP which would require us to tunnel over another computer.)

Does anyone know if the difference between Ryzen 5 and 7 is relevant for us, or is it only a matter of seconds in build time? More cores could be relevant, though; I'll check the exact CPUs they have. The computer we do not use will serve as a low-load mail/web server (currently served by an AMD X2 Dual Core with a load of 0).

@tom-wa Are there any news about the power9 computer?

@markus2330
Contributor

markus2330 commented Dec 18, 2017

The Ryzen hardware is reachable at a7.complang.tuwien.ac.at

@markus2330
Contributor

markus2330 commented Jan 5, 2018

Seems like a7 is down, will try to fix it.

@markus2330
Contributor

markus2330 commented Jan 5, 2018

We restarted it and I temporarily fixed /etc/resolv.conf by removing the symlink to NetworkManager. I am not sure it will survive a reboot; otherwise it should work. Our admin will take a look at it on Monday.
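The temporary fix described above can be sketched roughly as follows (a hypothetical illustration, not the exact commands used; on the real machine this would run as root with RESOLV=/etc/resolv.conf, and the nameserver address is a placeholder):

```shell
# Hypothetical sketch: replace the NetworkManager-managed symlink with a
# static file. RESOLV defaults to a scratch file so this is safe to try.
RESOLV="${RESOLV:-/tmp/resolv.conf.demo}"
if [ -L "$RESOLV" ]; then
    rm "$RESOLV"    # drop the symlink pointing into NetworkManager's state
fi
printf 'nameserver 192.0.2.53\n' > "$RESOLV"    # placeholder nameserver
```

As noted above, a static file written this way can be overwritten again by NetworkManager on reboot, which is why the fix was only temporary.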

@BernhardDenner
Contributor

BernhardDenner commented Jan 6, 2018

The first build jobs already passed on the a7 machine.

However, I've decided to do a simple POC for Docker-based builds: see https://build.libelektra.org/jenkins/job/test-docker/
Of course, this is far from complete, but maybe it gives you some impressions and ideas 😄

markus2330 added a commit that referenced this issue Jan 6, 2018

@markus2330
Contributor

markus2330 commented Jan 6, 2018

Thank you, this is great!

The pipeline config looks really nice. Does the pipeline run with two different images (stretch and xenial)? (The loop looks like only one image is used: "docker.image('elektra-builddep:stretch').inside()").

Is it safe to use sudo and install Elektra or will this modify the docker image? (I added two more stages but commented them out for now.)

I also enabled triggering from GitHub (by default and by phrases, see fae2fbf). It first did not work because I forgot to add it as a "GitHub" project.

The speed of the hardware seems to be good.

@BernhardDenner
Contributor

BernhardDenner commented Jan 6, 2018

Does the pipeline run with two different images (stretch and xenial)? (The loop looks like only one image is used: "docker.image('elektra-builddep:stretch').inside()").

Oh, yes indeed, it should be replaced by $it. I've added a jessie version too now 😏

Is it safe to use sudo and install Elektra or will this modify the docker image?

In general, yes: because Docker images are immutable, all changed files go into the "running" container (copy-on-write). So each new container gets exactly the same files, with no modifications from previous runs.

The Jenkins Docker plugin by default uses an unprivileged user with the same UID within the container, to allow writes to the workspace outside the container.
I have already experimented with running everything in the container as root, to get "run_all" passing, but many shell recorder tests fail with this (e.g. "using system/..." instead of "using user/..."). So I've reverted that for now.

In general we should design the Docker images in a way that all tests pass without requiring root privileges, while still allowing tests that modify system space.
I'm not quite sure what is really required here. Does a chmod -R 777 /etc/kdb do the trick?
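The loop fix mentioned above can be sketched as a Scripted Pipeline fragment (a rough illustration only; the image tags are taken from the thread, while the node label and build steps are assumptions, not the actual test-docker Jenkinsfile):

```groovy
// Hypothetical Jenkinsfile fragment: iterate over the image tags instead of
// hard-coding 'stretch'. Node label and build commands are placeholders.
node('docker') {
    checkout scm
    ['jessie', 'stretch', 'xenial'].each { distro ->
        stage("build-${distro}") {
            docker.image("elektra-builddep:${distro}").inside {
                sh 'mkdir -p build && cd build && cmake .. && make -j $(nproc)'
            }
        }
    }
}
```

Because inside() starts a fresh container from the immutable image for every run, the copy-on-write behavior described above means installs inside one stage cannot leak into later runs.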

@BernhardDenner
Contributor

BernhardDenner commented Jan 6, 2018

For handling Docker images, I would suggest:

  • We add all Docker image recipes (Dockerfile) into our libelektra repo
  • create a build job to build all images automatically
  • upload them to Dockerhub for sharing

This way, we can use them for builds in a well-defined way, and other users/devs can use these images for testing too.

@markus2330
Contributor

markus2330 commented Jan 6, 2018

shell recorder tests are failing with this

Can you create an issue? Or are they only related to not being able to write to /etc/kdb?

In general we should design the Docker images in a way to have all tests passing without requiring root privileges, while allowing to test modifications in system space too.

While it is possible to test Elektra without ever being root, that makes the setup unrealistic. So we should take advantage of the non-harmful root access and run sudo make install into /.

I'm not quite sure, what is really required here? Does a chmod -R 777 /etc/kdb do the trick?

A chown is needed and it needs to be executed as root.
See doc/TESTING.md (spec folders need to be chowned, too).

For handling Docker images, I would suggest:

  • We add all Docker image recipes (Dockerfile) into our libelektra repo

We already have a doc/docker/Dockerfile. You are welcome to add more Dockerfiles.

  • create a build job to build all images automatically
  • upload them to Dockerhub for sharing

Yes, these are excellent suggestions but as always it is a question of available time. The most urgent point is that we understand the setup you have done, so that others can add further Docker images and so on.
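The chown advice above can be sketched as follows (a hypothetical illustration; the exact paths and steps are assumptions, and doc/TESTING.md remains the authoritative reference):

```shell
# Hypothetical sketch: hand /etc/kdb and the spec directory to the build
# user instead of chmod -R 777. On a real system run as root with PREFIX=
# (empty); the default below is a scratch prefix so this is safe to try.
PREFIX="${PREFIX-/tmp/elektra-test-prefix}"
for d in "$PREFIX/etc/kdb" "$PREFIX/usr/share/elektra/specification"; do
    mkdir -p "$d"
    chown -R "$(id -un)" "$d"    # chown to the user running the tests
done
```

A chown like this keeps file ownership sane for the unprivileged test user, whereas chmod -R 777 would open the directories to everyone without fixing ownership-sensitive tests.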

@markus2330
Contributor

markus2330 commented Jan 9, 2018

Is there any problem with tagging the Ryzen HW build agent also as stable/stretch? It would speed up the build time a lot.

@BernhardDenner
Contributor

BernhardDenner commented Jan 9, 2018

Until now I've skipped the installation of the Elektra build deps, but I can do that in addition.
I'll label the agent accordingly afterwards.

@BernhardDenner
Contributor

BernhardDenner commented Jan 9, 2018

Labeled the agent as "stretch" and the first native (non-Docker) build job succeeded: https://build.libelektra.org/jenkins/job/elektra-gcc-configure-debian-stretch/662

@markus2330
Contributor

markus2330 commented Jan 10, 2018

Thank you, this is great! Let us see how it improves build time.

Did you install the deps directly on the hosts or within a container?

Btw. it seems we will get a second Ryzen machine for at least one year. It is not directly reachable via the Internet, though, so we would need an SSH tunnel over a7. Anyone interested in setting this up?

Accounts should be available within the next days.
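A hypothetical ~/.ssh/config fragment for such a tunnel over a7 could look like this (the host alias "ryzen2", the user name, and the internal address are placeholders, not the real values):

```
Host a7
    HostName a7.complang.tuwien.ac.at
    User build                  # placeholder user name

Host ryzen2                     # placeholder alias for the second machine
    HostName 192.0.2.10         # placeholder internal address
    User build
    ProxyJump a7                # OpenSSH >= 7.3; tunnels through a7
```

With such a fragment in place, `ssh ryzen2` would transparently hop through a7, which is also the form a Jenkins SSH agent connection could use.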

@markus2330
Contributor

markus2330 commented Jan 30, 2018

@e1528532 Having the new Ryzen included as an agent would be great, the build server is under really heavy load. I am afraid the load also causes the lost connections, among other problems.

$ w
19:39:19 up 43 days, 10:34,  0 users,  load average: 10,88, 9,42, 10,50

Ideally we should avoid any build job to be built on the hardware where jenkins is running. (Even the build server website sometimes is hardly responding.)

@markus2330
Contributor

markus2330 commented Feb 5, 2018

The debian unstable agents seem to be the new bottleneck. Maybe we can add one more docker agent on the v2?

And we should reduce the build jobs on the Jenkins server itself, its load is still too high. (Sometimes we even get 502 errors.)

e1528532 removed their assignment Jun 11, 2018

@e1528532
Contributor

e1528532 commented Jun 11, 2018

A lot of stuff has happened with the build infrastructure, so I think this issue can be closed.

markus2330 closed this Jun 11, 2018
