Nice work! It's a lot of work; I wish we could have done it in more steps so it would be easier to review. In any case, I have taken it for a spin on aws and lxd; here are the errors I got: http://pastebin.ubuntu.com/24034264/ and http://pastebin.ubuntu.com/24033990/ Is noble-spider your pet? :)
@ktsakalozos Ah, I missed that
I removed the old cwr subordinate and added the new one and got the following error: Then I logged in to jenkins and did a `lxc image remove cwrbox`. On a clean install of jenkins+cwr on lxd:
When deployed on a new image, the LXD storage pool won't be configured, so the charm needs to ensure that `lxd init` is run to set it up. If deployed on a localhost/lxd provider with an already-initialized LXD, the charm should continue gracefully. Also turned off script debugging and added additional echoes to improve the job console log, and ensured that immediate exit on any error is enabled for all jobs by setting it at the top of `cwr-helpers.sh`.
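The guard described above could be sketched roughly like this. This is a hedged illustration, not the charm's actual code: the helper names and the `lxc storage list --format csv` invocation are assumptions.

```shell
#!/bin/bash
# Sketch only: run `lxd init` on a fresh image, skip it gracefully when
# LXD already has a storage pool configured.
set -eu  # fail fast on any error or unset variable, as at the top of cwr-helpers.sh

# True (exit 0) when no storage pool is configured yet; $1 is the
# (possibly empty) pool listing.
needs_lxd_init() {
    [ -z "$1" ]
}

maybe_init_lxd() {
    local pools
    pools="$(lxc storage list --format csv 2>/dev/null || true)"
    if needs_lxd_init "$pools"; then
        lxd init --auto       # fresh image: set up default storage
    fi                        # already initialized: continue gracefully
}
```

The `|| true` keeps the probe from tripping `set -e` on older LXD versions where `lxc storage` doesn't exist yet.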
I rebased against master and fixed the first error. The second failure is somewhat expected; if you delete the image, you'll also need to remove the signature file. The last error I can't replicate, likely because I'm using ZFS for my LXD storage. I'll try to replicate it by bootstrapping Juju with LXD on an Amazon instance, but any debugging you can do on your end would be appreciated.
This seems to be the issue with
When using directory-backed storage for LXD, the permissions require that the containers be marked as privileged. We were already mapping the container's root user to the charm's jenkins user, so we don't get any additional security from unprivileged containers anyway.
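In `lxc` terms, the change amounts to something like the following. The profile name and uid here are hypothetical, for illustration only:

```shell
# Directory-backed storage needs the container to run privileged
# (hypothetical profile name "cwr-profile"):
lxc profile set cwr-profile security.privileged true

# For comparison, the root-to-jenkins mapping already in place for
# unprivileged containers looked roughly like this, with 110 standing
# in for the jenkins uid on the host:
lxc profile set cwr-profile raw.idmap "both 110 0"
```

Since the root user was already mapped onto the jenkins uid, flipping `security.privileged` gives up little isolation that wasn't already given up.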
All of the issues that @ktsakalozos hit are resolved now.
This is working great for me. I tested with cwr-52 and ran a charm and a bundle job concurrently. I really want to push the merge button because I'm that excited about this. However, I'll let @ktsakalozos do it so he can verify his earlier comments have been addressed in cwr-52. +1, lgtm.
Nooooo! I spoke too soon. The bundle job finished clean, but the charm job hit a timeout: http://juju.does-it.net:8081/job/charm_openjdk_in_cs__kwmonroe_bundle_java_devenv/6/consoleFull Edit: this seemed to be a transient issue; re-running both jobs succeeded. I retract my "Nooooo", but I would like to see the timeout investigated.
This was from a previous attempt to manage networking with an older version of lxd.
@kwmonroe The timeout seems to be from
This looks super cool, but Travis seems to hate it :(
install_sources:
  description: PPAs from which to install LXD and Juju
  type: string
  default: |
I have a dumb question: why use the apt packages over the snaps? It seems like a lot of tooling isn't going to be maintained in debs anymore. I cite:
- charm-tools
- conjure-up
as two candidates in question. Are we signing up for pain later by not integrating with snaps out of the gate?
I had run into issues with the snaps during development before I found out about the squashfuse workaround. It would probably be good to switch to snaps where possible, though snaps do make the restricted-network story more complicated. Is there a way to run a snap mirror similar to an apt mirror?
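As far as I know there isn't a drop-in apt-mirror equivalent for snaps; the closest restricted-network story is side-loading. A rough sketch (the snap name and revision suffix are placeholders, not real artifacts from this PR):

```shell
# On a machine with store access: fetch the snap plus its store assertion.
snap download charm            # writes e.g. charm_NN.snap and charm_NN.assert

# On the restricted host: import the signature, then install the file.
snap ack charm_NN.assert       # register the store assertion with snapd
snap install charm_NN.snap     # install the side-loaded snap
```

Without the `snap ack` step, the install would need `--dangerous`, which skips signature verification entirely.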
@chuckbutler The Travis failures are due to an upstream packaging issue with libcharmstore when installing charm-tools on trusty. We're waiting on @marcoceppi to resolve that. I tried to use the snap, but that failed due to this issue. I'd like it if we could figure out a way to use the snap in Travis, but I have no idea how to proceed there.
@@ -54,7 +53,7 @@ def add_job():
         branch = "*/master"
     elif repo_access == 'poll':
         trigger = TRIGGER_PERIODICALLY
-        skip_builds = SKIP_BUILDS
+        skip_builds = 'skip_builds'
Since this string appears in two places, it's probably better to leave it in the constant. That avoids the problem where someone alters one occurrence down the line but not the other.
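The reviewer's point in miniature, transposed to shell for illustration (the charm's actual code is Python, and the names here are hypothetical): define the literal once and reference the constant everywhere, so the two uses can't drift apart.

```shell
#!/bin/bash
# DRY illustration with hypothetical names: one constant, many uses.
readonly SKIP_BUILDS='skip_builds'

trigger_for() {
    # Hypothetical mapping from repo access mode to trigger behaviour.
    case "$1" in
        poll) echo "$SKIP_BUILDS" ;;  # reuse the constant, never the raw string
        *)    echo 'build-now'    ;;
    esac
}
```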
Overall, I am +1 on this. Nothing major jumped out at me in a read-through of the code, and I'm able to deploy without errors to aws, and to set up and run the tests.
@kwmonroe The timeout that you ran into is more likely a problem with the charm in general, rather than a problem with containerizing, correct? If so, I think that we should merge this ...
LGTM2! Merging it!
This is a pretty significant refactor, obviously. I'd really like to see all of the logic not directly related to managing the LXD image, Jenkins jobs, and Juju config (and possibly the release logic) moved into the underlying tooling (cwr, bundletester, matrix). Specifically, I think we need a well-defined way of providing general override information for bundles for the purposes of testing. This would need to cover not just overriding specific charms with other revs or builds from repos, but also things like adding a testing-specific charm, overriding the default number of units, etc. Having all of that in the tooling would make the charm much simpler.
In the meantime, we might consider moving much of the logic into the cwrbox image. It would allow us to push out updates to the logic in the container that would be picked up on the next build (unless a given deployment was using a locally attached resource version of the cwrbox image, in which case it would be manual for that deployment).
On the point of the image source: manually hosting the tarball in S3 was the quickest way to have it work out of the box, but it's less than ideal. Ideally, we could run a public LXD remote server, but that would require more resources and a domain, and I'm not sure how to (or whether you even can) lock down all operations other than copying images from it. I also looked into running a simplestreams host for the images, which would be read-only out of the box, but that requires repackaging the image that gets exported (because simplestreams doesn't support unified images and only supports xz compression), and we'd still need to host that.