Support 2.0 resources #13
Conversation
Overall, it seems ok. I guess we will need to upload all the kafka tarballs we have (one per architecture). I got the following error when testing it under LXC on a trusty machine using Juju 1.25.
chuckbutler
approved these changes
Sep 16, 2016
Overall this is +1 LGTM. I left some feedback comments on our own discoveries when using resources, as a thought exercise.
@@ -14,14 +16,29 @@ def __init__(self, dist_config=None):
         self.resources = {
             'kafka': 'kafka-%s' % utils.cpu_arch(),
         }
-        self.verify_resources = utils.verify_resources(*self.resources.values())
+
+    def verify_resources(self):
chuckbutler
Sep 16, 2016
One thing I did here in our own charms was add a filesize threshold so we could "work around" the zero-byte resource we're publishing our charms with. I suggest you add an additional check:

import os
if os.stat(self.resources['kafka']).st_size < 100000:
    return False

Or something to this effect. This makes the resource validation a bit more robust (the resource must be larger than ~100 KB to be considered valid).
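A minimal, self-contained sketch of that check (the helper name and exact ~100 KB threshold are illustrative, not part of the charm):

```python
import os

# Assumed threshold from the snippet above (~100 KB); the exact cutoff
# is a judgment call, chosen to reject zero-byte placeholder resources.
MIN_RESOURCE_SIZE = 100000


def resource_looks_valid(path):
    """Hypothetical helper: treat a resource file as invalid if it is
    missing or smaller than the threshold."""
    return os.path.exists(path) and os.stat(path).st_size >= MIN_RESOURCE_SIZE
```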
johnsca
Oct 13, 2016
Owner
Ok, so we had the discussion and I understand that the 0-byte resource is to make it optional from the admin but to fall back to non-resource behavior if it's not provided. I don't want to do that in this case, since I'd like to phase out the S3 bucket entirely in favor of 2.0-style resources.
-        jujuresources.install(self.resources['kafka'],
-                              destination=self.dist_config.path('kafka'),
-                              skip_top_level=True)
+        filename = hookenv.resource_get(self.resources['kafka'])
chuckbutler
Sep 16, 2016
I liked this fallback mechanism. I had something similar in the etcd charm. The one thing I did notice is that it made it less clear where the package was coming from. I feel like we need a way to report this back to us (the charm authors) when doing support. Perhaps a state or an action to generate this information would be helpful.
-        self.verify_resources = utils.verify_resources(*self.resources.values())
+
+    def verify_resources(self):
+        if hookenv.resource_get(self.resources['kafka']):
chuckbutler
Sep 16, 2016
Anywhere you call resource_get will also need to be encapsulated in a try/except block if you want to retain backwards compat with 1.25 deployments.
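A self-contained sketch of that guard, assuming the underlying behavior of the resource-get hook tool (the helper name is hypothetical; the charmhelpers wrapper may differ in which exceptions it already catches):

```python
import subprocess


def resource_get_compat(name):
    """Hypothetical helper: on Juju 1.25 the resource-get hook tool
    does not exist, so invoking it raises OSError; on 2.0 a missing
    resource makes it exit non-zero (CalledProcessError). Returning
    None in both cases keeps the caller's fallback path working on
    either controller version."""
    try:
        out = subprocess.check_output(['resource-get', name])
    except (OSError, subprocess.CalledProcessError):
        return None
    path = out.decode('utf-8').strip()
    return path or None
```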
johnsca
Sep 21, 2016
Owner
Hrm. I misread the try/except built into the helper. That's rather annoying, but I suppose it is helpful to distinguish between not supported and not available.
For reference, this fixes #12
+        filename = hookenv.resource_get('kafka')
+        extracted = fetch.install_remote('file://' + filename)
+        # get the nested dir
+        extracted = os.path.join(extracted, os.listdir(extracted)[0])
petevg
Oct 3, 2016
Contributor
Can we always rely on the tarball being constructed so that it contains the dir that we want? What happens if someone cds into the kafka dir and creates the tarball from there?
johnsca
Oct 4, 2016
Owner
Then it will break, the same as if they uploaded some random binary blob instead of a properly structured tarball. I suppose we could assert that the extracted path only contains a single item, but at a certain point we can no longer protect the user from themselves. :)
LGTM/+1 (I'm good with putting the onus on the user to construct their tarball properly.)
+        destination = self.dist_config.path('kafka')
+        if os.path.exists(destination):
+            shutil.rmtree(destination)
+        shutil.copytree(extracted, destination)
kwmonroe
Oct 13, 2016
Member
This feels uber clumsy... are we really calling fetch.install_remote to extract into one dir only to copy it to another? Why not just extract to the destination in the first place?
johnsca
Oct 15, 2016
Owner
install_remote doesn't have an analogue to skip_top_level, so this is the best way I could come up with to get the nested contents of the archive. I'm open to alternatives.
(That is, the archive contains a kafka/ or similar dir with all of the contents nested under that, and we need the contents to live directly in destination.)
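A minimal sketch of that workaround using only the stdlib (the helper name is hypothetical, and the single-top-level-dir check reflects the earlier suggestion in this thread; install_remote itself does more, e.g. fetching and format detection):

```python
import os
import shutil
import tarfile
import tempfile


def install_stripped(tarball, destination):
    """Hypothetical helper: extract the archive to a scratch dir, then
    copy the contents of its single top-level dir into destination,
    since install_remote has no skip_top_level equivalent."""
    scratch = tempfile.mkdtemp()
    try:
        with tarfile.open(tarball) as tf:
            tf.extractall(scratch)
        entries = os.listdir(scratch)
        # guard against archives not built with a single top-level dir
        # (e.g. tarred from inside the kafka dir itself)
        if len(entries) != 1:
            raise ValueError('expected a single top-level dir in %s' % tarball)
        nested = os.path.join(scratch, entries[0])
        if os.path.exists(destination):
            shutil.rmtree(destination)
        shutil.copytree(nested, destination)
    finally:
        shutil.rmtree(scratch)
```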
kwmonroe
Oct 20, 2016
Member
Ah, ok, I didn't notice that you were working around the top dir. Carry on.
johnsca commented Sep 14, 2016
No description provided.