This repository has been archived by the owner on Jan 19, 2018. It is now read-only.

Add persistent storage #460

Merged: 4 commits merged into projectatomic:master from the persistent-storage branch on Jan 7, 2016

Conversation

Member

@cdrage cdrage commented Dec 15, 2015

DONE:

Initial work on persistent storage. Works correctly. Just need to clean up most of the code.

It's a Work-In-Progress so excuse the multiple log messages, but you get the gist of how I implemented it.

  • Code comments / docs
  • More tests!
Name                        Stmts   Miss  Cover   Missing
---------------------------------------------------------
atomicapp/requirements.py      36      3    92%   40, 43, 46
----------------------------------------------------------------------
Ran 5 tests in 0.009s

Tests cover 92%, and the persistent_storage function under kubernetes.py is covered too.

  • Clean up log messages

TODO:

  • Grep kubectl pvc output or something to see whether there are actually any persistent volumes yet for Kubernetes. Warn the user if there are none.
  • Logs are a mess (this will be cleaned up in refactor logging #134)
  • Documentation in Nulecule spec or nulecule-library example should be added.
  • Split into different commits

Notes:

  • For now this will only be for Kubernetes persistent storage. OpenShift will be added at a future date (Docker maybe too with the Flocker plugin).
  • Only creating persistent storage has been added, not removing (yet?); we need to discuss this.
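
To make the description above concrete, here is a rough, hypothetical sketch of the shape of a persistentVolume requirement and the dispatch this PR introduces. The field names and the Requirements signature are assumptions based on this thread, not a verbatim copy of the implementation.

import logging

logger = logging.getLogger(__name__)

# Hypothetical shape of a parsed Nulecule requirement as the new
# Requirements class might receive it (field names are assumptions).
example_requirements = [
    {"persistentVolume": {"name": "var-lib-mongodb-data",
                          "accessMode": "ReadWrite",
                          "size": 4}}
]

class Requirements(object):
    """Simplified sketch of the dispatcher added in this PR."""

    def __init__(self, config, basepath, requirements, provider_key, dryrun):
        self.config = config
        self.basepath = basepath
        self.requirements = requirements
        self.provider_key = provider_key
        self.dryrun = dryrun

    def run(self):
        # Walk each requirement and hand persistentVolume entries to the
        # provider; the real code would call something like the provider's
        # persistent_storage() here.
        for requirement in self.requirements:
            for key, spec in requirement.items():
                if key == "persistentVolume":
                    logger.debug("Dispatching persistentVolume %s to %s",
                                 spec.get("name"), self.provider_key)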

@cdrage cdrage force-pushed the persistent-storage branch 6 times, most recently from 93140ad to a0073c7 Compare December 18, 2015 18:16
cmd = [self.kubectl, "create", "-f", tmp_file, "--namespace=%s" % self.namespace]
if self.config_file:
cmd.append("--kubeconfig=%s" % self.config_file)
self._call(cmd)
Member Author

should we add deleting / removing persistent storage or just running?

Contributor

I guess for now we can stick with this and revisit when we do uninstall.

Member Author

Ping @dustymabe: how should we check before running this? Another if statement at the beginning of the function to check which actions are available?

Contributor

Right.. I would move "checks" to the beginning of the function. No reason to do work if we fail the checks

Contributor

Given this is "persistent" we probably don't want to ever be deleting. I believe OpenShift does not delete storage when all objects are destroyed. It's treated specially so as to do no unexpected harm.

@cdrage cdrage added the done label Dec 18, 2015
@cdrage cdrage changed the title [WIP] Add persistent storage Add persistent storage Dec 18, 2015
@cdrage
Member Author

cdrage commented Dec 18, 2015

Oh, another note: logging is starting to get SO MESSY. Way too much logging coming out now when running each application; this will be fixed in #134.

@dustymabe
Contributor

Ok.. ran through it once.. not sure I quite understand it all just yet.. still digesting. I think this would be better/easier to understand if you broke up the big commit into smaller commits with good commit messages. Let's chat later today or next week about the architecture.

# Pass configuration, path of the app, graph, provider as well as dry-run for provider init()
if REQUIREMENTS_KEY in self.graph[0]:
logger.debug("Requirements key detected. Running action.")
Requirements(self.config, self.basepath, self.graph[0][REQUIREMENTS_KEY], provider_key, dryrun).run()
Member Author

should be

requirements = Requirements(etc...)
requirements.run()

@cdrage
Member Author

cdrage commented Dec 18, 2015

Merged in changes from @dustymabe's suggestions :) Let's split this into different commits and have a look at this after the weekend.

@cdrage cdrage force-pushed the persistent-storage branch 2 times, most recently from 806a5ba to 1aaa682 Compare December 21, 2015 16:55
@cdrage
Member Author

cdrage commented Dec 21, 2015

Okay, so I've split the commits and added a check of kubectl get pv to see whether any persistent volumes actually exist before claiming them (with persistentVolumeClaim).

Ping @dustymabe

The only thing left is to add the documentation.

We will need a discussion with @bexelbie about this. With a new feature like this, it's hard for users to see the significant changes to atomicapp if we only update the Nulecule spec. Detailed documentation that lists all the Nulecule functionality atomicapp supports should live here. I can volunteer to help add this.

Side note:
Let's add OpenShift functionality after this PR, once we know Kubernetes works 100%. (And possibly Docker with the Flocker plugin? Maybe; not many people are using it at the moment, IMO.)
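
A rough sketch of what the kubectl get pv check described above might look like inside the Kubernetes provider; self.kubectl, self.namespace, self.config_file and self._call are taken from the diff excerpts in this thread, while the exact return value of _call and the parsing are assumptions.

import logging

logger = logging.getLogger(__name__)

def _check_persistent_volumes(self):
    # Hypothetical helper (would be a method on the provider): ask kubectl
    # for the persistent volumes and warn if none exist, since any
    # persistentVolumeClaim would then stay Pending.
    cmd = [self.kubectl, "get", "pv", "--namespace=%s" % self.namespace]
    if self.config_file:
        cmd.append("--kubeconfig=%s" % self.config_file)
    output = self._call(cmd)
    # "kubectl get pv" prints a header line; anything beyond it is a volume.
    if len(output.splitlines()) <= 1:
        logger.warning("No persistent volumes detected; claims will stay Pending.")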

# Pass configuration, path of the app, graph, provider as well as dry-run for provider init()
if REQUIREMENTS_KEY in self.graph[0]:
logger.debug("Requirements key detected. Running action.")
r = Requirements(self.config, self.basepath, self.graph[0][REQUIREMENTS_KEY], provider_key, dryrun)
Contributor

Split up this line.. too long
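
For example, the call could be wrapped like this (argument names taken from the diff above, with the run() call suggested earlier in the thread):

r = Requirements(self.config, self.basepath,
                 self.graph[0][REQUIREMENTS_KEY],
                 provider_key, dryrun)
r.run()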

@cdrage cdrage force-pushed the persistent-storage branch 2 times, most recently from 117e7cd to b0970fc Compare December 22, 2015 15:47
"%s action is not available for provider %s. Doing nothing." %
(action, self.key))
return

Member Author

@dustymabe like this? don't see any other way since each cmd call will be different for delete / create / etc. when more functions are added.

Contributor

Would rather do:

if action not in ['run']:
    logger.warning()

In the future when we add more it would be:

if action not in ['run', 'stop', etc..]:
    logger.warning()

Member Author

updated ^^
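
Put together, the updated guard at the top of the provider's persistent_storage() might look roughly like this; the signature is an assumption, while the method name, the warning text and self.key come from the diff excerpts above.

import logging

logger = logging.getLogger(__name__)

def persistent_storage(self, graph, action):
    # Bail out early for actions this provider does not support yet.
    if action not in ['run']:
        logger.warning(
            "%s action is not available for provider %s. Doing nothing." %
            (action, self.key))
        return
    # ... build the kubectl command and create the claim for 'run'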

@cdrage
Member Author

cdrage commented Dec 22, 2015

Tested with changes and can confirm functionality:

dropbox/dev/atomicapp persistent-storage ✗
▶ kubectl get pv
NAME      LABELS    CAPACITY   ACCESSMODES   STATUS    CLAIM     REASON

dropbox/dev/atomicapp persistent-storage ✗
▶ kubectl get pvc
NAME                   LABELS    STATUS    VOLUME
var-lib-mongodb-data   map[]     Pending
var-log-mongodb        map[]     Pending

dropbox/dev/atomicapp persistent-storage ✗
▶ kubectl get po
NAME                   READY     STATUS                                                RESTARTS   AGE
helloapache            0/1       Image: centos/httpd is ready, container is creating   0          1m
k8s-master-127.0.0.1   3/3       Running                                               5          1m

@cdrage cdrage force-pushed the persistent-storage branch 2 times, most recently from 377c149 to ea722d5 Compare December 22, 2015 16:17
p = plugin.getProvider(provider)
self.provider = p(config, basepath, dryrun)
self.provider.init()

Member Author

^^

@dustymabe
Contributor

@goern @aweiteka, we would really like to get your input on this before we merge. If you are available today or tomorrow before the holidays, that would be great.

from atomicapp.plugin import Plugin

plugin = Plugin()
plugin.load_plugins()
Contributor

move to inside of __init__
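
i.e. something along these lines; the owning class and its signature are assumptions, and the only point is moving the plugin loading into __init__:

from atomicapp.plugin import Plugin

class Requirements(object):  # whichever class currently does the module-level loading
    def __init__(self, config, basepath, requirements, provider_key, dryrun):
        # Load provider plugins when an instance is created instead of at
        # module import time.
        self.plugin = Plugin()
        self.plugin.load_plugins()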

@aweiteka
Contributor

Sorry I haven't been following this from the start. Question: where is the volume supposed to be mounted so the application can actually use it? In my experience, if persistent storage is needed it's defined in a template and referenced in the pod spec so it's actually mounted. But to specify all of that in Nulecule feels like we're duplicating Kubernetes.

I think the spec is lacking here but this looks like a good start to implement what we have. LGTM

@goern
Contributor

goern commented Dec 27, 2015

@aweiteka I think you got the approach right: the Nulecule Specification is making a statement about what type of storage an application needs, Atomic App is going to auto-generate a Kubernetes/OpenShift snippet for that, and the application can use it. We can also keep the storage requirements within the artifacts of the provider, but then we will obfuscate the application, e.g. the deployment guy will not be able to parameterize the size or type of the storage.

Furthermore, what is the spec missing wrt a storage requirement?
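
A hypothetical illustration of the translation @goern describes: a storage requirement from the Nulecule file becomes a Kubernetes PersistentVolumeClaim that the provider can write to a temp file and pass to kubectl create -f. The requirement field names and the accessMode mapping are assumptions, not the exact spec.

def requirement_to_pvc(requirement):
    # Turn a Nulecule persistentVolume requirement into a v1
    # PersistentVolumeClaim manifest (simplified; no accessMode mapping).
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": requirement["name"]},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "resources": {"requests": {"storage": "%sGi" % requirement["size"]}},
        },
    }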

@dustymabe
Contributor

Considering @aweiteka's new comment in projectatomic/nulecule#193:

the application artefact will define how it wants to use the volume, where it should be mounted. We don't need to provide a path in the spec.

I think we should move forward with this review. @aweiteka @goern can you please review so that we can get this merged?

cmd.append("--kubeconfig=%s" % self.config_file)
lines = self._call(cmd)

# If there are no persistent volumes to claim, warn the user
Contributor

There are more error cases. I can think of two:

  1. A claim with that name has already been created.
  2. The user is claiming more space than allowed.

These (and other errors like this that I can't think of) can generally be caught with one check, I would think.

Member Author

Ah yes, this should error out automatically when calling the API, though. But it would be nice to parse the error and produce good logging output for this :)
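
A hypothetical sketch of that idea: wrap the kubectl call and turn an API failure (duplicate claim, quota exceeded, ...) into readable logging rather than a raw traceback. The helper name and the assumption that _call raises on a non-zero exit are mine, not the PR's.

import logging

logger = logging.getLogger(__name__)

def create_claim(provider, cmd, claim_name):
    # Run the kubectl command via the provider and log a friendly message
    # if the API rejects the claim for any reason.
    try:
        return provider._call(cmd)
    except Exception as err:
        logger.error("Creating persistent volume claim '%s' failed: %s",
                     claim_name, err)
        raise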

@aweiteka
Contributor

aweiteka commented Jan 6, 2016

Nice work! LGTM.

@cdrage
Member Author

cdrage commented Jan 6, 2016

^^ Changed the small in-code comment per @vpavlin's comment :) I'll update notes / comments unrelated to persistent storage in a different PR.

@dustymabe
Contributor

@cdrage, since there has been a lot of work in master, can we get a rebase on it? Might be good to test on top of master anyway.

@cdrage
Member Author

cdrage commented Jan 6, 2016

Rebased and checks pass!

@dustymabe
Contributor

With @aweiteka's LGTM and a LGTM from me.. merging.

dustymabe added a commit that referenced this pull request Jan 7, 2016
Add persistent storage for kubernetes
@dustymabe dustymabe merged commit c2ec7fb into projectatomic:master Jan 7, 2016