CronJob daemonset (previously ScheduledJob) #36601

Closed
rothgar opened this Issue Nov 10, 2016 · 31 comments

Comments

@rothgar
Contributor

rothgar commented Nov 10, 2016

Feature Request:

  • Ability to run a scheduled job as a DaemonSet so it can run on each node, e.g. image garbage collection with something like docker-gc

Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.4", GitCommit:"3b417cc4ccd1b8f38ff9ec96bb50a81ca0ea9d56", GitTreeState:"clean", BuildDate:"2016-10-21T02:48:38Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.4", GitCommit:"3b417cc4ccd1b8f38ff9ec96bb50a81ca0ea9d56", GitTreeState:"clean", BuildDate:"2016-10-21T02:42:39Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}

What you expected to happen:
The ability to target more than one node in a scheduled job. If I specify a nodeSelector for workers, the job will only run on one worker node. I would like the job to run on all worker nodes.

How to reproduce it (as minimally and precisely as possible):

apiVersion: batch/v2alpha1
kind: ScheduledJob
metadata:
  name: hello
spec:
  schedule: 0/1 * * * ?
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
          nodeSelector:
            nodetype: worker

I think there is a notion of a node service being worked on, but (a) I couldn't find the link/doc describing how it works and (b) I don't think it applies to scheduled jobs.

@bgrant0607
Member

bgrant0607 commented Nov 17, 2016

Would be possible if ScheduledJob could launch a Template.

cc @erictune @pwittrock

@kargakis
Member

kargakis commented Nov 18, 2016

cc: @soltysh

@soltysh soltysh changed the title from ScheduledJob daemonset to CronJob daemonset (previously ScheduledJob) Nov 21, 2016

@soltysh
Contributor

soltysh commented Nov 21, 2016

Would be possible if ScheduledJob could launch a Template.

Yup, I don't see any problems with CJ creating any type of resource available in the cluster.

First I'd like to address the most important issues, so that we can graduate CJ to beta (maybe in 1.6), and then add features such as this while in beta. @erictune, agreed?

@soltysh soltysh self-assigned this Nov 21, 2016

@bgrant0607
Member

bgrant0607 commented Nov 22, 2016

This could also just be a DaemonSet of containers that sleep to create whatever duty cycle they want.
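
A minimal sketch of that approach, assuming an apps/v1 cluster; the name and the echoed command are placeholders for a real per-node task such as docker-gc:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-gc
spec:
  selector:
    matchLabels:
      app: node-gc
  template:
    metadata:
      labels:
        app: node-gc
    spec:
      containers:
      - name: gc
        image: busybox
        args:
        - /bin/sh
        - -c
        # do the per-node work, then sleep for a day; repeat forever
        - while true; do echo "running per-node cleanup"; sleep 86400; done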

@rothgar
Contributor

rothgar commented Nov 22, 2016

@bgrant0607 That was actually the route I was going. I had a shell script that would just sleep for a day and then run the commands I wanted. I figured this should be supported natively. Hence this open issue.

@sheerun

sheerun commented Feb 25, 2017

In my case Kubernetes doesn't clear exited containers properly, and I need to run a cleanup job on all workers periodically. This would be a useful feature. For now, probably the only solution is to set up a cron job on each node...

@andrewwebber

andrewwebber commented Feb 25, 2017

For me this is a very important issue, but more in the broader scope of Jobs (not just scheduled ones).
CoreOS has deprecated Fleet (distributed systemd). With Fleet one could broadcast a global systemd unit to run on and update all machines in a cluster with a label selector.

Given a Kubernetes cluster provisioned with no configuration management tool such as Puppet, Fleet or Chef, it would be logical for Kubernetes to support modification of itself using just Kubernetes. This is why I personally would like a choice in job scheduling:

  • Run or schedule a job on all machines
  • Run or schedule a job on a subset of machines using a label selector
  • Run or schedule a job on a single machine

@kargakis
Member

kargakis commented Feb 26, 2017

Maybe CronJobs could have a node selector: if it is specified but empty, run on every node; if it is specified and non-empty, run on the selected nodes; if it is unspecified, run as CronJobs do currently.
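
For illustration only, a hypothetical manifest for that proposal; the nodeSelector field shown here does not exist in the CronJob API, and its name and placement are invented:

apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: node-cleanup
spec:
  schedule: "0 3 * * *"
  # hypothetical field from the proposal above: an empty selector would
  # mean "run one Job pod per node, on every node"
  nodeSelector: {}
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cleanup
            image: busybox
            args: ["/bin/sh", "-c", "echo cleaning up this node"]
          restartPolicy: OnFailure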

@soltysh
Contributor

soltysh commented Feb 27, 2017

Although I really like the NodeSelector idea, we need to be very careful about it. We are currently struggling with explaining RestartPolicy, which applies to a Pod, not to the Job itself.

Having said that, I'm thinking maybe we should have a similar primitive for running Jobs on all nodes, like we have Deployments and DaemonSets. Eventually, I'm hoping (#28349) CronJobs will allow scheduling any kind of object, so that part will be the least of the problems. @kubernetes/sig-apps-feature-requests @erictune wdyt?

@kow3ns
Member

kow3ns commented Mar 1, 2017

How will CronJob interact with Nodes that are added to or removed from the cluster during a scheduled execution?

  1. If a node is removed from the cluster during execution, does that constitute a failure for that scheduled execution of the CronJob?
  2. If a node is added (say via auto-scaling) during execution of the current iteration of the CronJob, should the currently running CronJob schedule a new Pod on that node as well, or will the CronJob only run on that node during the next scheduled execution?

What are the guarantees, if any, that we will provide with respect to execution under these conditions?

@soltysh
Contributor

soltysh commented Mar 2, 2017

@kow3ns good points. I'm guessing that would depend on the action performed and should be configurable by a user. Having a strict requirement to go with one option or the other isn't a good way to go, imo.

@etoews
Contributor

etoews commented May 13, 2017

Here's my workaround until this feature becomes a reality: Run Once DaemonSet on Kubernetes.
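
The shape is the same as the DaemonSet sketch earlier in the thread; only the container command changes from a repeating loop to a one-shot task that then blocks. This is a sketch of the general run-once pattern, not necessarily the exact manifest from that post:

      containers:
      - name: run-once
        image: busybox
        args:
        - /bin/sh
        - -c
        # do the one-off per-node work, then block so the kubelet does not restart the container
        - echo "one-off node setup"; while true; do sleep 3600; done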

@kargakis
Member

kargakis commented Jun 10, 2017

Older issue asking about this: #17182

@taherv

taherv commented Jun 23, 2017

Thanks for the workaround @everett-toews, but some work needs to be done to ensure that workaround actually works when nodes are added to a cluster :(

Looking at it another way, and I'm sure we must have thought about this, why don't we use "exit code of 0" from a daemonset to mean "please don't restart"? This approach has several advantages:

  1. Use the plain old DaemonSet object (no customizations necessary)
  2. Avoid unnecessary sleep-forever pods (and dirty scripting just to keep something alive for no reason)

The open question then would be: what is the status of the DaemonSet if the pod has exited successfully? Is there a "Completed" status? I guess we could use the same status as a completed job?

Thoughts?

@kargakis
Member

kargakis commented Jun 24, 2017

DaemonSets are designed for long-running daemons, not for batch workloads. I still prefer we solve this in the CronJob API.

@rootfs
Member

rootfs commented Jul 31, 2017

I'll contribute a use case for a run-once daemonset (details at ceph/ceph-container#733).

When installing a Ceph OSD daemonset on a node, we need to properly initialize the devices (make a filesystem and get keyrings; see ceph-disk prepare) before using them (i.e. ceph-disk activate).

So we need two DS here: ceph-osd-prepare and ceph-osd-activate, where ceph-osd-prepare is a run-once DS and ceph-osd-activate is a long-running DS.

@soltysh
Contributor

soltysh commented Aug 4, 2017

I'll contribute a use case for a run-once daemonset (details at ceph/ceph-container#733).

This sounds more like a job daemonset rather than a cronjob one.

@kow3ns
Member

kow3ns commented Aug 4, 2017

@rootfs Why can't you prepare and activate the osd storage using init containers on the DaemonSet?
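
A rough sketch of that suggestion for the Ceph case; the image and device path are placeholders, and the privileged/volume settings a real OSD needs are omitted (the actual manifests live in ceph/ceph-container#733):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ceph-osd
spec:
  selector:
    matchLabels:
      app: ceph-osd
  template:
    metadata:
      labels:
        app: ceph-osd
    spec:
      initContainers:
      # run-once preparation: make the filesystem and fetch keyrings
      - name: osd-prepare
        image: ceph/daemon              # placeholder image
        command: ["ceph-disk", "prepare", "/dev/sdb"]
      containers:
      # long-running daemon: activate and serve the OSD
      - name: osd-activate
        image: ceph/daemon              # placeholder image
        command: ["ceph-disk", "activate", "/dev/sdb"]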

@rootfs
Member

rootfs commented Aug 4, 2017

@kow3ns good point, let me try that, thanks for the pointer.

@resouer
Member

resouer commented Aug 20, 2017

DaemonSets are designed for long-running daemons, not for batch workloads. I still prefer we solve this in the CronJob API.

@kargakis Thanks for the redirect. I would like to continue the discussion here. Covering my use case (a run-once job on every node) with CronJob seems to make things more complicated:

  1. How does the CronJob API fit the feature of "spreading to all Nodes"? Let's say we add an "every node" option, as previously discussed in this thread.
  2. But the Job daemon (a run-once job) has no requirement for "Cron", so any schedule such as 0/1 * * * ? will not make sense in this case.
  3. Or are we actually mixing two issues here: a CronJob which can run on every node, and a Job which can run on every node? That reminds me: can we extract the "run on every node" logic out, so that DaemonSet, CronJob and Job can all share it? Then both issues could be fixed in an elegant way.

cc @kow3ns

@guangxuli
Contributor

guangxuli commented Aug 21, 2017

/cc

@iMartyn

iMartyn commented Nov 30, 2017

Another thing to think about if using CronJobs is timing. If you want something running at intervals on all your nodes, you probably don't want them all running in the same minute (stampede). This would be a case for something like a ~ operator, for instance:

~5 * * * *

could run close to :05 every hour, but, being determined randomly by each node, should not (whilst possible, it is not likely) run on all nodes at the same time.

This would be useful for cronjobs in general, not just run-once cronjobs.
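
Pending such an operator, one workaround is to add the jitter inside the container command itself. A sketch of a container fragment, assuming an image whose shell provides $RANDOM (the bash image here is just an example):

containers:
- name: task
  image: bash                       # assumed: any image whose shell supports $RANDOM
  command:
  - bash
  - -c
  # sleep a random 0-299 seconds before the work to avoid a node-wide stampede
  - sleep $((RANDOM % 300)); echo "running the per-node task"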

@bgrant0607
Member

bgrant0607 commented Nov 30, 2017

A once-per-node cronjob would be non-trivial to make work reliably, since pods won't necessarily be able to schedule on all selected nodes. Theoretically that problem exists for Jobs already, but it probably isn't as much of a problem in practice.

Running an actual DaemonSet solves that problem. Could cron be put into a container?
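
One way to do that, as a sketch: a DaemonSet that runs busybox crond in the foreground with an inline crontab (the schedule and the echoed command are placeholders):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-cron
spec:
  selector:
    matchLabels:
      app: node-cron
  template:
    metadata:
      labels:
        app: node-cron
    spec:
      containers:
      - name: cron
        image: busybox
        args:
        - /bin/sh
        - -c
        # write a crontab for root, then run busybox crond in the foreground
        - mkdir -p /var/spool/cron/crontabs; echo "5 * * * * echo hourly node task" > /var/spool/cron/crontabs/root; exec crond -f -l 8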

@RenaudWasTaken
Member

RenaudWasTaken commented Jan 11, 2018

Hello!

DaemonSets are designed for long-running daemons, not for batch workloads. I still prefer we solve this in the CronJob API.

I'd like to contribute another use case: running a job on each node to detect GPUs and label the node accordingly.

Currently for mixed clusters (GPU and non-GPU nodes), which are pretty common, the NVIDIA GPU device plugin, which is a daemonset, will be deployed on all nodes.

When it detects that a node does not have any GPUs, it will just select{} forever. A better solution would be to either:

  • Run a job on each node (and on nodes that join) assigning GPU labels, and have the GPU device plugin be gated by a node selector
  • Have the GPU device plugin only restart on failure

I believe the second option was suggested by @resouer above.
Are there any patterns I'm missing that might solve this (other than having cluster admins label their nodes manually)?

@rothgar
Contributor

rothgar commented Jan 11, 2018

Having a cron target of @reboot would be really handy for this, but we'd need a way to do it on a per-node basis. It would be much cleaner than long-running daemons, but may have complications with nodes where GPUs are hot-pluggable.

If we do @reboot, can we also make sure we get @teatime?

@RenaudWasTaken
Member

RenaudWasTaken commented Jan 12, 2018

but may have complications with nodes where GPUs are hot-pluggable.

FWIW we don't really support hot-pluggable GPUs, except with very specific hardware (i.e. some motherboards, ...).

@enisoc
Member

enisoc commented Jan 12, 2018

@RenaudWasTaken wrote:

I'd like to contribute another use case: running a job on each node to detect GPUs and label the node accordingly.

What about a DaemonSet with node anti-affinity for any Node that has already been labeled (one way or another) with the relevant label key? Assuming the DS re-syncs when Node labels change, the controller should remove the Pod after it does its job.

@RenaudWasTaken
Member

RenaudWasTaken commented Jan 12, 2018

What about a DaemonSet with node anti-affinity for any Node that has already been labeled (one way or another) with the relevant label key? Assuming the DS re-syncs when Node labels change, the controller should remove the Pod after it does its job.

I think we can do that for now, but that sounds like a side effect that I'm not sure we can rely on.
Anyways I'll be testing that :)
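
A sketch of the suggestion @enisoc describes above; the label key and the container command are made-up placeholders, and a real labeler would also need RBAC permission to patch its Node:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: gpu-labeler
spec:
  selector:
    matchLabels:
      app: gpu-labeler
  template:
    metadata:
      labels:
        app: gpu-labeler
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              # only schedule onto nodes that have not been labeled yet
              - key: example.com/gpu-checked
                operator: DoesNotExist
      containers:
      - name: labeler
        image: busybox                # placeholder; a real labeler image would call the API to label the node
        args:
        - /bin/sh
        - -c
        # placeholder: detect GPUs and add the label, then idle until the controller removes the pod
        - echo "would detect GPUs and label this node"; while true; do sleep 3600; done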

@mumoshu mumoshu referenced this issue in kubernetes-incubator/kube-aws Feb 22, 2018: Implement fast worker roll #908

@kow3ns kow3ns added this to Backlog in Workloads Feb 27, 2018

wallrj added a commit to wallrj/navigator that referenced this issue Mar 14, 2018

Remove the sysctl feature and add documentation instead
* Remove sysctl from example and test manifests.
* Remove sysctl from the API.
* Remove the sysctl init containers
* Remove the sysctl API validation code
* Add documentation about Elasticsearch OS configuration
* Add a daemonset to configure the E2E nodes with the required virtual memory settings (kubernetes/kubernetes#36601 would be a better solution).

Fixes: #286

wallrj added three further commits to wallrj/navigator referencing this issue on Mar 15 and Mar 16, 2018, with the same change description ("Remove the sysctl feature and add documentation instead", fixes #286).

@fejta-bot

fejta-bot commented Apr 12, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@fejta-bot

fejta-bot commented May 12, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@bgrant0607
Member

bgrant0607 commented May 14, 2018

I can't imagine we'll do this.

@bgrant0607 bgrant0607 closed this May 14, 2018

Workloads automation moved this from Backlog to Done May 14, 2018
