Add image pruning #1841

Merged
merged 21 commits into openshift:master from image-pruning
May 22, 2015

Conversation

ncdc
Contributor

@ncdc ncdc commented Apr 21, 2015

No description provided.


const minimumImageAgeInMinutesToPrune = 60

func (c *PruningController) addImagesToGraph(g graph.Graph, images *imageapi.ImageList) {
Contributor

Style note: this really doesn't have anything to do with the pruning controller. Just make it a standalone addImagesToGraph function.

Contributor Author

I considered that, but I need c.registryURL. Do you want me to pass that in?

Contributor

Create

type PruneImagesAlgorithm struct {
  // things needed for prune
}

func (PruneImagesAlgorithm) Run() {}

I'd like the traversal code to be separate from the graph setup code, which in turn should be separate from the actual "things to delete". For instance, creating the image graph is one step (NewPruneImagesAlgorithm()); then you call Run(fn func(Image)) or something and it returns a count of the number of images affected.
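
Roughly this shape, as a sketch (Image and Graph here are placeholder types standing in for imageapi.Image and graph.Graph; this is not the PR's code):

package prune

// Image and Graph are illustrative stand-ins for imageapi.Image and graph.Graph.
type Image struct{ Name string }

type Graph struct{ unreferenced []Image }

// PruneImagesAlgorithm holds everything the traversal needs; graph setup
// happens in the constructor, not in the algorithm itself.
type PruneImagesAlgorithm struct {
  g Graph
}

// NewPruneImagesAlgorithm is the graph-setup step.
func NewPruneImagesAlgorithm(g Graph) PruneImagesAlgorithm {
  return PruneImagesAlgorithm{g: g}
}

// Run walks the graph, invokes fn for each prunable image, and returns the
// number of images affected. Deleting (or just printing) lives entirely in fn.
func (a PruneImagesAlgorithm) Run(fn func(Image)) int {
  for _, img := range a.g.unreferenced {
    fn(img)
  }
  return len(a.g.unreferenced)
}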


Contributor Author

so would you do this?

type ImagePruner struct {
  g graph.Graph
  images client.ImageInterface
  // ...
}

And what would the func that takes an Image that you pass to Run do?

Contributor


I don't think the Pruner needs to know about client.ImageInterface - the function you pass to Run would. That way you can do a dry run without having to change the algorithm.

So a set of stages:

  1. Need to load and fetch objects and add them to a graph
  2. Identify any images that aren't referenced by the graph
  3. Delete those images

But you could replace #3 with printing the images out. Or replace #3 with a function that deletes the images AND removes them from an IR. Or one that removes them from the IR and then deletes them later.

It's important to keep that last step separate from the first two.
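
A rough illustration of how step 3 stays swappable, assuming the Run(fn func(Image)) shape from above; Image, imageDeleter, dryRun, and deleteImages are all made-up names for the sketch:

package prune

import "fmt"

// Image is an illustrative stand-in for imageapi.Image.
type Image struct{ Name string }

// imageDeleter is a made-up interface for whatever ends up issuing the delete.
type imageDeleter interface {
  DeleteImage(name string) error
}

// dryRun is one possible step-3 function: it only reports what would be pruned.
func dryRun(img Image) {
  fmt.Printf("would prune image %s\n", img.Name)
}

// deleteImages builds a step-3 function that actually deletes; the traversal
// in steps 1 and 2 never needs to see the client.
func deleteImages(c imageDeleter) func(Image) {
  return func(img Image) {
    if err := c.DeleteImage(img.Name); err != nil {
      fmt.Printf("error pruning image %s: %v\n", img.Name, err)
    }
  }
}

Passing dryRun to Run gives a report with no deletions; passing deleteImages(client) does the real thing, without touching the traversal code.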



Contributor Author

Something along these lines?

type ImagePruner interface {
  ListPrunableImages() []*imageapi.Image
}

type imagePruner struct {
  g graph.Graph
}

func NewImagePruner(images client.ImageInterface, streams ...) ImagePruner {
  g := graph.New()
  addImagesToGraph(g, images)
  // ...
  return &imagePruner{g:g}
}

Contributor Author

Or should NewImagePruner take the actual lists of images, streams, pods, rcs that have been retrieved elsewhere?

Contributor

Yeah, that's probably the easiest way. You could configure your own graph or call the New method.
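
A sketch of that constructor shape (the empty list types below are placeholders for imageapi.ImageList, imageapi.ImageStreamList, kapi.PodList, and kapi.ReplicationControllerList; only the signature and the no-client property are the point):

package prune

// Placeholder list types; the constructor signature is what matters here.
type (
  ImageList                 struct{}
  ImageStreamList           struct{}
  PodList                   struct{}
  ReplicationControllerList struct{}
  Image                     struct{ Name string }
)

type ImagePruner interface {
  ListPrunableImages() []*Image
}

type imagePruner struct {
  prunable []*Image
}

// NewImagePruner takes lists the caller has already retrieved, so the pruner
// itself never needs a client and can be handed fixtures in a unit test.
func NewImagePruner(images ImageList, streams ImageStreamList, pods PodList, rcs ReplicationControllerList) ImagePruner {
  p := &imagePruner{}
  // The addImagesToGraph / addImageStreamsToGraph / addPodsToGraph /
  // addReplicationControllersToGraph steps would build the graph here and
  // fill p.prunable from whatever is left unreferenced.
  return p
}

func (p *imagePruner) ListPrunableImages() []*Image {
  return p.prunable
}

Because the pruner only sees data, a dry run or a unit test can construct it from canned lists.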


@ncdc ncdc force-pushed the image-pruning branch 9 times, most recently from 2a323e4 to 6cc747e on April 27, 2015 19:40
@bparees
Contributor

bparees commented Apr 28, 2015

Can you summarize somewhere the overall pruning strategy? What gets pruned, what is sufficient to prevent something from being pruned, etc.?

It looks like any image that's the current image for a tag won't be pruned, but it's less clear how running pods and RCs are intended to factor in (it looks like they keep images around). What about a BuildConfig that references a particular ImageStreamImage? We should also note that pruning old images could make it impossible to rerun a previous build, since the exact image it used won't be available anymore (not suggesting we shouldn't prune those, but it's a ramification that should be doc'd).

@ncdc
Contributor Author

ncdc commented Apr 28, 2015

The description is here: https://trello.com/c/dWeQNKRL

BuildConfigs are not taken into account at the moment. Should they be?


@bparees
Contributor

bparees commented Apr 28, 2015

Seems like they should be. Again, ImageStream and ImageStreamTag references from BuildConfigs are presumably already covered by the "image that represents the current state of a tag in a stream" rule, but at a minimum an ImageStreamImage reference from a BuildConfig should prevent pruning.

What we want to do about Builds that consumed a particular Image is less clear to me, but it seems like at least an option to not prune those images would be a good idea. Or better, not pruning them by default.

@ncdc
Contributor Author

ncdc commented Apr 28, 2015

Let's discuss build configs soon.

If we have a plan for pruning builds, I'm fine including them in the criteria for image pruning.


@ncdc ncdc force-pushed the image-pruning branch 2 times, most recently from 7a9aa6a to 50da0f0 on April 29, 2015 19:54
// addBuildConfigsToGraph adds build configs to the graph.
//
// Edges are added to the graph from each build config to the image specified by its strategy.from.
func addBuildConfigsToGraph(g graph.Graph, bcs *buildapi.BuildConfigList, algorithm pruneAlgorithm) {
Contributor

addBuild* methods look right to me.
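
For context, a very rough sketch of the kind of edge such a method adds; the graph and build-config types below are invented for illustration and are not the PR's graph or buildapi API:

package prune

// Hypothetical stand-ins for graph.Graph and buildapi.BuildConfig.
type node int

type buildGraph struct {
  nodes map[string]node
  edges map[node][]node
}

func (g *buildGraph) addNode(name string) node {
  if g.nodes == nil {
    g.nodes = map[string]node{}
  }
  if n, ok := g.nodes[name]; ok {
    return n
  }
  n := node(len(g.nodes))
  g.nodes[name] = n
  return n
}

func (g *buildGraph) addEdge(from, to node) {
  if g.edges == nil {
    g.edges = map[node][]node{}
  }
  g.edges[from] = append(g.edges[from], to)
}

type buildConfig struct {
  Name         string
  StrategyFrom string // image referenced by the build config's strategy.from
}

// addBuildConfigsToGraph links each build config to the image its strategy
// references, so that image counts as "in use" and survives pruning.
func addBuildConfigsToGraph(g *buildGraph, bcs []buildConfig) {
  for _, bc := range bcs {
    bcNode := g.addNode("BuildConfig/" + bc.Name)
    if bc.StrategyFrom == "" {
      continue
    }
    g.addEdge(bcNode, g.addNode("Image/"+bc.StrategyFrom))
  }
}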

@ncdc ncdc force-pushed the image-pruning branch 4 times, most recently from a9b2191 to e7b0136 on May 6, 2015 19:30

/*
NewImagePruner creates a new ImagePruner.

Contributor

This is the right place to have a high-level comment describing the lack of a delete API in the registry and explaining why this is factored like it is.

@ncdc ncdc changed the title from "[WIP, DO_NOT_MERGE] Image pruning" to "WIP - Image pruning" on May 20, 2015
@@ -9,7 +9,7 @@ import (
)

type Node struct {
-  concrete.Node
+  graph.Node
Contributor

Why this change?

Contributor Author

I thought it was needed for something I was working on a while ago, but it's not - I can undo it.

@smarterclayton
Contributor

Not a lot of comments

@ncdc
Contributor Author

ncdc commented May 21, 2015

@smarterclayton did you look at my changes to Godeps?

@smarterclayton
Contributor

Yes, nothing particularly jumped out as bad.

@ncdc
Contributor Author

ncdc commented May 21, 2015

Cool, thanks. I'm working on a bit more cleanup (to hopefully get rid of the retries when updating the image streams).

@ncdc
Contributor Author

ncdc commented May 21, 2015

[test]

@openshift-bot
Contributor

continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin/2331/)

@ncdc
Contributor Author

ncdc commented May 22, 2015

@smarterclayton ok to merge so QE can start testing? Or perhaps a fork AMI?

@derekwaynecarr
Member

This was lgtm as well. Forgot to comment as such.


@smarterclayton
Contributor

Go ahead and merge, there's really no negative impact.


@ncdc ncdc changed the title from "WIP - Image pruning" to "Add image pruning" on May 22, 2015
@ncdc
Contributor Author

ncdc commented May 22, 2015

Ok, hopefully my changes to the registry to support pruning don't have any adverse effects. I've been testing locally and haven't had any issues pushing/pulling images.

[merge]

@openshift-bot
Contributor

continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/merge_pull_requests_origin/2055/) (Image: devenv-fedora_1597)

@openshift-bot
Contributor

Evaluated for origin up to 05d1dae

openshift-bot pushed a commit that referenced this pull request May 22, 2015
@openshift-bot openshift-bot merged commit f63dad9 into openshift:master May 22, 2015
jboyd01 pushed a commit to jboyd01/origin that referenced this pull request Mar 26, 2018