
[Feature request] Dry run mode for prune commands #30623

Open
albers opened this issue Feb 1, 2017 · 101 comments
Labels
area/cli exp/beginner kind/feature

Comments

@albers
Member

albers commented Feb 1, 2017

The new prune commands are potentially dangerous, and I find it hard to predict what would actually be deleted. The user might use docker system df and docker image ls --filter dangling=true to get at this information, but this seems like an unnecessary step.

I propose to add a new operating mode where the normal output is produced without actually deleting the objects.

$ docker system prune --dry-run
Warning: Running in dry run mode, not actually deleting anything.
Deleted Volumes:
dockerdev-go-pkg-cache-gopath
dockerdev-go-pkg-cache-goroot-linux_amd64_netgo

Total reclaimed space: 112.1 MB

The user could then decide to run the same command again without --dry-run.

@AkihiroSuda added the area/cli, exp/beginner, and kind/feature labels Feb 1, 2017
@jphuynh
Contributor

jphuynh commented Feb 1, 2017

I would like to try implementing that. #dibs

@KarthikNayak
Contributor

KarthikNayak commented Mar 1, 2017

@jphuynh If you're not working on this, can I pick this up?

@chaynika

chaynika commented Mar 2, 2017

@jphuynh Is it okay if I work on this?

@jphuynh
Contributor

jphuynh commented Mar 2, 2017

Hiya. Yes, sorry, I've been meaning to work on this, but it would take me quite some time. I'm happy for someone else to take it, as I won't have much time in the next few weeks.

@chaynika

chaynika commented Mar 2, 2017

Thank you!

@cpuguy83
Member

cpuguy83 commented Mar 2, 2017

I like dry-run... the original design had a "preview" of what would happen and a "would you like to continue?" prompt; the problem with such an approach is that it is racy.

I think something like --dry-run makes sense and implies that the result you get may not be exactly what the dry-run said.

ping @mlaventure WDYT?

@thaJeztah
Member

I agree, as long as we clearly communicate / document that it may be an approximation, I think it would be a nice addition

@mlaventure
Contributor

SGTM. I didn't put the preview back when moving the pruning to the daemon side because I was trying to get mostly accurate data.

If we don't mind some inaccuracy (especially where images are concerned), it should be easy to add.

@boaz0
Member

boaz0 commented Mar 15, 2017

Hey @chaynika,
Did you get the chance to work on this?
Do you want some help?

@chaynika

chaynika commented Mar 15, 2017 via email

@boaz0
Member

boaz0 commented Mar 19, 2017

@chaynika - in daemon/prune.go you will find the prune functions, such as VolumesPrune.

I guess you should think of a way that doesn't actually remove the resources (hint: e.g. skip this line) but just prints the results.

The common way to do this is probably to transform the --dry-run option into a query parameter called dry_run, which would then be passed to those prune functions.

I hope this helps.
Thanks.
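A minimal sketch of what that could look like, assuming a hypothetical dryRun flag plumbed through from the API layer (the types and helpers below are illustrative stand-ins, not the actual daemon/prune.go code):

```go
package main

import "fmt"

// Volume is an illustrative stand-in for the real volume type.
type Volume struct {
	Name  string
	Size  uint64
	InUse bool
}

// PruneReport mirrors the shape of the report the prune endpoints return.
type PruneReport struct {
	VolumesDeleted []string
	SpaceReclaimed uint64
}

// volumesPrune reports which unused volumes would be removed; it only
// performs the actual removal when dryRun is false.
func volumesPrune(volumes []Volume, dryRun bool, remove func(Volume) error) PruneReport {
	var rep PruneReport
	for _, v := range volumes {
		if v.InUse {
			continue
		}
		if !dryRun {
			if err := remove(v); err != nil {
				continue // skip volumes that fail to delete
			}
		}
		rep.VolumesDeleted = append(rep.VolumesDeleted, v.Name)
		rep.SpaceReclaimed += v.Size
	}
	return rep
}

func main() {
	vols := []Volume{
		{Name: "dockerdev-go-pkg-cache-gopath", Size: 56_000_000, InUse: false},
		{Name: "app-data", Size: 5_000_000, InUse: true},
	}
	rep := volumesPrune(vols, true, func(Volume) error { return nil })
	fmt.Printf("Would delete %v, reclaiming %d bytes\n", rep.VolumesDeleted, rep.SpaceReclaimed)
}
```

The CLI side could then print the usual report prefixed with the dry-run warning from the original proposal, instead of a deletion summary.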

@boaz0
Member

boaz0 commented Mar 28, 2017

@chaynika any progress? 😉

@adhulipa

adhulipa commented Apr 4, 2017

@ripcurld0 May I work on this task?

  • I have downloaded the docker src
  • Built and tested the existing src
  • Familiarising myself with the docker arch -- (client -> server via REST API design)

So far, I've added a --dry-run option to the docker system prune cmd.
I'm working on implementing basic functionality for daemon/prune.VolumesPrune function.
I will post an update shortly to https://github.com/adhulipa/docker

Update: #dibs 😉

@boaz0
Member

boaz0 commented Apr 4, 2017

@adhulipa 👍 😉

@adhulipa

adhulipa commented Apr 4, 2017

Hi @ripcurld0 -- a quick update.

I've implemented a bare-bones version of the dry-run for volumes prune.

  1. We want to implement this feature for images and network pruning as well, right?

  2. Do we usually put up work-in-progress for a CR?
    OR
    Do we put up a CR only after completing the entire issue and including unit-tests?

@boaz0
Member

boaz0 commented Apr 4, 2017

@adhulipa,

  1. Yep, that's correct.
  2. That is a good question. It depends. If the PR/CR is going to be a long one, then adding the "WIP" prefix is probably a good idea. Otherwise, work on it, push the changes to your fork, and open a pull request when it's ready.

Anyway, if you're afraid someone will "hijack" this from you, I understand, but we're playing nice here, so I think you should send your pull request once you have at least something that works (unit tests can come later, although it is preferable to have them alongside the code changes).

I hope that answers your questions.

@thaJeztah
Member

Yes, opening a "WIP" PR is fine, but don't open it too early if it's still at a really early stage. You can push a branch to your repo and post a link to it here.

@adhulipa

adhulipa commented Apr 5, 2017

Thank you @ripcurld0 & @thaJeztah :)

  1. Please find my WIP commit for docker volume prune --dry-run here, on my repo.
    I've also described what testing I have done in a comment at the bottom of the page.

  2. Next steps:
    2.a. I'll implement --dry-run in images, networks, containers and other places that support prune.
    2.b. I'll refactor my code to reduce duplication in the above places.
    2.c. I'll write unit-tests and update the doc.

@adhulipa

adhulipa commented Apr 5, 2017

Update:
2.a. (from above comment)
Implemented ... prune --dry-run for networks and containers, here, in my repo (testing steps in a comment at the bottom of the page).

This can be accessed via docker system prune --dry-run or docker network prune --dry-run (similarly for container).

Pruning images seems to be a little more involved.
I'll work on that next.

@adhulipa

adhulipa commented Apr 7, 2017

Update:

  1. I got image pruning in dry-run to work, although the reclaimed space reported in dry run is incorrect. Working on fixing that.
  2. Working on writing unit-tests

@adhulipa

adhulipa commented Apr 7, 2017

@ripcurld0 @thaJeztah

Looks like image pruning is harder than I expected.

Do you think it makes sense to break this task into:

  1. Dry run for volume pruning & tests (done; wrote some tests. Can finish all other test cases soon)
  2. Dry run for network pruning & tests (impl done; need to write tests)
  3. Dry run for container pruning & tests (impl done; need to write tests)
  4. Dry run for image pruning & tests (impl not done)

If yes, may I create new sub-issues and open a pull request for each subtask?

@boaz0
Member

boaz0 commented Apr 19, 2017

@adhulipa ping: any news? 😃

@adhulipa

Hi @ripcurld0
Image dry pruning is challenging -- I have an implementation that reports the images being untagged but incorrectly reports the space reclaimed (when docker system prune --dry-run or docker image prune --dry-run --all is run).

I will provide more details and a link to my current stash later today (at 9PM PT).

Since it has been two weeks since my last comment, I will put image dry-pruning on hold and open pull requests for volume, network, and container pruning.

@thaJeztah
Member

Perhaps it's acceptable to initially just list the images that will be removed (without size) (wondering)

@adhulipa

adhulipa commented Apr 20, 2017

Two updates:

  1. Image dry pruning is reporting the estimated size as double the correct size, i.e. I'm counting each removable layer twice.
    This is better than what I had before. Until now I had a bug where it was always reported as 0B.

  2. For 14 days I was stuck on a bug where I was supposed to write if imageMeta == nil but had mistakenly written if imageMeta != nil, and I just couldn't figure it out (until I put println everywhere)! For 14 days! :'(
    Man, I feel stupid!

WIP patch: adhulipa@2d54746
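Reporting exactly double the correct size suggests each shared layer is being summed once per image that references it; the usual fix is to deduplicate by layer ID before summing. A minimal sketch of that idea (illustrative types, not the actual moby image store):

```go
package main

import "fmt"

// Layer is an illustrative stand-in; real images reference layers by ChainID.
type Layer struct {
	ID   string
	Size uint64
}

// reclaimableSize sums each removable layer exactly once, deduplicating by
// layer ID so a layer shared between two images is not counted twice.
func reclaimableSize(images [][]Layer) uint64 {
	seen := make(map[string]struct{})
	var total uint64
	for _, img := range images {
		for _, l := range img {
			if _, ok := seen[l.ID]; ok {
				continue // already counted via another image
			}
			seen[l.ID] = struct{}{}
			total += l.Size
		}
	}
	return total
}

func main() {
	shared := Layer{ID: "sha256:base", Size: 50}
	a := []Layer{shared, {ID: "sha256:a", Size: 10}}
	b := []Layer{shared, {ID: "sha256:b", Size: 20}}
	fmt.Println(reclaimableSize([][]Layer{a, b})) // 80, not 130
}
```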

@kaushik94

@cpuguy83 can you give me a small example of how to set up with master? I tried compiling with docker/cli, but docker --version inside the shell (I followed the dockerception code) gives an error. When I do it with 17.06-ce, it doesn't give any error. Can you please help me set it up?

@cpuguy83
Member

The make script in this repo looks for DOCKER_CLI_PATH, which it will mount into the container.
You'll need to make sure this is statically compiled and with GOOS=linux.

@kaushik94

@cpuguy83 sorry for the late reply. Your reply helped me a lot; now it seems to be working again. Not sure how I missed this. Will get back soon. Thanks so much.

@kaushik94

@cpuguy83 I made a CLI-side base PR: docker/cli#2698.
I am working on container prune since it is less complex than the others. Network prune seems complex. Please give me some time to come up with a PR on moby; it should be ready in a couple of days. Thanks for your input.

@kaushik94

@cpuguy83 the validation check is failing. Can you please help me figure out how to fix it? I believe it doesn't have anything to do with moby, but maybe it's because I'm doing a writeString. Also, I haven't written tests to support my feature; could that be a problem?
In the meanwhile, I will also work on moby and on tests for the CLI.

@kaushik94

@cpuguy83 nvm, I fixed it. I just had to pull from the latest master for the vendor updates.

@kaushik94

kaushik94 commented Sep 14, 2020

@cpuguy83 @albers I added a PR for container prune: #41449
This works as expected for now, but I will check for edge cases and include tests. Please let me know if I missed something.

@peabnuts123

@kaushik94 I respect you for your dedication and your hard work

@kaushik94

@peabnuts123 thanks. That means a lot.

@kaushik94

I am writing applications for college, so I will be busy for a couple of days. I will pick this up once that is done.

@jhollowe

jhollowe commented Jul 9, 2021

any updates @kaushik94 ?

@kaushik94

kaushik94 commented Jul 10, 2021

@jhollowe yes I am working on this. Will try to push a commit by the day after tomorrow.

@kaushik94

Hi all, when I try to build the cli from this repository: docker/cli, I get the following error:

Kaushiks-MacBook-Pro:cli ghost$ docker buildx bake
[+] Building 7.2s (16/17)                                                                                                                                                                                                                     
 => [internal] load build definition from Dockerfile                                                                                                                                                                                     0.0s
 => => transferring dockerfile: 32B                                                                                                                                                                                                      0.0s
 => [internal] load .dockerignore                                                                                                                                                                                                        0.0s
 => => transferring context: 34B                                                                                                                                                                                                         0.0s
 => resolve image config for docker.io/docker/dockerfile:1.2                                                                                                                                                                             2.3s
 => CACHED docker-image://docker.io/docker/dockerfile:1.2@sha256:e2a8561e419ab1ba6b2fe6cbdf49fd92b95912df1cf7d313c3e2230a333fdbcc                                                                                                        0.0s
 => [internal] load build definition from Dockerfile                                                                                                                                                                                     0.0s
 => => transferring dockerfile: 32B                                                                                                                                                                                                      0.0s
 => [internal] load metadata for docker.io/tonistiigi/xx@sha256:620d36a9d7f1e3b102a5c7e8eff12081ac363828b3a44390f24fa8da2d49383d                                                                                                         0.0s
 => [internal] load metadata for docker.io/library/golang:1.16-alpine                                                                                                                                                                    0.8s
 => [xx 1/1] FROM docker.io/tonistiigi/xx@sha256:620d36a9d7f1e3b102a5c7e8eff12081ac363828b3a44390f24fa8da2d49383d                                                                                                                        0.0s
 => [internal] load build context                                                                                                                                                                                                        0.5s
 => => transferring context: 1.07MB                                                                                                                                                                                                      0.5s
 => CACHED FROM docker.io/dockercore/golang-cross:xx-sdk-extras                                                                                                                                                                          0.0s
 => => resolve docker.io/dockercore/golang-cross:xx-sdk-extras                                                                                                                                                                           0.9s
 => [golatest 1/1] FROM docker.io/library/golang:1.16-alpine@sha256:3b11fcdf0d8436e1a5ba3a8bf5206c7f28f5a652ac899f2107ebe127b668bcfa                                                                                                     0.0s
 => CACHED [build-base-alpine 1/3] COPY --from=xx / /                                                                                                                                                                                    0.0s
 => CACHED [build-base-alpine 2/3] RUN apk add --no-cache clang lld llvm file git                                                                                                                                                        0.0s
 => CACHED [build-base-alpine 3/3] WORKDIR /go/src/github.com/docker/cli                                                                                                                                                                 0.0s
 => CACHED [build-alpine 1/1] RUN xx-apk add --no-cache musl-dev gcc                                                                                                                                                                     0.0s
 => ERROR [build 1/1] RUN --mount=ro --mount=type=cache,target=/root/.cache     --mount=from=dockercore/golang-cross:xx-sdk-extras,target=/xx-sdk,src=/xx-sdk     --mount=type=tmpfs,target=cli/winresources     xx-go --wrap &&     TA  2.8s
------                                                                                                                                                                                                                                        
 > [build 1/1] RUN --mount=ro --mount=type=cache,target=/root/.cache     --mount=from=dockercore/golang-cross:xx-sdk-extras,target=/xx-sdk,src=/xx-sdk     --mount=type=tmpfs,target=cli/winresources     xx-go --wrap &&     TARGET=/out ./scripts/build/binary &&     xx-verify $([ "static" = "static" ] && echo "--static") /out/docker:                                                                                                                                                
#16 0.349 wget: bad address 'github.com'                                                                                                                                                                                                      
#16 0.352 error: building for darwin requires ld64 linker                                                                                                                                                                                     
#16 0.942 Building static docker-darwin-amd64
#16 2.733 # runtime/cgo
#16 2.733 clang-11: error: invalid linker name in argument '-fuse-ld=ld64'
------
error: failed to solve: failed to solve with frontend dockerfile.v0: failed to solve with frontend gateway.v0: rpc error: code = Unknown desc = failed to build LLB: executor failed running [/bin/sh -c xx-go --wrap &&     TARGET=/out ./scripts/build/binary &&     xx-verify $([ "$GO_LINKMODE" = "static" ] && echo "--static") /out/docker]: runc did not terminate sucessfully

Can someone please help me out here 🙏. I rebased my branch with the latest master.

@kaushik94

kaushik94 commented Jul 18, 2021

Ok, I was using the wrong command. After make -f docker.Makefile things are normal now. Update: I am working on moving the prune to the API level now.

@kaushik94

Hi, so I managed to add commands for dry run for containers. Please find the PRs here:

  1. Moby side PR
  2. Docker CLI changes PR

The previous PR for the CLI changes had sign-off issues, so I opened a new one. I did not write tests for this, and I could not test it on another OS. I tried it on my laptop, which has these specifications:

Software:

    System Software Overview:

      System Version: macOS 11.0.1 (20B50)
      Kernel Version: Darwin 20.1.0
      Boot Volume: Macintosh HD
      Boot Mode: Normal
      Computer Name: Kaushik’s MacBook Pro
      User Name: Kaushik Varanasi (ghost)
      Secure Virtual Memory: Enabled
      System Integrity Protection: Enabled
      Time since boot: 3 days 18:13

@hydrargyrum

Thank you for your work! Are tests the only thing left?

@tgross35

tgross35 commented Dec 2, 2021

@kaushik94 I figure you must be busy, but do you have any updates? Looks like your PRs are still open but haven't been touched in a little while. Hoping they make it in soon :)

@kaushik94

Hey, so I think writing tests for this case is pretty straightforward, but I am thinking about how to extend dry-run to images, networks, and volumes, because in the case of networks, for example, we need to offer the flexibility to extend it to external drivers. So I might need some help with that, as well as someone who can spare some valuable time to review the PRs. Also, a general question: can the dry-run implementations for images etc. go into separate PRs, or should I include everything in a single PR? My opinion is that dividing it into multiple PRs is better. So I am a bit stuck here.

@adhulipa

@kaushik94 i want to applaud you for the continued effort! 🙌🏾

I agree with your assessment. All those years ago, I found it relatively easy to get the tests and feature working for volumes, iirc. Roughly the same for containers and networks too. But images really stumped me.

I'd highly advise merging smaller units, perhaps whatever you have working (volumes etc.), before tackling the ones that can get you stuck (images, perhaps).

@Dentrax

Dentrax commented Aug 16, 2022

Looking forward to this flag. Any updates on this? Did someone already work on this?

@thaJeztah
Member

Some initial work was done in;

However, that work only focused on containers; producing a more representative overview of what will be pruned will be quite complicated, for various reasons:

  • race conditions (can be resolved by documenting the limitations); a container/image/volume/network may not be in use at the time that "dry run" is used, but may be in use the moment the actual prune is executed (or vice versa), so dry run will always be an "approximation" of what will be pruned.
  • the more difficult part is how objects (containers, images, networks, etc.) depend on each other. For example, an image can be deleted if it no longer has references to it (no more tags, no more containers using it); this is the reason that docker system prune deletes objects in a specific order (first remove all unused containers, then remove unused images). To replicate the same flow for "dry-run", it would be necessary to temporarily construct a representation of all objects and where they're referenced (basically: duplicate all reference counters, and then remove references from that "shadow" representation); see the sketch after this list.
  • finally, with the work being done on integrating the containerd snapshotter (image and layer store), things may change more; for example, images can now be multi-arch, and (to be discussed) "pruning" could remove unused variants (architectures) from an image to clean up space, which brings another dimension to calculating "what can be removed".
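A minimal sketch of that "shadow" reference-counting idea, under the assumption that prune order can be simulated by repeatedly deleting unreferenced, unused objects from a copied counter table (illustrative types; the real dependency graph is richer):

```go
package main

import "fmt"

// Object is an illustrative node in the dependency graph: a container
// references an image, an image may reference other objects, and so on.
type Object struct {
	ID   string
	Uses []string // IDs of objects this one keeps alive
}

// simulatePrune copies the reference counters into a "shadow" map and then
// repeatedly deletes unreferenced, unused objects. The container-first,
// image-second ordering emerges naturally: an image only becomes deletable
// once the containers referencing it have been (virtually) deleted.
// The real store is never touched.
func simulatePrune(objs map[string]Object, inUse map[string]bool) []string {
	refs := make(map[string]int) // shadow reference counters
	for _, o := range objs {
		for _, dep := range o.Uses {
			refs[dep]++
		}
	}
	var deleted []string
	for changed := true; changed; {
		changed = false
		for id, o := range objs {
			if inUse[id] || refs[id] > 0 {
				continue
			}
			// Virtually deleting this object releases its references.
			for _, dep := range o.Uses {
				refs[dep]--
			}
			deleted = append(deleted, id)
			delete(objs, id)
			changed = true
		}
	}
	return deleted
}

func main() {
	objs := map[string]Object{
		"ctr1": {ID: "ctr1", Uses: []string{"img1"}},
		"img1": {ID: "img1"},
		"img2": {ID: "img2"},
	}
	// ctr1 is stopped, so it and then img1 become prunable;
	// img2 is in use and is kept.
	fmt.Println(simulatePrune(objs, map[string]bool{"img2": true})) // [ctr1 img1]
}
```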

@kaushik94

@thaJeztah yes, I initially thought in this direction: basically copy over the whole table of images, remove entries in the same fashion as a prune without dryRun, and then output the space reclaimed. However, for the containers part, I just finished writing the tests, and the testBuildWithCgroupNs tests are failing; I'm not sure if that is related to this change.

I think I will be able to get the approximation done for the images part.

@Nassiel

Nassiel commented Aug 21, 2022

The dry run option is nice, but the request in docker/cli#3742, I'd say, goes further.

Here you're offering the docker image prune --dry-run command, but IMO, every time you run without --dry-run, you should see the list (and it must be final) of items that are going to be deleted before you accept the yes/no question. Same scenario for docker volume prune and so on.

The reason? It's critical to know what you're really accepting. When I decide to delete a package, and (unless I say otherwise) all its dependencies will also be deleted, I see all the changes before I accept "yes, do as I say". But here, with the current system, you are blindly trusting docker. Not to mention system prune...

Because even if you do a dry run, since it is not final, the final run may delete something I don't expect (because of a bug). In my case, I lost all the volumes while containers were running and using them, causing a full system failure.
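A minimal sketch of the confirm-with-final-list flow described above (illustrative Go, not the actual CLI code; note that the race between listing and deleting, discussed earlier in this thread, still applies):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// confirmAndPrune shows the exact list of items to be deleted and only
// proceeds on an explicit "y". A real implementation would still need to
// hold locks (or tolerate races) between listing and deleting, which is
// precisely the hard part discussed in this thread.
func confirmAndPrune(items []string, remove func(string) error) error {
	fmt.Println("The following items will be deleted:")
	for _, it := range items {
		fmt.Println("  " + it)
	}
	fmt.Print("Are you sure you want to continue? [y/N] ")
	line, _ := bufio.NewReader(os.Stdin).ReadString('\n')
	if strings.TrimSpace(strings.ToLower(line)) != "y" {
		return fmt.Errorf("aborted")
	}
	for _, it := range items {
		if err := remove(it); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	items := []string{"volume: dockerdev-go-pkg-cache-gopath"}
	if err := confirmAndPrune(items, func(string) error { return nil }); err != nil {
		fmt.Println(err)
	}
}
```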
