
gsutil -m rm on empty bucket fails with CommandException #417

Open
msigdel opened this issue Mar 22, 2017 · 10 comments

@msigdel

commented Mar 22, 2017

Running the following command fails with "CommandException: 1 files/objects could not be removed."

gsutil -m rm gs://mybucket/empty_bucket/*

The error message is misleading: the real issue is that there are no files to remove, yet it reports that objects "could not be removed". Also, should a bucket having no files raise an exception at all?

@houglum

Collaborator

commented Mar 23, 2017

Ah, interesting. Our exception handler for rm doesn't treat "No URLs matched: " exceptions any differently from other exceptions; it assumes all captured exceptions were failures from trying to remove valid files/objects.

I agree that the error message here is misleading. And as for the exception: while nothing technically went wrong in deleting all the objects/files corresponding to the command arguments (even if this evaluates to 0), we run under the assumption that if a user says, "I want to delete gs://foo/bar," then they believe that object exists - this is just a nicety to inform them that the objects they assumed were present were, in fact, not.
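Until rm's handler special-cases that message, a caller can do the filtering themselves. A sketch (the function name and bucket path are placeholders, not gsutil features) that treats "No URLs matched" as a no-op and propagates every other failure:

```shell
# Wrapper sketch: succeed when the only "failure" is that nothing matched.
# rm_ok_if_empty and the bucket path are placeholders for illustration.
rm_ok_if_empty() {
  out="$("$@" 2>&1)" && return 0
  case "$out" in
    *"No URLs matched"*) return 0 ;;            # nothing to delete: fine
    *) printf '%s\n' "$out" >&2; return 1 ;;    # real failure: propagate
  esac
}

rm_ok_if_empty gsutil -m rm "gs://mybucket/empty_bucket/**" \
  && echo "cleanup ok" || echo "cleanup reported a real failure" >&2
```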

@LivesleyWA


commented Jul 2, 2017

Disagree - I might not know the state of a directory that I am clearing. I have an automated build process and the target directory may or may not have something in it depending on the previous build state.
In any case, the -f flag should ensure that the "error" is not reported. It is not working: I am still getting the error, which kills my build process.
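For a build step where the target may or may not be empty, the bluntest workaround is to tolerate the failure entirely. A sketch with placeholder paths; note that this also masks genuine delete failures, so keep stderr visible in the build logs:

```shell
# Build-script pattern: don't let a failed (possibly no-op) delete kill the build.
# The bucket path is a placeholder; stderr is left alone so real errors stay visible.
gsutil -m rm "gs://mybucket/target_dir/**" || true
echo "cleanup step finished"
```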

@andrelocher


commented Aug 8, 2017

I'm encountering the same issue here. What's the workaround?

@houglum

Collaborator

commented Aug 8, 2017

If you'd like to know whether the bucket is empty first, you could make a call to ls. The ls command doesn't currently provide a way to set max desired results, so if there will potentially be a lot of objects in the bucket, resulting in lots of listing calls, you could instead make a raw call to the JSON API's objects.list method. This would allow you to specify a value of 1 for the maxResults parameter, resulting in only one quick listing call. If you prefer the XML API, it offers this functionality via the max-keys option.
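A hedged sketch of that raw JSON API call (the bucket name is a placeholder, and it assumes gcloud-provided credentials are available); an empty bucket's listing response simply has no "items" key:

```shell
# One quick listing call with maxResults=1 via the JSON API.
# BUCKET is a placeholder; auth assumes an active gcloud credential.
BUCKET="mybucket"
resp="$(curl -s -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://storage.googleapis.com/storage/v1/b/${BUCKET}/o?maxResults=1")"
case "$resp" in
  *'"items"'*) echo "bucket has at least one object" ;;
  *)           echo "bucket appears empty (or the call failed)" ;;
esac
```

The XML API equivalent passes max-keys=1 as a query parameter on the bucket listing request.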

@shanFitGit


commented Aug 21, 2017

I have a similar issue and wish this could be handled in a better way. We have an automated build process that clears the directory without knowing whether it's empty. The error gsutil throws when deleting an empty directory breaks the build process.

@houglum

Collaborator

commented Dec 19, 2017

In the spirit of tracking how much user pain this causes, I saw another report of this behavior being confusing at
https://stackoverflow.com/questions/47885197/gsutil-cp-command-error-commandexception-no-urls-matched/47895858#47895858

@nojvek


commented Feb 20, 2018

gsutil -m cp -n -r ../assets/* gs://foo/assets
CommandException: No URLs matched: ../asset-cache/*

This is like cp -r foo bar throwing an error when the foo directory is empty.

Would love to see this edge case fixed
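Until this edge case is fixed, one workaround is to check the local source directory before invoking gsutil at all. A sketch, with placeholder paths taken from the report:

```shell
# Guard: only invoke gsutil cp when the local source directory is non-empty.
# SRC and DST are placeholders based on the report above.
SRC="../assets"
DST="gs://foo/assets"
if [ -n "$(ls -A "$SRC" 2>/dev/null)" ]; then
  gsutil -m cp -n -r "$SRC"/* "$DST" || echo "copy reported errors" >&2
else
  echo "skipping copy: $SRC is empty or missing"
fi
```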

@pol1000000


commented Feb 4, 2019

I noticed strange behavior with
gsutil -m {mv, cp, rm} gs://some-bucket/dir1 gs://some-bucket/dir2
When processing a lot of files, it sometimes still returns the error "CommandException: count_of_files files/objects could not be copied/moved/removed". Usually count_of_files is small; the most I have seen is 7.

My first thought was network trouble, but when I ran ping tests while moving files as shown above, I got 0% packet loss and still hit the error with 2 files not moved =(

This is a critical problem because I use buckets as part of a CDN and run many builds of static content during the day, so every error ruins a build process on my test environment.

Is there another way to manage files in GCS buckets remotely, faster and more reliably than gsutil?

OS: Deb 9
google-cloud-sdk version: 232.0.0-0

$ cat .boto | grep -v "^#" | sed '/^$/d'
[Credentials]
[Boto]
https_validate_certificates = True
[GoogleCompute]
[GSUtil]
content_language = en
default_api_version = 2
[OAuth2]

gsutil version: 4.35
checksum: 3b22dd7820bcb962909f4f401010fb17 (OK)
boto version: 2.49.0
python version: 2.7.15+ (default, Nov 28 2018, 16:27:22) [GCC 8.2.0]
OS: Linux 4.9.0-6-amd64
multiprocessing available: True
using cloud sdk: True
pass cloud sdk credentials to gsutil: True
config path(s): ~/.boto,
gsutil path: /usr/lib/google-cloud-sdk/bin/gsutil
compiled crcmod: True
installed via package manager: False
editable install: False
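Those occasional "N files/objects could not be ..." failures under -m are often transient, so one common mitigation (not a fix) is a small retry loop around the whole command. A sketch; the retry count, backoff, and command line are placeholders:

```shell
# Retry sketch for transient -m failures. The wrapped command, attempt
# count, and sleep interval are illustrative placeholders.
retry() {
  for attempt in 1 2 3; do
    "$@" && return 0
    echo "attempt $attempt failed; retrying" >&2
    sleep "$attempt"
  done
  return 1
}

retry gsutil -m mv gs://some-bucket/dir1 gs://some-bucket/dir2 \
  || echo "move still failing after retries" >&2
```

Because mv/cp are idempotent for the remaining objects, re-running the same command only retries what was left behind.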

@vinaygopalkrishnan


commented Apr 16, 2019

Hi

We are using gsutil version 4.38 and seem to be facing the same issue:

The command gsutil rm -f
is throwing the error "CommandException: 1 files/objects could not be removed."

Is this issue fixed, or is there a workaround?

@nandosangenetto


commented Apr 24, 2019

Is there a plan to make it work? How can we help?
