
Recursive deletion of space containing async provided service instance fails #613

Closed

friday11 opened this issue Jun 6, 2016 · 12 comments

@friday11

friday11 commented Jun 6, 2016

Issue

Recursive deletion of a space containing asynchronously provisioned service instances is rejected with the following error message:

"Deletion of space ABC failed because one or more resources within could not be deleted. An operation for service instance XYZ is in progress.".

The same applies to organizations whose spaces contain asynchronously provisioned service instances.

We would expect the Cloud Controller to handle this kind of (temporary) dependency, especially when invoked with the async flag.

Are there any plans to support recursive (cascading) deletions where such dependencies are handled by the Cloud Controller?

Context

I work as a developer on the Swisscom Application Cloud project, which is based on Cloud Foundry.

Some of the Swisscom Application Cloud services (e.g. MongoDB, Redis) can be configured to be provisioned asynchronously.

Steps to Reproduce

  1. Create a space
  2. Create an asynchronously provisioned service instance within that space
  3. Delete that space (recursively) after the service instance has been successfully created

Expected result

The deletion succeeds. The Cloud Controller waits until the asynchronously provisioned service instance has been successfully deleted and then deletes the space.

Current result

The deletion fails. The Cloud Controller does not wait for the asynchronously provisioned service instance to be deleted and therefore aborts the operation.
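The expected behaviour amounts to a simple orchestration: for each service instance, poll until its delete operation completes, and only then delete the space. A minimal sketch of that logic (the function names and the polling interface are hypothetical illustrations, not Cloud Controller internals):

```python
import time

def delete_space_recursive(space, get_operation_state, delete_instance,
                           delete_space, poll_interval=0.01, timeout=5.0):
    """Delete all service instances in `space`, waiting for any
    asynchronous delete operations to finish, then delete the space.

    `get_operation_state(instance)` returns "in progress", "succeeded",
    or "failed" for the instance's last operation (a stand-in for
    polling the broker, not a real API call).
    """
    for instance in list(space["service_instances"]):
        delete_instance(instance)  # may be asynchronous
        deadline = time.monotonic() + timeout
        while get_operation_state(instance) == "in progress":
            if time.monotonic() > deadline:
                raise TimeoutError(f"delete of {instance} did not finish")
            time.sleep(poll_interval)
        if get_operation_state(instance) == "failed":
            raise RuntimeError(f"delete of {instance} failed")
    delete_space(space)

# Simulated broker: the delete finishes after a few polls.
states = {"XYZ": ["in progress", "in progress", "succeeded"]}
def get_state(name):
    queue = states[name]
    return queue.pop(0) if len(queue) > 1 else queue[0]

deleted = []
space = {"name": "DEV", "service_instances": ["XYZ"]}
delete_space_recursive(space, get_state,
                       delete_instance=lambda i: None,
                       delete_space=lambda s: deleted.append(s["name"]))
print(deleted)  # ['DEV']
```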

@cf-gitbot

We have created an issue in Pivotal Tracker to manage this:

https://www.pivotaltracker.com/story/show/120941969

The labels on this GitHub issue will be updated when the story is started.

@friday11
Author

Are there any plans to fix the current behaviour? If yes, when can we expect a fix or solution?

@SocalNick
Contributor

SocalNick commented Sep 14, 2016

@friday11 Can you let us know how you attempted to delete the space? Did you use the CLI, or did you use the API directly? If the API, can you give us the endpoint and any parameters you may have passed?

@friday11
Author

@SocalNick I've used the CLI. Here is the corresponding CLI trace output:
Really delete the space DEV?> y
Deleting space DEV in org org-f142 as admin...

REQUEST: [2016-09-21T10:14:39+02:00]
DELETE /cf-ext/v2/spaces/729b7720-c47f-46f9-8953-0ec7f8fac1c9?async=true&recursive=true HTTP/1.1
Host: localhost:8080
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Content-Type: application/json
User-Agent: go-cli 6.16.1+924508c / windows

RESPONSE: [2016-09-21T10:14:41+02:00]
HTTP/1.1 202 Accepted
Content-Length: 270
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Type: application/json;charset=utf-8
Date: Wed, 21 Sep 2016 08:14:40 GMT
Expires: 0
Pragma: no-cache
Server: nginx
Strict-Transport-Security: max-age=15768000; includeSubDomains
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Vcap-Request-Id: 17b575a8-6755-4aec-739e-ad57c64a295f
X-Vcap-Request-Id: 17b575a8-6755-4aec-739e-ad57c64a295f::a2547e3a-2d67-45fb-94be-c06e8900e7f8
X-Xss-Protection: 1; mode=block

{
  "metadata": {
    "guid": "c7455b8c-8544-4c9b-8e1e-f250901ecff5",
    "created_at": "2016-09-21T08:14:40Z",
    "url": "/v2/jobs/c7455b8c-8544-4c9b-8e1e-f250901ecff5"
  },
  "entity": {
    "guid": "c7455b8c-8544-4c9b-8e1e-f250901ecff5",
    "status": "queued"
  }
}

REQUEST: [2016-09-21T10:14:41+02:00]
GET /cf-ext/v2/jobs/c7455b8c-8544-4c9b-8e1e-f250901ecff5 HTTP/1.1
Host: localhost:8080
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Content-Type: application/json
User-Agent: go-cli 6.16.1+924508c / windows

RESPONSE: [2016-09-21T10:14:42+02:00]
HTTP/1.1 200 OK
Content-Length: 270
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Type: application/json;charset=utf-8
Date: Wed, 21 Sep 2016 08:14:42 GMT
Expires: 0
Pragma: no-cache
Server: nginx
Strict-Transport-Security: max-age=15768000; includeSubDomains
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Vcap-Request-Id: c8aa2cd0-3f63-4da4-4295-ed9b5cdb6362
X-Vcap-Request-Id: c8aa2cd0-3f63-4da4-4295-ed9b5cdb6362::6674951a-db6f-4308-98ba-658ab3a990b4
X-Xss-Protection: 1; mode=block

{
  "metadata": {
    "guid": "c7455b8c-8544-4c9b-8e1e-f250901ecff5",
    "created_at": "2016-09-21T08:14:40Z",
    "url": "/v2/jobs/c7455b8c-8544-4c9b-8e1e-f250901ecff5"
  },
  "entity": {
    "guid": "c7455b8c-8544-4c9b-8e1e-f250901ecff5",
    "status": "queued"
  }
}

REQUEST: [2016-09-21T10:14:47+02:00]
GET /cf-ext/v2/jobs/c7455b8c-8544-4c9b-8e1e-f250901ecff5 HTTP/1.1
Host: localhost:8080
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Content-Type: application/json
User-Agent: go-cli 6.16.1+924508c / windows

RESPONSE: [2016-09-21T10:14:47+02:00]
HTTP/1.1 200 OK
Content-Length: 631
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Type: application/json;charset=utf-8
Date: Wed, 21 Sep 2016 08:14:47 GMT
Expires: 0
Pragma: no-cache
Server: nginx
Strict-Transport-Security: max-age=15768000; includeSubDomains
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Vcap-Request-Id: db5006d0-2fa8-4d6f-5660-54741d84932d
X-Vcap-Request-Id: db5006d0-2fa8-4d6f-5660-54741d84932d::50a7e00b-7f96-4a40-a43f-cd2081034891
X-Xss-Protection: 1; mode=block

{
  "metadata": {
    "guid": "c7455b8c-8544-4c9b-8e1e-f250901ecff5",
    "created_at": "2016-09-21T08:14:40Z",
    "url": "/v2/jobs/c7455b8c-8544-4c9b-8e1e-f250901ecff5"
  },
  "entity": {
    "guid": "c7455b8c-8544-4c9b-8e1e-f250901ecff5",
    "status": "failed",
    "error": "Use of entity>error is deprecated in favor of entity>error_details.",
    "error_details": {
      "code": 290008,
      "description": "Deletion of space DEV failed because one or more resources within could not be deleted.\n\n\tAn operation for service instance service-f142 is in progress.",
      "error_code": "CF-SpaceDeletionFailed"
    }
  }
}
FAILED
Deletion of space DEV failed because one or more resources within could not be deleted.

    An operation for service instance service-f142 is in progress.

FAILED
Deletion of space DEV failed because one or more resources within could not be deleted.

    An operation for service instance service-f142 is in progress.

Z:>
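The trace above shows the CLI pattern: submit the DELETE with `?async=true&recursive=true`, receive a 202 with a job URL, then poll that `/v2/jobs/:guid` resource until the job reaches a terminal state. A minimal client-side sketch of that polling loop (the `fetch_json` callable is a stand-in for an authenticated HTTP GET, not a real client library; the replayed responses are abbreviated from the trace):

```python
import time

def wait_for_job(job_url, fetch_json, poll_interval=0.01, max_polls=100):
    """Poll a /v2/jobs/:guid resource until it reaches a terminal state.

    `fetch_json(url)` stands in for an HTTP GET that returns the decoded
    JSON body of the job resource (an assumption for this sketch).
    """
    for _ in range(max_polls):
        entity = fetch_json(job_url)["entity"]
        status = entity["status"]
        if status == "finished":
            return entity
        if status == "failed":
            details = entity.get("error_details", {})
            raise RuntimeError(details.get("description", "job failed"))
        time.sleep(poll_interval)  # still "queued" or "running"
    raise TimeoutError(f"job {job_url} did not finish in time")

# Replay the trace above: two "queued" responses, then a failure.
responses = [
    {"entity": {"status": "queued"}},
    {"entity": {"status": "queued"}},
    {"entity": {"status": "failed",
                "error_details": {
                    "code": 290008,
                    "error_code": "CF-SpaceDeletionFailed",
                    "description": "Deletion of space DEV failed"}}},
]
outcome = None
try:
    wait_for_job("/v2/jobs/c7455b8c-8544-4c9b-8e1e-f250901ecff5",
                 lambda url: responses.pop(0))
except RuntimeError as err:
    outcome = str(err)
print(outcome)  # Deletion of space DEV failed
```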

@SocalNick
Contributor

Thanks, we'll take a look.

@friday11
Author

@SocalNick: Do you have any news regarding this issue?

@SocalNick
Contributor

Hi @friday11 - engineering has taken a look, and we are trying to determine the correct behavior for this situation. The current thinking is that we should fail fast. We dedicate half of every day to community issues, so we'll try to resume this work shortly. Please feel free to follow the conversation here: https://www.pivotaltracker.com/story/show/120941969

@friday11
Author

friday11 commented Nov 19, 2016

Hi @SocalNick, thank you for your reply. Our preferred solution would be for the Cloud Controller to wait until the asynchronously provisioned service instance has been successfully deleted and then delete the space. What are your arguments against supporting this behaviour? Thanks again for your feedback.

@SocalNick
Contributor

Hi @friday11

The Service Broker API documents a Maximum Polling Duration that defaults to one week. Therefore, we can't assume the asynchronous deletion will complete within a time period that would allow us to wait. Asynchronous services should be cleaned up before attempting to delete the org or space. I think responding with an error is the best we can do.
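The fail-fast behaviour described here can be sketched as a pre-flight check: before deleting anything, reject the request if any instance in the space still has an operation in progress. The function and state names below are illustrative assumptions, not Cloud Controller internals:

```python
def check_space_deletable(instances):
    """Fail fast: reject a recursive space delete if any service
    instance's last operation is still in progress.

    `instances` maps instance name -> state of its last operation
    ("succeeded", "failed", or "in progress").
    """
    busy = [name for name, state in instances.items()
            if state == "in progress"]
    if busy:
        raise RuntimeError("An operation is in progress for service "
                           "instance(s): " + ", ".join(busy))
    return True

# A space with only settled instances passes the check.
ok = check_space_deletable({"mongo-1": "succeeded"})

# A space with a pending operation is rejected immediately.
try:
    check_space_deletable({"service-f142": "in progress"})
    rejected = False
except RuntimeError:
    rejected = True
print(ok, rejected)  # True True
```

This matches the observed error (`CF-SpaceDeletionFailed` naming the busy instance) rather than waiting out a potentially week-long operation.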

@friday11
Author

friday11 commented Dec 1, 2016

@SocalNick: Ok, thank you for your feedback. In which CF version will this improvement be available?

@SocalNick
Contributor

@friday11 this bug is prioritized in our backlog, but it's not a top priority to fix.

@mattmcneeney

This is unlikely to be fixed in v2 of the CC API, since this behaviour spans many other recursive delete workflows. We will, however, bear this use case in mind when designing future versions of the API! We have tracked this in http://v3-dreams.sapi.life
