Watch connection manager never closed when trying to delete a non-existing POD #9932
Changes from all commits: 49ad990, 3cfb538, bbabfdf, f80c84a, 000e135, 3892ce5
```diff
@@ -15,6 +15,7 @@
 import static org.eclipse.che.workspace.infrastructure.kubernetes.Constants.CHE_WORKSPACE_ID_LABEL;
 import static org.eclipse.che.workspace.infrastructure.kubernetes.namespace.KubernetesObjectUtil.putLabel;

+import com.google.common.annotations.VisibleForTesting;
 import io.fabric8.kubernetes.api.model.DoneablePod;
 import io.fabric8.kubernetes.api.model.Event;
 import io.fabric8.kubernetes.api.model.ObjectReference;
@@ -553,14 +554,19 @@ public void delete() throws InfrastructureException {
     }
   }

-  private CompletableFuture<Void> doDelete(String name) throws InfrastructureException {
+  @VisibleForTesting
+  CompletableFuture<Void> doDelete(String name) throws InfrastructureException {
+    Watch toCloseOnException = null;
     try {
       final PodResource<Pod, DoneablePod> podResource =
           clientFactory.create(workspaceId).pods().inNamespace(namespace).withName(name);
       final CompletableFuture<Void> deleteFuture = new CompletableFuture<>();
       final Watch watch = podResource.watch(new DeleteWatcher(deleteFuture));
-      podResource.delete();
+      toCloseOnException = watch;
+      Boolean deleteSucceeded = podResource.delete();
+      if (deleteSucceeded == null || !deleteSucceeded) {
+        deleteFuture.complete(null);
+      }
       return deleteFuture.whenComplete(
           (v, e) -> {
             if (e != null) {
@@ -569,7 +575,15 @@ private CompletableFuture<Void> doDelete(String name) throws InfrastructureExcep
             watch.close();
           });
     } catch (KubernetesClientException ex) {
+      if (toCloseOnException != null) {
+        toCloseOnException.close();
+      }
       throw new KubernetesInfrastructureException(ex);
+    } catch (Exception e) {
+      if (toCloseOnException != null) {
+        toCloseOnException.close();
+      }
+      throw e;
     }
   }
```

**Review comment:** Maybe it would be better to wrap this exception into …

**Reply:** Well, honestly I prefer re-throwing exactly the same exception, so that there is no behavior change compared to the previous implementation, which used to only catch …

**Review comment:** I agree that it's the same as it was before. But only `InfrastructureException` is declared as thrown, and it would be safer to wrap other exceptions in …
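The fix hinges on two details: the fabric8 `delete()` call returns a `Boolean` that is `null` or `false` when the pod does not exist (so the future must be completed immediately rather than waiting for a `DELETED` event that will never arrive), and the watch opened before the delete must be closed on every exit path. A minimal self-contained sketch of the pattern — the `Watch` and `PodResource` interfaces below are simplified stand-ins, not the real fabric8 types:

```java
import java.util.concurrent.CompletableFuture;

// Simplified stand-ins for the fabric8 types (not the real API).
interface Watch extends AutoCloseable {
  @Override
  void close(); // narrowed to not throw, like the real fabric8 Watch#close
}

interface PodResource {
  Watch watch();    // register a watcher, returning the open connection
  Boolean delete(); // null or false when there was nothing to delete
}

class PodDeleter {
  // Same shape as the PR: remember the watch in toCloseOnException so an
  // exception thrown before whenComplete() is attached still closes it.
  static CompletableFuture<Void> doDelete(PodResource podResource) {
    Watch toCloseOnException = null;
    try {
      final CompletableFuture<Void> deleteFuture = new CompletableFuture<>();
      final Watch watch = podResource.watch();
      toCloseOnException = watch;
      Boolean deleteSucceeded = podResource.delete();
      if (deleteSucceeded == null || !deleteSucceeded) {
        // Pod did not exist: complete now instead of waiting for a
        // DELETED watch event that will never arrive.
        deleteFuture.complete(null);
      }
      // Close the watch whether the delete succeeds or fails.
      return deleteFuture.whenComplete((v, e) -> watch.close());
    } catch (RuntimeException e) {
      if (toCloseOnException != null) {
        toCloseOnException.close();
      }
      throw e;
    }
  }
}
```

With this shape, both the "pod already gone" path and the "client threw" path release the watch connection, which is exactly the leak this PR fixes.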
**Review comment:** I can propose rewriting this method in the following way: … Why it can be better: if an exception occurs while removing many pods, removal will still be attempted for all of them. With the current approach, once an exception occurs, no more pods are removed, and I think it would be better to clean up as many resources as we can.
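The behavior the reviewer describes — attempt every deletion, and surface the failures only after all pods have been tried — can be sketched as below. This is a minimal illustration with hypothetical names, not the actual Che code; failures are attached as suppressed exceptions so none of them is lost:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Best-effort cleanup: try to delete every pod even if some fail,
// then report all failures at once (hypothetical helper, not Che code).
class BestEffortCleanup {
  static <T> void deleteAll(List<T> pods, Consumer<T> deleteOne) {
    List<Exception> failures = new ArrayList<>();
    for (T pod : pods) {
      try {
        deleteOne.accept(pod); // keep going even if this one fails
      } catch (Exception e) {
        failures.add(e);
      }
    }
    if (!failures.isEmpty()) {
      RuntimeException aggregate =
          new RuntimeException(failures.size() + " deletion(s) failed");
      failures.forEach(aggregate::addSuppressed);
      throw aggregate;
    }
  }
}
```

The trade-off against the PR's current approach is exactly the one discussed below: this changes observable behavior (later pods are now deleted even when an earlier one fails), which is why it was deferred to a follow-up.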
**Reply:** Do you mind if we do this change in another PR? I'd prefer to keep the behavior as identical as possible to what it was previously, and only fix the bug of the non-closed watch.
**Reply:** @davidfestal Makes sense. I'm OK with that.