[ws-scheduler] Make ghost workspaces more effective by integrating them with scheduler #2552
Conversation
/werft run
👍 started the job as gitpod-build-gpl-2513-scheduler-deletes-ghosts.3
/werft run
👍 started the job as gitpod-build-gpl-2513-scheduler-deletes-ghosts.4 🤞
Force-pushed from 7133da8 to 130b2e0
/werft run
👍 started the job as gitpod-build-gpl-2513-scheduler-deletes-ghosts.6
I just performed a load test, which sadly triggered some …
Actually I think the current issue is an instance of what was discussed above: in rare cases the 1 second is not enough. The next attempt would be to increase the (max) grace period to the maximum (30s AFAIR). Positively speaking: during the load test the delete took longer than 1s in only ~4% of the cases! 🙃
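To illustrate the direction discussed here, a minimal sketch (not the PR's actual code) of deleting a ghost pod with a longer grace period while keeping the delete context's timeout above the grace period, in the spirit of the later commit "[scheduler] delete ghosts: ctxDeleteTimeout > gracePeriod". The client-go calls are standard; the namespace handling, constants, and function name are assumptions:

```go
package scheduler

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

const (
	// Assumed values for illustration: a 30s max grace period, and a delete
	// context timeout that is strictly larger than the grace period.
	ghostGracePeriod = 30 * time.Second
	deleteCtxTimeout = 35 * time.Second
)

// deleteGhost removes a ghost workspace pod, giving it the full grace period
// to shut down instead of the 1s that proved too short under load.
func deleteGhost(clientset kubernetes.Interface, namespace, podName string) error {
	ctx, cancel := context.WithTimeout(context.Background(), deleteCtxTimeout)
	defer cancel()

	grace := int64(ghostGracePeriod / time.Second)
	return clientset.CoreV1().Pods(namespace).Delete(ctx, podName, metav1.DeleteOptions{
		GracePeriodSeconds: &grace,
	})
}
```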
Force-pushed from 130b2e0 to e9b1857
Set …
Just coming back, and I think maybe it's better to use a …
Force-pushed from cf0daa9 to 5a04770
@csweichel ping
Force-pushed from 5a04770 to 7d162a1
- remove a ghost before binding a regular workspace
- make ghosts "invisible" to strategy
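A rough sketch of what making ghosts "invisible" to the strategy could look like, assuming ghost pods are recognizable by a label. The label key, helper names, and filtering function are illustrative, not the PR's actual identifiers:

```go
package scheduler

import corev1 "k8s.io/api/core/v1"

// isGhostWorkspace reports whether a pod is a ghost workspace, assuming a
// hypothetical "gitpod.io/workspaceType: ghost" label convention.
func isGhostWorkspace(p *corev1.Pod) bool {
	return p.Labels["gitpod.io/workspaceType"] == "ghost"
}

// visiblePods hides ghost pods from the strategy's view of a node, so a
// regular workspace can be placed "on top of" a ghost; the ghost is then
// deleted just before the regular pod is bound.
func visiblePods(pods []*corev1.Pod) []*corev1.Pod {
	res := make([]*corev1.Pod, 0, len(pods))
	for _, p := range pods {
		if isGhostWorkspace(p) {
			continue
		}
		res = append(res, p)
	}
	return res
}
```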
Force-pushed from 35f1119 to f13f335
for _, n := range nodes {
	nds[n.Name] = &Node{
		Node:     n,
		Services: make(map[string]struct{}),
	}
-	nodeToPod[n.Name] = make(map[string]struct{})
+	nodeToPod[n.Name] = &ntp{
why not make this part of nds?
Good question - I left it as I found it. 🙃
But the reason seems to be that the bookkeeping is done during the process of adding pods from pds to each node in nds here. So it makes sense to keep them separate.
Didn't dive into the logic of things, hence the vanity comments.
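For readers who want the gist of that bookkeeping, a heavily simplified sketch of the pattern: nodes are registered in nds first, and the per-node pod tracking is filled in while the pods from pds are distributed, which is why it sits next to nds rather than inside it. All type and field details beyond the names visible in the diff are assumptions:

```go
package scheduler

import corev1 "k8s.io/api/core/v1"

type Node struct {
	Node     *corev1.Node
	Services map[string]struct{}
}

// ntp is assumed here to track the pods assigned to a node.
type ntp struct {
	pods map[string]struct{}
}

func buildState(nodes []*corev1.Node, pds []*corev1.Pod) (map[string]*Node, map[string]*ntp) {
	nds := make(map[string]*Node, len(nodes))
	nodeToPod := make(map[string]*ntp, len(nodes))
	for _, n := range nodes {
		nds[n.Name] = &Node{Node: n, Services: make(map[string]struct{})}
		nodeToPod[n.Name] = &ntp{pods: make(map[string]struct{})}
	}
	// The bookkeeping happens while adding pods from pds to each node,
	// hence the separate map instead of folding it into nds.
	for _, p := range pds {
		if t, ok := nodeToPod[p.Spec.NodeName]; ok {
			t.pods[p.Name] = struct{}{}
		}
	}
	return nds, nodeToPod
}
```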
LGTM pending the changes discussed offline and the squash.
Force-pushed from f13f335 to 5553fc1
[ws-scheduler] Make ghost workspaces more effective by integrating them with scheduler (gitpod-io#2552)
* [ws-manager] Do not delete ghost workspace on start
* [ws-scheduler] Enable asynchronous binding of pods
* [ws-scheduler] Introduce ghosts
  - remove a ghost before binding a regular workspace
  - make ghosts "invisible" to strategy
* [scheduler] Wait longer on ghost deletion to prevent OOM errors
* [scheduler] Make isRegularWorkspace -> makeGhostsInvisible explicit
* [scheduler] cancel ghost.Delete if it takes too long (5s)
* [ws-scheduler] Add tests for ghost-specific state computation
* [scheduler] Make sure ghosts are only selected for deletion once
* [scheduler] delete ghosts: ctxDeleteTimeout > gracePeriod
* [scheduler] Don't bind terminated pods
* [scheduler] Make all non-ghost workspaces replace ghosts
* [scheduler] review comments
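As a hedged illustration of the "asynchronous binding of pods" item above (not the ws-scheduler's actual code), a scheduler can bind a pod to a node via the client-go Bind subresource in its own goroutine; the function name and error handling are placeholders:

```go
package scheduler

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bindPodAsync binds pod to nodeName without blocking the scheduling loop.
func bindPodAsync(clientset kubernetes.Interface, pod *corev1.Pod, nodeName string) {
	go func() {
		binding := &corev1.Binding{
			ObjectMeta: metav1.ObjectMeta{Name: pod.Name, Namespace: pod.Namespace},
			Target:     corev1.ObjectReference{Kind: "Node", Name: nodeName},
		}
		if err := clientset.CoreV1().Pods(pod.Namespace).Bind(context.Background(), binding, metav1.CreateOptions{}); err != nil {
			log.Printf("cannot bind pod %s to node %s: %v", pod.Name, nodeName, err)
		}
	}()
}
```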
This:
- non-REGULAR (GHOST) workspace start: from ws-manager to ws-scheduler

TBD: