Feature/servers by metric #56
Conversation
LGTM
Just one question; besides that, LGTM
models/room.go
Outdated
func getReadyRooms(tx redis.Pipeliner, schedulerName string, size int, mr *MixedMetricsReporter) ([]string, error) {
	var result []string
	rooms := tx.SScan(
Why SScan and not SRandMember?
Great question! I didn't see there was a `SRandMember` command with a count. I'll change it, because it's better to return random members.
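For context on the suggestion: `SRandMember` with a positive count returns up to that many distinct random members of a set in one call, whereas `SScan` walks the set in cursor order. A minimal stdlib sketch of those sampling semantics (the `pickRandomRooms` helper is hypothetical and only illustrates the behavior; the PR itself uses the Redis command):

```go
package main

import (
	"fmt"
	"math/rand"
)

// pickRandomRooms mimics Redis SRandMember with a positive count:
// it returns up to count distinct random members of the set.
func pickRandomRooms(rooms []string, count int) []string {
	if count >= len(rooms) {
		count = len(rooms)
	}
	picked := make([]string, 0, count)
	// rand.Perm yields a random permutation of indices; taking the
	// first count entries gives a uniform sample without replacement.
	for _, i := range rand.Perm(len(rooms))[:count] {
		picked = append(picked, rooms[i])
	}
	return picked
}

func main() {
	rooms := []string{"room-1", "room-2", "room-3", "room-4", "room-5"}
	fmt.Println(pickRandomRooms(rooms, 3)) // three distinct rooms, in random order
}
```

Unlike a scan, every eligible room has an equal chance of being returned, so repeated calls don't bias toward the same rooms.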
This PR adds a route that returns a list of rooms ordered by a metric. For `legacy` and `room` metrics it returns at most `limit` available rooms. For `cpu` and `mem` metrics it returns at most `limit` rooms ordered by increasing CPU or memory usage. Also, I'm now reporting room usage metrics (CPU and memory) when a ping event is received, if the corresponding metrics trigger is enabled.
Last but not least, I've fixed a bug that allowed a scheduler to be updated with metrics autoscaling but no requests specified.
In a later PR we might change the CPU and memory autoscaler to retrieve the data we're now storing in Redis instead of calling the Kubernetes API.