
Is there a road map for scale out on request? #72

Closed
hjianhao opened this issue Jan 9, 2017 · 4 comments


hjianhao commented Jan 9, 2017

The current implementation seems to start only one pod per function. Is there a road map for scaling out on request, like AWS Lambda? It's an important feature for serverless.

@soamvasani
Member

Yup, definitely. The idea is to scale pods up and down based on metrics such as request queue length or resource usage, and to let Kubernetes services do the load balancing across pods.

@hjianhao
Author

For the public cloud, I think the scale-out-by-request model like AWS Lambda, where one function's runtime (container) handles only one request at a time, may be a better idea. It makes it easy to bill for the runtime cost, control resource consumption, and control concurrency.

For private clouds, scaling out by metrics is also a good idea, because you don't need to control concurrency precisely across multiple routers. Every router can report its metrics to a monitoring service (or the monitoring service can collect resource-usage metrics itself); the monitoring service then analyzes the metrics and decides when a function's runtime needs to scale out.

A simpler way avoids a central monitoring system: when, in one router, a function's waiting queue exceeds a limit, or its average response time over a period exceeds a predefined value, that router can ask the poolmgr to scale the function out, and the poolmgr does so as long as the max runtime-instance limit is not exceeded.

@soamvasani
Member

Tracking this in #80.

@soamvasani
Member

(BTW @hjianhao: happy to discuss this further on slack or the new issue. I didn't have much to comment on your autoscaling ideas, just because we haven't gotten to that point just yet; we'll be doing some scalability experiments at some point and trying out various autoscaling designs.)
