[ML] allow autoscaling to work when vertical scaling is possible #84242
Conversation
Pinging @elastic/ml-core (Team:ML)
Hi @benwtrent, I've created a changelog YAML for you.
LGTM
This fix should be backported to 7.17 and 8.1; otherwise many 7.17 users could be affected over the coming years.
[ML] allow autoscaling to work when vertical scaling is possible (elastic#84242)

When an NLP model is deployed, or a DFA/Anomaly job is assigned, we have historically relied only on xpack.ml.max_lazy_ml_nodes to determine if scaling is possible. But in certain scenarios, scaling may still be available even when xpack.ml.max_lazy_ml_nodes is fully satisfied. xpack.ml.max_ml_node_size is now checked to see if the current ML nodes exceed this size. If not, we assume vertical scaling is possible and allow the tasks to be created.

Closes elastic#84198
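The decision described above can be sketched as a small predicate: scaling is still possible if either another lazy ML node may be added (horizontal scaling) or the existing ML nodes have not yet reached the configured maximum node size (vertical scaling). This is an illustrative sketch only; the method and parameter names are hypothetical and do not reflect the actual Elasticsearch implementation of the settings `xpack.ml.max_lazy_ml_nodes` and `xpack.ml.max_ml_node_size`.

```java
// Hedged sketch of the scaling-availability check; names are illustrative,
// not the real Elasticsearch code.
public class MlScalingCheck {

    /**
     * Returns true when the cluster can still scale for ML workloads:
     * either fewer ML nodes exist than xpack.ml.max_lazy_ml_nodes allows
     * (horizontal scaling), or the largest current ML node is still below
     * xpack.ml.max_ml_node_size (vertical scaling).
     */
    public static boolean scalingPossible(int currentMlNodes,
                                          int maxLazyMlNodes,
                                          long largestMlNodeBytes,
                                          long maxMlNodeSizeBytes) {
        boolean horizontal = currentMlNodes < maxLazyMlNodes;
        boolean vertical = maxMlNodeSizeBytes > 0
                && largestMlNodeBytes < maxMlNodeSizeBytes;
        return horizontal || vertical;
    }

    public static void main(String[] args) {
        // Lazy-node limit fully satisfied, but nodes can still grow:
        // vertical scaling keeps task creation allowed.
        System.out.println(scalingPossible(3, 3, 16L << 30, 64L << 30));
        // Limit satisfied and nodes already at max size: no scaling.
        System.out.println(scalingPossible(3, 3, 64L << 30, 64L << 30));
    }
}
```

Before this change, only the first condition (the lazy-node count) was consulted, so a cluster whose node count was at the limit but whose nodes could still grow would wrongly refuse to create the task.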
💔 Backport failed
You can use sqren/backport to manually backport by running
[ML] allow autoscaling to work when vertical scaling is possible (#84242) (#84280): backport of #84242, including a fix for the backport. Closes #84198.
[ML] allow autoscaling to work when vertical scaling is possible (#84242) (#84286): backport of #84242. Closes #84198.