
Conversation

@linglingye001 (Member)

No description provided.

      memory: 128Mi
    requests:
-     cpu: 10m
+     cpu: 20m
Member:

Have you checked how many resources the controller takes when it is running without any provider YAML deployed?

Member Author:

[screenshot attached showing the controller's resource usage]

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: az-appconfig-k8s-provider-hpa
Member:

Use the full name variable here instead of the hard-coded name.
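A minimal sketch of what this suggestion might look like in the HPA template, assuming the chart defines the usual fullname helper in _helpers.tpl; the helper name azureAppConfigurationProvider.fullname is an assumption for illustration, not taken from this PR:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  # Derive the resource name from the chart's fullname helper instead of
  # hard-coding it; "azureAppConfigurationProvider.fullname" is a hypothetical
  # helper name used only for this sketch.
  name: {{ include "azureAppConfigurationProvider.fullname" . }}-hpa
```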

    limits:
-     cpu: 500m
+     cpu: 100m
      memory: 128Mi
Member:
It seems doubling the request memory is too conservative. How about using 256Mi as the limit?

Member Author:
Updated
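Assembling the quoted fragments above, the resources block after the review feedback would look roughly as follows; this is a sketch reconstructed from the visible diff lines, not a verbatim copy of the chart, and the memory request is left out because it does not appear in the quoted hunks:

```yaml
resources:
  limits:
    cpu: 100m
    memory: 256Mi   # raised from 128Mi per the review comment above
  requests:
    cpu: 20m
    # the memory request is not visible in the quoted diff fragments
```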

@linglingye001 linglingye001 changed the base branch from main to release/v1.1 January 31, 2024 04:55
@linglingye001 linglingye001 merged commit 5716345 into release/v1.1 Jan 31, 2024
@linglingye001 linglingye001 deleted the user/linglingye/podSchedule branch January 31, 2024 08:43
linglingye001 added a commit that referenced this pull request Feb 2, 2024
* Support setting tolerations, nodeSelector, and affinity during helm install (#8)

* add affinity/nodeSelector/tolerations in helm chart deployment file

* add hpa

* configure hpa when autoscaling is true

* resolve comments

* Bump up version to 1.1.1 (#11)

* Bump up version to 1.1.1

* update ci
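As context for the "add affinity/nodeSelector/tolerations in helm chart deployment file" and "configure hpa when autoscaling is true" commits above, options like these are typically surfaced through the chart's values and passed at install time through a custom values file or --set flags. The key names below (nodeSelector, tolerations, affinity, autoscaling.enabled, and friends) follow common Helm chart conventions and are assumptions for illustration, not values copied from this repository:

```yaml
# Illustrative values.yaml overrides; key names follow common Helm conventions
# and are assumed, not taken verbatim from this chart.
nodeSelector:
  kubernetes.io/os: linux

tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "appconfig"
    effect: "NoSchedule"

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values: ["amd64", "arm64"]

autoscaling:
  enabled: true   # gates rendering of the HorizontalPodAutoscaler template
  minReplicas: 1
  maxReplicas: 3
  targetCPUUtilizationPercentage: 80
```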