Merged
78 commits
e72aa34
fix
pawloch00 Mar 3, 2025
a1f465c
Merge branch 'develop' of https://github.com/AI-Hypercomputer/xpk int…
pawloch00 Mar 5, 2025
2a750c4
Merge branch 'develop' of https://github.com/AI-Hypercomputer/xpk int…
pawloch00 Apr 11, 2025
d09088b
Merge branch 'develop' of https://github.com/AI-Hypercomputer/xpk int…
pawloch00 Jun 3, 2025
5b1bae6
Merge branch 'develop' of https://github.com/AI-Hypercomputer/xpk int…
pawloch00 Jun 3, 2025
de9f7a1
feat: Added an update to CoreDNS, and when python3 xpk/xpk.py cluster…
DannyLiCom Jun 16, 2025
0d5729a
Update cluster.py
DannyLiCom Jun 17, 2025
7392e31
Update cluster.py
DannyLiCom Jun 17, 2025
49027e1
Merge branch 'develop' of https://github.com/AI-Hypercomputer/xpk int…
pawloch00 Jun 17, 2025
1c57afe
feat: Remaining: Verify CoreDNS startup and add 'update_coredns_if_ne…
DannyLiCom Jun 18, 2025
807e7c9
feat: Added CoreDNS status check and update_coredns_if_necessary func…
DannyLiCom Jun 19, 2025
18ed2c0
refactor: Organize code
DannyLiCom Jun 20, 2025
3b3c98f
Refactor check_coredns_status() into multiple smaller functions.
DannyLiCom Jun 23, 2025
9670da1
Refactor update_coredns(args) and add _verify_coredns_readiness().
DannyLiCom Jun 24, 2025
b58c54d
Organize code
DannyLiCom Jun 24, 2025
edb54a7
Organize code
DannyLiCom Jun 26, 2025
f6dde4f
Remove the arg.enable_pathways condition.
DannyLiCom Jul 3, 2025
dc4ed2b
Merge branch 'develop' of https://github.com/AI-Hypercomputer/xpk int…
pawloch00 Jul 8, 2025
04e5a1d
Resolve lint issue and added a function to fix a bug when validating …
DannyLiCom Jul 9, 2025
1b296f7
Delete this steps listing.
DannyLiCom Jul 10, 2025
f80f3ba
Merge branch 'develop' of https://github.com/AI-Hypercomputer/xpk int…
pawloch00 Jul 11, 2025
00f5a59
Merge branch 'develop' into lidanny/feature/update-to-CoreDNS
DannyLiCom Jul 17, 2025
1330277
Merge branch 'develop' into lidanny/feature/update-to-CoreDNS
pawloch00 Jul 18, 2025
1f895d9
Merge branch 'develop' of https://github.com/AI-Hypercomputer/xpk int…
pawloch00 Jul 18, 2025
133f2a4
Fix max-nodes when creating flex queued nodepool of tpus (#541)
pawloch00 Jul 18, 2025
87467c8
Merge branch 'develop' into lidanny/feature/update-to-CoreDNS
pawloch00 Jul 21, 2025
d5e7c7f
Merge branch 'develop' of https://github.com/AI-Hypercomputer/xpk int…
pawloch00 Jul 22, 2025
8b8f767
Fix kueue version in yaml string and loosen dependency on cloud-storag…
pawloch00 Jul 22, 2025
a95c5b1
Merge branch 'develop' into lidanny/feature/update-to-CoreDNS
pawloch00 Jul 23, 2025
674d7b6
Merge branch 'develop' of https://github.com/AI-Hypercomputer/xpk int…
pawloch00 Jul 23, 2025
e39a7a7
Remove RBAC container (#547)
pawloch00 Jul 23, 2025
26fc42b
Merge branch 'develop' of https://github.com/AI-Hypercomputer/xpk int…
pawloch00 Jul 23, 2025
ab5bc71
Merge main to develop (#542)
sharabiani Jul 23, 2025
babe94b
Merge branch 'develop' of https://github.com/AI-Hypercomputer/xpk int…
pawloch00 Jul 23, 2025
4c12e2a
fix kjob.py pyink (#552)
pawloch00 Jul 23, 2025
18817bc
Merge branch 'develop' into lidanny/feature/update-to-CoreDNS
pawloch00 Jul 24, 2025
8259007
Merge branch 'develop' into ppawl-merge-main-release-0.10.1
pawloch00 Jul 24, 2025
2b7c5f5
Update Kueue to create visibility folder (#556)
SujeethJinesh Jul 25, 2025
16e6c3c
Merge branch 'develop' into lidanny/feature/update-to-CoreDNS
pawloch00 Jul 25, 2025
c0fb3f6
Update CPU limits to 750m (#558)
SujeethJinesh Jul 28, 2025
a9e1873
Merge branch 'develop' into ppawl-merge-main-release-0.10.1
pawloch00 Jul 28, 2025
03b2b3b
Merge main release 0.10.1 (#555)
pawloch00 Jul 28, 2025
44b6d7b
Revert "Merge main release 0.10.1 (#555)" (#559)
pawloch00 Jul 28, 2025
bf6fcaf
Merge branch 'develop' into ppawl-merge-main-release-0.10.1
pawloch00 Jul 28, 2025
e776c29
Merge pull request #560 from AI-Hypercomputer/ppawl-merge-main-releas…
pawloch00 Jul 28, 2025
0945e87
Merge branch 'develop' into lidanny/feature/update-to-CoreDNS
DannyLiCom Jul 29, 2025
14b33f2
AutoscalingProfile was set to optimize_utilization (#565)
sharabiani Jul 29, 2025
25fe399
"Select TPU by topology (#525)" + Fix errors (#563)
sharabiani Jul 30, 2025
f2340d0
Update CPU limit for large scale clusters (#571)
SujeethJinesh Jul 30, 2025
98bcf4e
Merge branch 'develop' into lidanny/feature/update-to-CoreDNS
DannyLiCom Jul 31, 2025
48012d8
Merge pull request #530 from AI-Hypercomputer/lidanny/feature/update-…
DannyLiCom Aug 4, 2025
5eaffd6
Update CODEOWNERS
scaliby Aug 7, 2025
e3ae587
Merge pull request #581 from scaliby/update_owners
sharabiani Aug 7, 2025
2faa737
fix: autoprovisioning cluster create
scaliby Aug 12, 2025
b4e802a
Merge pull request #589 from AI-Hypercomputer/scaliby/b/437370853
scaliby Aug 12, 2025
42cb07b
fix: provisioning 1t tpu topologies
scaliby Aug 18, 2025
91c9127
style: reformat
scaliby Aug 18, 2025
5c3b87b
fix: reorder custom_nodepool_arguments for node-pool create command
scaliby Aug 18, 2025
880d83f
NAP memory limit increased
sharabiani Aug 18, 2025
81e1bab
Revert "NAP memory limit increased"
sharabiani Aug 18, 2025
1f6d137
NAP memory limit increased
sharabiani Aug 18, 2025
9a41e4e
NAP cpu limit increased
sharabiani Aug 18, 2025
e80a686
fix #598 only install JQ when not installed
samos123 Aug 19, 2025
ec84890
Merge pull request #595 from AI-Hypercomputer/konradkaim/fix-provisio…
scaliby Aug 19, 2025
d8a2158
Merge pull request #596 from AI-Hypercomputer/konradkaim/reorder-cust…
scaliby Aug 19, 2025
c205b26
Merge branch 'develop' into nap-memory-limit
sharabiani Aug 19, 2025
6c0fad0
Merge pull request #597 from AI-Hypercomputer/nap-memory-limit
sharabiani Aug 19, 2025
87ff0aa
Merge branch 'develop' into fix-jq-install
samos123 Aug 20, 2025
d390bff
Merge pull request #601 from AI-Hypercomputer/fix-jq-install
scaliby Aug 21, 2025
b503f54
fix: custom nodepool arguments append
scaliby Aug 21, 2025
86573a3
feat: add tpu7x support
scaliby Aug 11, 2025
0561caf
Merge pull request #602 from AI-Hypercomputer/konradkaim/custom-nodep…
scaliby Aug 21, 2025
6369383
Merge branch 'develop' into konradkaim/tpu7x-support
scaliby Aug 22, 2025
f0e0b4c
Merge pull request #586 from AI-Hypercomputer/konradkaim/tpu7x-support
scaliby Aug 22, 2025
c8af3c4
fix: provisioning scopes for nap
scaliby Aug 25, 2025
fd613de
Merge pull request #605 from AI-Hypercomputer/scaliby/b/439921648
scaliby Aug 25, 2025
0d4c860
Merge pull request #606 from AI-Hypercomputer/scaliby/b/434405026
scaliby Aug 28, 2025
21c1c13
Release v0.11.0
scaliby Aug 29, 2025
2 changes: 1 addition & 1 deletion .github/CODEOWNERS
@@ -1,2 +1,2 @@
-* @Obliviour @44past4 @sharabiani @pawloch00 @BluValor @gcie @RoshaniN
+* @Obliviour @44past4 @sharabiani @pawloch00 @BluValor @gcie @RoshaniN @scaliby @jamOne- @SikaGrr @FIoannides @fatoshoti
slice/ @mwysokin @mimowo @gabesaba @PBundyra @mwielgus @pajakd
10 changes: 8 additions & 2 deletions .github/workflows/build_tests.yaml
@@ -40,6 +40,7 @@ jobs:
group-name: ${{ steps.set-group-name.outputs.group-name }}
zone: ${{ steps.set-zone.outputs.zone }}
tpu-type: ${{ steps.set-tpu-type.outputs.tpu-type }}
tpu-type-topology: ${{ steps.set-tpu-type-topology.outputs.tpu-type-topology }}
location: ${{steps.set-location.outputs.location}}
run-id: ${{steps.set-run-id.outputs.run-id}}
steps:
@@ -76,6 +77,10 @@ jobs:
id: set-tpu-type
run: |
echo tpu-type=v4-8 >> $GITHUB_OUTPUT
- name: set tpu-type-topology
id: set-tpu-type-topology
run: |
echo tpu-type-topology=v4-2x2x1 >> $GITHUB_OUTPUT
- name: set location
id: set-location
run: |
@@ -152,7 +157,7 @@ jobs:
with:
run-id: '${{needs.set-variables.outputs.run-id}}'
cluster-name: '${{needs.set-variables.outputs.cluster-name}}'
-tpu-type: '${{needs.set-variables.outputs.tpu-type || inputs.tpu-type}}'
+tpu-type: '${{needs.set-variables.outputs.tpu-type-topology || inputs.tpu-type}}'
zone: '${{needs.set-variables.outputs.zone}}'
location: '${{needs.set-variables.outputs.location}}'
secrets: inherit
@@ -165,7 +170,7 @@
with:
cluster-name-dws: '${{needs.set-variables.outputs.cluster-name-dws}}'
cluster-name: '${{needs.set-variables.outputs.cluster-name}}'
-tpu-type: '${{needs.set-variables.outputs.tpu-type || inputs.tpu-type}}'
+tpu-type: '${{needs.set-variables.outputs.tpu-type-topology || inputs.tpu-type}}'
zone: '${{needs.set-variables.outputs.zone}}'
location: '${{needs.set-variables.outputs.location}}'
run-id: '${{needs.set-variables.outputs.run-id}}'
@@ -180,6 +185,7 @@
cluster-name: ${{needs.set-variables.outputs.cluster-name}}
cluster-name-dws: '${{needs.set-variables.outputs.cluster-name-dws}}'
tpu-type: ${{needs.set-variables.outputs.tpu-type}}
tpu-type-topology: ${{needs.set-variables.outputs.tpu-type-topology}}
zone: ${{needs.set-variables.outputs.zone}}
run-id: '${{needs.set-variables.outputs.run-id}}'
secrets: inherit
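
Editor's note on the change above (illustration, not part of the PR): the reusable test workflows now receive the topology-style accelerator name (v4-2x2x1) through the new tpu-type-topology job output, while manual workflow_dispatch runs that only supply inputs.tpu-type keep working. The fallback relies on the GitHub Actions || expression returning its first truthy operand; an unset job output is the empty string, which is falsy. A hypothetical Python helper with the same selection logic:

def resolve_tpu_type(tpu_type_topology: str, tpu_type_input: str) -> str:
  """Mirrors `needs.set-variables.outputs.tpu-type-topology || inputs.tpu-type`."""
  # Python's `or`, like the Actions `||` expression, returns the first
  # truthy operand rather than a coerced boolean.
  return tpu_type_topology or tpu_type_input

assert resolve_tpu_type('v4-2x2x1', 'v4-8') == 'v4-2x2x1'  # scheduled run, output set
assert resolve_tpu_type('', 'v4-8') == 'v4-8'  # manual dispatch, output unset
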
5 changes: 4 additions & 1 deletion .github/workflows/reusable_workload_tests.yaml
@@ -24,6 +24,9 @@ on:
tpu-type:
required: true
type: string
tpu-type-topology:
required: true
type: string
tpu-type-dws:
required: false
type: string
@@ -108,7 +111,7 @@ jobs:
--docker-password='${{secrets.GCP_SA_KEY}}' \
--docker-email='${{secrets.GCP_SA_EMAIL}}'
- name: Run workload with private image
-run: python xpk.py workload create --cluster ${{inputs.cluster-name}} --workload $PRIVATE_IMAGE_WORKLOAD_NAME --command "echo foo" --tpu-type=${{inputs.tpu-type}} --num-slices=1 --zone=${{inputs.zone}} --docker-image=${{secrets.DOCKER_REPO_SERVER}}ubuntu2004 --docker-image-pull-secret=gcr-key
+run: python xpk.py workload create --cluster ${{inputs.cluster-name}} --workload $PRIVATE_IMAGE_WORKLOAD_NAME --command "echo foo" --tpu-type=${{inputs.tpu-type-topology}} --num-slices=1 --zone=${{inputs.zone}} --docker-image=${{secrets.DOCKER_REPO_SERVER}}ubuntu2004 --docker-image-pull-secret=gcr-key
- name: Wait for private image workload completion and confirm it succeeded
run: python3 xpk.py workload list --cluster ${{inputs.cluster-name}} --zone=${{inputs.zone}} --wait-for-job-completion $PRIVATE_IMAGE_WORKLOAD_NAME --timeout 300
- name: Delete kubectl secret
263 changes: 263 additions & 0 deletions src/xpk/commands/cluster.py
@@ -78,6 +78,8 @@
from ..utils.file import write_tmp_file
from . import cluster_gcluster
from .common import set_cluster_command
import shutil
import os


def cluster_adapt(args) -> None:
@@ -247,6 +249,10 @@ def cluster_create(args) -> None:

get_cluster_credentials(args)

update_coredns_command_code = update_coredns_if_necessary(args)
if update_coredns_command_code != 0:
    xpk_exit(update_coredns_command_code)

k8s_client = setup_k8s_env(args)

install_storage_crd(k8s_client)
@@ -702,6 +708,262 @@ def cluster_create_ray_cluster(args) -> None:
cluster_create(args)


def install_jq(args):
"""Installs 'jq' utility."""
if shutil.which('jq'):
xpk_print("Task: 'Install jq' skipped, jq already installed.")
return
command_jq_install = 'sudo apt install jq -y'
xpk_print("Task: 'Install jq' in progress.")
return_code = run_command_with_updates(command_jq_install, 'Install jq', args)
if return_code != 0:
xpk_print(f'Install jq error {return_code}')
xpk_exit(return_code)


def clone_coredns_deployment_repo(args, coredns_repo_full_path: str):
"""Clones the CoreDNS deployment repository if it doesn't exist."""
if os.path.exists(coredns_repo_full_path):
xpk_print(
f"Directory '{coredns_repo_full_path}' already exists, skip git clone."
)
return
command_git_clone = (
'git clone https://github.com/coredns/deployment.git'
f' {coredns_repo_full_path}'
)
xpk_print(
"Task: 'Clone deployment' in progress, Target"
f' directory:{coredns_repo_full_path}.'
)
return_code = run_command_with_updates(
command_git_clone, 'Clone deployment', args
)
if return_code != 0:
xpk_print(f'Clone deployment error {return_code}')
xpk_exit(return_code)


def deploy_coredns_manifests(args, coredns_k8s_path: str):
"""Deploys CoreDNS manifests to the cluster."""
if not os.path.isdir(coredns_k8s_path):
xpk_print(
f"Error:CoreDNS Kubernetes path '{coredns_k8s_path}' does not exist."
' Has git clone been successful?'
)
xpk_exit(1)
original_cwd = os.getcwd()
try:
os.chdir(coredns_k8s_path)
xpk_print(f'Current working directory changed to: {os.getcwd()}')

command_deploy_coredns = './deploy.sh | kubectl apply -f -'
xpk_print(
f"Task: 'Deploy CoreDNS' in progress, Located at '{coredns_k8s_path}'"
)
return_code = run_command_with_updates(
command_deploy_coredns, 'Deploy CoreDNS', args
)
if return_code != 0:
xpk_print(f'Deploy CoreDNS error {return_code}')

finally:
xpk_print(f'Restoring working directory to: {original_cwd}')
os.chdir(original_cwd)
if return_code != 0:
xpk_exit(return_code)


def scale_down_deployment(
args, deployment_name: str, namespace: str = 'kube-system'
):
"""Scales down a specified Kubernetes deployment to 0 replicas."""
command = (
f'kubectl scale deployment {deployment_name} --replicas=0'
f' --namespace={namespace}'
)
xpk_print(f"Task: 'Scaling down {deployment_name}' in progress")
return_code = run_command_with_updates(
command, f'Scale down {deployment_name}', args
)
if return_code != 0:
xpk_print(f'Scale down {deployment_name} error {return_code}')
xpk_exit(return_code)
xpk_print(f'\n{deployment_name} has been scaled down.')


def scale_up_coredns(args, replicas: int = 15, namespace: str = 'kube-system'):
"""Scales up the CoreDNS deployment to a specified number of replicas."""
command_coredns_scale = (
f'kubectl scale deployment coredns --replicas={replicas} -n {namespace}'
)
xpk_print(f"Task: 'Scale CoreDNS' in progress (to {replicas} replicas)")
return_code = run_command_with_updates(
command_coredns_scale, 'Scale CoreDNS', args
)
if return_code != 0:
xpk_print(f'Scale CoreDNS error {return_code}')
xpk_exit(return_code)


def check_deployment_exists(args, deployment_name: str, namespace: str) -> bool:
  """Checks whether a specific Deployment exists in a given namespace."""
  # Without --ignore-not-found, kubectl exits non-zero for a missing
  # deployment, so the return code doubles as an existence signal.
  command = f'kubectl get deployment {deployment_name} -n {namespace}'
  return_code = run_command_with_updates(
      command, f'Check {deployment_name} deployment exists', args
  )
  return return_code == 0


def verify_coredns_readiness(
args, timeout: int = 120, namespace: str = 'kube-system'
):
"""Verifies CoreDNS readiness using kubectl wait commands."""
xpk_print('Now verifying CoreDNS readiness...')
kube_dns_exists = check_deployment_exists(args, 'kube-dns', namespace)
if kube_dns_exists:
# Wait for kube-dns to be fully scaled down
command_kube_dns_wait_scaled_down = (
'kubectl wait deployment/kube-dns'
" --for=jsonpath='{.status.replicas}'=0"
f' --namespace={namespace} --timeout={timeout}s'
)
xpk_print('Verifying if kube-dns has scaled down...')
return_code_kube_dns = run_command_with_updates(
command_kube_dns_wait_scaled_down, 'Wait for kube-dns scale down', args
)
if return_code_kube_dns != 0:
xpk_print('kube-dns did not scale down successfully within the timeout.')
xpk_exit(1) # Exit if kube-dns cannot scale down
else:
xpk_print('kube-dns has successfully scaled down.')
else:
xpk_print('kube-dns deployment not found.')
# Wait for CoreDNS to be fully scaled up and available
command_coredns_wait_available = (
'kubectl wait deployment/coredns --for=condition=Available=true'
f' --namespace={namespace} --timeout={timeout}s'
)
xpk_print('Verifying if CoreDNS is available...')
return_code_coredns = run_command_with_updates(
command_coredns_wait_available, 'Wait for coredns available', args
)
if return_code_coredns != 0:
xpk_print(
'CoreDNS verification failed, it might not have fully started within'
' the timeout.'
)
xpk_exit(1) # Exit if coredns cannot become available

xpk_print('CoreDNS has successfully started and passed verification.')


def cleanup_coredns_repo(coredns_repo_full_path: str):
"""Deletes the cloned CoreDNS deployment directory."""
xpk_print(
"Task: 'Deleting CoreDNS deployment directory' in progress:"
f' {coredns_repo_full_path}'
)
try:
shutil.rmtree(coredns_repo_full_path)
xpk_print(f'Successfully deleted directory: {coredns_repo_full_path}')
except OSError as e:
xpk_print(f'Error deleting directory {coredns_repo_full_path}: {e}')


def update_coredns(args):
"""Updates and deploys CoreDNS within a cluster.

Args:
args: user provided arguments for running the command.

Returns:
    0 on success; on failure the process exits via xpk_exit.
"""
coredns_repo_dir = os.path.expanduser('/tmp/')
coredns_repo_dir_name = 'deployment'
coredns_repo_full_path = os.path.join(coredns_repo_dir, coredns_repo_dir_name)
coredns_k8s_path = os.path.join(coredns_repo_full_path, 'kubernetes')
# 1. Install jq
install_jq(args)

# 2. Clone CoreDNS deployment repository
clone_coredns_deployment_repo(args, coredns_repo_full_path)

# 3. Deploy CoreDNS to the cluster
deploy_coredns_manifests(args, coredns_k8s_path)

# 4. Scale down kube-dns-autoscaler
scale_down_deployment(args, 'kube-dns-autoscaler')

# 5. Scale down kube-dns
scale_down_deployment(args, 'kube-dns')

# 6. Scale up coredns and verify readiness
scale_up_coredns(args, replicas=15)
verify_coredns_readiness(args, timeout=120)

xpk_print('The CoreDNS setup process has been completed.')

# 7. Cleanup
cleanup_coredns_repo(coredns_repo_full_path)

return 0


def coredns_deployment_exists(args, namespace: str = 'kube-system') -> bool:
"""Checks if the CoreDNS deployment exists in the given namespace.

  Args:
    args: user provided arguments for running the command.
    namespace: The Kubernetes namespace to check for the CoreDNS deployment.

Returns:
True if the 'coredns' deployment exists, False otherwise.
"""
command = f'kubectl get deployment coredns -n {namespace}'
xpk_print(
"Task: 'Checking CoreDNS deployment existence' in progress for"
f' namespace: {namespace}'
)
return_code = run_command_with_updates(
command, f'Check CoreDNS deployment in {namespace}', args
)
if return_code == 0:
verify_coredns_readiness(args)
xpk_print(f"CoreDNS deployment 'coredns' found in namespace '{namespace}'.")
return True
else:
xpk_print(
f"CoreDNS deployment 'coredns' NOT found in namespace '{namespace}' or"
' an error occurred.'
)
return False


def update_coredns_if_necessary(args) -> int:
"""Updates and deploys CoreDNS within the cluster if it's not already present.

This function checks for the existence of the CoreDNS deployment.
If it's not found, it proceeds to deploy and configure CoreDNS.

Args:
args: User-provided arguments for running the command.

Returns:
    0 if CoreDNS was already present or was successfully deployed; on
    failure the process exits via xpk_exit.
"""
if coredns_deployment_exists(args, namespace='kube-system'):
xpk_print('Skipping CoreDNS deployment since it already exists.')
return 0
else:
xpk_print('CoreDNS deployment not found. Proceeding with CoreDNS setup.')
return update_coredns(args)


def create_cluster_if_necessary(
args, gke_control_plane_version: str, system: SystemCharacteristics
) -> int:
@@ -842,6 +1104,7 @@ def run_gke_cluster_create_command(
f' {args.custom_cluster_arguments}'
f' {rapid_release_cmd}'
' --enable-dns-access'
' --autoscaling-profile=optimize-utilization'
)

enable_ip_alias = False
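
Editor's note on the new CoreDNS path above: because every cluster mutation goes through run_command_with_updates, the flow can be exercised without a live cluster by stubbing that runner. A minimal pytest-style sketch (the test scaffolding is hypothetical; the module path xpk.commands.cluster and the function names come from the diff above):

from unittest import mock

from xpk.commands import cluster

def test_skips_setup_when_coredns_already_exists():
  # A zero return code from 'kubectl get deployment coredns -n kube-system'
  # means the deployment already exists, so update_coredns() must not run.
  with mock.patch.object(
      cluster, 'run_command_with_updates', return_value=0
  ) as fake_run, mock.patch.object(cluster, 'verify_coredns_readiness'):
    assert cluster.update_coredns_if_necessary(args=mock.Mock()) == 0
    fake_run.assert_called_once()  # only the existence check ran
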
4 changes: 2 additions & 2 deletions src/xpk/core/capacity.py
@@ -232,9 +232,9 @@ def get_capacity_node_selectors_from_capacity_type(
case CapacityType.ON_DEMAND.name:
node_selector = ''
case CapacityType.FLEX_START.name:
-node_selector = 'cloud.google.com/gke-queued="true"'
+node_selector = 'cloud.google.com/gke-queued: "true"'
case CapacityType.SPOT.name:
-node_selector = 'cloud.google.com/gke-spot="true"'
+node_selector = 'cloud.google.com/gke-spot: "true"'
case CapacityType.RESERVATION.name:
node_selector = f'cloud.google.com/reservation-name: {args.reservation}'
case _:
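
Editor's note on the selector fix above: FLEX_START and SPOT now use the YAML-mapping form (key: "true") that the RESERVATION case already used, instead of label-selector syntax (key="true"), which suggests these strings are spliced into generated YAML manifests. A small illustration of why the old form breaks (the template is illustrative, not xpk's actual one; assumes PyYAML is installed):

import yaml  # third-party: PyYAML

node_selector = 'cloud.google.com/gke-spot: "true"'
manifest = f'nodeSelector:\n  {node_selector}\n'
print(yaml.safe_load(manifest))
# -> {'nodeSelector': {'cloud.google.com/gke-spot': 'true'}}

# The old string, cloud.google.com/gke-spot="true", would parse as one bare
# scalar under nodeSelector rather than as a key/value pair, yielding an
# invalid nodeSelector when the manifest is applied.
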
2 changes: 1 addition & 1 deletion src/xpk/core/config.py
@@ -22,7 +22,7 @@
from ..utils.console import xpk_print

# This is the version for XPK PyPI package
-__version__ = 'v0.10.1'
+__version__ = 'v0.11.0'
XPK_CURRENT_VERSION = __version__
XPK_CONFIG_FILE = os.path.expanduser('~/.config/xpk/config.yaml')

2 changes: 1 addition & 1 deletion src/xpk/core/jobset.py
@@ -81,7 +81,7 @@
limits:
memory: {memory_limit_size}
requests:
-cpu: 500m
+cpu: 1000m
memory: 128Mi
securityContext:
allowPrivilegeEscalation: false