Cannot create Node pool - Error 400 #1919

Closed
jacopoterrinoni92 opened this issue Apr 2, 2024 · 1 comment

Hello everyone.

I'm having trouble setting up a GKE cluster through Terraform because I receive the following error when applying the code:
```
│ Error: googleapi: Error 400: Updating private endpoint on vpc peering private clusters is not supported.
│ Details:
│ [
│   {
│     "@type": "type.googleapis.com/google.rpc.RequestInfo",
│     "requestId": "0x5d3b5581088c16c7"
│   }
│ ]
│ , badRequest
│
│   with google_container_cluster.k8s_cluster,
│   on main.tf line 72, in resource "google_container_cluster" "k8s_cluster":
│   72: resource "google_container_cluster" "k8s_cluster" {
```
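From the message, the API appears to reject an update call that changes the private endpoint setting on a cluster whose control plane is connected via VPC peering. If the 400 only shows up on a re-apply rather than on the initial create (which I have not verified, so this is an assumption), a generic Terraform escape hatch would be to stop diffing that block; this is only a sketch, not a confirmed fix:

```hcl
resource "google_container_cluster" "k8s_cluster" {
  # ... existing configuration from below ...

  # Sketch only, not a confirmed fix: ignore drift on the whole
  # private_cluster_config block so Terraform stops sending the unsupported
  # private-endpoint update on subsequent applies. This is coarse (it also
  # hides legitimate changes to the block) and assumes the create succeeds.
  lifecycle {
    ignore_changes = [private_cluster_config]
  }
}
```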

Following is the tf code for the GKE cluster:
```hcl
resource "google_container_cluster" "k8s_cluster" {

  depends_on = [google_compute_subnetwork.gke_network]

  name     = var.gke_cluster_name
  location = var.region

  // cannot be used with autopilot mode
  // remove_default_node_pool = true

  // cannot be used with autopilot mode
  // initial_node_count = 1

  network         = module.vpc.network_self_link
  subnetwork      = google_compute_subnetwork.gke_network.self_link
  networking_mode = "VPC_NATIVE" // or ROUTES

  // logging_service    = "logging.googleapis.com/kubernetes"
  // monitoring_service = "monitoring.googleapis.com/kubernetes"

  enable_autopilot = true

  // cannot be used with autopilot mode
  // default_max_pods_per_node = 5

  addons_config {
    // It is enabled by default; set disabled = true to disable.
    horizontal_pod_autoscaling {
      disabled = false
    }

    /*
    The status of the HTTP (L7) load balancing controller addon,
    which makes it easy to set up HTTP load balancers for services in a cluster.
    It is enabled by default.
    MUST BE ENABLED FOR AUTOPILOT
    */
    http_load_balancing {
      disabled = false
    }
  }

  /*
  Cannot be used with GKE Autopilot mode.

  cluster_autoscaling {

    Whether node auto-provisioning is enabled. Must be supplied for GKE Standard clusters;
    true is implied for Autopilot clusters.
    Resource limits for cpu and memory must be defined to enable node auto-provisioning
    for GKE Standard.

    enabled = true

    resource_limits {
      resource_type = "cpu"
      minimum       = 1
      maximum       = 16
    }

    resource_limits {
      resource_type = "memory"
      minimum       = 1
      maximum       = 64
    }
  }
  */

  release_channel {
    /*
    UNSPECIFIED: Not set.
    RAPID: Weekly upgrade cadence; early testers and developers who require new features.
    REGULAR: Multiple upgrades per month; production users who need features not yet
    offered in the Stable channel.
    STABLE: Upgrade every few months; production users who need stability above all
    else, and for whom frequent upgrades are too risky.
    */
    channel = "REGULAR"
  }

  ip_allocation_policy {
    cluster_secondary_range_name  = var.pod_secondary_range_name
    services_secondary_range_name = var.service_secondary_range_name
  }

  private_cluster_config {
    /*
    Enables the private cluster feature, creating a private endpoint on the cluster.
    In a private cluster, nodes only have RFC 1918 private addresses and communicate
    with the master's private endpoint via private networking.
    */
    enable_private_nodes = true

    /*
    When true, the cluster's private endpoint is used as the cluster endpoint and
    access through the public endpoint is disabled. When false, either endpoint can
    be used. This field only applies to private clusters, when enable_private_nodes
    is true.
    */
    enable_private_endpoint = false

    /*
    The IP range in CIDR notation to use for the hosted master network.
    This range will be used for assigning private IP addresses to the cluster master(s)
    and the ILB VIP. It must not overlap with any other range in use within the
    cluster's network, and it must be a /28 subnet.
    */
    master_ipv4_cidr_block = "172.16.0.0/28"
  }

  /*
  The desired configuration options for master authorized networks.
  Omit the nested cidr_blocks attribute to disallow external access (except the
  cluster node IPs, which GKE automatically whitelists).
  */
  master_authorized_networks_config {
    // Whether the Kubernetes master is accessible via Google Compute Engine public IPs.
    // gcp_public_cidrs_access_enabled = true
  }

  vertical_pod_autoscaling {
    /*
    Enables vertical pod autoscaling.
    MUST BE ENABLED FOR AUTOPILOT
    */
    enabled = true
  }
}
```
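A variant I have seen suggested for this particular 400 (unconfirmed on my side, so treat it as an assumption) is to leave enable_private_endpoint unset, since false is already the provider default and setting it explicitly may be what triggers the endpoint update against the peering-based control plane:

```hcl
private_cluster_config {
  enable_private_nodes   = true
  master_ipv4_cidr_block = "172.16.0.0/28"

  # enable_private_endpoint deliberately omitted: "false" is the provider
  # default anyway, and the explicit value may be what causes the provider
  # to issue the unsupported update call (assumption, not verified).
}
```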

and for node pool creation
```hcl
resource "google_container_node_pool" "gke_node_pool" {
  name    = var.node_pool_name
  cluster = google_container_cluster.k8s_cluster.id

  node_count = 2

  /*
  Parameters used in creating the node pool. See google_container_cluster for schema.
  */
  node_config {
    /*
    Whether the underlying node VMs are preemptible.
    */
    preemptible = false

    /*
    The name of a Google Compute Engine machine type. Defaults to e2-medium.
    */
    machine_type = "e2-small"

    /*
    Size of the disk attached to each node, specified in GB.
    The smallest allowed disk size is 10GB. Defaults to 100GB.
    */
    disk_size_gb = 100

    /*
    Type of the disk attached to each node (e.g. 'pd-standard', 'pd-balanced' or 'pd-ssd').
    If unspecified, the default disk type is 'pd-standard'.
    */
    disk_type = "pd-balanced"

    /*
    The image type to use for this node.
    Note that GKE Autopilot clusters always use this image.
    [ubuntu_containerd, cos, ubuntu, windows_ltsc_containerd, windows_ltsc]
    */
    image_type = "cos_containerd"

    /*
    The Kubernetes labels (key/value pairs) to be applied to each node.
    */
    labels = {
      role = "chronicle_forwarder"
    }

    /*
    The service account to be used by the node VMs.
    If not specified, the "default" service account is used.
    Suggested security best practice is to use a custom SA.
    */
    service_account = google_service_account.cluster_service_account.email

    /*
    The set of Google API scopes to be made available on all of the node VMs under
    the "default" service account.
    Use the "https://www.googleapis.com/auth/cloud-platform" scope to grant access to all APIs.
    It is recommended that you set service_account to a non-default service account
    and grant IAM roles to that service account for only the resources that it needs.
    */
    oauth_scopes = ["https://www.googleapis.com/auth/cloud-platform"]
  }

  /*
  Configuration required by the cluster autoscaler to adjust the size of the node pool
  to the current cluster usage.
  */
  autoscaling {
    /*
    Minimum number of nodes per zone in the NodePool. Must be >= 0 and <= max_node_count.
    */
    min_node_count = 2

    /*
    Maximum number of nodes per zone in the NodePool. Must be >= min_node_count.
    */
    max_node_count = 20

    /*
    Location policy specifies the algorithm used when scaling up the node pool.
    BALANCED: a best-effort policy that aims to balance the sizes of available zones.
    ANY: instructs the cluster autoscaler to prioritize utilization of unused
    reservations and reduce preemption risk for Spot VMs.
    */
    location_policy = "BALANCED"
  }

  /*
  Node management configuration, wherein auto-repair and auto-upgrade are configured.
  */
  management {
    /*
    Whether the nodes will be automatically repaired. Enabled by default.
    */
    auto_repair = true

    /*
    Whether the nodes will be automatically upgraded. Enabled by default.
    */
    auto_upgrade = true
  }

  /*
  The network configuration of the pool, such as configuration for adding Pod IP
  address ranges to the node pool.
  */
  network_config {
  }

  /*
  Specify node upgrade settings to change how GKE upgrades nodes.
  The maximum number of nodes upgraded simultaneously is limited to 20.
  */
  upgrade_settings {
    /*
    The number of nodes that can be simultaneously unavailable during an upgrade.
    Increasing max_unavailable raises the number of nodes that can be upgraded in parallel.
    Can be set to 0 or greater.
    */
    max_unavailable = 18
  }

  /*
  The maximum number of pods per node in this node pool.
  */
  max_pods_per_node = var.max_pods_per_node

  /*
  The list of zones in which the node pool's nodes should be located.
  */
  // node_locations = []
}
```
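One side note on the title: the cluster above sets enable_autopilot = true, and since Autopilot provisions and manages node pools itself, my understanding is that attaching a separate google_container_node_pool would be rejected regardless of the 400. If manual node pools are actually required, the Standard-mode settings I commented out in the cluster resource would apply instead; a minimal sketch of that direction (my assumption, not something I have tested here):

```hcl
resource "google_container_cluster" "k8s_cluster" {
  name     = var.gke_cluster_name
  location = var.region

  # Standard mode: drop enable_autopilot, remove the default node pool, and
  # manage capacity through the google_container_node_pool resource above.
  remove_default_node_pool = true
  initial_node_count       = 1

  # ... networking and private cluster settings as above ...
}
```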

Does anybody know what the error is about?


github-actions bot commented Jun 1, 2024

This issue is stale because it has been open 60 days with no activity. Remove the stale label or comment, or this will be closed in 7 days.

@github-actions github-actions bot added the Stale label Jun 1, 2024
@github-actions github-actions bot closed this as not planned Jun 9, 2024