
CloudFormation template is larger than the 51200 bytes limit #8065

Closed

hakman opened this issue Dec 9, 2019 · 4 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@hakman
Member

hakman commented Dec 9, 2019

1. What kops version are you running? The command kops version will display this information.

Version 1.17.0-alpha.1 (git-f4320a884)

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:23:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.0-rc.2", GitCommit:"3bec159cdc7f18e5b8787e65e0c308c5e9eeadf3", GitTreeState:"clean", BuildDate:"2019-12-03T14:12:44Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}

3. What cloud provider are you using?
aws

4. What commands did you run? What is the simplest way to reproduce this issue?

$ kops create cluster \
  --cloud aws \
  --topology public \
  --networking calico \
  --master-count 1 \
  --node-count 1 \
  --zones eu-central-1a \
  --ssh-public-key ~/.ssh/id_rsa.pub \
  --kubernetes-version 1.17.0-rc.2 \
  --state s3://com.mydomain.ciprian.k8s-state \
  --target cloudformation \
  k8s-cf.test.mydomain.com
I1209 12:44:02.934940   91331 create_cluster.go:1574] Using SSH public key: /Users/myuser/.ssh/id_rsa.pub
I1209 12:44:04.665489   91331 subnets.go:184] Assigned CIDR 172.20.32.0/19 to subnet eu-central-1a
W1209 12:44:08.787254   91331 firewall.go:250] Opening etcd port on masters for access from the nodes, for calico.  This is unsafe in untrusted environments.
I1209 12:44:09.351692   91331 executor.go:103] Tasks: 0 done / 90 total; 47 can run
I1209 12:44:09.352810   91331 dnszone.go:316] Check for existing route53 zone to re-use with name ""
W1209 12:44:09.356023   91331 sshkey.go:209] Cloudformation does not manage SSH keys; pre-creating SSH key
I1209 12:44:09.485334   91331 dnszone.go:323] Existing zone "test.mydomain.com." found; will configure cloudformation to reuse
I1209 12:44:10.025693   91331 vfs_castore.go:728] Issuing new certificate: "etcd-clients-ca"
I1209 12:44:10.037562   91331 vfs_castore.go:728] Issuing new certificate: "etcd-peers-ca-main"
I1209 12:44:10.065657   91331 vfs_castore.go:728] Issuing new certificate: "ca"
I1209 12:44:10.146605   91331 vfs_castore.go:728] Issuing new certificate: "etcd-manager-ca-events"
I1209 12:44:10.149192   91331 vfs_castore.go:728] Issuing new certificate: "apiserver-aggregator-ca"
I1209 12:44:10.195296   91331 vfs_castore.go:728] Issuing new certificate: "etcd-peers-ca-events"
I1209 12:44:10.442966   91331 vfs_castore.go:728] Issuing new certificate: "etcd-manager-ca-main"
I1209 12:44:10.882961   91331 executor.go:103] Tasks: 47 done / 90 total; 24 can run
I1209 12:44:11.591465   91331 vfs_castore.go:728] Issuing new certificate: "kubecfg"
I1209 12:44:11.609482   91331 vfs_castore.go:728] Issuing new certificate: "kube-controller-manager"
I1209 12:44:11.639024   91331 vfs_castore.go:728] Issuing new certificate: "master"
I1209 12:44:11.666925   91331 vfs_castore.go:728] Issuing new certificate: "apiserver-aggregator"
I1209 12:44:11.718260   91331 vfs_castore.go:728] Issuing new certificate: "kubelet"
I1209 12:44:11.725922   91331 vfs_castore.go:728] Issuing new certificate: "kube-proxy"
I1209 12:44:11.763443   91331 vfs_castore.go:728] Issuing new certificate: "kube-scheduler"
I1209 12:44:11.824671   91331 vfs_castore.go:728] Issuing new certificate: "kops"
I1209 12:44:11.827747   91331 vfs_castore.go:728] Issuing new certificate: "apiserver-proxy-client"
I1209 12:44:11.886526   91331 vfs_castore.go:728] Issuing new certificate: "kubelet-api"
I1209 12:44:12.545405   91331 executor.go:103] Tasks: 71 done / 90 total; 17 can run
I1209 12:44:12.832101   91331 executor.go:103] Tasks: 88 done / 90 total; 2 can run
I1209 12:44:12.832494   91331 executor.go:103] Tasks: 90 done / 90 total; 0 can run
I1209 12:44:12.842219   91331 target.go:145] CloudFormation output is in out/cloudformation
I1209 12:44:13.044958   91331 update_cluster.go:294] Exporting kubecfg for cluster
kops has set your kubectl context to k8s-cf.test.mydomain.com
Cloudformation output has been placed into out/cloudformation
Run this command to apply the configuration:
   aws cloudformation create-stack --capabilities CAPABILITY_NAMED_IAM --stack-name kubernetes-k8s-cf-test-mydomain-com --template-body file://out/cloudformation/kubernetes.json

$ aws cloudformation create-stack \
  --capabilities CAPABILITY_NAMED_IAM \
  --stack-name kubernetes-k8s-cf-test-mydomain-com \
  --template-body file://out/cloudformation/kubernetes.json
An error occurred (ValidationError) when calling the CreateStack operation: 1 validation error detected: Value '{
...
}' at 'templateBody' failed to satisfy constraint: Member must have length less than or equal to 51200

$ ls -l out/cloudformation/kubernetes.json
  -rw-------  1 chacman  staff  51798 Dec  9 12:44 out/cloudformation/kubernetes.json

5. What happened after the commands executed?
The stack was not created because the template is larger than the 51,200-byte limit:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cloudformation-limits.html
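
As a manual workaround in the meantime, templates up to 460,800 bytes can be uploaded to S3 and passed to create-stack via --template-url instead of --template-body. A rough sketch, assuming a pre-existing bucket (the my-template-bucket name here is made up for illustration):

$ aws s3 cp out/cloudformation/kubernetes.json s3://my-template-bucket/kubernetes.json
$ aws cloudformation create-stack \
  --capabilities CAPABILITY_NAMED_IAM \
  --stack-name kubernetes-k8s-cf-test-mydomain-com \
  --template-url https://my-template-bucket.s3.amazonaws.com/kubernetes.json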

6. What did you expect to happen?
The stack should have been created using the command provided by kops.

7. Please provide your cluster manifest.
n/a

8. Please run the commands with most verbose logging by adding the -v 10 flag.
n/a

9. Anything else we need to know?
I discovered this while working on #8051. This was previously debated in #2259 and #4727.

I thought about it and see two options for dealing with the 51,200-byte limit:

  1. Generate the template in YAML format as well, which is about 20% smaller; that alone may be enough even for complex setups (see the sketch below).
  2. Allow users to output the template to S3 as well, which raises the limit to 460,800 bytes. This is pretty simple: it only requires switching from os/ioutil to the vfs library.
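
For a rough sense of option 1, the generated JSON can be converted to YAML locally to compare sizes (a quick sketch, assuming Python 3 with PyYAML installed; kops does not do this today):

$ python3 -c 'import json, yaml; yaml.safe_dump(json.load(open("out/cloudformation/kubernetes.json")), open("out/cloudformation/kubernetes.yaml", "w"))'
$ wc -c out/cloudformation/kubernetes.json out/cloudformation/kubernetes.yaml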
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Mar 8, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Apr 7, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
