Feature: Restricted SSH access #338
Comments
+1 for this. I feel like there's some overlap with the "glue" security groups feature. If glue SGs were available to the workers, controllers & etcd, users would be free to define their security controls as necessary. This would rely on being able to disable the SSH/0.0.0.0 rule. |
@c-knowles @swestcott How about adding `sshAccessCIDR`? An example cluster.yaml would look like:

```yaml
worker:
  nodePools:
  - name: pool1
    sshAccessCIDR: 0.0.0.0/0
  - name: private1
    sshAccessCIDR: <bastion subnet's CIDR>
    subnets:
    - name: privateSubnet1
controller:
  sshAccessCIDR: <bastion subnet's CIDR>
etcd:
  sshAccessCIDR: <bastion subnet's CIDR>
  subnets:
  - name: privateSubnet1
...
``` |
Looks good, could `sshAccessCIDR` be a list of ranges inside? |
Yes, with the name `sshAccessCIDRs`. |
@mumoshu your above idea seems good. I'd also be fine with defining it ourselves in the glue SGs etc. and then disabling the default 0.0.0.0/0 rule. |
Thanks @c-knowles! Then something like this, where an empty `sshAccessCIDRs` disables the built-in rule?

```yaml
worker:
  nodePools:
  - name: pool1
    sshAccessCIDRs:
    securityGroupIds:
    - <your-glue-sg-may-or-may-not-include-your-own-ssh-access-cidrs>
``` |
Assuming we can tell the difference between setting it to empty and it not being there in the YAML parsing, I think that's ok. Not entirely intuitive, but I can't think of a better way right now. The only other idea I had is for the template to just contain the following as the default; it's more verbose, but it also makes the default immediately clear:

```yaml
worker:
  nodePools:
  - name: pool1
    # Remove this if you wish to disable global SSH access
    sshAccessCIDRs:
    - 0.0.0.0/0
    # securityGroupIds:
    # - <your-glue-sg-may-or-may-not-include-your-own-ssh-access-cidrs>
```

Not setting the `sshAccessCIDRs` key at all would then disable SSH access. |
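For what it's worth, distinguishing those two cases is straightforward in Go's YAML parsing: with gopkg.in/yaml.v2, an omitted key leaves a slice nil, while an explicit empty array yields a non-nil empty slice. A minimal sketch — the `pool` type is hypothetical, not kube-aws's actual model:

```go
package main

import (
	"fmt"

	yaml "gopkg.in/yaml.v2"
)

// pool is a hypothetical stand-in for kube-aws's node pool config.
type pool struct {
	Name           string   `yaml:"name"`
	SSHAccessCIDRs []string `yaml:"sshAccessCIDRs"`
}

func parse(doc string) pool {
	var p pool
	if err := yaml.Unmarshal([]byte(doc), &p); err != nil {
		panic(err)
	}
	return p
}

func main() {
	absent := parse("name: pool1")
	empty := parse("name: pool1\nsshAccessCIDRs: []")

	// nil slice: key omitted -> apply the 0.0.0.0/0 default.
	// non-nil empty slice: user explicitly disabled SSH access.
	fmt.Println(absent.SSHAccessCIDRs == nil) // true
	fmt.Println(empty.SSHAccessCIDRs == nil)  // false
}
```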
Thanks @c-knowles, your latter idea is certainly verbose but clear. Hmm, I'll think it over for a while to see what's most reasonable. |
Hmm...

```yaml
worker:
  nodePools:
  - name: pool1
    # CIDRs of networks you'd like SSH access to be allowed from. Defaults to ["0.0.0.0/0"].
    # Explicitly set this to an empty array to completely disable it. If you do that, you would
    # probably like to set securityGroupIds to provide this node pool an existing SG with SSH
    # access allowed from specific ranges.
    #sshAccessCIDRs:
    #- 0.0.0.0/0
    securityGroupIds:
    - <your-glue-sg-may-or-may-not-include-your-own-ssh-access-cidrs>
``` |
Looks fine. I'm a fan of having more of the defaults uncommented, but not sure of your views on that. I was also wondering if we could publish a defaults YAML file in kube-aws somewhere and load the defaults from there, rather than spreading them across the code. |
Any specific reason you'd like it to be a YAML file? The defaults could live in plain Go:

```go
package model

type workernodepool struct {
	Foo string
	Bar string
}

type Worker struct {
	Foo      string
	NodePool workernodepool
}

var Defaults = struct {
	Foo    string
	Worker Worker
}{
	Foo: "foo",
	Worker: Worker{
		NodePool: workernodepool{
			Foo: "foo",
			Bar: "bar",
		},
	},
}
``` |
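A side benefit of Go-defined defaults: loading user config can be a plain unmarshal-over-defaults, since yaml.v2 leaves fields untouched when their keys are absent from the document. A sketch of how `Defaults` above might be consumed — `LoadWorker` is illustrative, not existing kube-aws code, and assumes `gopkg.in/yaml.v2` is imported into the `model` package:

```go
// LoadWorker unmarshals a cluster.yaml fragment over a copy of the
// defaults; keys absent from the YAML keep their default values.
func LoadWorker(data []byte) (Worker, error) {
	w := Defaults.Worker // start from the defaults
	if err := yaml.Unmarshal(data, &w); err != nil {
		return Worker{}, err
	}
	return w, nil
}
```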
I thought a YAML file would be good to mirror the non-defaults file, but I guess defaults for something like worker node pools won't be representable in the same format anyway. A … |
Is it possible to extend this to limiting external HTTPS traffic as well (the current rule), perhaps by adding a similar setting? For the specific implementation, … |
@jpb We've added the … Anyway, I'd appreciate it if you could submit another GitHub issue to address it 🌷 |
I began to prefer `sshAccessAllowedSourceCIDRs` as the name. |
It turned out that:
|
Allow customizing network ranges from which SSH accesses to nodes are allowed

It has been hard-coded to `0.0.0.0/0`, which isn't desirable for security. From now on, you can use `sshAccessAllowedSourceCIDRs` to override the list of allowed ranges. Explicitly setting it to an empty array would result in nodes that cannot be SSHed into. However, doing so while configuring `worker.securityGroupIds`, `controller.securityGroupIds` and `etcd.securityGroupIds` would allow you to fully customize how SSH access is allowed, including not only ranges but also ports.

Closes kubernetes-retired#338 |
I've submitted #551 for this. |
Cool, I have been meaning to look into some SSH hardening, and it looks like it's been here for a while. I've been rolling my own with a basic 'finishing' script (post `kube-aws up`) to configure our SSH access upon cluster deployment. I run it immediately after deployment to remove world access and allow only our approved subnets to reach our kube nodes; otherwise, we see hackers all over the world start their brute-force attempts within a few minutes if our clusters are on public nets. That was kinda funny to watch in the kube nodes' logs ;-) With this basic script (for AWS EC2 SGs), SSH access is only allowed from within our AWS EC2 and current VPN CIDR ranges. Perhaps it might be useful for others. Disclaimer: some pretty gnarly piping here; the tricky part is filtering with jq, but it works reliably for me, YMMV.
|
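The script itself didn't survive here, but its gist is clear from the description: revoke the SSH rule that is open to the world, then re-authorize only trusted ranges. Below is a rough Go equivalent using aws-sdk-go rather than the poster's AWS CLI + jq pipeline; the group ID and CIDRs are placeholders:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// Placeholders: your node SG and the ranges you trust.
	groupID := "sg-0123456789abcdef0"
	allowed := []string{"10.0.0.0/16", "172.16.0.0/12"}

	// Drop the world-open SSH rule.
	_, err := svc.RevokeSecurityGroupIngress(&ec2.RevokeSecurityGroupIngressInput{
		GroupId:    aws.String(groupID),
		IpProtocol: aws.String("tcp"),
		FromPort:   aws.Int64(22),
		ToPort:     aws.Int64(22),
		CidrIp:     aws.String("0.0.0.0/0"),
	})
	if err != nil {
		log.Printf("revoke: %v", err) // e.g. rule was already removed
	}

	// Re-allow SSH only from the trusted ranges.
	for _, cidr := range allowed {
		_, err := svc.AuthorizeSecurityGroupIngress(&ec2.AuthorizeSecurityGroupIngressInput{
			GroupId:    aws.String(groupID),
			IpProtocol: aws.String("tcp"),
			FromPort:   aws.Int64(22),
			ToPort:     aws.Int64(22),
			CidrIp:     aws.String(cidr),
		})
		if err != nil {
			log.Printf("authorize %s: %v", cidr, err)
		}
	}
}
```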
* kubernetes-incubator/master:
  Migrate from --register-unschedulable+taint-and-uncordon to --register-with-taints. Ref kubernetes-retired#543
  Migrate from --api-servers flag to --kubeconfig. Ref kubernetes-retired#543
  Update kube-dns to 1.14.1. Resolves kubernetes-retired#542
  Add controller node labels if specified
  Set --storage-backend to etcd2 if not etcd3
  Quote security group refs for etcd, controller, and apiendpoints
  Update the doc accordingly to the latest deprecations
  Deprecate externalDNSName/createRecordSet/hostedZoneId in favor of recently added `apiEndpoints[]`
  Allow customizing network ranges from which Kubernetes API accesses are allowed. It has been hard-coded to `0.0.0.0/0`, which isn't desirable for security. From now on, you can use `apiEndpoints[].loadBalancer.apiAccessAllowedSourceCIDRs` to override the list of allowed ranges. Explicitly setting it to an empty array would result in a load balancer that cannot be accessed at all. However, doing so while configuring `apiEndpoints[].loadBalancer.securityGroupIds` would allow you to fully customize how API access is allowed, including not only ranges but also ports.
  Fix unwanted AWS resource creation / add extra validation on internetGatewayID + vpcID. Fixes kubernetes-retired#318. Fixes kubernetes-retired#553
  Kubernetes-Autosave to save as Kubernetes/List
  Make cfn-signal more robust against image fetch failures
  Add reclaim policy
  Bump kube-1.6.2
  Allow customizing network ranges from which SSH accesses to nodes are allowed. It has been hard-coded to `0.0.0.0/0`, which isn't desirable for security. From now on, you can use `sshAccessAllowedSourceCIDRs` to override the list of allowed ranges. Explicitly setting it to an empty array would result in nodes that cannot be SSHed into. However, doing so while configuring `worker.securityGroupIds`, `controller.securityGroupIds` and `etcd.securityGroupIds` would allow you to fully customize how SSH access is allowed, including not only ranges but also ports. Closes kubernetes-retired#338
As of v0.9.4-rc.3, SSH access is granted to all addresses for controllers, workers, and etcd. As an extra layer, could we support restrictions on that?
In the case of public controllers but private etcd and workers, there generally needs to be a way to gain SSH access to the private workers by tunnelling through the public controllers, i.e. SSH access allowed from the controller SG to the worker SG (see the sketch below). Alternatively, we could say that we only support a custom bastion, and the bastion's IP address/SG has to be provided to allow SSH access to private assets.
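For the controller-SG-to-worker-SG idea, the EC2 rule can reference a source security group instead of a CIDR. A hedged sketch with aws-sdk-go (group IDs are placeholders; this reuses the `aws`/`ec2` imports from the sketch above):

```go
// allowSSHFromControllers permits SSH into the worker SG only from
// instances that belong to the controller SG.
func allowSSHFromControllers(svc *ec2.EC2, workerSG, controllerSG string) error {
	_, err := svc.AuthorizeSecurityGroupIngress(&ec2.AuthorizeSecurityGroupIngressInput{
		GroupId: aws.String(workerSG),
		IpPermissions: []*ec2.IpPermission{{
			IpProtocol: aws.String("tcp"),
			FromPort:   aws.Int64(22),
			ToPort:     aws.Int64(22),
			UserIdGroupPairs: []*ec2.UserIdGroupPair{{
				GroupId: aws.String(controllerSG),
			}},
		}},
	})
	return err
}
```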