This repository has been archived by the owner on Sep 30, 2020. It is now read-only.

Feature: Restricted SSH access #338

Closed
cknowles opened this issue Feb 22, 2017 · 19 comments

@cknowles
Contributor

As of v0.9.4-rc.3, SSH access is granted from all addresses to controllers, workers, and etcd nodes. As an extra layer of security, could we support restrictions on that?

In the case of public controllers but private etcd and workers, there generally needs to be a way to gain SSH access to the private workers by tunnelling through the public controllers, i.e. SSH access allowed from the controller SG to the worker SG. Alternatively, we could say that we only support a custom bastion, and that the bastion's IP address/SG has to be provided to allow SSH access to private assets.

@swestcott
Contributor

+1 for this.

I feel like there's some overlap with the "glue" security groups feature. If glue SGs were available to the workers, controllers & etcd, users would be free to define their security controls as necessary. This would rely on being able to disable the SSH 0.0.0.0/0 rule.

@mumoshu
Contributor

mumoshu commented Mar 24, 2017

@c-knowles @swestcott How about adding sshAccessCIDR to cluster.yaml?
It would default to 0.0.0.0/0 to keep backward compatibility, but could be customized regardless of whether the subnet a worker/controller node lives in is private or public.
Perhaps defaulting to 0.0.0.0/0 even for private worker/controller nodes doesn't hurt? Or do we need a flag to disable the access completely?

An example cluster.yaml would look like:

worker:
  nodePools:
  - name: pool1
    sshAccessCIDR: 0.0.0.0/0
  - name: private1
    sshAccessCIDR: <bastion subnet's CIDR>
    subnets:
    - name: privateSubnet1

controller:
  sshAccessCIDR: <bastion subnet's CIDR>

etcd:
  sshAccessCIDR: <bastion subnet's CIDR>
  subnets:
  - name: privateSubnet1
  ...

@swestcott
Contributor

Looks good. Could sshAccessCIDR accept a list of ranges?
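For example, something along these lines (just a sketch; the pluralized key name and the placeholder ranges are illustrative, not a shipped setting):

worker:
  nodePools:
  - name: pool1
    sshAccessCIDRs:
    - <office CIDR>
    - <VPN CIDR>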

@mumoshu
Contributor

mumoshu commented Mar 24, 2017 via email

@cknowles
Contributor Author

@mumoshu your idea above seems good. I would also be fine with defining it ourselves in the glue SGs etc. and then disabling the 0.0.0.0/0 rule somehow, like @swestcott mentioned. I assume nothing internal to the cluster needs SSH access for it to function. The only "harm" I can think of in always adding 0.0.0.0/0 to private nodes is that they are then potentially accessible via public nodes in the same VPC (and there could be many other exposures). I think it's better practice to make these sorts of things non-global and more specific.

@mumoshu
Contributor

mumoshu commented Apr 6, 2017

Thanks @c-knowles!
How about setting sshAccessCIDRs to an empty list or nil to disable the default 0.0.0.0/0 rule?
cluster.yaml would look like:

worker:
  nodePools:
  - name: pool1
    sshAccessCIDRs:
    securityGroupIds:
    - <your-glue-sg-may-or-may-not-include-your-own-ssh-access-cidrs>

mumoshu added this to the v0.9.6-rc.3 milestone Apr 6, 2017
@cknowles
Contributor Author

cknowles commented Apr 6, 2017

Assuming we can tell the difference in the YAML parsing between setting it to empty and not setting it at all, I think that's OK. Not entirely intuitive, but I can't think of a better way right now. The only other idea I had is for the template to just contain the following as the default; it's more verbose, but it makes the default immediately clear.

worker:
  nodePools:
  - name: pool1
    # Remove this if you wish to disable global SSH access
    sshAccessCIDRs:
    - 0.0.0.0/0
#    securityGroupIds:
#    - <your-glue-sg-may-or-may-not-include-your-own-ssh-access-cidrs>

Not setting sshAccessCIDRs would then mean no SSH access, but kube-aws would otherwise still work out of the box.

@mumoshu
Contributor

mumoshu commented Apr 6, 2017

Thanks @c-knowles, your latter idea is certainly more verbose but also clearer. Hmm, I'll think about it for a while to see what's most reasonable.

@mumoshu
Contributor

mumoshu commented Apr 6, 2017

Hmm...

worker:
  nodePools:
  - name: pool1
    # CIDRs of networks you'd like SSH access to be allowed from. Defaults to ["0.0.0.0/0"].
    # Explicitly set this to an empty array to disable it completely. If you do, you probably want to set securityGroupIds
    # to provide this node pool with an existing SG that allows SSH access from specific ranges.
    #sshAccessCIDRs:
    # - 0.0.0.0/0
    securityGroupIds:
    - <your-glue-sg-may-or-may-not-include-your-own-ssh-access-cidrs>

@cknowles
Contributor Author

cknowles commented Apr 6, 2017

Looks fine. I'm a fan of having more of the defaults uncommented, but I'm not sure of your views on that. I was also wondering if we could publish a defaults YAML file somewhere in kube-aws and load defaults from there, rather than spreading them across the code.

@mumoshu
Contributor

mumoshu commented Apr 6, 2017

Any specific reason you'd like it to be a YAML file?
Would it be better to introduce a defaults.go dedicated to defining default values, so that we can utilize the power of the Go tool-chain: formatting, compile-time errors, etc.?

package model

// workernodepool holds default values for a worker node pool.
type workernodepool struct {
    Foo string
    Bar string
}

// Worker holds default values for the worker section.
type Worker struct {
    Foo      string
    NodePool workernodepool
}

// Defaults collects all default values in one place so they benefit from
// gofmt and compile-time checking.
var Defaults = struct {
    Foo    string
    Worker Worker
}{
    Foo: "foo",
    Worker: Worker{
        NodePool: workernodepool{
            Foo: "foo",
            Bar: "bar",
        },
    },
}

cknowles mentioned this issue Apr 8, 2017
@cknowles
Contributor Author

cknowles commented Apr 8, 2017

I thought a YAML file would be good to mirror the non-defaults file, but I guess defaults for something like worker node pools won't be representable in the same format anyway. A defaults.go would be good.

@jpb
Contributor

jpb commented Apr 10, 2017

Is it possible to extend this to limiting external HTTPS traffic as well (current rule) - perhaps by adding apiAccessCIDRs? I need to be able to limit both SSH access and access to the Kubernetes API to a list of IPs. I suspect that if apiAccessCIDRs is introduced, the internal access needs should be added explicitly to the template (so as not to rely on the 0.0.0.0/0 rule) - such as allowing HTTPS traffic from the worker security group(s).

For the specific implementation, sshAccessCIDRs/apiAccessCIDRs (with a default of ["0.0.0.0/0"]) would satisfy my use case.

@mumoshu
Contributor

mumoshu commented Apr 12, 2017

@jpb We've added the apiEndpoints setting key for more fine-grained control over API endpoints and hence the ELBs backing them. I believe it has been developed to be flexible enough to support your request.
Perhaps adding something like apiEndpoints[].loadBalancer.allowedSourceCIDRs, which defaults to 0.0.0.0/0 but can be customized as you like, would satisfy your request?
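To illustrate, a minimal cluster.yaml sketch (allowedSourceCIDRs is only the key name suggested above, not a shipped setting; the endpoint name, DNS name, and range are placeholders):

apiEndpoints:
- name: default
  dnsName: <your API endpoint DNS name>
  loadBalancer:
    # hypothetical key per the suggestion above: only these ranges may reach the API ELB
    allowedSourceCIDRs:
    - <your office/VPN CIDR>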

Anyway, I'd appreciate it if you could submit another GitHub issue to track it 🌷

@mumoshu
Contributor

mumoshu commented Apr 14, 2017

I've begun to prefer sshAllowedSourceCIDRs over sshAccessCIDRs, for clarity.

@mumoshu
Contributor

mumoshu commented Apr 19, 2017

It turned out that:

  • Compared to the ones for etcd and controller, implementing worker.sshAccessAllowedSourceCIDRs is a bit more difficult than I had initially thought
  • We didn't have etcd.securityGroupIds and controller.securityGroupIds to attach existing SGs to allow SSH access when sshAccessAllowedSourceCIDRs was explicitly set to an empty array

mumoshu added a commit to mumoshu/kube-aws that referenced this issue Apr 19, 2017
Allow customizing network ranges from which SSH accesses to nodes are allowed

It has been hard-coded to `0.0.0.0/0` which isn't desirable for security.
From now on, you can use `sshAccessAllowedSourceCIDRs` to override the list of ranges allowed.
Explicitly setting it to an empty array would result in nodes unable to be SSHed. However, doing so while configuring `worker.securityGroupIds`, `controller.securityGroupIds` and `etcd.securityGroupIds` would allow you to fully customize how SSH accesses are allowed - including not only ranges but ports to be allowed.
Closes kubernetes-retired#338
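For reference, a minimal cluster.yaml sketch of the configuration the commit message describes (placeholders throughout; whether sshAccessAllowedSourceCIDRs sits per node pool or per role should be checked against the actual change in #551):

worker:
  nodePools:
  - name: pool1
    # empty array: disable the built-in SSH ingress rule and rely on an existing SG instead
    sshAccessAllowedSourceCIDRs: []
    securityGroupIds:
    - <your-existing-sg-allowing-ssh-from-your-bastion>
controller:
  sshAccessAllowedSourceCIDRs:
  - <bastion subnet's CIDR>
etcd:
  sshAccessAllowedSourceCIDRs:
  - <bastion subnet's CIDR>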
@mumoshu
Contributor

mumoshu commented Apr 19, 2017

I've submitted #551 for this.
@c-knowles @swestcott @jpb Would you mind testing it?

@jpb
Contributor

jpb commented Apr 20, 2017

@mumoshu I've tested #551 and it worked as expected 👍

@cmcconnell1
Contributor

Cool, I have been meaning to look into some SSH hardening, and it looks like it's been here for a while.

I've just been rolling my own with a basic 'finishing' script (run post kube-aws up) to configure our SSH access upon cluster deployment. I run it immediately after cluster deployment to remove world access and allow only our approved subnets to reach the kube nodes; otherwise, when our clusters are on public nets, we see brute-force attempts from all over the world within a few minutes. That was kinda funny to watch in the kube node logs ;-)

This basic script (for AWS EC2 SGs) allows SSH access only from our AWS EC2 and current VPN CIDR ranges. Perhaps it's useful for others. Disclaimer: there's some pretty gnarly piping here and the tricky part is the filtering with jq, but it works reliably for me, YMMV.

printf "\ngrep kube clusterName from cluster.yaml file setting in the current directory\n"

kube_cluster=$(grep 'clusterName:' ./cluster.yaml | awk -F ": " '{print $2}')

# get our desired kube cluster nodes
# not reliable using text output if the group descriptions are modified, etc
for kube_sg_id in $(aws ec2 describe-security-groups --filters Name=vpc-id,Values=vpc-abc12ca3 | jq -r '.SecurityGroups[] | [.GroupId, .GroupName] | @csv' | grep -i "$kube_cluster" | awk -F ',' '{print $1}' | sed 's/"//g') ; do
    printf "KUBE-SG-ID: $kube_sg_id\n"

    # first we remove the world-access
    aws ec2 revoke-security-group-ingress --group-id ${kube_sg_id} --protocol tcp --port 22 --cidr 0.0.0.0/0

    # now we grant sane default SSH rules for EC2 and VPN admin access
    aws ec2 authorize-security-group-ingress --group-id ${kube_sg_id} --protocol tcp --port 22 --cidr 10.1.0.0/20
    aws ec2 authorize-security-group-ingress --group-id ${kube_sg_id} --protocol tcp --port 22 --cidr 172.31.0.0/24

    printf "\nawless show kube_sg_id: ${kube_sg_id}\n"
    awless show ${kube_sg_id}
done

printf "\nSecurity Group IDs for Kubernetes Cluster: $kube_cluster\n"

mumoshu modified the milestones: v0.9.6-rc.4, v0.9.6-rc.3 Apr 21, 2017
camilb added a commit to camilb/kube-aws that referenced this issue Apr 24, 2017
* kubernetes-incubator/master:
  Migrate from --register-unschedulable+taint-and-uncordon to --register-with-taints Ref kubernetes-retired#543
  Migrate from --api-servers flag to --kubeconfig Ref kubernetes-retired#543
  Update kube-dns to 1.14.1 Resolves kubernetes-retired#542
  Add controller node labels if specified
  Set --storage-backend to etcd2 if not etcd3
  Quote security group refs for etcd, controller, and apiendpoints
  Update the doc accordingly to the latest deprecations
  Deprecate externalDNSName/createRecordSet/hostedZoneId In favor of recently added `apiEndpoints[]`.
  Allow customizing network ranges from which Kubernetes API accesses are allowed It has been hard-coded to `0.0.0.0/0` which isn't desirable for security. From now on, you can use `apiEndpoints[].loadBalancer.apiAccessAllowedSourceCIDRs` to override the list of ranges allowed. Explicitly setting it to an empty array would result in a load balancer which is completely unable to be accessed. However, doing so while configuring `apiEndpoints[].loadBalancer.securityGroupIds` would allow you to fully customize how API accesses are allowed - including not only ranges but ports to be allowed.
  Fix unwanted AWS resource creation/Add extra validation on internetGatewayID + vpcID Fixes kubernetes-retired#318 Fixes kubernetes-retired#553
  Kubernetes-Autosave to save as Kubernetes/List.
  Make cfn-signal more robust against image fetch failures
  Add reclaim policy
  bump kube-1.6.2
  Allow customizing network ranges from which SSH accesses to nodes are allowed It has been hard-coded to `0.0.0.0/0` which isn't desirable for security. From now on, you can use `sshAccessAllowedSourceCIDRs` to override the list of ranges allowed. Explicitly setting it to an empty array would result in nodes unable to be SSHed. However, doing so while configuring `worker.securityGroupIds`, `controller.securityGroupIds` and `etcd.securityGroupIds` would allow you to fully customize how SSH accesses are allowed - including not only ranges but ports to be allowed. Closes kubernetes-retired#338
kylehodgetts pushed a commit to HotelsDotCom/kube-aws that referenced this issue Mar 27, 2018
Allow customizing network ranges from which SSH accesses to nodes are allowed
