This repository has been archived by the owner on Feb 22, 2022. It is now read-only.

[stable/jenkins] JNLP port should not be exposed via type=LoadBalancer #1341

Closed
kensimon opened this issue Jun 22, 2017 · 7 comments · Fixed by #10290

Comments

@kensimon
Contributor

Is this a request for help?: No


Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug Report

Version of Helm and Kubernetes: Makes no difference

Which chart: stable/jenkins

What happened: Port 50000 (the jenkins JNLP master port) is exposed to the open internet via an ELB when the helm chart is installed on a cluster with an AWS cloud provider

What you expected to happen: The Jenkins master port should not be exposed the same way the HTTP port is. Since the chart is configured to launch slaves in the same Kubernetes cluster, that service can just be ClusterIP. There should be an option to expose only the HTTP port via type=LoadBalancer while keeping the master port private to the cluster (see the values sketch below).

How to reproduce it (as minimally and precisely as possible): helm install stable/jenkins on an AWS cluster (I'm sure it's the same for any other cloud provider)

Anything else we need to know: There are enough configuration options to tune this behavior, but the defaults should not be this insecure.
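
For illustration, an override along these lines would keep the agent listener private while still exposing the web UI. The key names are only a sketch of how the chart's values are typically structured and may not match the actual values.yaml:

    # Hypothetical values override -- key names are assumed, check the chart's values.yaml
    Master:
      ServiceType: LoadBalancer            # HTTP UI is still published through the cloud load balancer
      SlaveListenerServiceType: ClusterIP  # JNLP port 50000 is reachable only from inside the cluster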

@kensimon
Contributor Author

Just an FYI, this should really be top priority, because what Helm installs out of the box will be very quickly compromised via the vulnerability described in this security advisory: https://jenkins.io/security/advisory/2017-04-26/

Attacks are in the wild and will happen within a very short time after installing the jenkins chart: https://groups.google.com/forum/#!topic/jenkinsci-advisories/sN9S0x78kMU

@viglesiasce
Contributor

Fixed in #1385

@viglesiasce
Contributor

Thanks for reporting this @kensimon!!!!

@fhemberger
Contributor

@viglesiasce Can this issue be closed?

@kensimon
Contributor Author

I'll go ahead and close this; we've verified on the latest chart version that the port is no longer exposed by default.

@dalvizu
Contributor

dalvizu commented Dec 28, 2018

We're running k8s across clusters, and using a LoadBalancer on the agent Service to allow communication.

If no source range is specified, it defaults to 0.0.0.0/0. However, there is an annotation you can use to restrict the range, and you can set it in the chart:

    SlaveListenerServiceAnnotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "True"
      service.beta.kubernetes.io/load-balancer-source-ranges: "172.0.0.0/8, 10.0.0.0/8"

https://github.com/kubernetes/kubernetes/blob/870c0507272d13b67c4beff34685ff674f716a2d/pkg/cloudprovider/providers/aws/aws.go#L3394
https://github.com/kubernetes/kubernetes/blob/870c0507272d13b67c4beff34685ff674f716a2d/pkg/cloudprovider/providers/aws/aws.go#L3333
https://github.com/kubernetes/kubernetes/blob/7f23a743e8c23ac6489340bbb34fa6f1d392db9d/pkg/api/v1/service/util.go#L42
https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/api/core/v1/annotation_key_constants.go#L80
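
For reference, the agent Service rendered with those annotations would look roughly like the sketch below. The metadata, selector, and port numbers are placeholders; the two annotation keys are the in-tree provider keys referenced in the links above, and spec.loadBalancerSourceRanges is the equivalent first-class Service field:

    # Illustrative sketch of the rendered agent Service -- names, selector, and ports are placeholders
    apiVersion: v1
    kind: Service
    metadata:
      name: jenkins-agent
      annotations:
        # Provision an internal (non-internet-facing) load balancer
        service.beta.kubernetes.io/aws-load-balancer-internal: "True"
        # Restrict allowed client CIDRs; spec.loadBalancerSourceRanges is the equivalent first-class field
        service.beta.kubernetes.io/load-balancer-source-ranges: "172.0.0.0/8, 10.0.0.0/8"
    spec:
      type: LoadBalancer
      ports:
        - name: agent-listener
          port: 50000
          targetPort: 50000
      selector:
        app: jenkins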

dalvizu pushed a commit to dalvizu/charts that referenced this issue Dec 28, 2018
Update `values.yml` documentation on using 'LoadBalancer' type of
Service in a secure way by adding required annotations. This creates
an internal LoadBalancer with locked down rules on allowed CIDR ranges
via annotations.

Signed-off-by: Dan Alvizu <dalvizu@pingidentity.com>
syedimam0012 pushed a commit to syedimam0012/charts that referenced this issue Feb 1, 2019
* Fixes helm#1341 -- update Jenkins chart documentation

Update `values.yml` documentation on using 'LoadBalancer' type of
Service in a secure way by adding required annotations. This creates
an internal LoadBalancer with locked down rules on allowed CIDR ranges
via annotations.

Signed-off-by: Dan Alvizu <dalvizu@pingidentity.com>

* bump version, per pull request comments

Signed-off-by: Dan Alvizu <dalvizu@pingidentity.com>

* fix whitespace lint errors

Signed-off-by: Dan Alvizu <dalvizu@pingidentity.com>
@pkaramol

Do these annotations (e.g. service.beta.kubernetes.io/load-balancer-source-ranges: "172.0.0.0/8, 10.0.0.0/8") also apply when running on GCP / GKE, i.e. to the GCP load balancer that will be created for the agent service?

wmcdona89 pushed a commit to wmcdona89/charts that referenced this issue Aug 30, 2020
* Fixes helm#1341 -- update Jenkins chart documentation
