
[GLBC] Expose GCE backend parameters in Ingress object API #28

Closed
bowei opened this Issue Oct 11, 2017 · 96 comments


bowei commented Oct 11, 2017

From @itamaro on February 7, 2017 10:9

When using GCE Ingress controller, the GCE Ingress controller (GLBC) provisions GCE backends with a bunch of default parameters.

It would be great if it were possible to tweak the parameters that are currently "untweakable" from the Ingress object API (i.e., from my YAMLs).

Specific use case: GCE backends are provisioned with a default timeout of 30 seconds, which is not sufficient for some long requests. I'd like to be able to control the timeout per-backend.

Copied from original issue: kubernetes/ingress-nginx#243


bowei commented Oct 11, 2017

From @bprashanth on February 7, 2017 18:31

This will need to be a per Service configuration since currently ingresses share the backend for a given nodeport, so it makes more sense to specify it as an annotation on the Service. Basically it would be nice if the Service author could publish some timeouts for their Service, and any/all loadbalancers fronting the Service will respect these settings.


bowei commented Oct 11, 2017

From @thockin on February 8, 2017 8:34

The reason for it to be an annotation is that we're not ready to add it to every implementation of Services yet. Maybe never. I would suggest something like service.kubernetes.io/timeout as the annotation key.
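A sketch of what that proposal could look like on a Service; note that the service.kubernetes.io/timeout key was only ever a suggestion in this thread, and the value format here is illustrative, not an implemented API:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: slow-backend
  annotations:
    # Hypothetical key suggested above; never standardized.
    service.kubernetes.io/timeout: "600"  # seconds
spec:
  selector:
    app: slow-backend
  ports:
  - port: 80
    targetPort: 8080
```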


bowei commented Oct 11, 2017

From @itamaro on February 14, 2017 17:23

I think I understand the reasoning for an annotation, thanks.

Attempting to tackle this, I got this far.

  1. Is this the correct direction?
  2. I'd appreciate a pointer regarding the next step - how exactly to propagate the timeout value extracted from an annotation to the GCE backend.

bowei commented Oct 11, 2017

From @itamaro on February 22, 2017 16:48

Following up on the discussion on the CL, I'd appreciate more thoughts from more contributors on the subject of a generic timeout annotation vs. a GCLB-specific one.

The issue is whether to take the generic path, with something like service.beta.kubernetes.io/timeout (or alpha) for the annotation, based on the observation that all or most relevant Ingress backend implementations can make sense of such an annotation (if they choose to implement it), or to stick with a GCLB-specific annotation (e.g. gclb.k8s.cloud.google.com/timeout) and not worry about other backends.

WDYT?

@thockin feel free to tag specific contributors :-)


bowei commented Oct 11, 2017

From @nicksardo on March 8, 2017 23:0

What about using the timeoutSeconds field of the service's livenessProbe? Would you ever want that value and a loadbalancer's timeout to be different?


bowei commented Oct 11, 2017

From @nicksardo on March 8, 2017 23:6

Oops, forgot that liveness/readiness probes don't live in the service. We would have to look up a pod under the service selector and check, similar to what we do for getting the health check request path.


bowei commented Oct 11, 2017

From @nicksardo on April 5, 2017 21:19

Many folks have expressed interest on this, so I'd like to keep the ball rolling.

After reading the discussion on the CL, I'm also inclined to go with a generic annotation on Service. As I mentioned above, an easy option would be to look at a probe's configuration.
Any more thoughts @itamaro and @thockin ?


bowei commented Oct 11, 2017

From @thockin on April 5, 2017 21:40

Are we happy with a single timeout, or do we need one per host/path (Service)?



bowei commented Oct 11, 2017

From @nicksardo on April 5, 2017 21:46

If we went with a single timeout, a user is bound to come along with a use case for multiple. Their argument will be that GCP supports different timeouts - the controller should support that feature too.


bowei commented Oct 11, 2017

From @thockin on April 5, 2017 21:54

Is it generic, then? Or can things like nginx reasonably implement this per host/path? I forget - did we consider a field?



bowei commented Oct 11, 2017

From @nicksardo on April 5, 2017 22:5

Pinging @aledbf for thoughts on having this for proxy_read_timeout

Tim, you mentioned possibly going straight to a field, but the question of being generic had to be answered first.


bowei commented Oct 11, 2017

From @thockin on April 5, 2017 22:15

yeah - if it is generic, let's run with fields, if we can.



bowei commented Oct 11, 2017

From @aledbf on April 5, 2017 22:19

In nginx we have three settings related to timeouts, with predefined defaults and annotations on the ingress that allow custom values:

  • ingress.kubernetes.io/proxy-connect-timeout
  • ingress.kubernetes.io/proxy-send-timeout
  • ingress.kubernetes.io/proxy-read-timeout

(these settings are used to send the request to a different endpoint if there's more than one)

From what I've seen, these defaults are OK unless you are running something like a Docker registry or exposing a service for file upload.
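For reference, these nginx annotations are set on the Ingress object; a minimal sketch using the three keys listed above (values are seconds, passed as strings):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: upload-service
  annotations:
    # Timeout annotations from the comment above; values in seconds.
    ingress.kubernetes.io/proxy-connect-timeout: "10"
    ingress.kubernetes.io/proxy-send-timeout: "600"
    ingress.kubernetes.io/proxy-read-timeout: "600"
spec:
  backend:
    serviceName: upload
    servicePort: 80
```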


bowei commented Oct 11, 2017

From @porridge on April 6, 2017 7:49

My use case is a phpmyadmin, for which the default 30s GCLB timeout is not enough, we'd like something on the order of 10 minutes.

Re: "which object should be annotated", I'm leaning towards "I don't really care as long as it works ASAP", since my current need is so unsophisticated :-)

With my sysadmin hat on, it feels like it should be on the ingress, since there are a bunch of possible timeout parameters (as the nginx example shows) and it's just more natural to think about some of them in the context of the ingress (being an abstraction of an LB).

I also imagine that in a larger organization one team might own the deployment+service(s), and another might own the ingress(es). Since a single service might be fronted by different ingresses with different needs, and therefore different timeouts (e.g. one for internal use and another exposed to external users), it would also make sense to specify the timeouts on the ingress rather than the service. As such, the fact that ingresses share backends seems like an artificial restriction. But these are just my conjectures; I don't know how large organizations in fact use k8s.


bowei commented Oct 11, 2017

From @nicksardo on April 6, 2017 17:51

For clarity, there are two questions being discussed before proceeding:

  1. Should timeout(s) be ingress specific or service specific (regardless of being annotated/specified on ingress or service)?
    Arguments for Ingress specific (one annotation sets timeout for all child services on ingress)

    • Currently how nginx is setup
    • "more natural to think about some of them ... an abstraction of a LB"
    • Possible that you may want different timeouts for external use vs internal use

    Arguments for Service specific

    • GCE supports a different timeout per backend service (it would be artificially restrictive to have an ingress-level timeout)
    • Some services/paths need special timeouts (As aledbf points out, file uploading is a special case for a web app and is a frequently built feature. With service-specific timeouts, you would need one (instead of two) ingress object to support your normal web service and a file upload service)
  2. If the answer to the above is service specific, should the timeout(s) be annotated/specified on the Ingress or Service object?
    Argument for description on service:

    • Per @porridge's use case, but with a different viewpoint: a phpmyadmin service could take 10 minutes regardless of how the service is accessed. If there exists a reason for different timeouts, I'd argue that more than likely it would be based on some app-level data (admin user vs public). From the standpoint of a dev in a large organization, the dev would want to know that all consumers of their service have timeout X, instead of having to tell the sysadmin to manage multiple ingresses with potentially missing timeouts.

    Argument for description on ingress:

    • Support the following use case: suppose I have a web app with a file upload feature at a specific path. I would want the app to have required timeout X and certain paths with required timeout Y (Y being > X).
      Possible options:
      • have LB timeout entire service at timeout Y, and have the application timeout earlier for non-special paths.
      • have annotation on ingress which maps each path/service to a timeout.

Hybrid solution?: Support description on service with an optional override-per-service annotation on ingress. (possibly overkill)
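The hybrid idea could be sketched roughly as follows; every annotation key here is hypothetical, drawn from this discussion rather than any implemented API:

```yaml
# Hypothetical: the service owner publishes a default timeout...
apiVersion: v1
kind: Service
metadata:
  name: upload
  annotations:
    service.beta.kubernetes.io/timeout: "600"  # hypothetical generic key
spec:
  selector:
    app: upload
  ports:
  - port: 80
    targetPort: 8080
---
# ...and the ingress optionally overrides it per service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
  annotations:
    # Hypothetical override map (service name -> timeout seconds).
    gclb.k8s.cloud.google.com/timeout-overrides: '{"upload": 900}'
spec:
  backend:
    serviceName: upload
    servicePort: 80
```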


bowei commented Oct 11, 2017

From @itamaro on April 20, 2017 10:40

My answers to the 2 questions formulated by @nicksardo:

  1. Should timeout(s) be ingress specific or service specific (regardless of being annotated/specified on ingress or service)

service-specific.

  2. Should the timeout(s) be annotated/specified on the Ingress or Service object?

specified on the Service object.

also, in relation to:

forgot that liveness/readiness probes don't live in the service. We would have to look up a pod under the service selector and check, similar to what we do for getting the health check request path.

It's another topic that got me confused: since a service selector can match multiple pods, which don't necessarily all have the same health check specification, the existing behavior looks odd. It would seem more reasonable to have another health check specification at the service level, no? (off topic?)

back to the timeout definition:

  • I think it is important to allow widely different services (e.g. file upload & regular web service) on the same ingress.
  • I think it makes sense to think about the timeout of a service in the context of that service, and the owner of the service is the most qualified one to reason about it. I see the point about variations in the same service, and I'd argue that if there are big variations, it might be a signal that the service should be split into a "fast service" and a "slow service".

WDYT?

where do we stand about this issue?


bowei commented Oct 11, 2017

From @nicksardo on April 24, 2017 21:12

@itamaro thanks for providing your feedback.

Regarding the first question, I agree that timeouts should be backend-service specific. We should put this question to rest.

For the second question, I like your response to the problem of having a service with varied expectations of timeouts. I agree that the owner of a service should be the most qualified to reason about what timeouts are most appropriate. However, I have a hard time getting past the use case of having a file-upload feature in a web service. From the standpoint of the service owner, they will say the timeout "depends" on what path you're talking about. Or they might give an umbrella "X seconds" with X being the longest expected timeout. This Ingress-Service dilemma has existed for a while and doesn't seem to have a right answer. Two proposals, "Better Ingress" and "Composite Services", would seem to help this situation if one of them were implemented. Since we don't currently have a way to express HTTP paths/attributes on a service, I'm leaning towards noting the timeout on the ingress object, where we do have paths defined. Since nginx has an ingress-wide timeout setting, I also believe that this annotation should be GCP specific. Thoughts/comments?


bowei commented Oct 11, 2017

From @thockin on April 25, 2017 6:30

I think I agree with Nick's assessment.


bowei commented Oct 11, 2017

From @itamaro on April 26, 2017 14:5


Well, I'm not sure I completely follow all the reasoning.
Naturally, a service with multiple endpoints has variance in expected latency.
As a service owner myself, specifically of services that have sub-second endpoints alongside ~minute endpoints, I am familiar with the associated pains. I think that even if I had a way to express timeouts per path, I'd still rather specify the maximal timeout for the entire service, for KISS considerations.
Another possible argument: why stop at the "per path" level? Even on a given path, different requests (authenticated / anonymous), with different types (POST / GET / HEAD / OPTIONS), with different request headers & parameters, can have widely different timeouts. Why not define a protocol between the load balancer and the service that determines the request-specific timeout given the request metadata?

Anyway, I digress... I trust that you've seen more diverse use cases, and you can choose the best tradeoff for this. I prefer an implemented good-enough solution over a theoretically perfect one :-)


bowei commented Oct 11, 2017

From @thockin on April 28, 2017 15:23

My feeling with timeouts is that they are part of the "how to use" rather than the "what it is". Reasonable minds can disagree - I agree that a working solution trumps all.



bowei commented Oct 11, 2017

From @tsloughter on May 31, 2017 1:57

For the time being, is there any workaround where I can manually change the timeout in the Google console and not have the controller revert the change?


bowei commented Oct 11, 2017

From @evanj on May 31, 2017 13:6

This is exactly what I've done and it seems to work? I have a test where I check it periodically. It seems to have kept its settings for at least a month now. I am a bit scared about the next time I make any change to the ingress of course :)


bowei commented Oct 11, 2017

From @tsloughter on May 31, 2017 15:48

@evanj what is it you did?


bowei commented Oct 11, 2017

From @evanj on June 1, 2017 2:4

@tsloughter I edited the load balancer's backend timeout through the Google Cloud Console web UI. I changed the timeout from 30s to 10m and it is still working, about 1 month later.


bowei commented Oct 11, 2017

From @tsloughter on June 1, 2017 18:18

@evanj oh, hm, ok, I assumed you meant something else because I had tried that but the timeout seemed to revert to 30s after a short period of time. I'll try again, thanks!


bowei commented Oct 11, 2017

From @nicksardo on June 1, 2017 18:42

@tsloughter The ingress controller does not update the timeout value. However, if you change ports of your service and a new nodeport is generated, the backend service will be regenerated.


bowei commented Oct 11, 2017

From @brugz on June 9, 2017 0:33

Hi guys and gals -- what's the current status here? Do we expect this feature will make it into a future release? Any idea on time frame?

Cheers
Brugz


bowei commented Oct 11, 2017

From @nicksardo on June 9, 2017 23:8

This is probably what we're looking at:

  • Need to provide options per backend (combination of hostname & path, with an extra predefined "default" for the default backend)
  • Options should pass through to the BackendService with a whitelist of passable fields: enableCDN, timeout, IAP, cdnPolicy, etc.

Thoughts?

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    cloud.google.com/service-settings: |
      {
        "default": {
          "timeoutSec": 321
        },
        "foo.bar.com/foo": {
          "timeoutSec": 123,
          "iap": {
            "enabled": true,
            "oauth2ClientId": "....",
            "oauth2ClientSecret": "..."
          }
        },
        "foo.bar.com/bar/*": {
          "enableCDN": true
        }
      }
spec:
  backend:
    serviceName: s0
    servicePort: 80
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar/*
        backend:
          serviceName: s2
          servicePort: 80

jkosek commented Aug 9, 2018

+1


mathstlouis commented Aug 27, 2018

+1


pbzdyl commented Sep 11, 2018

Is there any timeline for when timeout configuration will be available in BackendConfig?


tzaleski commented Sep 11, 2018

Yeah, we could really use a timeout parameter.


bowei commented Sep 11, 2018

We hear you :-)


Neonox31 commented Sep 12, 2018

👍


ianks commented Sep 24, 2018

Is it possible to set up IAP using nginx-ingress? The old issue was merged with this one, but they seem somewhat unrelated.


martin-dmtrv commented Sep 30, 2018

+1
Looking forward to this. We have been struggling to find workarounds for this feature, and they all lead to moving away from the GCP controller to a custom controller.


jpalomaki commented Oct 1, 2018

@bowei et al, question: going forward, will BackendConfig also cover container-native load balancing and NEGs or will there be a separate set of config options for that?

alexvanboxel pushed a commit to alexvanboxel/ingress-gce that referenced this issue Oct 12, 2018

Alex Van Boxel
[Issue kubernetes#28] Add timeout to the BackendConfig
This commit adds a new Connection section to the BackendConfig that
enables setting the timeout.

bpineau added a commit to DataDog/ingress-gce that referenced this issue Oct 17, 2018

BackendConfig support for timeouts and connection draining
As suggested in kubernetes#28, BackendConfig is a natural way to expose
those settings.

rramkumar1 commented Oct 31, 2018

@jpalomaki No, BackendConfig will not cover NEGs. The documentation you linked is how it's done.


rramkumar1 commented Oct 31, 2018

All,

Thanks to @bpineau, we will soon be launching support for timeout, session affinity and connection draining parameters on the BackendService.

As of now, all GA features (except custom healthchecks) on the BackendService have been implemented in BackendConfig.

Please look out for some documentation in the near future on how you (the community) can contribute further to BackendConfig and FrontendConfig (which will be coming soon).

If there is no objection, I am going to close this bug, since the primary ask has been implemented. If you have further requests, please file additional issues for easier tracking.


rramkumar1 commented Oct 31, 2018

/close


k8s-ci-robot commented Oct 31, 2018

@rramkumar1: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


mofirouz commented Nov 5, 2018

@rramkumar1, can you point us to the documentation on how to configure:

we will soon be launching support for timeout, session affinity and connection draining parameters on the BackendService.

in Kubernetes / GKE?


rramkumar1 commented Nov 5, 2018

@mofirouz Documentation will be posted at the start of next week.


matti commented Nov 21, 2018

@rramkumar1 "start of next week" is due now?


matti commented Nov 21, 2018

Honestly, I don't understand why issues are closed without proper documentation - I have no idea how to use BackendServices to do this. I do understand that from Google's point of view this is closed, but not from the users' point of view.

It is misleading to close issues when users then have to dig deeper to find out that this actually does not work yet: #513 (comment)


rramkumar1 commented Nov 21, 2018

@matti Rollout of the actual feature has been delayed due to issues outside our control, and thus the documentation is also delayed. Apologies for not providing an update here sooner.

Regarding closing of this issue, the implementation to support GCE BackendService features is already out (BackendConfig). Any additional feature support on top of this existing CRD is outside the scope of this issue.


cerealcable commented Nov 21, 2018

@rramkumar1 I'd argue that the issue still exists at this point for consumers of the GCE Ingress. None of us have control over the deployment. Given that, from my view point the issue still exists. At this point in time no value has been delivered since it can't be used.


rramkumar1 commented Nov 21, 2018

@cerealcable The crux of this issue was to come up with a way to expose BackendService parameters. We delivered that with BackendConfig, and several features like IAP, CDN and Cloud Armor are already being used today (e.g. https://cloud.google.com/iap/docs/enabling-kubernetes-howto).

Any requests we get to support more features in BackendConfig should be filed as separate issues rather than conflating everything in this issue.

I absolutely agree that we need to do a better job of keeping the community informed on what features are dropping and when. This is something we are working on in the form of much better documentation and changelogs.

@nicksardo nicksardo removed their assignment Nov 28, 2018


rramkumar1 commented Nov 29, 2018

@mofirouz and FYI for all:

Session affinity, timeout, and connection draining are now supported via BackendConfig. This is launched on GKE for cluster versions at or above 1.11.3-gke.18!

Docs: https://cloud.google.com/kubernetes-engine/docs/how-to/configure-backend-service
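A minimal sketch of how this looks, based on the linked documentation; the field and annotation names below are from the GKE beta API of that era and may have changed in later versions:

```yaml
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: my-backend-config
spec:
  timeoutSec: 600              # backend service timeout (the original ask of this issue)
  connectionDraining:
    drainingTimeoutSec: 60
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # Attach the BackendConfig to the service port (beta annotation at the time).
    beta.cloud.google.com/backend-config: '{"ports": {"80": "my-backend-config"}}'
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```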


mofirouz commented Nov 29, 2018

yay, awesome @rramkumar1 thank you. One question in regards to existing services/ingresses:

I've noticed that if I update some of the ingress configs (like paths) in the GCP Console, they get reset to what they are in the Kube ingress definition. Do timeouts and other backend configs have a similar behaviour? Do I need to apply backend config retrospectively to those environments?


rramkumar1 commented Nov 30, 2018

@mofirouz BackendConfig is a first class citizen. Any settings specified in BackendConfig will be asserted in GCP and any updates to BackendConfig will be reflected in GCP.

If you previously manually modified settings such as timeout, I would highly recommend migrating to using BackendConfig.


econtal commented Nov 30, 2018

This is great, thanks!

Since this issue is closed, is there a separate thread to follow the progress on custom healthchecks? For us, that is the last piece that requires a "manual override" in GCP.


rramkumar1 commented Nov 30, 2018

@econtal #42
