TLS & Ingress deployed together at once don't play nice #140

Closed
mthaddon opened this issue May 11, 2024 · 4 comments · Fixed by #143

Comments


mthaddon commented May 11, 2024

Bug Description

If we deploy the nginx-ingress-integrator charm with a client application and a tls-certificates provider and relate them all immediately, we get an InvalidIngressError.

To Reproduce

juju deploy repo-policy-compliance --channel=edge --config charm_token=xy --config github_token=foobar
juju deploy postgresql-k8s --trust
juju integrate postgresql-k8s repo-policy-compliance
juju deploy nginx-ingress-integrator --trust --config service-hostname="test.example.com" --config path-routes="/" --channel stable
juju deploy self-signed-certificates
juju integrate nginx-ingress-integrator repo-policy-compliance
juju integrate nginx-ingress-integrator self-signed-certificates

Environment

MicroK8s 1.28.7, Juju 3.1.8

Relevant log output

unit-nginx-ingress-integrator-0: 17:50:57 ERROR unit.nginx-ingress-integrator/0.juju-log certificates:7: Uncaught exception while in charm code:
Traceback (most recent call last):
  File "./src/charm.py", line 557, in <module>
    main(NginxIngressCharm)
  File "/var/lib/juju/agents/unit-nginx-ingress-integrator-0/charm/venv/ops/main.py", line 544, in main
    manager.run()
  File "/var/lib/juju/agents/unit-nginx-ingress-integrator-0/charm/venv/ops/main.py", line 520, in run
    self._emit()
  File "/var/lib/juju/agents/unit-nginx-ingress-integrator-0/charm/venv/ops/main.py", line 509, in _emit
    _emit_charm_event(self.charm, self.dispatcher.event_name)
  File "/var/lib/juju/agents/unit-nginx-ingress-integrator-0/charm/venv/ops/main.py", line 143, in _emit_charm_event
    event_to_emit.emit(*args, **kwargs)
  File "/var/lib/juju/agents/unit-nginx-ingress-integrator-0/charm/venv/ops/framework.py", line 350, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-nginx-ingress-integrator-0/charm/venv/ops/framework.py", line 849, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-nginx-ingress-integrator-0/charm/venv/ops/framework.py", line 939, in _reemit
    custom_handler(event)
  File "./src/charm.py", line 436, in _on_certificates_relation_created
    hostnames = self.get_additional_hostnames()
  File "./src/charm.py", line 423, in get_additional_hostnames
    definition = self._get_definition_from_relation(relation)  # type: ignore[arg-type]
  File "./src/charm.py", line 199, in _get_definition_from_relation
    ingress_definition = IngressDefinition.from_essence(definition_essence)
  File "/var/lib/juju/agents/unit-nginx-ingress-integrator-0/charm/src/ingress_definition.py", line 595, in from_essence
    upstream_endpoint_type=essence.upstream_endpoint_type,
  File "/var/lib/juju/agents/unit-nginx-ingress-integrator-0/charm/src/ingress_definition.py", line 489, in upstream_endpoint_type
    if not self.upstream_endpoints:
  File "/var/lib/juju/agents/unit-nginx-ingress-integrator-0/charm/src/ingress_definition.py", line 465, in upstream_endpoints
    raise InvalidIngressError("no endpoints are provided in ingress relation")
exceptions.InvalidIngressError: no endpoints are provided in ingress relation

Model   Controller          Cloud/Region        Version  SLA          Timestamp
c-test  microk8s-localhost  microk8s/localhost  3.1.8    unsupported  18:22:06+02:00

App                       Version  Status   Scale  Charm                     Channel    Rev  Address         Exposed  Message
nginx-ingress-integrator           waiting      1  nginx-ingress-integrator  stable      95  10.152.183.67   no       installing agent
postgresql-k8s            14.10    active       1  postgresql-k8s            14/stable  193  10.152.183.254  no       Primary
repo-policy-compliance             active       1  repo-policy-compliance    edge        12  10.152.183.89   no       
self-signed-certificates           active       1  self-signed-certificates  stable      72  10.152.183.24   no       

Unit                         Workload  Agent  Address       Ports  Message
nginx-ingress-integrator/0*  error     idle   10.1.129.161         hook failed: "certificates-relation-created"
postgresql-k8s/0*            active    idle   10.1.129.160         Primary
repo-policy-compliance/0*    active    idle   10.1.129.158         
self-signed-certificates/0*  active    idle   10.1.129.162

nginx-ingress-integrator/0:
  opened-ports: []
  charm: ch:amd64/focal/nginx-ingress-integrator-95
  leader: true
  life: alive
  relation-info:
  - relation-id: 7
    endpoint: certificates
    related-endpoint: certificates
    application-data:
      certificates: '[]'
    related-units:
      self-signed-certificates/0:
        in-scope: true
        data:
          egress-subnets: 10.152.183.24/32
          ingress-address: 10.152.183.24
          private-address: 10.152.183.24
  - relation-id: 6
    endpoint: ingress
    related-endpoint: ingress
    application-data:
      model: '"c-test"'
      name: '"repo-policy-compliance"'
      port: "8000"
      strip-prefix: "true"
    related-units:
      repo-policy-compliance/0:
        in-scope: true
        data:
          host: '"repo-policy-compliance-0.repo-policy-compliance-endpoints.c-test.svc.cluster.local"'
          ip: '"10.1.129.158"'
  - relation-id: 5
    endpoint: nginx-peers
    related-endpoint: nginx-peers
    application-data: {}
    local-unit:
      in-scope: true
      data:
        egress-subnets: 10.152.183.67/32
        ingress-address: 10.152.183.67
        private-address: 10.152.183.67
  provider-id: nginx-ingress-integrator-0
  address: 10.1.129.161

Additional context

No response

@mthaddon

From juju debug-code nginx-ingress-integrator/0, after running juju resolved nginx-ingress-integrator/0 in the above situation:

(Pdb) definition_essence.ingress_provider.get_data(definition_essence.relation)
IngressRequirerData(app=IngressRequirerAppData(model='c-test', name='repo-policy-compliance', port=8000, strip_prefix=True, redirect_https=False, scheme='http'), units=[])

So the problem here is that units is empty, and the following check in the code then raises the error:

if self.use_endpoint_slice and not endpoints:
    raise InvalidIngressError("no endpoints are provided in ingress relation")
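For reference, a hedged sketch of how that check might sit inside the upstream_endpoints property named in the traceback; apart from upstream_endpoints, use_endpoint_slice and the error message, the attribute names and the way endpoints are derived are assumptions, not the actual contents of src/ingress_definition.py:

# Hedged reconstruction, not the real code: endpoints are assumed to be built
# from the per-unit data of the ingress relation, so an empty
# IngressRequirerData.units (as seen in the pdb output above) leaves them empty.
@property
def upstream_endpoints(self) -> list:
    data = self.ingress_provider.get_data(self.relation)  # assumed attribute names
    endpoints = [unit.ip for unit in data.units]
    if self.use_endpoint_slice and not endpoints:
        raise InvalidIngressError("no endpoints are provided in ingress relation")
    return endpoints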

@mthaddon

I've also tried this with the most recent version in edge, but it's still failing in the same way:

juju deploy repo-policy-compliance --channel=edge --config charm_token=xy --config github_token=foobar
juju deploy postgresql-k8s --trust
juju integrate postgresql-k8s repo-policy-compliance
juju deploy nginx-ingress-integrator --trust --config service-hostname="test.example.com" --config path-routes="/" --channel latest/edge --base=ubuntu@20.04
juju deploy self-signed-certificates
juju integrate nginx-ingress-integrator repo-policy-compliance
juju integrate nginx-ingress-integrator self-signed-certificates

@weiiwang01

@cbartz I investigated this issue a little bit, and it appears that this is a juju "problem". In charms, relation information (such as remote units, remote applications, relation IDs, etc.) is retrieved from the charm unit agent's state rather than from the Juju controller. The unit agent's relation state is updated only after the relation hook has finished executing (relation.relationer.CommitHook) or just before the relation hook executes (relation.relationSolver.NextOp). Specifically, the remote unit state is updated here, before the relation-joined event.

This is why, before the ingress-relation-joined event is fired, the charm cannot see the ingress requirer units on the other side of the relation, causing the validation error, even if the ingress requirer unit has already joined the relation and we can query the joined state from the controller.

To resolve this issue, instead of catching the exception, we should change the check for the readiness of the relation here to accommodate this situation.

            # check relation.units is not empty
            if relation.app is not None and relation.units and relation.data[relation.app]:
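For illustration, a minimal sketch of where such a readiness check could sit when iterating over the ingress relations; the surrounding loop and the _get_definition_from_relation call are assumptions about the charm's structure based on the traceback above, not the actual code:

# Minimal sketch: treat the relation as ready only when the remote app data is
# present and at least one remote unit is visible to this unit's agent.
for relation in self.model.relations["ingress"]:
    # Before the ingress relation-joined hook runs, the unit agent may not see
    # any remote units yet, so relation.units can be empty even though the
    # relation already exists on the controller side.
    if relation.app is not None and relation.units and relation.data[relation.app]:
        definition = self._get_definition_from_relation(relation)
        ...  # build the ingress definition as usual
    # Otherwise skip for now; a later ingress relation hook will re-run this.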


cbartz commented May 28, 2024


@weiiwang01 Thanks for the deep dive, I have updated the PR as per your suggestion.
