[pull] main from cilium:main #1056

Merged
merged 17 commits into sayboras:main from cilium:main on May 4, 2023

Conversation

pull[bot] commented May 4, 2023

See Commits and Changes for more details.



tommyp1ckles and others added 17 commits May 4, 2023 08:48
Changes in c1a0dba may have introduced flakes for this set of tests
when attempting to reach NodePort from outside the cluster.

A replacement test is being added to cilium cli connectivity tests:

cilium/cilium-cli#1547

Quarantine this test for now until we can remove it.

Addresses: #25119

Signed-off-by: Tom Hadlaw <tom.hadlaw@isovalent.com>
1.14.0-dev is a more accurate version than 1.13.90 for the development
branch of the next Cilium minor release, 1.14.0.

Signed-off-by: Michi Mutsuzaki <michi@isovalent.com>
Signed-off-by: Joseph Sheng <jiajun.sheng@microfocus.com>
hive requires splitting up object lifecycle into New, Start, Stop.
It does this by injecting a hive.Lifecycle into constructors. This
in turn means that we need a lifecycle to create objects during testing.

Implement a minimal hive.Lifecycle for use in testing, which doesn't
distinguish between New and Start phases and instead immediately
executes any start hooks. Stop hooks are invoked at test cleanup time.

Signed-off-by: Lorenz Bauer <lmb@isovalent.com>
Rework the egressmap from a global variable into a Cell. The map still
has singleton behaviour due to map pinning, but at least for unit tests
we can opt out of pinning and isolate the map from global changes.

Stop writing out EGRESS_POLICY_MAP_SIZE into node_config.h since
the map definition in C is never actually used. The map size is
configured via a command line parameter instead. Hardcode the previously
used default value to avoid errors when running BPF unit tests.
This is simpler than wiring the new PolicyConfig into HeaderfileWriter.

Reuse the MapOut strategy from configmap and authmap to ensure that
the egressmap is initialized by the agent before the loader is invoked.
Otherwise map creation might use the (incorrect) MaxElems from the
compiled C code.

Signed-off-by: Lorenz Bauer <lmb@isovalent.com>
There's no need for the encrypt key to be set on host IPs. This code was
also setting the encrypt key for the `0.0.0.0/0` `world` entry, which
isn't necessary either.

Encryption is done for node-to-node traffic, so the encrypt key doesn't
need to be set on local host IPs.

Fixes: b698972 (cilium: ipsec, support rolling updates)

Signed-off-by: Chris Tarazi <chris@isovalent.com>
Just some capitalized errors that were annoying the linter.

Signed-off-by: Casey Callendrello <cdc@isovalent.com>
We determined exactly how to chain by reading the CNI network name. This
is fragile, and means that end-users can't easily configure chaining for
their own networks.

This keeps the existing logic but adds an explicit chaining-mode field
in the CNI configuration.

Additionally: remove the "portmap" plugin, since all it did was signify
that the network "portmap" was the same as "cilium". There are cleaner
ways to handle that.

Signed-off-by: Casey Callendrello <cdc@isovalent.com>
Signed-off-by: Casey Callendrello <cdc@isovalent.com>
People would like to automatically inject Cilium into CNI configuration
files. Add an option to the daemon that watches for networks of a given
name and injects Cilium into them.

We already supported this in a limited sense with aws-cni, but let's
generalize it to support arbitrary CNI networks.

Signed-off-by: Casey Callendrello <cdc@isovalent.com>
Add the option to set a CNI chaining target explicitly. Also, clean up
some of the defaulting around CNI chaining mode.

Signed-off-by: Casey Callendrello <cdc@isovalent.com>
There was a logic error when handling an unset chaining mode.

Signed-off-by: Casey Callendrello <cdc@isovalent.com>
Introduce a new manager layer, which decouples itself from gobgp
structures. Also, use UUIDs to identify paths instead of gobgp path pointers.

This change is mostly refactoring code and not introducing any
functionality.

Signed-off-by: harsimran pabla <hpabla@isovalent.com>
The planned logic was to only update the proxy network policy if the added
policy map entries included proxy redirections. This design did not
consider the possibility of a concurrent endpoint policy computation
consuming the part of the policy map entries with redirections, which led
to the policy update being missed after the remaining FQDN selectors were
applied.

For example:

 1. A DNS proxy answer to the query 'www.example.com' applies to two FQDN
    selectors: 'www.example.com' with an L3-only rule, and '*.example.com'
    with a rule on port 80 to which an HTTP policy allowing only GET
    requests is applied.
 2. A new local security ID is allocated for the IP in the DNS answer.
 3. Endpoint regenerations are spawned due to the new ID (which could
    also match a CIDR-based identity selector in addition to the FQDN
    selectors).
 4. The DNS proxy adds the new security ID to the '*.example.com' selector
    associated with the HTTP rule.
 5. Endpoint regeneration consumes the above ID update (from another
    goroutine). As this update is associated with an HTTP rule, the proxy
    network policy is recomputed. During this computation the new security
    ID is only associated with the selector '*.example.com'.
 6. The DNS proxy adds the same new security ID to the selector
    'www.example.com', which has no HTTP rules.
 7. The DNS proxy waits for the FQDN selector updates to finish.
 8. The DNS proxy consumes the computed policy map keys, but only sees the
    ones for the selector 'www.example.com', as the ones for the selector
    '*.example.com' were already consumed. Since 'www.example.com' has no
    HTTP rules, the proxy network policy update is skipped.
 9. The DNS proxy responds to the pod with the DNS answer.
10. The pod sends an HTTP POST request to the resolved IP on port 80.
11. Traffic to port 80 is redirected to the proxy.
12. The proxy only has the ID associated with the destination IP in the
    rule that allows only GET requests, and drops the request.
13. The proxy policy is synced by a controller and healed, so that it is
    eventually consistent. This was too late for the traffic the pod was
    sending.

Fix this by unconditionally computing proxy policy at (8) above.

A more efficient implementation would make sure that all the map updates
due to a specific DNS response reach each endpoint's policy map changes
as one transaction, so that it is only possible to consume them as one
unit. This would make sure that proxy network policy updates are done for
all of the updated selectors if any of them have proxy redirections.

Signed-off-by: Jarno Rajahalme <jarno@isovalent.com>
…stomConf

Signed-off-by: Marcel Zieba <marcel.zieba@isovalent.com>
…nabled

Signed-off-by: Nick Young <nick@isovalent.com>
This change was actually part of #22673, but then got reverted
unintentionally due to a refactor in #24025.

Relates: #24025
Relates: #22673
Fixes: #21993
Signed-off-by: Tam Mach <tam.mach@cilium.io>
@pull pull bot added the ⤵️ pull label May 4, 2023
@pull pull bot merged commit 53f26bd into sayboras:main May 4, 2023