v1.13 Backports 2023-09-12 #28103

Merged 6 commits on Sep 13, 2023
3 changes: 0 additions & 3 deletions .github/workflows/tests-ipsec-upgrade.yaml
@@ -84,7 +84,6 @@ jobs:
kpr: 'disabled'
tunnel: 'disabled'
encryption: 'ipsec'
-ipv6: 'false' # until https://github.com/cilium/cilium/issues/26944 resolved

- name: '2'
# renovate: datasource=docker depName=quay.io/lvh-images/kind
@@ -94,7 +93,6 @@ jobs:
tunnel: 'disabled'
encryption: 'ipsec'
endpoint-routes: 'true'
-ipv6: 'false' # until https://github.com/cilium/cilium/issues/26944 resolved

- name: '3'
# We don't want to update bpf-next after branching
@@ -104,7 +102,6 @@ jobs:
tunnel: 'vxlan'
encryption: 'ipsec'
endpoint-routes: 'false' # Due to https://github.com/cilium/cilium/pull/22333
-ipv6: 'false' # until https://github.com/cilium/cilium/issues/26944 resolved

timeout-minutes: 60
steps:
24 changes: 10 additions & 14 deletions Documentation/gettingstarted/hubble_intro.rst
@@ -17,18 +17,14 @@ node level, cluster level or even across clusters in a :ref:`Cluster Mesh`
scenario. For an introduction to Hubble and how it relates to Cilium, read the
section :ref:`intro`.

-By default, the Hubble API is scoped to each individual node on which the
-Cilium agent runs. In other words, networking visibility is only provided for
-traffic observed by the local Cilium agent. In this scenario, the only way to
-interact with the Hubble API is by using the Hubble CLI (``hubble``) to query
-the Hubble API provided via a local Unix Domain Socket. The Hubble CLI binary
-is installed by default on Cilium agent pods.
+By default, the Hubble API operates within the scope of the individual node on which the
+Cilium agent runs. This confines the network insights to the traffic observed by the local
+Cilium agent. The Hubble CLI (``hubble``) can be used to query the Hubble API provided via a local
+Unix Domain Socket. The Hubble CLI binary is installed by default on Cilium agent pods.

-When Hubble Relay is deployed, Hubble provides full network visibility. In this
-scenario, the Hubble Relay service provides a Hubble API which scopes the
-entire cluster or even multiple clusters in a ClusterMesh scenario. Hubble data
-can be accessed by pointing a Hubble CLI (``hubble``) to the Hubble Relay
-service or via Hubble UI. Hubble UI is a web interface which enables automatic
-discovery of the services dependency graph at the L3/L4 and even L7 layer,
-allowing user-friendly visualization and filtering of data flows as a service
-map.
+Upon deploying Hubble Relay, network visibility is provided for the entire cluster or even
+multiple clusters in a ClusterMesh scenario. In this mode, Hubble data can be accessed by
+directing the Hubble CLI (``hubble``) to the Hubble Relay service or via Hubble UI.
+Hubble UI is a web interface which enables automatic discovery of the services dependency
+graph at the L3/L4 and even L7 layer, allowing user-friendly visualization and filtering
+of data flows as a service map.
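
For readers who want to consume Relay data programmatically rather than through the CLI or UI, here is a minimal Go sketch that streams a few flows over gRPC. It is an illustration under assumptions, not documented Cilium usage: localhost:4245 presumes something like kubectl port-forward -n kube-system svc/hubble-relay 4245:80 is running, and the client types come from the github.com/cilium/cilium/api/v1/observer package.

package main

import (
	"context"
	"fmt"
	"io"
	"log"

	observerpb "github.com/cilium/cilium/api/v1/observer"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Assumption: hubble-relay is reachable on localhost:4245 (e.g. via a
	// port-forward); mTLS is deliberately skipped to keep the sketch short.
	conn, err := grpc.Dial("localhost:4245", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial hubble-relay: %v", err)
	}
	defer conn.Close()

	// Ask the Observer service for the 10 most recent flows.
	stream, err := observerpb.NewObserverClient(conn).GetFlows(context.Background(),
		&observerpb.GetFlowsRequest{Number: 10})
	if err != nil {
		log.Fatalf("GetFlows: %v", err)
	}
	for {
		resp, err := stream.Recv()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatalf("recv: %v", err)
		}
		if f := resp.GetFlow(); f != nil {
			fmt.Printf("%s -> %s (%s)\n", f.GetIP().GetSource(), f.GetIP().GetDestination(), f.GetVerdict())
		}
	}
}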
3 changes: 3 additions & 0 deletions Documentation/network/egress-gateway.rst
@@ -76,6 +76,9 @@ Because egress gateway isn't compatible with identity allocation mode ``kvstore``,
you must use Kubernetes as Cilium's identity store (``identityAllocationMode``
set to ``crd``). This is the default setting for new installations.

+Egress gateway is not compatible with the Cluster Mesh feature. The gateway selected
+by an egress gateway policy must be in the same cluster as the selected pods.

Egress gateway is not supported for IPv6 traffic.

Enable egress gateway
1 change: 0 additions & 1 deletion pkg/hubble/monitor/consumer.go
@@ -64,7 +64,6 @@ func (c *consumer) sendNumLostEvents() {
// We can now safely reset the counter, as at this point we have
// successfully notified the observer about the amount of events
// that were lost since the previous LostEvent message
-c.observer.GetLogger().Infof("hubble events queue is processing messages again: %d messages were lost", c.numEventsLost)
c.numEventsLost = 0
default:
// We do not need to bump the numEventsLost counter here, as we will
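
The function this diff touches implements a common backlog-counting pattern: drop and count events while the queue is full, then report and reset the counter on the next successful send. A standalone sketch with invented minimal types (this is not the actual consumer code):

package main

import "fmt"

// lostTracker mimics the shape of the bookkeeping above: count events
// dropped while the queue is full, then report the backlog once the
// next send succeeds.
type lostTracker struct {
	events chan int
	lost   int
}

func (t *lostTracker) send(ev int) {
	select {
	case t.events <- ev:
		if t.lost > 0 {
			// Queue accepted an event again: report the backlog once, then reset.
			fmt.Printf("queue is processing again: %d events were lost\n", t.lost)
			t.lost = 0
		}
	default:
		// Queue full: drop the event but remember that we did.
		t.lost++
	}
}

func main() {
	t := &lostTracker{events: make(chan int, 1)}
	t.send(1) // accepted
	t.send(2) // queue full: dropped, lost becomes 1
	<-t.events
	t.send(3) // accepted again: reports the 1 lost event
}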
2 changes: 1 addition & 1 deletion pkg/node/types/node.go
@@ -564,7 +564,7 @@ func (n *Node) GetIPv6AllocCIDRs() []*cidr.CIDR {
if n.IPv6AllocCIDR != nil {
result = append(result, n.IPv6AllocCIDR)
}
-if len(n.IPv4SecondaryAllocCIDRs) > 0 {
+if len(n.IPv6SecondaryAllocCIDRs) > 0 {
result = append(result, n.IPv6SecondaryAllocCIDRs...)
}
return result
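
The one-character fix above is easy to under-read: GetIPv6AllocCIDRs guarded its append with the length of the IPv4 secondary slice, so a node that had only IPv6 secondary CIDRs silently dropped them. A hypothetical Example-style test showing the corrected behaviour (field names are taken from the diff; the example itself is not part of the PR):

package types

import (
	"fmt"

	"github.com/cilium/cilium/pkg/cidr"
)

// Hypothetical example: before the fix, this printed only the primary
// CIDR, because the empty IPv4SecondaryAllocCIDRs slice (not the IPv6
// one) was consulted in the guard.
func ExampleNode_GetIPv6AllocCIDRs() {
	n := Node{
		Name:                    "node-a",
		IPv6AllocCIDR:           cidr.MustParseCIDR("2001:db8::/32"),
		IPv6SecondaryAllocCIDRs: []*cidr.CIDR{cidr.MustParseCIDR("2002:db8::/32")},
	}
	for _, c := range n.GetIPv6AllocCIDRs() {
		fmt.Println(c.String())
	}
	// Output:
	// 2001:db8::/32
	// 2002:db8::/32
}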
114 changes: 114 additions & 0 deletions pkg/node/types/node_test.go
@@ -4,9 +4,11 @@
package types

import (
"fmt"
"net"
"testing"

"github.com/stretchr/testify/assert"
. "gopkg.in/check.v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

@@ -242,3 +244,115 @@ func (s *NodeSuite) TestNode_ToCiliumNode(c *C) {
},
})
}

func TestGetIPv4AllocCIDRs(t *testing.T) {
var (
cidr1 = cidr.MustParseCIDR("1.0.0.0/24")
cidr2 = cidr.MustParseCIDR("2.0.0.0/24")
cidr3 = cidr.MustParseCIDR("3.0.0.0/24")
)

var tests = []struct {
// name of test
name string
// primary ipv4 allocation cidr
allocCIDR *cidr.CIDR
// secondary ipv4 allocation cidrs
secAllocCIDRs []*cidr.CIDR
// expected ipv4 cidrs
expectedCIDRs []*cidr.CIDR
}{
{
name: "nil cidrs",
allocCIDR: nil,
secAllocCIDRs: nil,
expectedCIDRs: make([]*cidr.CIDR, 0),
},
{
name: "one primary and no secondary cidrs",
allocCIDR: cidr1,
secAllocCIDRs: nil,
expectedCIDRs: []*cidr.CIDR{cidr1},
},
{
name: "one primary and one secondary cidr",
allocCIDR: cidr1,
secAllocCIDRs: []*cidr.CIDR{cidr2},
expectedCIDRs: []*cidr.CIDR{cidr1, cidr2},
},
{
name: "one primary and multiple secondary cidrs",
allocCIDR: cidr1,
secAllocCIDRs: []*cidr.CIDR{cidr2, cidr3},
expectedCIDRs: []*cidr.CIDR{cidr1, cidr2, cidr3},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
n := Node{
Name: fmt.Sprintf("node-%s", tt.name),
IPv4AllocCIDR: tt.allocCIDR,
IPv4SecondaryAllocCIDRs: tt.secAllocCIDRs,
}

actual := n.GetIPv4AllocCIDRs()
assert.Equal(t, tt.expectedCIDRs, actual)
})
}
}

func TestGetIPv6AllocCIDRs(t *testing.T) {
var (
cidr2001 = cidr.MustParseCIDR("2001:db8::/32")
cidr2002 = cidr.MustParseCIDR("2002:db8::/32")
cidr2003 = cidr.MustParseCIDR("2003:db8::/32")
)

var tests = []struct {
// name of test
name string
// primary ipv6 allocation cidr
allocCIDR *cidr.CIDR
// secondary ipv6 allocation cidrs
secAllocCIDRs []*cidr.CIDR
// expected ipv6 cidrs
expectedCIDRs []*cidr.CIDR
}{
{
name: "nil cidrs",
allocCIDR: nil,
secAllocCIDRs: nil,
expectedCIDRs: make([]*cidr.CIDR, 0),
},
{
name: "one primary and no secondary cidrs",
allocCIDR: cidr2001,
secAllocCIDRs: nil,
expectedCIDRs: []*cidr.CIDR{cidr2001},
},
{
name: "one primary and one secondary cidr",
allocCIDR: cidr2001,
secAllocCIDRs: []*cidr.CIDR{cidr2002},
expectedCIDRs: []*cidr.CIDR{cidr2001, cidr2002},
},
{
name: "one primary and multiple secondary cidrs",
allocCIDR: cidr2001,
secAllocCIDRs: []*cidr.CIDR{cidr2002, cidr2003},
expectedCIDRs: []*cidr.CIDR{cidr2001, cidr2002, cidr2003},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
n := Node{
Name: fmt.Sprintf("node-%s", tt.name),
IPv6AllocCIDR: tt.allocCIDR,
IPv6SecondaryAllocCIDRs: tt.secAllocCIDRs,
}

actual := n.GetIPv6AllocCIDRs()
assert.Equal(t, tt.expectedCIDRs, actual)
})
}
}
6 changes: 5 additions & 1 deletion pkg/policy/selectorcache.go
@@ -337,7 +337,11 @@ func (s *selectorManager) Equal(b *selectorManager) bool {
// of the selections. If the old version is returned, the user is
// guaranteed to receive a notification including the update.
func (s *selectorManager) GetSelections() []identity.NumericIdentity {
-return *(*[]identity.NumericIdentity)(atomic.LoadPointer(&s.selections))
+selections := (*[]identity.NumericIdentity)(atomic.LoadPointer(&s.selections))
+if selections == nil {
+return emptySelection
+}
+return *selections
}

// Selects return 'true' if the CachedSelector selects the given
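
The fix replaces an unconditional dereference of an atomically loaded pointer with a nil check, so a selectorManager whose selections were never published returns a shared empty slice instead of panicking. A self-contained sketch of the pattern with invented types (emptySelection and the field layout only mirror the diff):

package main

import (
	"fmt"
	"sync/atomic"
	"unsafe"
)

// Shared empty result, so the nil case allocates nothing per call.
var emptySelection = []int{}

type manager struct {
	selections unsafe.Pointer // holds a *[]int published via StorePointer
}

func (m *manager) GetSelections() []int {
	p := (*[]int)(atomic.LoadPointer(&m.selections))
	if p == nil {
		// Nothing published yet: return the shared empty slice, don't panic.
		return emptySelection
	}
	return *p
}

func (m *manager) setSelections(s []int) {
	atomic.StorePointer(&m.selections, unsafe.Pointer(&s))
}

func main() {
	m := &manager{}
	fmt.Println(len(m.GetSelections())) // 0: safe before the first store
	m.setSelections([]int{1, 2, 3})
	fmt.Println(m.GetSelections()) // [1 2 3]
}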
15 changes: 15 additions & 0 deletions pkg/policy/selectorcache_test.go
@@ -610,6 +610,21 @@ func (ds *SelectorCacheTestSuite) TestIdentityUpdatesMultipleUsers(c *C) {
c.Assert(len(sc.selectors), Equals, 0)
}

func (ds *SelectorCacheTestSuite) TestSelectorManagerCanGetBeforeSet(c *C) {
defer func() {
r := recover()
c.Assert(r, Equals, nil)
}()

selectorManager := selectorManager{
key: "test",
users: make(map[CachedSelectionUser]struct{}),
}
selections := selectorManager.GetSelections()
c.Assert(selections, Not(Equals), nil)
c.Assert(len(selections), Equals, 0)
}

func testNewSelectorCache(ids cache.IdentityCache) *SelectorCache {
sc := NewSelectorCache(testidentity.NewMockIdentityAllocator(ids), ids)
sc.SetLocalIdentityNotifier(testidentity.NewDummyIdentityNotifier())