
[DownstreamMerge] 11 jul 23 #1752

Merged
merged 37 commits into openshift:master on Jul 12, 2023

Conversation

jcaamano
Contributor

Conflict on base_network_controller_pods.go, as joinSubnetToRoute and AddRoutesGatewayIP were moved to util/pod_annotation.go

/cc @dcbw @maiqueb @npinaeva @trozet

maiqueb and others added 30 commits June 28, 2023 15:51
This change attempts to simplify how the network admin configures the
underlay network to which the secondary network overlays connect; the
user must configure the OVN bridge mappings by associating the name of
the physical network with the name of the desired OVS bridge.

Currently that is done via the following pattern:
`<name of phys network>_br-localnet:<name of ovs bridge>`

An example would be:
`tenantblue_br-localnet:ovsbr1`

This commit simplifies that by removing the suffix; the bridge mappings
would be:
`<name of phys network>:<name of ovs bridge>`

And an example would be:
`tenantblue:ovsbr1`

Signed-off-by: Miguel Duarte Barroso <mdbarroso@redhat.com>
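
For illustration, a minimal sketch (not the project's actual parsing code) of how the simplified `<name of phys network>:<name of ovs bridge>` mapping format could be parsed:

```go
package main

import (
	"fmt"
	"strings"
)

// parseBridgeMappings splits a bridge-mappings string such as
// "tenantblue:ovsbr1,tenantred:ovsbr2" into a physical-network -> OVS-bridge
// map. Illustrative sketch only, not the actual OVN-K parser.
func parseBridgeMappings(mappings string) (map[string]string, error) {
	result := map[string]string{}
	for _, mapping := range strings.Split(mappings, ",") {
		parts := strings.Split(mapping, ":")
		if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
			return nil, fmt.Errorf("invalid bridge mapping %q", mapping)
		}
		result[parts[0]] = parts[1] // <name of phys network> -> <name of ovs bridge>
	}
	return result, nil
}

func main() {
	m, err := parseBridgeMappings("tenantblue:ovsbr1")
	if err != nil {
		panic(err)
	}
	fmt.Println(m) // map[tenantblue:ovsbr1]
}
```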
Every time attachamentName() was being used, a new network name was
being generated. If it was used more than once throughout a single test,
the different parts of that test would use different network names.
Specifically for localnet tests, this meant that the bridge mapping
would not be set with the same network name as the NAD network name.

Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
The bridge mapping was not being generated with the physnet expected by
OVN-K, which replaces `-` and `/` with `.` in the network name.

Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
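
A small sketch of the character replacement described above; the helper name is hypothetical and not the actual OVN-K function:

```go
package util

import "strings"

// physnetName mirrors the transformation described above: when deriving the
// physnet name used in bridge mappings, `-` and `/` in the network name are
// replaced with `.`. Hypothetical helper, for illustration only.
func physnetName(networkName string) string {
	return strings.NewReplacer("-", ".", "/", ".").Replace(networkName)
}

// physnetName("tenant-blue/net") == "tenant.blue.net"
```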
Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
 docker and podman do different things with the --internal parameter:
 - docker installs iptables rules to drop traffic on a different subnet
   than the bridge and we don't want that.
 - podman does not set the bridge as default gateway and we want that.
 So we need it with podman but not with docker. Neither allows us to create
 a bridge network without IPAM which would be ideal, so perhaps the best
 option would be a manual setup.

Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
…et-port-suffix

multi-homing, localnet: update localnet port network name
We have the logical switch name as client index so we no longer need to
cache the UUID.

Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
The ipallocator package will be extensively used from the cluster
manager, so it makes sense for it to be a top-level package rather than
sitting in the ovn package.

Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
Move pkg/ipallocator to pkg/allocator/ip
Move pkg/ipallocator/allocator to pkg/allocator/bitmap

This makes sense since other components are using the allocator package
without it actually being associated to allocating IPs.

It also facilitates proper placement of a future subnet IP allocator.

Had to fix an issue in hack/test-go.sh where it was using a recursive
grep to find ginkgo references and pass ginkgo arguments. This grep
should not be recursive, otherwise it passes ginkgo arguments when a
subpackage has ginkgo references but the root package does not, which
was the case for pkg/allocator after this change.

Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
Move most of the logical switch manager functionality to an independent
subnet IP allocator so it can be used from different places without
being associated with the concept of switches or nodes.

Logical switch manager retains the functionality of reserving the
gateway, management and hybrid overlay IPs for every allocated subnet.

Everything is functionally equivalent except when reserving the hybrid
overlay IP. An issue was fixed where it could be releasing an IP it had
not allocated in the first place.

Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
Add the ability to pre-allocate IPs in the subnet IP allocator, as it
is a common need: we use this for a variety of purposes, including
reserving well-known addresses in the default network or L3 secondary
networks, as well as reserving user-requested addresses for L2 secondary
networks.

Note that this is just a pre-allocation. The subnet IP allocator does
not distinguish between pre-allocated IPs and normally allocated IPs and
will return the same ErrAllocated error on re-allocation.

In the process, refactor GetIPFullMask to be based on net.IPMask,
avoiding parsing in a few places.

Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
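
As a sketch of the net.IPMask-based shape described above (the exact signature in the tree may differ):

```go
package util

import "net"

// GetIPFullMask returns a full host mask for the given IP: /32 for IPv4 and
// /128 for IPv6. Sketch of the net.IPMask-based refactor described above;
// the actual signature in the tree may differ.
func GetIPFullMask(ip net.IP) net.IPMask {
	if ip.To4() != nil {
		return net.CIDRMask(32, 32)
	}
	return net.CIDRMask(128, 128)
}
```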
Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
So that it can be tested independently without having to set up a
controller test harness.

Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
Moves most of the PodAnnotation setting logic from the base network
controller to a utility so that it can be used from the cluster manager
as well.

Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
The network cluster controller will be in charge of coordinating more
than just host subnets, so move that functionality to the host subnet
allocator itself. The functionality fits there, as the host subnet
allocator was otherwise little more than a façade over the base subnet
allocator.

Functionally equivalent.

Fixes subnet count metric being used for all networks instead of just
the default network.

Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
Adds an IP allocator to the cluster manager. The allocator reconciles
pods triggered by events registered by the cluster manager network
controller, which now listens to pod events when interconnect is
enabled.

The IP allocator not only allocates IPs but fully handles the
PodAnnotation as well. It derives the MAC address and gateways from the
allocated IPs, or honors explicit requests for them, and sets the
annotation on the Pod. This triggers the CNI side to attach the pod
interface while the master network controller configures the northbound
database.

Only enabled for localnet secondary networks with interconnect for
now. In that scenario, the master network controller is inhibited from
setting the PodAnnotation, leaving it up to the cluster manager.

Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
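
For illustration, a simplified sketch of the per-pod data the allocator manages and of deriving a MAC from the first allocated IPv4 address when none is explicitly requested; field and helper names are approximations, not the actual OVN-K types:

```go
package pod

import (
	"fmt"
	"net"
)

// podAnnotation is a simplified sketch of the data managed per pod
// attachment; field names are approximate.
type podAnnotation struct {
	IPs      []*net.IPNet
	MAC      net.HardwareAddr
	Gateways []net.IP
	TunnelID int
}

// macFromIP illustrates deriving a MAC from an allocated IPv4 address when
// no MAC was explicitly requested (OVN-K uses a 0a:58 prefix for IPv4).
// Illustrative only; the real helper lives in the util package.
func macFromIP(ip net.IP) (net.HardwareAddr, error) {
	v4 := ip.To4()
	if v4 == nil {
		return nil, fmt.Errorf("only sketched for IPv4: %s", ip)
	}
	return net.HardwareAddr{0x0a, 0x58, v4[0], v4[1], v4[2], v4[3]}, nil
}
```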
Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
Namespace and pod handlers need different commands to be cleaned up,
but we used to call RemovePodHandler for both namespace and pod
handlers.

Signed-off-by: Nadia Pinaeva <npinaeva@redhat.com>
We used to pass oc.stopChan to the retryFramework for network policy
handlers, but that means the retry loop for failed objects is not
stopped on network policy delete, leaking goroutines.
Create a getChildStopChan function to pass the stop signal on both oc
shutdown and network policy delete.

Signed-off-by: Nadia Pinaeva <npinaeva@redhat.com>
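
A minimal sketch of how such a child stop channel could be built; the real getChildStopChan implementation may differ:

```go
package ovn

// getChildStopChan returns a channel that is closed when either the parent
// stop channel (e.g. oc.stopChan) or the child-specific stop channel (e.g.
// one closed on network policy delete) is closed. Sketch only.
func getChildStopChan(parentStopChan, childStopChan <-chan struct{}) <-chan struct{} {
	combined := make(chan struct{})
	go func() {
		defer close(combined)
		select {
		case <-parentStopChan:
		case <-childStopChan:
		}
	}()
	return combined
}
```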
This egressip test updates and then deletes an egressip. The
expectations in between do not correctly verify that the update has been
processed. Sometimes the update would not be processed at all because we
get the delete event before that.

Then, when we process the delete, we only look at the deleted egress IP
status items and not at any other status items the pod had previously
been assigned. In this specific case those other status items are not
cleared and the standby egressip does not take over.

This fixes the UT to correctly verify that the update has been
processed, but there might be another bug to fix in the egressip logic.

Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
Enable interconnect for localnet topology
Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
The ID allocator will be used from different packages. Let's add an
interface to export its functionality and allow easy mocking.

Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
So that it can be used to allocate tunnel IDs for pods from the
clustermanager/pod package.

Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
Tunnel IDs will be allocated for a specific network from a context that
should not affect allocation for other networks. Let's protect against
this with an interface bound to a specific network.

Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
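
A hedged sketch of what binding a shared ID allocator to a single network might look like; type and method names are illustrative, not the actual interfaces:

```go
package id

// sharedAllocator is an illustrative view of a cluster-wide ID allocator
// keyed by name.
type sharedAllocator interface {
	AllocateID(name string) (int, error)
	ReserveID(name string, id int) error
	ReleaseID(name string)
}

// networkAllocator scopes a shared allocator to one network by prefixing
// every key with the network name, so one network's allocations cannot
// disturb another's. Sketch only; the real types may differ.
type networkAllocator struct {
	network string
	shared  sharedAllocator
}

func (a *networkAllocator) AllocateID(name string) (int, error) {
	return a.shared.AllocateID(a.network + "/" + name)
}

func (a *networkAllocator) ReserveID(name string, id int) error {
	return a.shared.ReserveID(a.network+"/"+name, id)
}

func (a *networkAllocator) ReleaseID(name string) {
	a.shared.ReleaseID(a.network + "/" + name)
}
```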
Pods attached to secondary networks with layer2 topologies on
interconnect require an allocated tunnel ID to be configured as the
tunnel key on their logical switch ports on the transit switch.

Allocate this tunnel ID in the PodAnnotation, as it shares the same
lifecycle as the other data stored there.

Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
Since it will now allocate tunnel IDs as well and not just IPs.

Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
Pods attached to secondary networks with layer2 topologies on
interconnect require an allocated tunnel ID to be configured as the
tunnel key on their logical switch ports on the transit switch.

This adds support for allocating such IDs from the cluster manager.

Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
Tunnel IDs allocated to pods have a more dynamic lifecycle than IDs
allocated to networks. Let's change the allocation strategy to round
robin to reduce the possibility of collisions where a de-allocated
tunnel ID is allocated to a new pod while still being used by a zone
that has not yet processed the original pod termination event.

Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
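
A sketch of the round-robin idea under the assumption of a simple bitmap-backed allocator; not the actual implementation:

```go
package bitmap

// roundRobinAllocator illustrates the strategy described above: resume the
// search just past the last allocated offset, so a recently released ID is
// not handed out again right away.
type roundRobinAllocator struct {
	used []bool // one slot per allocatable ID
	last int    // offset of the last allocation
}

func (a *roundRobinAllocator) allocate() (int, bool) {
	n := len(a.used)
	for i := 1; i <= n; i++ {
		candidate := (a.last + i) % n
		if !a.used[candidate] {
			a.used[candidate] = true
			a.last = candidate
			return candidate, true
		}
	}
	return 0, false // exhausted
}
```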
@jcaamano jcaamano requested a review from dcbw as a code owner July 11, 2023 10:29
@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jul 11, 2023
@maiqueb
Contributor

maiqueb commented Jul 11, 2023

/lgtm on my stuff

@npinaeva
Member

/lgtm for my commits

PR#3609 removed the suffix `_br-localnet` from the localnet network
name used in the bridge mappings, but also inadvertently removed the
replacement of `-` and `/` characters from the controller code, while
leaving it in the tests.

Let's remove it from the tests as well, since those characters don't
look troublesome either as the network name in the logical switch port
options or in the OVS bridge mappings.

Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
@tssurya
Contributor

tssurya commented Jul 11, 2023

/retest

@jcaamano
Contributor Author

/retest-required

Enable multi-homing e2e tests for localnet with IC
Allocate tunnel IDs to pods attached to layer2 networks with interconnect
…update

Fix egressip test not waiting for update processing before delete
@dcbw
Contributor

dcbw commented Jul 11, 2023

V4SubnetAllocationThresholdExceeded alerts introduced by this PR should be fixed by ovn-org/ovn-kubernetes#3760 upstream

dcbw and others added 2 commits July 12, 2023 10:35
The original Usage() returned both used and available subnets for
the allocator, which is confusing and error-prone. The commit being
fixed used the total count (used + unused) for both count and usage,
leading to OpenShift alerts about subnets exceeded because
(usage / count) = 1.

Instead, break the API into two separate calls with clear meaning:
Usage() returns how many subnets are allocated, and Count() returns
how many are possible/available.

{  fail [github.com/openshift/origin/test/extended/prometheus/prometheus.go:587]: Unexpected error:
    <errors.aggregate | len:1, cap:1>:
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailed|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing",alertstate="firing",severity!="info"} >= 1
    [
      {
        "metric": {
          "__name__": "ALERTS",
          "alertname": "V4SubnetAllocationThresholdExceeded",
          "alertstate": "firing",
          "container": "kube-rbac-proxy",
          "endpoint": "metrics",
          "instance": "10.0.0.4:9102",
          "job": "ovnkube-master",
          "namespace": "openshift-ovn-kubernetes",
          "pod": "ovnkube-master-hjp2x",
          "prometheus": "openshift-monitoring/k8s",
          "service": "ovn-kubernetes-master",
          "severity": "warning"
        },
        "value": [
          1689091264.978,
          "1"
        ]
      }
    ]
    [
        <*errors.errorString | 0xc00194d430>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailed|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing\",alertstate=\"firing\",severity!=\"info\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"V4SubnetAllocationThresholdExceeded\",\n      \"alertstate\": \"firing\",\n      \"container\": \"kube-rbac-proxy\",\n      \"endpoint\": \"metrics\",\n      \"instance\": \"10.0.0.4:9102\",\n      \"job\": \"ovnkube-master\",\n      \"namespace\": \"openshift-ovn-kubernetes\",\n      
\"pod\": \"ovnkube-master-hjp2x\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"ovn-kubernetes-master\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1689091264.978,\n      \"1\"\n    ]\n  }\n]",
        },
    ]
occurred

Fixes: d86604b ("Move subnet handling out of network cluster controller")
Signed-off-by: Dan Williams <dcbw@redhat.com>
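
For illustration, a sketch of the split API described above; signatures are assumed rather than copied from the tree:

```go
package subnet

// Allocator sketches the split API: Usage reports how many subnets are
// currently allocated and Count how many the allocator can hand out in
// total, so an alert comparing usage against count only fires when
// allocation genuinely approaches exhaustion. Signatures are assumed.
type Allocator interface {
	Usage() uint64 // subnets currently allocated
	Count() uint64 // total subnets the allocator can provide
}
```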
Signed-off-by: Jaime Caamaño Ruiz <jcaamano@redhat.com>
@jcaamano
Contributor Author

V4SubnetAllocationThresholdExceeded alerts introduced by this PR should be fixed by ovn-org/ovn-kubernetes#3760 upstream

Thanks for the fix @dcbw. Copy-paste snafu on my side there.

@jcaamano
Contributor Author

/retest-required

1 similar comment
@jcaamano
Contributor Author

/retest-required

@dcbw
Contributor

dcbw commented Jul 12, 2023

/test ci/prow/e2e-aws-ovn-upgrade
/lgtm

@openshift-ci
Contributor

openshift-ci bot commented Jul 12, 2023

@dcbw: The specified target(s) for /test were not found.
The following commands are available to trigger required jobs:

  • /test 4.14-upgrade-from-stable-4.13-e2e-aws-ovn-upgrade
  • /test 4.14-upgrade-from-stable-4.13-images
  • /test 4.14-upgrade-from-stable-4.13-local-gateway-e2e-aws-ovn-upgrade
  • /test 4.14-upgrade-from-stable-4.13-local-gateway-images
  • /test e2e-aws-ovn
  • /test e2e-aws-ovn-local-gateway
  • /test e2e-aws-ovn-local-to-shared-gateway-mode-migration
  • /test e2e-aws-ovn-serial
  • /test e2e-aws-ovn-shared-to-local-gateway-mode-migration
  • /test e2e-aws-ovn-upgrade
  • /test e2e-aws-ovn-upgrade-local-gateway
  • /test e2e-aws-ovn-windows
  • /test e2e-gcp-ovn
  • /test e2e-metal-ipi-ovn-dualstack
  • /test e2e-metal-ipi-ovn-ipv6
  • /test e2e-vsphere-windows
  • /test gofmt
  • /test images
  • /test lint
  • /test okd-images
  • /test unit

The following commands are available to trigger optional jobs:

  • /test e2e-aws-ovn-hypershift
  • /test e2e-aws-ovn-kubevirt
  • /test e2e-azure-ovn
  • /test e2e-metal-ipi-ovn-ipv4
  • /test e2e-openstack-ovn
  • /test e2e-ovn-hybrid-step-registry
  • /test e2e-vsphere-ovn
  • /test okd-e2e-gcp-ovn

Use /test all to run the following jobs that were automatically triggered:

  • pull-ci-openshift-ovn-kubernetes-master-4.14-upgrade-from-stable-4.13-e2e-aws-ovn-upgrade
  • pull-ci-openshift-ovn-kubernetes-master-4.14-upgrade-from-stable-4.13-images
  • pull-ci-openshift-ovn-kubernetes-master-4.14-upgrade-from-stable-4.13-local-gateway-e2e-aws-ovn-upgrade
  • pull-ci-openshift-ovn-kubernetes-master-4.14-upgrade-from-stable-4.13-local-gateway-images
  • pull-ci-openshift-ovn-kubernetes-master-e2e-aws-ovn
  • pull-ci-openshift-ovn-kubernetes-master-e2e-aws-ovn-hypershift
  • pull-ci-openshift-ovn-kubernetes-master-e2e-aws-ovn-kubevirt
  • pull-ci-openshift-ovn-kubernetes-master-e2e-aws-ovn-local-gateway
  • pull-ci-openshift-ovn-kubernetes-master-e2e-aws-ovn-local-to-shared-gateway-mode-migration
  • pull-ci-openshift-ovn-kubernetes-master-e2e-aws-ovn-serial
  • pull-ci-openshift-ovn-kubernetes-master-e2e-aws-ovn-shared-to-local-gateway-mode-migration
  • pull-ci-openshift-ovn-kubernetes-master-e2e-aws-ovn-upgrade
  • pull-ci-openshift-ovn-kubernetes-master-e2e-aws-ovn-upgrade-local-gateway
  • pull-ci-openshift-ovn-kubernetes-master-e2e-aws-ovn-windows
  • pull-ci-openshift-ovn-kubernetes-master-e2e-azure-ovn
  • pull-ci-openshift-ovn-kubernetes-master-e2e-gcp-ovn
  • pull-ci-openshift-ovn-kubernetes-master-e2e-metal-ipi-ovn-dualstack
  • pull-ci-openshift-ovn-kubernetes-master-e2e-metal-ipi-ovn-ipv6
  • pull-ci-openshift-ovn-kubernetes-master-e2e-openstack-ovn
  • pull-ci-openshift-ovn-kubernetes-master-e2e-ovn-hybrid-step-registry
  • pull-ci-openshift-ovn-kubernetes-master-e2e-vsphere-ovn
  • pull-ci-openshift-ovn-kubernetes-master-e2e-vsphere-windows
  • pull-ci-openshift-ovn-kubernetes-master-gofmt
  • pull-ci-openshift-ovn-kubernetes-master-images
  • pull-ci-openshift-ovn-kubernetes-master-lint
  • pull-ci-openshift-ovn-kubernetes-master-okd-e2e-gcp-ovn
  • pull-ci-openshift-ovn-kubernetes-master-okd-images
  • pull-ci-openshift-ovn-kubernetes-master-unit

In response to this:

/test ci/prow/e2e-aws-ovn-upgrade
/lgtm

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Jul 12, 2023
@openshift-ci
Contributor

openshift-ci bot commented Jul 12, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dcbw, jcaamano, maiqueb, npinaeva

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@dcbw
Contributor

dcbw commented Jul 12, 2023

/test e2e-aws-ovn-upgrade

@openshift-ci
Contributor

openshift-ci bot commented Jul 12, 2023

@jcaamano: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name | Commit | Details | Required | Rerun command
--- | --- | --- | --- | ---
ci/prow/e2e-azure-ovn | 52ecda3 | link | false | /test e2e-azure-ovn
ci/prow/e2e-ovn-hybrid-step-registry | 52ecda3 | link | false | /test e2e-ovn-hybrid-step-registry

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@openshift-merge-robot openshift-merge-robot merged commit 0c44e51 into openshift:master Jul 12, 2023
27 of 29 checks passed