
fix: adds network policies to unblock traffic between components #145

Merged

Conversation

@bartoszmajsak (Contributor) commented Jan 18, 2024

Description

This solution is based on an initially proposed script (https://github.com/bartoszmajsak/issue-work-logs/tree/main/RHOAIENG-1003), but with slightly modified namespace selectors.

Additional changes

  • provides a composed matcher to inspect nested structs easily
  • cleans up test setup for KServe
  • renames variables in reconcilers to reflect actual objects used

Fixes https://issues.redhat.com/browse/RHOAIENG-1955

How Has This Been Tested?

I included a simple integration test following the existing setup in the project.

I would like to get some help testing it e2e, perhaps based on https://github.com/rh-aiservices-bu/fraud-detection/blob/main/setup/setup-s3.yaml, which revealed the issue. Is there some other repo with e2e tests I could use for this?

Merge criteria:

  • The commits are squashed in a cohesive manner and have meaningful messages.
  • Testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious).
  • The developer has manually tested the changes and verified that they work.

"time"
)

var _ = Describe("The Openshift Kserve model controller", func() {

When("creating a Kserve ServiceRuntime & InferenceService", func() {
var testNs string

BeforeEach(func() {
Contributor Author:

BeforeEach, as it was initially defined, made adding new tests to this node/block impossible. I did the following to unblock it (see the sketch after this list):

  • moved istio-system namespace creation to BeforeSuite, as otherwise it would be created for every test, which is not wanted
  • made each test deploy its resources to a randomly named namespace
  • ensured that configs are created only once by ignoring the "already exists" error

The first and last points could be done differently: in theory we could use BeforeAll, but that would require the tests to be Ordered, which would block parallel execution.
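
A minimal sketch of that setup, assuming Ginkgo/Gomega with a package-level controller-runtime client cli and context ctx provided by the suite; the wiring and names here are illustrative, not the PR's actual code:

	import (
		. "github.com/onsi/ginkgo"
		. "github.com/onsi/gomega"
		corev1 "k8s.io/api/core/v1"
		"k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	var _ = BeforeSuite(func() {
		// Shared fixtures are created once per suite run; tolerating
		// "already exists" keeps reruns against the same cluster green.
		istioNs := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "istio-system"}}
		if err := cli.Create(ctx, istioNs); err != nil && !errors.IsAlreadyExists(err) {
			Fail(err.Error())
		}
	})

	var _ = Describe("The Openshift Kserve model controller", func() {
		var testNs string

		BeforeEach(func() {
			// Each test deploys into a randomly named namespace, so tests stay
			// independent and can run in parallel without Ordered containers.
			ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{GenerateName: "test-ns-"}}
			Expect(cli.Create(ctx, ns)).To(Succeed())
			testNs = ns.Name // filled in by the API server from GenerateName
		})
	})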

// - application namespace, where OpenDataHub and component services are deployed
// - namespaces created by OpenDataHub where its components live
// - traffic from other DataScienceProjects (namespaces created through dashboard)
func (r *KserveNetworkPolicyReconciler) allowTrafficFromApplicationNamespaces(isvc *kservev1beta1.InferenceService) *v1.NetworkPolicy {
Contributor Author:

@zdtsw can you help me confirm if these are all the labels we need?

Member:

These should cover all namespaces: those created by the ODH operator and the DS projects created through the UI.

@bartoszmajsak bartoszmajsak changed the title chore: renames variables to reflect actual objects fix: adds network policies to unblock traffic between components Jan 18, 2024
@israel-hdez (Contributor) left a comment:

Code-wise, I'm not seeing anything wrong.

Tomorrow I will check if we are covered on the bad cases we found.

Comment on lines +57 to +58
	if err := cli.Create(ctx, inferenceServiceConfig); err != nil && !errors.IsAlreadyExists(err) {
		Fail(err.Error())
	}
Contributor:

Not sure if the following would work, nor whether it is really a simplification... but leaving it here in case you find it fancy (and it works):

Suggested change:

	if err := cli.Create(ctx, inferenceServiceConfig); err != nil && !errors.IsAlreadyExists(err) {
		Fail(err.Error())
	}

	Expect(cli.Create(ctx, inferenceServiceConfig)).To(SatisfyAny(
		Succeed(),
		WithTransform(errors.IsAlreadyExists, BeTrue()),
	))

Contributor Author:

Hey, that's cool. I learned something today :) I totally forgot WithTransform existed.

Generally speaking, I'm on the fence as far as "easy to understand / more readable" is concerned.

controllers/kserve_inferenceservice_controller_test.go (resolved)
ObjectMeta: metav1.ObjectMeta{
	Name:      openshiftIngressNetworkPolicyName,
	Namespace: isvc.Namespace,
	Labels:    map[string]string{"opendatahub.io/related-to": "RHOAIENG-1003"},
Contributor:

For future clean-up?

Contributor Author:

Yeah, I thought it could help with deprecation / upgrades moving forward. Won't harm, I guess.

Contributor:

These types of labels or comments in code don't hurt the functionality, but they pollute the overall code quality. I would say we could create a Jira for its deprecation and add it to the backlog so that it doesn't get missed going forward.

Contributor Author:

Let me give you a more elaborate reason for the existence of this label. At some point we will provide a better-suited solution for network policies (that's actually how we started). That would mean not creating defaults on the OSSM side, and then these policies will also go away (they exist solely to mitigate the restrictions imposed by the OSSM defaults).

Considering the upgrade path, with existing Data Science Projects and these policies already in place, we should take care of removing them. One way I can think of is to delete all instances of these special NetworkPolicies during the odh-operator upgrade process. Using labels makes it very straightforward to fetch them all at once (see the sketch below). Another solution (done in #136) is to keep their names in the code and ensure they're removed as part of the reconciliation loop.

pollute the overall code quality

Can you elaborate a bit more on this? I would like to understand how I can make it better. One thing that comes to mind is to extract it to a const.
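
A sketch of the label-based cleanup described above, assuming a controller-runtime client; the function name is illustrative, and only the label key/value is taken from this PR:

	import (
		"context"

		v1 "k8s.io/api/networking/v1"
		"k8s.io/apimachinery/pkg/api/errors"
		"sigs.k8s.io/controller-runtime/pkg/client"
	)

	// deleteMitigationPolicies removes every NetworkPolicy carrying the
	// mitigation label, across all namespaces, without hard-coding any names.
	func deleteMitigationPolicies(ctx context.Context, cli client.Client) error {
		policies := &v1.NetworkPolicyList{}
		if err := cli.List(ctx, policies,
			client.MatchingLabels{"opendatahub.io/related-to": "RHOAIENG-1003"}); err != nil {
			return err
		}
		for i := range policies.Items {
			// Tolerate policies already removed by a concurrent cleanup.
			if err := cli.Delete(ctx, &policies.Items[i]); err != nil && !errors.IsNotFound(err) {
				return err
			}
		}
		return nil
	}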

Contributor:

I favor the #136 way of handling the deletion of the NP.

Contributor Author:

I think we differ on how the cleanup should be handled, so I hope we can work out a consensus with the other reviewers :)

Thinking about it more, the logic should be implemented here, as it's this controller that owns it. I still advocate, however, for not keeping the policy names in the code, but leveraging the label to handle such cases in a more generic fashion. I also wonder whether we need this as part of the reconciliation loop at all; maybe a one-off call during the upgrade is sufficient?

The name of the label is something to discuss further; as we all know, naming is one of the hardest problems in software engineering :)

Contributor:

I do not have a strong opinion on how to do the clean-up.

Perhaps my only preference is to keep the cleanup outside the reconcile loop, i.e. at controller startup (see the sketch below). This would mean either:

  • adding code that iterates through existing ISVCs at start-up and specifically looks for these NPs to delete them; or
  • keeping these labels, and directly querying for and deleting the NPs.

The former is cleaner for the resources in the cluster, while the latter takes less of our time to implement.

About #136: when we need to re-introduce that code, I'd suggest reworking it to do the deletion at controller startup rather than in the reconcile loop.

These clean-ups are similar to database migrations in other frameworks: ideally, they should run once on upgrade. But we don't have a component that runs migrations, so the next best option is to run the migration at controller startup and keep the reconcile loop clean.
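
A sketch of that startup-migration idea, assuming controller-runtime's manager package (sigs.k8s.io/controller-runtime/pkg/manager) inside main(); cleanupLegacyPolicies is a hypothetical stand-in for whichever deletion routine is chosen:

	// One-off cleanup registered on the manager: it runs when the manager
	// starts, keeping migration logic out of the reconcile loop itself.
	if err := mgr.Add(manager.RunnableFunc(func(ctx context.Context) error {
		return cleanupLegacyPolicies(ctx, mgr.GetClient()) // hypothetical helper
	})); err != nil {
		setupLog.Error(err, "unable to register startup cleanup")
		os.Exit(1)
	}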

Contributor:

I agree with @israel-hdez about not having cleanup logic inside a reconcile loop, especially because IMO cleanup logic should only run once.
For example, to clean up resources related to the deprecated monitoring stack, I added the logic to a temporary Job in the odh-model-controller deployment, which I felt was a better way of deleting those resources. This can differ for this PR, though, as the number of NetworkPolicies and the namespaces they reside in are not known at startup time; the cleanup logic would have to determine that.
For my PR, the list of resources to be deleted is already known.

},
{
	NamespaceSelector: &metav1.LabelSelector{
		MatchLabels: map[string]string{"opendatahub.io/generated-namespace": "true"},
Contributor:

In what cases is this label added to the namespace? I thought that opendatahub.io/dashboard would be enough.


Can the user create the namespace themselves somehow? Or use an existing one to deploy the ISVC to?

Member:

@israel-hdez opendatahub.io/dashboard is only applied to the DS projects.
If we want traffic to flow a bit wider (operator, application, and notebook namespaces), it is better to have the latter two added.
Normally, the third one should cover the second one, but just in case the label does not exist (....)

Contributor Author:

@ReToCode I guess the user can do anything, but I am not sure this is a supported scenario. Those policies take into account only the scenarios defined in the godoc for this func:

// - application namespace, where OpenDataHub and component services are deployed
// - namespaces created by OpenDataHub where components live
// - traffic from other DataScienceProjects (namespaces created through dashboard)

I am not sure if the scenario where a user creates a namespace on their own is something we should support / cover here. @danielezonca could you confirm?
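
For context, the three traffic sources from the godoc map onto ingress peers roughly like this (a simplified Go sketch in the shape of the reconciler's structs; appNamespace and the kubernetes.io/metadata.name selector are illustrative assumptions, while the two opendatahub.io labels are the ones being confirmed in this thread):

	// One NetworkPolicyPeer per traffic source listed in the godoc.
	peers := []v1.NetworkPolicyPeer{
		// application namespace, where OpenDataHub and component services are deployed
		{NamespaceSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"kubernetes.io/metadata.name": appNamespace},
		}},
		// namespaces created by OpenDataHub where components live
		{NamespaceSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"opendatahub.io/generated-namespace": "true"},
		}},
		// other DataScienceProjects (namespaces created through the dashboard)
		{NamespaceSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"opendatahub.io/dashboard": "true"},
		}},
	}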

@skonto, Jan 19, 2024:

Thinking out loud: I guess there is no isolation between different notebooks either, and I guess a project admin could just misuse this, right? (I might be wrong.)


I am not sure if the scenario where a user creates a namespace on their own is something we should support / cover here. @danielezonca could you confirm?

I think it will be possible in the future, but the focus now is only "DS projects" (aka namespaces with the proper annotation).

@bartoszmajsak (Contributor Author) commented Jan 19, 2024

Tomorrow I will check if we are covered on the bad cases we found.

@israel-hdez Is there some way we could automate this?

@ReToCode left a comment:

I like the approach!

@@ -30,14 +31,19 @@ import (
)

const (
networkPolicyName = "allow-from-openshift-monitoring-ns"
monitoringNetworkPolicyName = "allow-from-openshift-monitoring-ns"
Member:

Suggested change:

	monitoringNetworkPolicyName = "allow-from-openshift-monitoring-ns"
	uwmNWPName = "allow-from-uwm-ns"
	openshiftIngressNWPName = "allow-from-openshift-ingress-ns"
	opendatahubOwnedNWPName = "allow-from-opendatahub-ns"
Contributor Author:

I can guess what uwm stands for, but I would like to know the reason for this change :)

It's important to note that this name is already in use, so renaming it would result in adding a new policy under the "uwm" name with the same rules.

@skonto, Jan 19, 2024:

The network policy defined later on has a namespace selector for "openshift-user-workload-monitoring", which means it applies to UWM, so people reading the current name (constant) will expect it to apply to the openshift-monitoring namespace. It is more about consistency, I think.

Member:

If I recall correctly, from the 2.5 release serving started to use the user-workload-monitoring namespace for shipping its metrics; pre-2.5 it was the standard openshift-monitoring namespace.
I also remember there is a ticket on the serving team to "clean up the old monitoring stack", which should remove resources created in old releases, including this "allow-from-openshift-monitoring-ns" NWP (the one created by ModelMesh, not by Knative).

@skonto, Jan 19, 2024:

Hi @zdtsw

If I recall correctly, from the 2.5 release serving started to use the user-workload-monitoring namespace for shipping its metrics

Not really; we haven't changed anything. We use cluster monitoring for our control-plane metrics and recommend UWM for user workloads.

I also remember there is a ticket on the serving team to "clean up the old monitoring stack", which should remove resources created in old releases, including this "allow-from-openshift-monitoring-ns" NWP (the one created by ModelMesh, not by Knative)

Have a pointer?

cc @ReToCode


I think @VedantMahabaleshwarkar is still working on that.
@VedantMahabaleshwarkar please have a look here to make sure we align the changes.

Contributor:

Right, he is still working on this.
This is @VedantMahabaleshwarkar's PR. @vaibhavjainwiz and I think using an init container to remove the old monitoring stack is not a good idea, so @VedantMahabaleshwarkar and @vaibhavjainwiz are discussing how to implement the removal of the old monitoring stack.

Contributor:

@Jooho @zdtsw @danielezonca My PR for https://issues.redhat.com/browse/RHOAIENG-293 removes the lingering resources related to the monitoring stack, i.e. the Prometheus operator deployments and operator pods. It does not touch NetworkPolicies, as the old monitoring stack itself did not create any NetworkPolicies.

Contributor:

The allow-from-openshift-monitoring-ns NetworkPolicy was added for KServe during the work to enable KServe metrics, so I have not touched that in my PR.

Contributor:

Commenting on the original suggested change: I don't think it is worth doing the rename of the NP. In existing releases, the NP is already created with the allow-from-openshift-monitoring-ns name. Since we want to fully move away from creating NPs, spending effort on a rename (i.e. migrating from allow-from-openshift-monitoring-ns to allow-from-uwm-ns) is probably not worth the time.

@@ -19,6 +19,7 @@ import (
"context"
"github.com/go-logr/logr"
kservev1beta1 "github.com/kserve/kserve/pkg/apis/serving/v1beta1"
"github.com/kserve/kserve/pkg/constants"
@skonto, Jan 19, 2024:

I would recommend separating imports for better readability, as a standard best practice. Also, sorting matters as it affects package initialization; usually you want third-party imports first, then local ones, etc. (this applies to other files too).

Contributor Author:

@skonto Yeah, I was having the same thoughts. In other projects I use gci, so it autoformats things:

  gci:
    sections:
      - standard
      - default
      - blank
      - prefix(github.com/opendatahub-io/opendatahub-operator)
      - blank
      - dot
    skip-generated: false
    custom-order: true

though that would be too much for the scope of this PR. @Jooho, if you folks agree, I can contribute stuff like that in another PR.

sorting affects package initialization

TIL :)
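
For illustration, this is the import grouping a gci config like the one shown above would enforce, assuming a hypothetical local module path (the grouping, not the specific packages, is the point):

	import (
		// standard library
		"context"
		"time"

		// third-party modules
		"github.com/go-logr/logr"
		kservev1beta1 "github.com/kserve/kserve/pkg/apis/serving/v1beta1"

		// blank imports (side effects only)
		_ "k8s.io/client-go/plugin/pkg/client/auth"

		// local module packages (illustrative path)
		"github.com/opendatahub-io/odh-model-controller/controllers/constants"

		// dot imports
		. "github.com/onsi/gomega"
	)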


Yes, it is more of a comment for a separate PR.

Member:

Yes, a separate PR would be good for build/chores.

// This set of policies allow traffic from:
// - application namespace, where OpenDataHub and component services are deployed
// - namespaces created by OpenDataHub where components live
// - traffic from other DataScienceProjects (namespaces created through dashboard)


Users can create workloads in other OCP projects which were not originally created from the Dashboard; it's a recent feature.

I can't tell how much it is used, but still, it's a scenario. Would this be a problem?


Contributor Author:

@bdattoma Then it boils down to the question of whether it is possible to know that namespace X, from which the traffic originates, is part of Open Data Hub. Is there any label that would help us answer this question? If not, then it would be blocked even with 2.5 and KServe only, due to e.g. the already existing allow-openshift-monitoring-ns policy.

Contributor Author:

Can anyone shed some light here? @danielezonca @zdtsw

@bdattoma, Jan 19, 2024:

I think pods created via the dashboard have the "opendatahub.io/dashboard": "true" label. I've checked with a workbench created from the UI but running in an OCP project (not created via the RHOAI Dashboard). It should be confirmed for pods from other features as well.

So, could we use such a pod selector in the NP?

	podSelector:
	  matchLabels:
	    "opendatahub.io/dashboard": "true"


Hm, I see now that ModelMesh model pods, for example, don't have the same label ^. They have something else, like modelmesh-service=modelmesh-serving.

@bdattoma, Jan 19, 2024:

then it is much more that a namespace can be used as a data science project(?)

Correct.

If having a generic solution here is complicated and not worth the effort, we could probably make it work for the proper DS Projects (i.e., those created from the UI) and release-note a workaround for the case where the user is using a generic OCP project. I don't like it much as an approach, but:

  • I'm unsure about the likelihood of facing this scenario (it's a new feature; I think it landed in 2.4 or 2.5). I didn't follow it, so I don't know the requirement behind it.
  • This is supposed to be a short-term fix in our code, right? If we are confident a long-term solution can arrive in the next release or two, we might be OK taking this risk.

Probably it's worth including PMs in the decision as well.


This PR might be revisited in the future if we can remove the NP entirely, so let's focus on "DS projects" for now.
For "advanced" use cases where the user creates a namespace explicitly, we can document the necessary NP.


Sounds good. Do we already know what the NP would be in the "advanced" use case, so we can provide it in the docs?


Contributor:

IMHO the most important use-case is the one from @cfchase. If I understood it correctly: assuming a namespace belongs to a tenant, services within the namespace should be able to reach each other transparently, regardless of whether or not they belong to the mesh.

Also, we should have NPs that let infrastructure components do their job. This would mean allowing pods in the AppNamespace to access pods in the namespaces where a KServe InferenceService exists (regardless of whether it is a DSProject or not). I think this PR already covers this, right?

To me, traffic flowing to/from other DSProjects is just a "plus" here. Assuming that the discussion here is about cross-namespace traffic between DSProjects and non-DSProjects, I'd agree that, for the time being, documenting it as a known issue with a workaround would be OK. However, this PR should still properly enable traffic between services in the same namespace where an InferenceService exists, even if the namespace in play is not a DSProject.

controllers/suite_test.go (resolved)
Comment on lines +123 to +126

	withMatchingNestedField("ObjectMeta.Name", Equal("allow-from-openshift-monitoring-ns")),
	withMatchingNestedField("ObjectMeta.Name", Equal("allow-openshift-ingress")),
	withMatchingNestedField("ObjectMeta.Name", Equal("allow-from-opendatahub-ns")),
),
Member:

Are there other fields we need to check? Perhaps do a deep-equal check?

Contributor Author:

Not sure how deep we want to go with contracts here; I just wanted to make sure that we have created those policies. To my knowledge, we are not able to check in the k8s testenv whether they actually work on a network level anyway, so my thinking was we can stop here. WDYT?

Contributor:

FWIW,

IMHO, checking only for presence is just fine in this case, because it is an almost static NP. The only thing that varies is the namespace. If the NP were different based on properties of something (for example, properties of the InferenceService), that would be worth additional assertions. But for these instances, I'm more than happy with just checking presence.

Member:

No strong preference
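
For reference, a composed matcher like the withMatchingNestedField used above can be a thin wrapper over Gomega's HaveField, which already resolves dotted field paths; this is one possible implementation sketch, not necessarily the PR's exact code:

	import (
		. "github.com/onsi/gomega"
		"github.com/onsi/gomega/types"
	)

	// withMatchingNestedField matches a nested struct field addressed by a
	// dotted path, e.g. withMatchingNestedField("ObjectMeta.Name", Equal("np-name")).
	func withMatchingNestedField(path string, matcher types.GomegaMatcher) types.GomegaMatcher {
		return HaveField(path, matcher)
	}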


ObjectMeta: metav1.ObjectMeta{
	Name:      openshiftIngressNetworkPolicyName,
	Namespace: isvc.Namespace,
	Labels:    map[string]string{"opendatahub.io/related-to": "RHOAIENG-1003"},
Member:

Can you add some (code) comments on this label?

@VedantMahabaleshwarkar (Contributor):

@bartoszmajsak I just merged #148, which fixed an issue on the main branch. Please make sure to rebase this PR to include the changes before merging.

@terrytangyuan mentioned this pull request Jan 19, 2024
@Jooho (Contributor) commented Jan 19, 2024

/approve
/lgtm

openshift-ci bot commented Jan 19, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: bartoszmajsak, Jooho

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@terrytangyuan terrytangyuan merged commit ba123e5 into opendatahub-io:main Jan 19, 2024
4 of 5 checks passed
terrytangyuan pushed a commit to terrytangyuan/odh-model-controller that referenced this pull request Jan 19, 2024
…ndatahub-io#145)

* chore: renames variables to reflect actual objects

most of them are called existing/desiredPod even though they are of different types.

Most likely a copy-paste mistake

* chore: swaps ginkgo to v2 across the tests

* fix: adds network policies to unblock traffic between components

This solution is based on an initially proposed [script](https://github.com/bartoszmajsak/issue-work-logs/tree/main/RHOAIENG-1003), but with slightly modified namespace selectors.

Additional changes:

- bumps ginkgo to v2 across the tests
- provides composed matcher to easily inspect nested structs
- cleans up test setup for KServe
- renames variables in reconcilers to reflect actual objects used

Fixes https://issues.redhat.com/browse/RHOAIENG-1955

* fix(deps): pins ginkgo to version compatible with go1.19

* chore: moves assertion to Eventually check

and aligns timeouts across the calls

* chore: fixes wording in godoc

* chore: uses KServeNamespace var instead of custom func+struct field

* chore: simplifies name generation

* Revert "chore: swaps ginkgo to v2 across the tests"

This reverts commit d5078aa.
Jooho added a commit that referenced this pull request Jan 19, 2024
[cherry-pick] fix: adds network policies to unblock traffic between components  (#145)
Jooho added a commit to red-hat-data-services/odh-model-controller that referenced this pull request Jan 19, 2024
[cherry-pick] fix: adds network policies to unblock traffic between components  (opendatahub-io#145)
@bartoszmajsak bartoszmajsak deleted the unblock_network_communication branch January 19, 2024 23:10