
NFTopology controller doc #428

Open · wants to merge 5 commits into main

Conversation

@s3wong commented Nov 1, 2023

No description provided.

- Approver:

## Description
NFTopology controller (NTC) takes a network topology centric input and kickstarts the Nephio automation process via generating a PackageVariantSet (PVS) custom resource, and as the associated packages are being deployed, NTC continuously updates statuses to reflect state of deployment of these packages. Primarily, NTC serves as example application to utilize Nephio primitives for network function automations.

Member:
The NFTopology controller (NTC) takes a network topology centric input and kickstarts the Nephio automation process by generating a PackageVariantSet (PVS) custom resource. As the associated packages are being deployed, NTC continuously updates their status fields to reflect the state of deployment of these packages. Primarily, NTC serves as example application that utilizes Nephio primitives for network function automations.

Author:
Changes made. Thanks


![NFTopology High Level Functional Diagram](./img/nftopology-functional.jpg)

NTC contains two controllers, one watches the NFTopology custom resource, and reconcile that intent via constructing a number of PVSs; the other controller watches PackageRevision resources in the management cluster to determine what NF package has been deployed, and updates the NFTopology status accordingly.

@liamfallon (Member), Nov 1, 2023:
The NTC contains two controllers. The first controller watches the NFTopology custom resource and reconciles that intent by constructing a number of PVSs. The second controller watches PackageRevision resources in the management cluster to determine what NF packages have been deployed and updates the NFTopology status accordingly.

Author:
Comments incorporated. Thanks
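
To make the split between the two controllers concrete, here is a minimal controller-runtime sketch of how they could be registered. The import paths and Go type names for NFTopology and PackageRevision are assumptions for illustration only; the actual NTC code may be organized differently.

```go
// Minimal sketch (not the actual NTC code) of wiring up the two reconcilers
// with controller-runtime. Import paths for the Nephio and Porch APIs are
// assumptions; the CRD types would also need to be added to the manager's
// Scheme, which is omitted here for brevity.
package main

import (
	"context"

	porchv1alpha1 "github.com/GoogleContainerTools/kpt/porch/api/porch/v1alpha1" // assumed import path
	nephiov1alpha1 "github.com/nephio-project/api/nf_topology/v1alpha1"          // assumed import path
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// NFTopologyReconciler reconciles NFTopology intent by constructing PVSs.
type NFTopologyReconciler struct {
	client.Client
}

func (r *NFTopologyReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Fetch the NFTopology and create/update one PackageVariantSet per
	// NFInstance (details omitted).
	return ctrl.Result{}, nil
}

// PackageRevisionReconciler maps deployed packages back onto NFTopology status.
type PackageRevisionReconciler struct {
	client.Client
}

func (r *PackageRevisionReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Look up the PackageRevision, find its NFTopology via the
	// nf-deployment-name label, and update status.NFInstances (details omitted).
	return ctrl.Result{}, nil
}

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		panic(err)
	}
	if err := ctrl.NewControllerManagedBy(mgr).
		For(&nephiov1alpha1.NFTopology{}).
		Complete(&NFTopologyReconciler{Client: mgr.GetClient()}); err != nil {
		panic(err)
	}
	if err := ctrl.NewControllerManagedBy(mgr).
		For(&porchv1alpha1.PackageRevision{}).
		Complete(&PackageRevisionReconciler{Client: mgr.GetClient()}); err != nil {
		panic(err)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```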

}

// PackageRevisionReference is a temporary replica of PackageRevisionReference used for the
// ONE Summit

Member:
Perhaps we want to remove this reference to the ONE summit?

Author:
Removed. Thanks


![NFTopology CRD](./img/nftopology-crd.jpg)

Note that the NetworkInstance field is static, i.e., user defined a fixed NetworkInstance resource to be linked to a template, whereas the actual NF instance is dynamic --- that is, the actuation of the instance of the NF is based off of a cluster matching some labels, but the NetworkInstace this NF specification will be attaching to is statically configured. Note that each NFInstance would only create one instance per each cluster matching a label, and that the NFNetworkInstance does **NOT** define the network, it is merely

Member:
Suggested change:
Note that the NetworkInstance field is static, i.e., the user defines a fixed NetworkInstance resource to be linked to a template, whereas the actual NF instance is dynamic --- that is, the actuation of the instance of the NF is based on a cluster matching some labels, but the NetworkInstance that this NF specification will be attaching to is statically configured. Note that each NFInstance would only create one instance per each cluster matching a label, and that the NFNetworkInstance does **NOT** define the network, it is merely

Author:
Comments incorporated. Thanks

a placeholder to denote a connectivity between an instance's attachment point to another instance's attachment point

Member:
Suggested change:
a placeholder to denote connectivity between an instance's attachment point and another instance's attachment point

Author:
Comments incorporated. Thanks
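
To make the static/dynamic split concrete, the following Go sketch shows one plausible shape for the NFTopology spec as described above. The type and field names are illustrative assumptions and may not match the actual CRD definition.

```go
// Illustrative sketch of the NFTopology spec shape described above; the type
// and field names are assumptions for explanation and may not match the
// actual CRD definition.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

type NFTopologySpec struct {
	// Each NFInstance yields at most one NF per cluster that matches its
	// cluster selector.
	NFInstances []NFInstance `json:"nfInstances"`
}

type NFInstance struct {
	Name string `json:"name"`
	// Dynamic part: the NF is actuated on every cluster matching these labels.
	ClusterSelector metav1.LabelSelector `json:"clusterSelector"`
	NFTemplate      NFTemplate           `json:"nfTemplate"`
}

type NFTemplate struct {
	NFType string `json:"nfType"`
	// Capacity is carried through to the NF package (see the capacity
	// discussion later in this document); fields are illustrative.
	Capacity Capacity `json:"capacity,omitempty"`
	// Static part: each attachment names a fixed NetworkInstance.
	NFAttachments []NFAttachment `json:"nfAttachments,omitempty"`
}

type Capacity struct {
	MaxUplinkThroughput   string `json:"maxUplinkThroughput,omitempty"`
	MaxDownlinkThroughput string `json:"maxDownlinkThroughput,omitempty"`
}

// NFAttachment links an interface of the NF to a statically configured
// NetworkInstance; it does not define the network itself, it only denotes
// connectivity between attachment points.
type NFAttachment struct {
	Name                string `json:"name"`
	NetworkInstanceName string `json:"networkInstanceName"`
}
```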


## NFTopology to PackageVariantSet CR Mapping
NTC kickstarts the hydration process via applying the **PackageVariantSet** (PVS) CR to management cluster. The following table denotes which fields from NFTopology (and its associated struct) will be used to populate the PVS CR:

Member:
Suggested change:
NTC kickstarts the hydration process by applying the **PackageVariantSet** (PVS) CR to management cluster. The following table denotes which fields from NFTopology (and its associated struct) will be used to populate the PVS CR:

Author:
Comments incorporated. Thanks
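
As a rough illustration of that mapping, the sketch below builds one PVS per NFInstance as an unstructured object. The field paths used (spec.upstream.*, spec.targets[].objectSelector, spec.targets[].template.labels) and the WorkloadCluster selector are assumptions about the PackageVariantSet v1alpha2 schema, not a statement of the exact mapping table.

```go
// Sketch of how NTC might populate a PackageVariantSet from one NFInstance.
// The PVS field paths used here are assumptions based on the v1alpha2 schema
// and may not match the exact mapping table in this document.
package ntc

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

func buildPVS(topoName, topoNamespace, instName string,
	upstreamRepo, upstreamPackage, upstreamRevision string,
	clusterLabels map[string]interface{}) (*unstructured.Unstructured, error) {

	pvs := &unstructured.Unstructured{Object: map[string]interface{}{}}
	pvs.SetAPIVersion("config.porch.kpt.dev/v1alpha2")
	pvs.SetKind("PackageVariantSet")
	pvs.SetName(topoName + "-" + instName)
	pvs.SetNamespace(topoNamespace)

	spec := map[string]interface{}{
		// Upstream: the NF blueprint package named in the NFTemplate.
		"upstream": map[string]interface{}{
			"repo":     upstreamRepo,
			"package":  upstreamPackage,
			"revision": upstreamRevision,
		},
		// Targets: one downstream package per cluster matching the
		// NFInstance's cluster selector labels.
		"targets": []interface{}{
			map[string]interface{}{
				"objectSelector": map[string]interface{}{
					"apiVersion":  "infra.nephio.org/v1alpha1", // assumed
					"kind":        "WorkloadCluster",           // assumed
					"matchLabels": clusterLabels,
				},
				"template": map[string]interface{}{
					// Label used later to associate the fanned-out packages
					// with this NFTopology (see "Tracking Packages").
					"labels": map[string]interface{}{
						"nf-deployment-name": topoName,
					},
				},
			},
		},
	}
	if err := unstructured.SetNestedMap(pvs.Object, spec, "spec"); err != nil {
		return nil, err
	}
	return pvs, nil
}
```

NTC would create one such object per NFInstance and apply it to the management cluster.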

## Tracking Packages
NTC will embed a label 'nf-deployment-name', which is set to the NFTopology CR's own name; the PackageVariantSet and PackageVariant controllers will propagate this label to all the PackageRevision resources associated with the fanned-out packages.

NTC watches over all PackageRevision resources in the management cluster, and maps the NFTopology intent to the number of deployed NF resources via tracking corresponding PackageRevision resources. As each PackageRevision resource gets to *PUBLISHED* state, NTC would update the **NFInstances** field of its status to reflect on deployed NF package.

Member:
Suggested change:
NTC watches over all PackageRevision resources in the management cluster, and maps the NFTopology intent to the number of deployed NF resources by tracking the corresponding PackageRevision resources. As each PackageRevision resource gets to *PUBLISHED* state, NTC will update the **NFInstances** field of its status to reflect on deployed NF package.

Author:
Comments incorporated. Thanks
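
A minimal sketch of that label-based lookup, assuming a controller-runtime client and the porch PackageRevision type (import path assumed):

```go
// Sketch of how NTC could look up the PackageRevisions that belong to one
// NFTopology via the propagated nf-deployment-name label.
package ntc

import (
	"context"

	porchv1alpha1 "github.com/GoogleContainerTools/kpt/porch/api/porch/v1alpha1" // assumed import path
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// packageRevisionsFor returns every PackageRevision in the management cluster
// that carries the nf-deployment-name label of the given NFTopology.
func packageRevisionsFor(ctx context.Context, c client.Client, topoName string) ([]porchv1alpha1.PackageRevision, error) {
	var prList porchv1alpha1.PackageRevisionList
	if err := c.List(ctx, &prList,
		client.MatchingLabels{"nf-deployment-name": topoName}); err != nil {
		return nil, err
	}
	return prList.Items, nil
}
```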



NTC will also extract the number of pending conditions from each of these PRs and display them to users who want to continuously examine the lifecycle of the packages and see which conditions are gating a particular package from being deployed. NTC essentially tracks the deployment status of the packages for all NF instances derived from the NFTopology intent; however, NTC does **NOT** track the progress of the deployment once the package is pushed to the workload cluster.

Member:
This is nice!
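
The bookkeeping described above could be summarized per package roughly as follows. The types here are purely illustrative (not the porch or Nephio APIs); the "Published" string reflects the *PUBLISHED* lifecycle state mentioned above, and a condition is treated as pending while its status is not "True".

```go
// Illustrative summary of one tracked package for the NFTopology status;
// local types, not the actual porch or Nephio Go APIs.
package ntc

// condition mirrors the minimal shape of a package condition NTC cares about.
type condition struct {
	Type   string
	Status string // "True", "False" or "Unknown"
}

// deploymentProgress is what would be surfaced in the NFTopology status.
type deploymentProgress struct {
	PackageName       string
	Published         bool
	PendingConditions int
}

// summarize reduces one tracked package to whether it is PUBLISHED and how
// many conditions are still gating it.
func summarize(pkgName, lifecycle string, conds []condition) deploymentProgress {
	pending := 0
	for _, c := range conds {
		if c.Status != "True" {
			pending++
		}
	}
	return deploymentProgress{
		PackageName:       pkgName,
		Published:         lifecycle == "Published", // the *PUBLISHED* state referred to above
		PendingConditions: pending,
	}
}
```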

## Constructing NFInstances of NFTopology Status
NTC only tracks those NF instances that are successfully deployed, i.e., the corresponding cloned and hydrated / specialized package is merged to the **main** branch of the corresponding deployment repo for that target workload cluster. NTC first creates the initial *NFDeployTopology* CR when the first package is merged, then updates the CR as each subsequent package is merged.

As each NF package got merged, the existing NFDeployedInstance's Connectivities field would change. This relationship is keyed off of the NetworkInstance resource; NetworkInstance specifies for each cluster matching the input label, a corresponding NF specified by NFTemplate would be deployed. For each NFAttachment defined on this NFInstance, there is a NetworkInstance defined --- and for any other NFInstance that has NFAttachment

Member:
Suggested change:
As each NF package gets merged, the existing NFDeployedInstance's Connectivities field will change. This relationship is keyed off of the NetworkInstance resource; NetworkInstance specifies for each cluster matching the input label, a corresponding NF specified by NFTemplate will be deployed. For each NFAttachment defined on this NFInstance, there is a NetworkInstance defined --- and for any other NFInstance that has NFAttachment

Member:
Thanks for the document @s3wong , it explains the NTC very well.

Author:
Thanks for the thorough review!
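
As a purely illustrative sketch of the connectivity pairing described above (using local types rather than the real NFDeployment API), two deployed NF instances are treated as connected when their attachments reference the same NetworkInstance:

```go
// Illustrative connectivity bookkeeping: instances that attach to the same
// NetworkInstance are recorded as connected to each other.
package ntc

type deployedNF struct {
	ID               string
	NetworkInstances []string // NetworkInstances referenced by its NFAttachments
}

// connectivities returns, for each deployed NF, the IDs of the other deployed
// NFs that share at least one NetworkInstance with it.
func connectivities(instances []deployedNF) map[string][]string {
	// Group instance IDs by the NetworkInstance they attach to.
	byNetwork := map[string][]string{}
	for _, inst := range instances {
		for _, ni := range inst.NetworkInstances {
			byNetwork[ni] = append(byNetwork[ni], inst.ID)
		}
	}
	// Pair up instances that meet on the same NetworkInstance, deduplicating
	// peers that share more than one network.
	seen := map[string]map[string]bool{}
	out := map[string][]string{}
	for _, peers := range byNetwork {
		for _, a := range peers {
			for _, b := range peers {
				if a == b {
					continue
				}
				if seen[a] == nil {
					seen[a] = map[string]bool{}
				}
				if !seen[a][b] {
					seen[a][b] = true
					out[a] = append(out[a], b)
				}
			}
		}
	}
	return out
}
```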

@efiacor (Contributor) left a comment:

Nice document. Well explained


The NFTopology CR has a capacity input which needs to be passed through the PVS to the actual NF packages. This can be accomplished with the kpt-provided **search-replace** function, through which the capacity input from the NFTopology CR can be passed to the cloned NF package, which also contains a capacity.yaml.

NTC will also need to associate all the to-be-fan'ed-out packages with this deployment. In PVS, this is allowed as a set of labels to be passed to PVS controller, which will then propagate those labels to PackageVariant (PV) CRs, and subsequently the PackageRevision (PR) CRs. NTC will then watch for all the PR resources created, and associated them with which NFTopology specification via these labels.

Contributor:
Suggested change:
NTC will also need to associate all the to-be-fan'ed-out packages with this deployment. In PVS, this is allowed as a set of labels to be passed to PVS controller, which will then propagate those labels to PackageVariant (PV) CRs, and subsequently the PackageRevision (PR) CRs. NTC will then watch for all the PR resources created, and associate them with the relevant NFTopology specification via these labels.

Author:
Comments incorporated. Thanks
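
For illustration, the function config handed to search-replace could look like the following. The by-path/put-value keys come from the upstream search-replace function's documented ConfigMap keys, while the specific field path is only an assumption about the capacity.yaml layout.

```go
// Sketch of a function config for kpt's search-replace function that could
// carry a capacity value from the NFTopology CR into the cloned package's
// capacity.yaml.
package ntc

// searchReplaceConfig returns the ConfigMap data that would be handed to the
// search-replace function for one capacity field.
func searchReplaceConfig(maxUplinkThroughput string) map[string]string {
	return map[string]string{
		"by-path":   "spec.maxUplinkThroughput", // assumed path inside capacity.yaml
		"put-value": maxUplinkThroughput,        // value taken from the NFTopology capacity input
	}
}
```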

@liamfallon (Member):
/approve

@liamfallon (Member):
/lgtm

@liamfallon (Member):
Rebase needed on this PR

@nephio-prow bot removed the lgtm label Dec 13, 2023

@liamfallon (Member):
/approve
/lgtm

@nephio-prow bot added the lgtm label Dec 19, 2023
nephio-prow bot commented Dec 19, 2023

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: liamfallon
Once this PR has been reviewed and has the lgtm label, please ask for approval from s3wong by writing /assign @s3wong in a comment. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@efiacor (Contributor) commented Jan 15, 2024:

Is this for R2 or 3 @s3wong ?

@liamfallon (Member):
@s3wong just checking, do you want to merge this PR or should it move into "experimental"?

@tliron commented May 22, 2024:

This PR is outdated. Let's keep it open until we make clear plans for the future of the topology controller.

@tliron removed the lgtm label May 22, 2024