
Documentation Openstack #70

Merged: 6 commits into nephio-project:main from OpenstackDocumentation on Nov 15, 2023

Conversation

lapentad
Contributor

Tested installing Nephio on multiple OpenStack clusters using the test-infra Ansible script.
Used this flow to sync the clusters and deploy packages via GitHub.

@liamfallon
Member

/approve

Member

@electrocucaracha electrocucaracha left a comment

Glad that you started documenting this; I consider that this scenario must be covered. Hopefully we can simplify the process for better adoption.

install-guide/openstack.md
Comment on lines +19 to +94
```yaml
- name: Create PersistentVolume
  kubernetes.core.k8s:
    context: "{{ k8s.context }}"
    state: present
    definition:
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: data-gitea-postgresql-0
      spec:
        capacity:
          storage: 10Gi
        volumeMode: Filesystem
        accessModes:
          - ReadWriteOnce
        storageClassName: standard
        hostPath:
          path: /tmp/
    namespace: "{{ gitea.k8s.namespace }}"

- name: Create PersistentVolumeClaim
  kubernetes.core.k8s:
    context: "{{ k8s.context }}"
    state: present
    definition:
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: data-gitea-postgresql-0
      spec:
        storageClassName: standard
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
    namespace: "{{ gitea.k8s.namespace }}"

- name: Create PersistentVolume
  kubernetes.core.k8s:
    context: "{{ k8s.context }}"
    state: present
    definition:
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: data-gitea-0
      spec:
        capacity:
          storage: 10Gi
        volumeMode: Filesystem
        accessModes:
          - ReadWriteOnce
        storageClassName: standard
        hostPath:
          path: /tmp/
    namespace: "{{ gitea.k8s.namespace }}"

- name: Create PersistentVolumeClaim
  kubernetes.core.k8s:
    context: "{{ k8s.context }}"
    state: present
    definition:
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: data-gitea-0
      spec:
        storageClassName: standard
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
    namespace: "{{ gitea.k8s.namespace }}"
```
Member

Maybe those resources can be added to the gitea package to simplify the process.

Comment on lines +95 to +97
2. Change the context value in all of the *test-infra\e2e\provision\playbooks* YAML files to your Kubernetes context (see `kubectl config get-contexts`):

context: kubernetes-admin@cluster.local
Member

This change needs to be done in the test-infra project. There are some places using the `k8s.context` var, like this one, but there are others where the context is hardcoded, like here.

Contributor Author

Yes, I manually changed all of them. I know this is not ideal, but I wanted to see whether the Ansible script would also work in a different environment.

Member

I'll submit the PR for that and expose the value in a variable.

Member

@lapentad I have created a PR to simplify this process; once it's available, you can override some default values with:

`ANSIBLE_CMD_EXTRA_VAR_LIST="k8s.context=kubernetes-admin@cluster.local,kind.enabled=false" ./install_sandbox.sh`

install-guide/openstack.md
Comment on lines 105 to 110
4. Change the check specification values in
*test-infra\e2e\provision\playbooks\roles\bootstrap\defaults\main.yml*

host_min_vcpu: 4
host_min_cpu_ram: 8

Member

I think we can improve the way default values are overridden.

Maybe something like:

# iterate over environment variables named NEPHIO_* (bash prefix expansion)
for nephio_var in "${!NEPHIO_@}"; do
    var_name=${nephio_var#NEPHIO_}   # strip the NEPHIO_ prefix
    [[ -z ${!nephio_var:-} ]] || ansible_cmd+="--extra-vars=\"${var_name,,}=${!nephio_var}\" "
done

So running `NEPHIO_HOST_MIN_VCPU=4 NEPHIO_HOST_MIN_CPU_RAM=8 ./install_sandbox.sh` could override the Ansible default values without having to edit any file.

Another way is to use the Ansible variable precedence rules and have a group_vars/host_vars/set_facts file, similar to what we have in the molecule files.

*test-infra\e2e\provision\install_sandbox.sh*

## Manual Installation of the management cluster using kpt
TBD (manual install of kpt, porch, configsync, nephio-webui, capi, metallb)
Member

Ideally the management cluster should be installed with the Install Ansible role

## Manual Installation of the management cluster using kpt
TBD (manual install of kpt, porch, configsync, nephio-webui, capi, metallb)

## Manual Installation of the Edge cluster using kpt
Member

I think that edge clusters should be provisioned by Cluster API, which means that we need to create a new package in the existing catalog, like this one, or something new here.

Contributor Author

@lapentad lapentad Oct 27, 2023

Hi Victor, I wanted to address the use case where the user doesn't have access to the provisioning process. For instance, I do not have access to the OpenStack cluster provisioning CLI within my organization; I can, though, request a cluster through an internal process.

Member

@electrocucaracha electrocucaracha Oct 27, 2023

Got it; maybe we need to specify that as a requirement in the prerequisites section or in this document.

/cc @johnbelamaric

lapentad and others added 3 commits October 27, 2023 09:37
typo "OpenStack"

Co-authored-by: Victor Morales <chipahuac@hotmail.com>
indentation

Co-authored-by: Victor Morales <chipahuac@hotmail.com>
Override spec values in the script launch
@liamfallon
Member

I think this could be pulled in now

@electrocucaracha
Member

/lgtm

@nephio-prow nephio-prow bot added the lgtm label Nov 1, 2023
@nephio-prow nephio-prow bot removed the lgtm label Nov 14, 2023
@liamfallon
Member

/lgtm

@nephio-prow nephio-prow bot added the lgtm label Nov 14, 2023
@efiacor
Contributor

efiacor commented Nov 14, 2023

/lgtm

@johnbelamaric
Member

/approve
/lgtm

@nephio-prow nephio-prow bot removed the lgtm label Nov 15, 2023
@liamfallon
Member

/approve
/lgtm

@nephio-prow nephio-prow bot added the lgtm label Nov 15, 2023
Contributor

nephio-prow bot commented Nov 15, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: johnbelamaric, liamfallon

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@nephio-prow nephio-prow bot merged commit d2be6a8 into nephio-project:main Nov 15, 2023
3 checks passed
@liamfallon liamfallon deleted the OpenstackDocumentation branch January 30, 2024 14:57