
make OpenShift compatible #292

Merged 2 commits on Aug 28, 2023
Conversation

GitarPlayer
Contributor

Q                A
Bug fix?         no
New feature?     yes
API breaks?      no
Deprecations?    no
Related tickets  mentioned in #250
License          Apache 2.0

What's in this PR?

  1. I updated the kubebuilder RBAC annotations so the operator works on OpenShift.
  2. I added a new Helm variable so the operator can be installed with a specified RunAsUser.
  3. I added a nificlusters.nifi.konpyutaika.com sample for OpenShift.

Why?

As it stands, NiFiKop does not run on OpenShift without custom day-2 modifications.

Additional context

I verified the install on an OpenShift cluster:

# Tried on a clean AKS OpenShift cluster
oc version  
Client Version: 4.12.9
Kustomize Version: v4.5.7
Server Version: 4.10.54
Kubernetes Version: v1.23.12+8a6bfe4
# Create namespaces for Zookeeper and NiFi
oc create ns zookeeper
oc create ns nifi

# Install the CustomResourceDefinitions and cert-manager itself
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.7.2/cert-manager.yaml

# Get UID range for the operator install from the nifi namespace
uid=$(kubectl get namespace nifi -o=jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.supplemental-groups}' | sed 's/\/10000$//' | tr -d '[:space:]')
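The annotation value has the form `<range-start>/<range-size>`, and the pipeline above keeps only the range start. A minimal local sketch of that extraction (the sample annotation value below is assumed, not read from a cluster):

```shell
# Assumed sample value of the openshift.io/sa.scc.supplemental-groups annotation
ann="1000690000/10000"
# Drop the trailing "/10000" range size and any whitespace, keeping the range start
uid=$(printf '%s' "$ann" | sed 's/\/10000$//' | tr -d '[:space:]')
echo "$uid"
```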

# Install Nifi operator using helm
helm install nifikop \
    nifikop \
    --namespace=nifi \
    --version 1.1.1 \
    --set image.tag=v1.1.1-release \
    --set resources.requests.memory=256Mi \
    --set resources.requests.cpu=250m \
    --set resources.limits.memory=256Mi \
    --set resources.limits.cpu=250m \
    --set namespaces={"nifi"} \
    --set runAsUser=$uid

# Get UID range for the Zookeeper operator from the zookeeper namespace
zookeper_uid=$(kubectl get namespace zookeeper -o=jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.supplemental-groups}' | sed 's/\/10000$//' | tr -d '[:space:]')

# Get the default storage class for the cluster
sc=$(kubectl get storageclass -o=jsonpath='{range .items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")]}{.metadata.name}{end}')

# Install Zookeeper using helm
helm install zookeeper bitnami/zookeeper \
    --set resources.requests.memory=256Mi \
    --set resources.requests.cpu=250m \
    --set resources.limits.memory=256Mi \
    --set resources.limits.cpu=250m \
    --set global.storageClass=$sc \
    --set networkPolicy.enabled=true \
    --set replicaCount=3 \
    --set containerSecurityContext.runAsUser=$zookeper_uid \
    --set podSecurityContext.fsGroup=$zookeper_uid \
    --namespace zookeeper

# Use the UID for the NiFi operator to set the fsGroup and runAsUser
sed -i "s/1000690000/$uid/g" config/samples/openshift

# Use the default storage class for the cluster to set the persistent volume claim
sed -i "s/standard/$sc/g" config/samples/openshift
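The two substitutions above can be exercised locally against a hypothetical excerpt standing in for the sample manifest (the field names below are illustrative; only the placeholder values `1000690000` and `standard` come from this PR, and the replacement values are assumed):

```shell
# Hypothetical excerpt standing in for config/samples/openshift
cat > /tmp/openshift-sample.yaml <<'EOF'
runAsUser: 1000690000
storageClassName: standard
EOF

uid=1000700000        # assumed UID-range start from the nifi namespace
sc=managed-premium    # assumed default storage class
sed -i "s/1000690000/$uid/g" /tmp/openshift-sample.yaml
sed -i "s/standard/$sc/g" /tmp/openshift-sample.yaml
cat /tmp/openshift-sample.yaml
```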

# Apply the configuration for the NiFi operator
oc apply -f config/samples/openshift -n nifi

# Expose the NiFi service as a route
oc expose svc -n nifi simplenifi-headless

# Get the route for the NiFi service
route=$(kubectl get route simplenifi-headless -n nifi -o=jsonpath='{.spec.host}')

# Open the NiFi UI in Firefox using the route
firefox http://$route/nifi

Checklist

  • Implementation tested
  • User guide and development docs updated (if needed)
  • Append changelog with changes

To Do

  • User guide and development docs updated

@juldrixx juldrixx merged commit f5f38ca into konpyutaika:master Aug 28, 2023
1 check passed
@indiealexh

Thank you for this! Appreciate the hard work

@mh013370
Member

Sorry for the headache, @GitarPlayer -- thanks for contributing this feature!

Thanks, @juldrixx for merging :)
