# Add multi arch Docker images #116

Merged · 19 commits · Jun 26, 2020
47 changes: 35 additions & 12 deletions .github/workflows/docker.yml
```diff
@@ -2,8 +2,8 @@ name: Docker

 on:
   push:
-    branches:
-    - master
+    branches:
+      - master
   release:
     types: [created]

@@ -19,17 +19,40 @@ jobs:

       - name: Set Docker Tag
         if: ${{ github.event_name == 'push' }}
-        run: echo ::set-env name=DOCKER_TAG::${GITHUB_SHA::8}
+        run: |
+          echo ::set-env name=DOCKER_TAG::${GITHUB_SHA::8}

       - name: Set Docker Tag
         if: ${{ github.event_name == 'release' && github.event.action == 'created' }}
-        run: echo ::set-env name=DOCKER_TAG::${GITHUB_REF/refs\/tags\//}
+        run: |
+          echo ::set-env name=DOCKER_TAG::${GITHUB_REF/refs\/tags\//}

-      - name: Build and Push Docker Image
-        uses: docker/build-push-action@v1
-        with:
-          username: ${{ secrets.DOCKER_USERNAME }}
-          password: ${{ secrets.DOCKER_PASSWORD }}
-          repository: kubenav/kubenav
-          dockerfile: cmd/server/Dockerfile
-          tags: ${{ env.DOCKER_TAG }}
+      - name: Docker Buildx (prepare)
+        id: prepare
+        run: |
+          DOCKER_IMAGE=kubenav/kubenav
+          DOCKER_PLATFORMS=linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64
+          echo ::set-output name=buildx_args::--platform ${DOCKER_PLATFORMS} --tag ${DOCKER_IMAGE}:${{ env.DOCKER_TAG }} --file cmd/server/Dockerfile .
+
+      - name: Set up Docker Buildx
+        uses: crazy-max/ghaction-docker-buildx@v3
+
+      - name: Docker Buildx (build)
+        run: |
+          docker buildx build --output "type=image,push=false" ${{ steps.prepare.outputs.buildx_args }}
+
+      - name: Docker Login
+        env:
+          DOCKER_USERNAME: ${{ secrets.DOCKER_USERNAME }}
+          DOCKER_PASSWORD: ${{ secrets.DOCKER_PASSWORD }}
+        run: |
+          echo "$DOCKER_PASSWORD" | docker login --username "$DOCKER_USERNAME" --password-stdin
+
+      - name: Docker Buildx (push)
+        run: |
+          docker buildx build --output "type=image,push=true" ${{ steps.prepare.outputs.buildx_args }}
+
+      - name: Clear
+        if: ${{ always() }}
+        run: |
+          rm -f ${HOME}/.docker/config.json
```
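
The same multi-arch build can be reproduced locally. A sketch, assuming Docker with Buildx is installed; the `tonistiigi/binfmt` image registers the QEMU handlers that emulated platforms need, and the builder name is arbitrary:

```sh
# Register QEMU binfmt handlers for non-native architectures.
docker run --privileged --rm tonistiigi/binfmt --install all

# Create and select a builder instance that can do multi-platform builds.
docker buildx create --name kubenav-builder --use

# Build the same platform set as the workflow, without pushing.
docker buildx build \
  --platform linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64 \
  --file cmd/server/Dockerfile \
  --tag kubenav/kubenav:dev \
  --output "type=image,push=false" .
```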
13 changes: 2 additions & 11 deletions README.md
```diff
@@ -41,18 +41,9 @@ On mobile you can add your Cluster via Kubeconfig file or via your prefered Cloud

 On desktop kubenav will automatically load all configured clusters from the default Kubeconfig file or the `KUBECONFIG` environment variable. If you want to use another Kubeconfig file, you can start kubenav with the `-kubeconfig` flag. You can also use the `-kubeconfig-include` and `-kubeconfig-exclude` flags to load Kubeconfig files from multiple locations by glob. The `-sync` flag can be used to write context changes back to your Kubeconfig file, so the context is also changed in your terminal.

-It is also possible to deploy kubenav to your Kubernetes cluster and use it via your browser. For the Kubernetes based deployment you can choose between the in cluster options or you can add your Kubeconfig file to the container. You can take a look at the [`utils/kubernetes`](./utils/kubernetes) folder, to deploy kubenav to your cluster, or you can run the following commands:
+> **Note:** kubenav is based on [Electron](https://www.electronjs.org) and [go-astilectron](https://github.com/asticode/go-astilectron), which are downloaded on the first start of the app. The first start can therefore take a bit longer on a slow internet connection.

-```sh
-kubectl apply -f https://raw.githubusercontent.com/kubenav/kubenav/master/utils/kubernetes/namespace.yaml
-kubectl apply -f https://raw.githubusercontent.com/kubenav/kubenav/master/utils/kubernetes/serviceaccount.yaml
-kubectl apply -f https://raw.githubusercontent.com/kubenav/kubenav/master/utils/kubernetes/clusterrole.yaml
-kubectl apply -f https://raw.githubusercontent.com/kubenav/kubenav/master/utils/kubernetes/clusterrolebinding.yaml
-kubectl apply -f https://raw.githubusercontent.com/kubenav/kubenav/master/utils/kubernetes/deployment.yaml
-kubectl apply -f https://raw.githubusercontent.com/kubenav/kubenav/master/utils/kubernetes/service.yaml
-```
-
-All available Docker images can be found at Docker Hub: [kubenav/kubenav](https://hub.docker.com/r/kubenav/kubenav)
+Similar to the [Kubernetes Dashboard](https://github.com/kubernetes/dashboard) it is also possible to deploy kubenav to your Kubernetes cluster. More information on the deployment of kubenav to Kubernetes can be found in the [`utils/kubernetes`](./utils/kubernetes) folder.

 ## Beta and Nightly Builds
```
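
The desktop flags mentioned in the new README text can be combined on one command line. A minimal sketch, assuming the desktop binary is invoked as `kubenav` and using placeholder glob patterns:

```sh
# Load Kubeconfig files from several locations by glob, skip backup
# files, and write context changes back (all paths are examples only).
kubenav \
  -kubeconfig-include "$HOME/.kube/configs/*.yaml" \
  -kubeconfig-exclude "$HOME/.kube/configs/*-backup.yaml" \
  -sync
```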
6 changes: 5 additions & 1 deletion cmd/server/Dockerfile
```diff
@@ -1,4 +1,4 @@
-FROM node:13 as build
+FROM --platform=linux/amd64 node:13 as build
 COPY . /kubenav
 WORKDIR /kubenav
 RUN npm install -g ionic
@@ -7,13 +7,17 @@ ENV REACT_APP_SERVER true
 RUN ionic build

 FROM golang:1.14.4-alpine3.12 as server
+ARG TARGETPLATFORM
+ARG BUILDPLATFORM
+RUN echo "Building on $BUILDPLATFORM, for $TARGETPLATFORM" > /log
 RUN apk update && apk add git make
 COPY . /kubenav
 WORKDIR /kubenav
 RUN make build-server

 FROM alpine:3.12.0
+RUN apk update && apk add --no-cache ca-certificates
 RUN mkdir /kubenav
 COPY --from=build /kubenav/build /kubenav/build
 COPY --from=server /kubenav/bin/server /kubenav
 WORKDIR /kubenav
```
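
`BUILDPLATFORM` and `TARGETPLATFORM` are build arguments that BuildKit fills in automatically once they are declared: the former names the platform the build runs on, the latter the platform being built for. Because the Node stage is pinned to `--platform=linux/amd64`, the frontend is built once natively, while the unpinned Go and Alpine stages run once per requested target platform. A sketch of a single-platform invocation (the tag is a placeholder):

```sh
# Build only the arm64 variant; in the server stage /log would then
# read something like "Building on linux/amd64, for linux/arm64".
docker buildx build \
  --platform linux/arm64 \
  --file cmd/server/Dockerfile \
  --tag kubenav/kubenav:arm64-test .
```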
84 changes: 84 additions & 0 deletions utils/kubernetes/README.md
# Kubernetes

To deploy kubenav to your Kubernetes cluster, you can simply run the following commands:

```sh
kubectl apply -f https://raw.githubusercontent.com/kubenav/kubenav/master/utils/kubernetes/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/kubenav/kubenav/master/utils/kubernetes/serviceaccount.yaml
kubectl apply -f https://raw.githubusercontent.com/kubenav/kubenav/master/utils/kubernetes/clusterrole.yaml
kubectl apply -f https://raw.githubusercontent.com/kubenav/kubenav/master/utils/kubernetes/clusterrolebinding.yaml
kubectl apply -f https://raw.githubusercontent.com/kubenav/kubenav/master/utils/kubernetes/deployment.yaml
kubectl apply -f https://raw.githubusercontent.com/kubenav/kubenav/master/utils/kubernetes/service.yaml
```

This deploys kubenav with the `-incluster` flag, which enables kubenav's in-cluster mode: only the cluster kubenav is running in is available via the dashboard. To access the dashboard you can create an Ingress (see the sketch after the next command) or port-forward the created service:

```sh
kubectl port-forward --namespace kubenav svc/kubenav 14122
```
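
The dashboard is then reachable at `http://localhost:14122`. If you prefer an Ingress over port-forwarding, a minimal sketch could look like the following (the hostname and the `networking.k8s.io/v1` API version are assumptions; adjust both to your cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubenav
  namespace: kubenav
spec:
  rules:
    # Replace the placeholder hostname with your own.
    - host: kubenav.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                # The kubenav service created above, listening on port 14122.
                name: kubenav
                port:
                  number: 14122
```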

It is also possible to use the kubenav server with multiple Kubernetes clusters. For this you have to create a [secret](./secret.yaml) containing your base64 encoded Kubeconfig file. Then you mount the Kubeconfig file into the container and set its path via the `-kubeconfig` flag, as in the deployment below (a `kubectl` sketch for creating the secret follows the manifest):

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubenav
  namespace: kubenav
  labels:
    app: kubenav
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubenav
  template:
    metadata:
      labels:
        app: kubenav
    spec:
      serviceAccountName: kubenav
      containers:
        - name: kubenav
          image: kubenav/kubenav:0742c773
          imagePullPolicy: IfNotPresent
          args:
            - -kubeconfig=/kubenav/kubeconfig/kubeconfig
          ports:
            - name: http
              containerPort: 14122
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /api/health
              port: http
          readinessProbe:
            httpGet:
              path: /api/health
              port: http
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
            requests:
              cpu: 50m
              memory: 64Mi
          # Mount the Kubeconfig file from the kubeconfig secret into the kubenav container.
          volumeMounts:
            - name: kubeconfig
              mountPath: '/kubenav/kubeconfig'
              readOnly: true
      # Define a volume for the kubeconfig secret containing your Kubeconfig file. This is
      # only required if you do not use the in-cluster option.
      volumes:
        - name: kubeconfig
          secret:
            secretName: kubeconfig
```
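
Instead of base64 encoding the Kubeconfig file by hand in [`secret.yaml`](./secret.yaml), the secret can also be created directly with `kubectl`. A sketch, assuming your Kubeconfig lives at `$HOME/.kube/config`:

```sh
# Creates a secret named "kubeconfig" with a single key "kubeconfig",
# matching the volume and mount path used in the deployment above.
kubectl create secret generic kubeconfig \
  --namespace kubenav \
  --from-file=kubeconfig=$HOME/.kube/config
```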

All available Docker images for kubenav can be found at Docker Hub: [kubenav/kubenav](https://hub.docker.com/r/kubenav/kubenav)
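
Since the images are multi-arch, the platforms behind a tag can be verified by inspecting its manifest list. A sketch using the image tag referenced above:

```sh
# Lists the per-architecture entries (amd64, arm/v6, arm/v7, arm64)
# published under the multi-arch tag.
docker buildx imagetools inspect kubenav/kubenav:0742c773
```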




2 changes: 1 addition & 1 deletion utils/kubernetes/deployment.yaml
```diff
@@ -19,7 +19,7 @@ spec:
       serviceAccountName: kubenav
       containers:
         - name: kubenav
-          image: kubenav/kubenav:8e93455a
+          image: kubenav/kubenav:0742c773
           imagePullPolicy: IfNotPresent
           args:
             # Use the in cluster configuration for kubenav. This allows to manage only the cluster where kubenav is
```