
Support compiling ARMv7l Architecture #21094

Closed
xunholy opened this issue Feb 13, 2020 · 58 comments
Labels
area/environments kind/docs lifecycle/staleproof Indicates a PR or issue has been deemed to be immune from becoming stale and/or automatically closed

Comments

@xunholy

xunholy commented Feb 13, 2020

Given that my architecture is armv7l, and Kubernetes is compatible with armv7l, I would like to be able to compile Istio for this platform.

I've attempted to compile using the current build scripts; however, they target 64-bit rather than my 32-bit architecture, which is incompatible. I also ran into issues where I had to make several of the build tools available for 32-bit ARM locally, which was itself a lot of work.

ARM 32-bit should be supported, or at least possible to compile for. I currently have a Raspberry Pi 4B cluster running Kubernetes, and I firmly believe ARM support should become more standardized.

@pisymbol

I completely agree with @xunholy, as I too have an RPi 4B cluster waiting on Istio - I'd like to use k3sup to install it too!

@xunholy
Author

xunholy commented Feb 18, 2020

@howardjohn

Has there been any work to investigate building and supporting multiple architecture types using docker buildx?

https://docs.docker.com/buildx/working-with-buildx/

In light of recently attempting to compile for armv7l myself, can I suggest that compilation of the source code be done with dependencies encapsulated in containers? This would make reproducible builds easier to troubleshoot and make version pinning extremely easy if there were a docker-compose-like setup that the Makefiles execute.
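As a rough sketch of the buildx idea (the image name, builder name, and Dockerfile context here are hypothetical, not Istio's actual build wiring):

```shell
# Hypothetical multi-arch build with docker buildx.
PLATFORMS="linux/amd64,linux/arm64,linux/arm/v7"

# With docker and buildx available, the build would look something like:
#   docker buildx create --use --name istio-multiarch
#   docker buildx build --platform "$PLATFORMS" -t example/pilot:dev --push .
echo "target platforms: $PLATFORMS"
```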

I've also previously discussed this in another issue thread, which I think is worth linking for people like myself who are seeking this kind of support.

Issue Ref: #13321

@howardjohn
Member

Building the docker images is the easier part, the harder part is getting Envoy compiled, which I don't think is (officially) done yet envoyproxy/envoy#1861.

We are already using buildx, so getting it to do multi-arch could probably be done fairly easily

However, just because we can doesn't mean we should. We aren't going to ship something that isn't properly tested, documented, etc, which represents a huge amount of work - both short and long term.

@MrXinWang

MrXinWang commented Mar 13, 2020

@xunholy Referencing my comments in the issue you posted: personally, I am quite interested in multi-arch support for Istio, as Istio is already used in a lot of k8s-based projects.

@howardjohn Totally agree with your CI and documentation concerns (the docs seem much easier compared to the CI), but in order to make official multi-arch (or ARM) support a real thing, are there any suggestions from you on where we can start?

@howardjohn
Member

My opinion (and its just mine - others in the project may disagree) is that we would be perfectly happy to see changes made to help people build on various platforms more easily and accept PRs that move in this direction (assuming they don't have large complexity costs). To ship an "official" ARM image would be a step that I don't personally feel the project has the resources to adequately handle properly. I think it would make a lot of sense to have some community project building images in the short term and then we can evaluate making things more official in the future.

That's just my opinion though, you are welcome to join the test and release WG meeting (see https://github.com/istio/community/blob/master/WORKING-GROUPS.md#working-group-meetings) or even just add it to the agenda and we can discuss this with a wider audience.

@Bonehead5338
Contributor

Bonehead5338 commented Mar 23, 2020

How about just istioctl on ARMv7 and ARM64? This only involves changing the Go build parameters and duplicating part of the existing build, I think.
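A minimal sketch of the Go parameters involved; the helper maps `uname -m` output to the values Go expects, and the source path in the comment is an assumption, not a confirmed build target:

```shell
# armv7l needs both GOARCH=arm and GOARM=7; aarch64 maps to GOARCH=arm64.
goarch_for() {
  case "$1" in
    armv7l)        echo "GOARCH=arm GOARM=7" ;;
    aarch64|arm64) echo "GOARCH=arm64" ;;
    x86_64)        echo "GOARCH=amd64" ;;
    *)             echo "unknown" ;;
  esac
}

# Illustrative cross-compile from an Istio checkout (path is hypothetical):
#   env GOOS=linux $(goarch_for armv7l) go build -o istioctl-armv7 ./istioctl/cmd/istioctl
goarch_for armv7l
```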

@howardjohn
Copy link
Member

Yeah, that is probably much easier, assuming no complications come up. Releases are built in the istio/release-builder repo if anyone is interested in sending a PR. I think this would basically just involve adding a couple of lines to the existing architectures.

@Bonehead5338
Contributor

@howardjohn #22381

@xunholy
Author

xunholy commented Mar 25, 2020

@dave93cab & @howardjohn I'm in the process of not only making istioctl armv7l compliant, but also making it possible to run Istio in such a way that architecture is almost entirely irrelevant.

I am working on the Envoy support in parallel to this change and will hopefully update here soon with a corresponding pull request on their end.

#22456

@sdake
Member

sdake commented Mar 28, 2020

@howardjohn regarding #21094 (comment). My opinion matches yours:

We should enable Istio's codebase to be built by third parties and distributed/tested by third parties for platforms we, ourselves, cannot CI (as a result of insufficient resources).

If we don't have a committed roadmap for testing the work, we shouldn't ship the containers directly in docker.io/istio.

I think we would be wise not to conflate istioctl, the CLI tool, with the docker containers.

cc / @esnible

Cheers
-steve

@Bonehead5338
Contributor

added #22567 for cleanup

@mmohamed

Hi, I've just finished trying to build Istio on the ARM arch (v7/v8), and it's not possible today.

I started by installing all the requirements for the build environment (Golang, Java, CMake, GCC, CC...). The first problem was building Bazel from source (I tried v0.28 with JDK 8). Once that was resolved, the next problem was building the Istio proxy from source with Bazel: I hit an unsupported flag for GCC (old version) when building Envoy, and the container-pulling Docker rule wasn't compiled for ARM, so after compiling it from source (with Bazel and Go) I overrode it to make Bazel's analysis phase succeed. Then I hit new problems with the ld linker and the Ninja release (libc++ and libc++abi not linked correctly). After resolving those, compilation started for more than 4000 packages (C, C++, Java and Go), but it failed because Envoy has a dependency on Chromium's V8, via wee8.

wee8 is not available for the ARM arch, so it's not possible to build the Istio proxy for armv7 without rebuilding wee8 from source (if that's even possible) and forcing the dependencies.

I don't understand why it's so complicated to build a simple manageable proxy (Envoy) and a simple Docker image for ARM. A service mesh is a very simple concept, but Istio, with its many dependencies and complicated build toolchain (if I've understood correctly, it extends Envoy to build the core image), will be unmaintainable and can't be reused for other projects.

This is just my opinion, and I respect all the work that has gone into Istio. Before trying to build it from source, I had expected Istio to be based on an Nginx proxy injected inside the pod as a manageable reverse proxy, plus a simple frontend application with a core backend service to manage it.

Thank you, I will try Istio when it supports ARM :)

@morlay
Contributor

morlay commented Jun 11, 2020


Just sharing our build for arm64: https://github.com/querycap/istio (1.6.1 available too).

The hardest step is compiling Istio's Envoy. I pushed it here: https://github.com/morlay/istio-envoy-arm64 (it always times out in the GitHub workflow: https://github.com/morlay/istio-proxy-build-env/runs/759787862?check_suite_focus=true).

Compiling Istio's Envoy for arm64 is very slow:
it cost 16 hours on an Azure 4U16GiB x86_64 machine, in Docker with qemu (docker run --rm --privileged multiarch/qemu-user-static --reset -p yes).

The build env is here:
https://github.com/morlay/istio-proxy-build-env/blob/master/build-env.Dockerfile

two important changes:

  • added gn for v8
  • compiled hsdis-<arch>.so for bazel

Then change the env vars and build Envoy:

export PATH="${LLVM_PATH}/bin:${PATH}"; 
export JAVA_HOME="$(dirname $(dirname $(realpath $(which javac))))"; 
export BAZEL_BUILD_ARGS="--verbose_failures --define=ABSOLUTE_JAVABASE=${JAVA_HOME} --javabase=@bazel_tools//tools/jdk:absolute_javabase --host_javabase=@bazel_tools//tools/jdk:absolute_javabase --java_toolchain=@bazel_tools//tools/jdk:toolchain_vanilla --host_java_toolchain=@bazel_tools//tools/jdk:toolchain_vanilla"; 
make build_envoy; 

Hope this is helpful.

@xunholy
Author

xunholy commented Jun 11, 2020

@morlay This is fantastic news; I've also been interested in seeing this done for arm64 for some time. I raised a similar change upstream, which you can view here for reference: #22456

The outcome was that they don't have the resources to officially support building the arm64 / multi-arch images at this stage. It would be awesome if we could utilise the work you've done in the following repository, as I know quite a number of people are tracking this: https://github.com/raspbernetes/multi-arch-images

Issue Ref: raspbernetes/multi-arch-images#5

@morlay
Contributor

morlay commented Jun 11, 2020

@xunholy you're welcome to do that.

Sadly, though, the compile times will block us from adding it to a GitHub workflow.

And for armv7, the gn build should be fixed first.
I just copied from https://github.com/envoyproxy/envoy-build-tools/blob/master/build_container/build_container_common.sh#L12-L27.

@xunholy
Author

xunholy commented Jun 11, 2020

GitHub workflows should work with the timeout threshold extended to cover the timeframe you've mentioned above.

Would be great to see. Also feel free to reach out: there is a Discord chat link in the org you can join, and we can discuss further.

@istio-policy-bot istio-policy-bot added the lifecycle/stale Indicates a PR or issue hasn't been manipulated by an Istio team member for a while label Jun 27, 2020
@hzxuzhonghu
Member

not stale

@istio-policy-bot istio-policy-bot removed the lifecycle/stale Indicates a PR or issue hasn't been manipulated by an Istio team member for a while label Jun 28, 2020
@gabrielfreire

gabrielfreire commented Jul 1, 2021

Hi, any update on this?

I just bought a case and 3 Raspberry Pi 4s to transfer my cluster to; installed everything, the cluster is up and running, but Istio won't install.
Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-1038-raspi aarch64)
Are there any workarounds at least?

@chrstnwhlrt

@gabrielfreire I think it should be possible to use istioctl to generate a YAML containing the setup and just replace the images used with some arm64-compatible builds (from Docker Hub or self-compiled).
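A minimal sketch of that approach, using the community hub from morlay's build as an example (the hub name and profile are assumptions, not an official registry):

```shell
# Rewrite image references in generated manifests to an arm64-capable hub.
swap_hub() { sed 's|docker\.io/istio/|docker.io/querycapistio/|g'; }

# On a real cluster (requires istioctl and kubectl):
#   istioctl manifest generate --set profile=demo | swap_hub | kubectl apply -f -
echo "image: docker.io/istio/proxyv2:1.10.0" | swap_hub
```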

@morlay
Contributor

morlay commented Jul 2, 2021

@gabrielfreire could try to use my builds https://github.com/querycap/istio#install-istio

@gabrielfreire

gabrielfreire commented Jul 2, 2021

Thank you @morlay.

But I don't understand what I should do with this. Where should I apply it?

spec:
  components:
    pilot:
      k8s: # each component has to set this
        affinity: &affinity
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: kubernetes.io/arch
                      operator: In
                      values:
                        - arm64
                        - amd64
    egressGateways:
      - k8s:
          affinity: *affinity
    ingressGateways:
      - k8s:
          affinity: *affinity

I believe I need it, since I'm having this problem on ingress/egress:

0/3 nodes are available: 3 node(s) didn't match Pod's node affinity.

btw, I have tried to add this affinity bit in the IstioOperator, but it doesn't install anything when I do.

There is an error in the logs:

error   installer       failed to merge base profile with user IstioOperator CR pi-istiocontrolplane, json merge error 
(map: map[k8s:map[affinity:map[nodeAffinity:map[requiredDuringSchedulingIgnoredDuringExecution:map[nodeSelectorTerms:[map[matchExpressions:
[map[key:kubernetes.io/arch operator:In values:[arm64 amd64]]]]]]]]]] does not contain declared merge key: name) for base object:

EDIT

Managed to make it work; the final operator looks like this:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: pi-istiocontrolplane
spec:
  hub: docker.io/querycapistio
  profile: demo
  components:
    pilot:
      k8s: # each component has to set this
        affinity: &affinity
          nodeAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
            - preference:
                matchExpressions:
                - key: beta.kubernetes.io/arch
                  operator: In
                  values:
                  - arm64
                  - amd64
              weight: 2
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
              - matchExpressions:
                - key: beta.kubernetes.io/arch
                  operator: In
                  values:
                  - arm64
                  - amd64
    egressGateways:
    - name: istio-egressgateway
      k8s:
        affinity: *affinity
      enabled: true
    ingressGateways:
    - name: istio-ingressgateway
      k8s:
        affinity: *affinity
      enabled: true

@morlay
Contributor

morlay commented Jul 2, 2021

@gabrielfreire

using 1.10.x, just run

istioctl operator init --hub=docker.io/querycapistio

Custom profiles should come after you know more about the istio-operator spec.

The docs just show additional settings, not a full CRD YAML that can be applied as-is.

@gabrielfreire

I am using 1.10.0; just running that doesn't work, as I get the error: 0/3 nodes are available: 3 node(s) didn't match Pod's node affinity.

I figured it out and added an EDIT to my previous post with the final istio-operator file that worked.

It seems that the key is not kubernetes.io/arch but beta.kubernetes.io/arch.

Thanks for your work on this!

@morlay
Contributor

morlay commented Jul 2, 2021

@gabrielfreire

If you only have the beta.kubernetes.io/arch label on your nodes,

your k8s version may be too old?

@gabrielfreire

@morlay

 kubectl get no
NAME          STATUS   ROLES                      AGE   VERSION
k8s-node-01   Ready    controlplane,etcd,worker   23h   v1.20.6
k8s-node-02   Ready    controlplane,etcd,worker   23h   v1.20.6
k8s-node-03   Ready    controlplane,etcd,worker   23h   v1.20.6

@morlay
Contributor

morlay commented Jul 2, 2021

@gabrielfreire

Interesting. I remember k8s has had multiple arch labels since 1.13 or earlier.

Is your k8s created by a cloud provider?

@gabrielfreire

@morlay
No, it's bare metal on a Raspberry Pi cluster with MetalLB as the load balancer, rancher/rke, etc.

I'm not using any cloud provider for anything at the moment.

@aslanpour

Is there a chance for us to use istio on ARM or not?!!

@morlay
Contributor

morlay commented Jul 23, 2021

@aslanpour

aarch64: yes!
aarch32: no. istio-proxy (Envoy) depends on Chrome's V8, which does not support 32-bit.
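A small helper expressing that constraint (the 64-bit architecture names are taken from the values used in the nodeAffinity patches elsewhere in this thread):

```shell
# istio-proxy (Envoy) needs a 64-bit architecture because of the V8 dependency;
# check a machine string as reported by `uname -m`.
is_64bit_arch() {
  case "$1" in
    aarch64|arm64|x86_64|amd64|ppc64le|s390x) return 0 ;;
    *) return 1 ;;  # armv7l, armv6l, i686, ...
  esac
}

if is_64bit_arch "$(uname -m)"; then echo "64-bit: istio-proxy can run"; else echo "32-bit: unsupported"; fi
```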

@jocatalin

I've switched to a Mac mini with the M1 processor (ARM-based) as my local development box. I can run Istio in kind (via Docker Desktop) using Helm charts. I haven't experienced any issues so far. There are some anti-affinity rules that need to be adjusted to make the chart work (add arm64 as an architecture).

@mike-source

@jocatalin Is there any chance you could elaborate on the exact steps you took / rules you adjusted? I'm considering making the switch to a Mac mini with M1, but need to be able to use Istio.

Our dev setup is using minikube & docker at the moment...

@niranjannitesh

Recently we were facing the same issue; after some debugging and research I wrote this script.
It uses kind, but I think it can easily be replaced with either docker-desktop or minikube:

https://github.com/nowandme/k8s-istio-m1

@jxlwqq
Contributor

jxlwqq commented Nov 1, 2021

Set an unofficial hub when installing Istio (from https://github.com/resf/istio):

istioctl install --set hub=ghcr.io/resf/istio --set profile=demo -y

If your installed Istio version is <= 1.12, you also need to open a new terminal and add arm64 to the nodeAffinity:

kubectl patch deployments.apps \
  istio-ingressgateway \
  --namespace istio-system \
  --type='json' \
  -p='[
  {"op": "replace", "path": "/spec/template/spec/affinity/nodeAffinity/preferredDuringSchedulingIgnoredDuringExecution/0/preference/matchExpressions/0/values", "value": ["amd64", "arm64"]},
  {"op": "replace", "path": "/spec/template/spec/affinity/nodeAffinity/requiredDuringSchedulingIgnoredDuringExecution/nodeSelectorTerms/0/matchExpressions/0/values", "value": ["amd64", "arm64", "ppc64le", "s390x"]}
  ]'
kubectl patch deployments.apps \
  istio-egressgateway \
  --namespace istio-system \
  --type='json' \
  -p='[
  {"op": "replace", "path": "/spec/template/spec/affinity/nodeAffinity/preferredDuringSchedulingIgnoredDuringExecution/0/preference/matchExpressions/0/values", "value": ["amd64", "arm64"]},
  {"op": "replace", "path": "/spec/template/spec/affinity/nodeAffinity/requiredDuringSchedulingIgnoredDuringExecution/nodeSelectorTerms/0/matchExpressions/0/values", "value": ["amd64", "arm64", "ppc64le", "s390x"]}
  ]'
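JSON patch payloads like these are easy to get wrong (the array values must be quoted strings, e.g. ["amd64", "arm64"], or kubectl rejects the patch). A quick pre-flight validation, assuming python3 is available:

```shell
# Validate a patch payload as JSON before handing it to kubectl.
PATCH='[{"op": "replace", "path": "/spec/template/spec/affinity/nodeAffinity/requiredDuringSchedulingIgnoredDuringExecution/nodeSelectorTerms/0/matchExpressions/0/values", "value": ["amd64", "arm64", "ppc64le", "s390x"]}]'
echo "$PATCH" | python3 -m json.tool > /dev/null && echo "patch is valid JSON"
```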

Thanks for your contributions. @morlay

@sirAlexander

@jxlwqq Thanks for the update. With the above, both Istio core and Istiod install OK, but the Egress and Ingress gateway processing results in failure.

@jxlwqq
Contributor

jxlwqq commented Dec 28, 2021

@jxlwqq Thanks for the update. With the above, Both Istio core and Istiod install ok but the Egress and Ingress gateways processing results in failure.

Open a new terminal, then run the patch commands.

@KoukiMatsuda

KoukiMatsuda commented Jan 27, 2022

@jxlwqq

Thank you for publishing the procedure.
I built the Istio environment by following the instructions you published; however, it did not work as expected.

I built the Calico + Istio environment with the configuration below and performed the steps described for Istio.
However, creating the httpbin pod fails with an error.
Can you tell me what is going on?
https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/#configuring-ingress-using-an-istio-gateway

[Environment]

Desktop PC(Host)
 OS:Ubuntu 20.04.3 LTS 64bit

Raspberry Pi4(Client)
 OS:Ubuntu Server 20.04.3 LTS 64bit(arm64)

[Prerequisites]

The following software must already be installed:
・Docker
・Kubernetes
・calicoctl

[Implementation Procedure]

Reference URL:https://docs.projectcalico.org/getting-started/kubernetes/quickstart

■Install Calico

1.Create a Kubernetes cluster.

$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
2.Join a Worker node to the generated cluster.

$ sudo kubeadm join <..>

3.Deploy Calico.

$ kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
$ kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml

4.Verify that Calico pod has been activated.

$ watch kubectl get pods -n calico-system

5.Allow the Master node to create a Pod.

$ kubectl taint nodes --all node-role.kubernetes.io/master-

■Install Istio

 Reference URL:https://projectcalico.docs.tigera.io/security/app-layer-policy

1.Enable application layer policy

$ calicoctl patch FelixConfiguration default --patch \
   '{"spec": {"policySyncPathPrefix": "/var/run/nodeagent"}}'

2.Install Istio Operator

$ istioctl operator init --hub=docker.io/querycapistio --tag=1.10.2

3.Install Istio

$ curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.10.2 sh -
$ export PATH="$PATH:/home/ubuntu/istio-1.10.2/bin"
$ istioctl install --set hub=docker.io/querycapistio --set profile=demo -y --set values.gateways.istio-ingressgateway.type=NodePort

$ kubectl create -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default-strict-mode
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
EOF

$ kubectl apply -f - <<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  hub: docker.io/querycapistio
  profile: demo
EOF

4.Add arm64 in the nodeAffinity

$ kubectl patch deployments.apps \
  istio-ingressgateway \
  --namespace istio-system \
  --type='json' \
  -p='[
  {"op": "replace", "path": "/spec/template/spec/affinity/nodeAffinity/preferredDuringSchedulingIgnoredDuringExecution/0/preference/matchExpressions/0/values", "value": ["amd64", "arm64"]},
  {"op": "replace", "path": "/spec/template/spec/affinity/nodeAffinity/requiredDuringSchedulingIgnoredDuringExecution/nodeSelectorTerms/0/matchExpressions/0/values", "value": ["amd64", "arm64", "ppc64le", "s390x"]}
  ]'

$ kubectl patch deployments.apps \
  istio-egressgateway \
  --namespace istio-system \
  --type='json' \
  -p='[
  {"op": "replace", "path": "/spec/template/spec/affinity/nodeAffinity/preferredDuringSchedulingIgnoredDuringExecution/0/preference/matchExpressions/0/values", "value": ["amd64", "arm64"]},
  {"op": "replace", "path": "/spec/template/spec/affinity/nodeAffinity/requiredDuringSchedulingIgnoredDuringExecution/nodeSelectorTerms/0/matchExpressions/0/values", "value": ["amd64", "arm64", "ppc64le", "s390x"]}
  ]'

5.Update Istio sidecar injector

$ curl https://docs.projectcalico.org/manifests/alp/istio-inject-configmap-1.10.yaml -o istio-inject-configmap.yaml
$ kubectl patch configmap -n istio-system istio-sidecar-injector --patch "$(cat istio-inject-configmap.yaml)"

6.Add Calico authorization services to the mesh

$ kubectl apply -f https://docs.projectcalico.org/manifests/alp/istio-app-layer-policy-envoy-v3.yaml

7.Add namespace labels

$ kubectl label namespace default istio-injection=enabled

@Trackhe

Trackhe commented Feb 15, 2022

@KoukiMatsuda
any reason why you use exactly 1.10.2?

@bastosvinicius

Hello guys, is there still hope of making this feature available?

@ddelange

Is this issue (armv7l) superseded by #26652 (arm64, found via https://github.com/resf/istio)?

@morlay
Contributor

morlay commented May 23, 2022

@ddelange

armv7l is a 32-bit architecture, which can't be supported because of Envoy.

@howardjohn
Member

Let's close this in favor of #26652. arm64 support is planned, but armv7l is not.

@howardjohn closed this as not planned May 23, 2022
Prioritization automation moved this from P1 to Done May 23, 2022