
Multi-Primary Cluster On Different Network - There are some problems of installing multi-cluster on minikube #33434

Closed
Patrick0308 opened this issue Jun 15, 2021 · 6 comments
Labels
lifecycle/automatically-closed Indicates a PR or issue that has been closed automatically.
lifecycle/stale Indicates a PR or issue hasn't been manipulated by an Istio team member for a while

Comments

@Patrick0308
Contributor

Patrick0308 commented Jun 15, 2021

Bug description
Following the Install Multi-Primary on different networks guide on minikube, I ran into the following problems:

  • Installing a remote secret failed:
❯ istioctl x create-remote-secret \
  --context="${CTX_CLUSTER1}" \
  --name=cluster1 | \
  kubectl apply -f - --context="${CTX_CLUSTER2}"

error: error validating "STDIN": error validating data: invalid object to validate; if you choose to ignore these errors, turn validation off with --validate=false
  • Cannot use a hostname-based gateway for east-west traffic on minikube clusters; during verification, each cluster only received responses from its own helloworld deployment instead of load balancing across both (see the sketch after the output below):
while true; do kubectl exec --context="${CTX_CLUSTER1}" -n sample -c sleep \
    "$(kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l \
    app=sleep -o jsonpath='{.items[0].metadata.name}')" \
    -- curl -s helloworld.sample:5000/hello; done
Hello version: v1, instance: helloworld-v1-776f57d5f6-fcx2g
Hello version: v1, instance: helloworld-v1-776f57d5f6-fcx2g
Hello version: v1, instance: helloworld-v1-776f57d5f6-fcx2g
Hello version: v1, instance: helloworld-v1-776f57d5f6-fcx2g
Hello version: v1, instance: helloworld-v1-776f57d5f6-fcx2g
Hello version: v1, instance: helloworld-v1-776f57d5f6-fcx2g
^C%
❯
❯ while true; do kubectl exec --context="${CTX_CLUSTER2}" -n sample -c sleep \
    "$(kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l \
    app=sleep -o jsonpath='{.items[0].metadata.name}')" \
    -- curl -s helloworld.sample:5000/hello; done
Hello version: v2, instance: helloworld-v2-54df5f84b-4s4bq
Hello version: v2, instance: helloworld-v2-54df5f84b-4s4bq
Hello version: v2, instance: helloworld-v2-54df5f84b-4s4bq
Hello version: v2, instance: helloworld-v2-54df5f84b-4s4bq
Hello version: v2, instance: helloworld-v2-54df5f84b-4s4bq
Hello version: v2, instance: helloworld-v2-54df5f84b-4s4bq
Hello version: v2, instance: helloworld-v2-54df5f84b-4s4bq
^C%
❯
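
A likely contributing factor on minikube (an assumption, not confirmed in this thread) is that the istio-eastwestgateway LoadBalancer service never gets a reachable external IP, so neither cluster can reach the other's gateway and all traffic stays local. A minimal sketch of how to check and work around that, assuming the gateway was installed into istio-system under the guide's default name; whether the tunnel address is actually reachable from the other cluster depends on the minikube driver:

# Check whether the east-west gateway received an external IP in each cluster.
# An EXTERNAL-IP of <pending> means cross-cluster traffic cannot work.
kubectl --context="${CTX_CLUSTER1}" -n istio-system get svc istio-eastwestgateway
kubectl --context="${CTX_CLUSTER2}" -n istio-system get svc istio-eastwestgateway

# On minikube, LoadBalancer services only get an address while `minikube tunnel`
# is running; keep one tunnel open per profile, each in its own terminal.
minikube tunnel -p cluster1
minikube tunnel -p cluster2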

Affected product area (please put an X in all that apply)

[ ] Docs
[ ] Installation
[ ] Networking
[ ] Performance and Scalability
[ ] Extensions and Telemetry
[ ] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure
[ ] Upgrade

Affected features (please put an X in all that apply)

[X] Multi Cluster
[ ] Virtual Machine
[ ] Multi Control Plane

Expected behavior
I expected to get the same behavior that the guide describes.

Steps to reproduce the bug

minikube start -p "cluster2"
minikube start -p "cluster1"
export CTX_CLUSTER1=cluster1
export CTX_CLUSTER2=cluster2

Then follow Install Multi-Primary on different networks. Installing the remote secret failed as shown in the bug description above; I worked around the error by replacing the service IP with the API server IP. Finally, I followed Verify the installation, but I only got responses from the service in the same cluster.
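
For reference, istioctl create-remote-secret also accepts a --server flag (the warning in the output below suggests it), which avoids editing the generated YAML by hand. A minimal sketch of that workaround, assuming the other cluster can actually route to the minikube node IP and that the API server listens on minikube's default port 8443 (both of these are assumptions, not verified in this issue):

# Generate the remote secret with an API server address that cluster2 can
# reach, instead of the unreachable https://127.0.0.1:<port> from the kubeconfig.
istioctl x create-remote-secret \
  --context="${CTX_CLUSTER1}" \
  --name=cluster1 \
  --server="https://$(minikube ip -p cluster1):8443" | \
  kubectl apply -f - --context="${CTX_CLUSTER2}"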

Version (include the output of istioctl version --remote and kubectl version --short and helm version --short if you used Helm)

❯ istioctl version --remote
client version: 1.10.0
control plane version: 1.10.0
data plane version: 1.10.0 (4 proxies)
❯ kubectl version --short
Client Version: v1.19.7
Server Version: v1.20.2
❯ minikube version
minikube version: v1.20.0
commit: c61663e942ec43b20e8e70839dcca52e44cd85ae

How was Istio installed?

Environment where the bug was observed (cloud vendor, OS, etc)

❯ uname -a
Darwin jiangchaodeMacBook-Pro.local 20.3.0 Darwin Kernel Version 20.3.0: Thu Jan 21 00:07:06 PST 2021; root:xnu-7195.81.3~1/RELEASE_X86_64 x86_64

Additionally, please consider running istioctl bug-report and attach the generated cluster-state tarball to this issue.
Refer to the cluster state archive for more details.

@howardjohn
Member

Can you show the output of istioctl x create-remote-secret \ --context="${CTX_CLUSTER1}" \ --name=cluster1? It is probably getting hidden by the pipe.

@Patrick0308
Contributor Author

The output is

❯ istioctl x create-remote-secret --context="${CTX_CLUSTER1}" --name=cluster1
2021-06-16T02:19:36.991349Z	warn	Server in Kubeconfig is https://127.0.0.1:59568. This is likely not reachable from inside the cluster.
If you're using Kubernetes in Docker, pass --server with the container IP for the API Server.
# This file is autogenerated, do not edit.
apiVersion: v1
kind: Secret
metadata:
  annotations:
    networking.istio.io/cluster: cluster1
  creationTimestamp: null
  labels:
    istio/multiCluster: "true"
  name: istio-remote-secret-cluster1
  namespace: istio-system
stringData:
  cluster1: |
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: **********
        server: https://127.0.0.1:59568
      name: cluster1
    contexts:
    - context:
        cluster: cluster1
        user: cluster1
      name: cluster1
    current-context: cluster1
    kind: Config
    preferences: {}
    users:
    - name: cluster1
      user:
        token: **********
---

@howardjohn
Member

Maybe that warning log is being piped into kubectl and that is what fails.

@Patrick0308
Contributor Author

Patrick0308 commented Jun 16, 2021

Yeah, I removed the warning log. Following the guide, the second problem (cannot use a hostname-based gateway for east-west traffic) happened at the step of verifying the installation.
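
For anyone hitting the same thing, a minimal sketch of scripting that manual clean-up, assuming (as suspected above) that the warning is printed to stdout and that the real payload always starts at the autogenerated-file comment shown in the output:

# Strip everything before the generated YAML so kubectl only sees valid input.
istioctl x create-remote-secret \
  --context="${CTX_CLUSTER1}" \
  --name=cluster1 | \
  sed -n '/^# This file is autogenerated/,$p' | \
  kubectl apply -f - --context="${CTX_CLUSTER2}"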

@howardjohn
Member

@esnible should we be logging to stderr or similar?

@istio-policy-bot istio-policy-bot added the lifecycle/stale Indicates a PR or issue hasn't been manipulated by an Istio team member for a while label Sep 15, 2021
@istio-policy-bot

🚧 This issue or pull request has been closed due to not having had activity from an Istio team member since 2021-06-16. If you feel this issue or pull request deserves attention, please reopen the issue. Please see this wiki page for more information. Thank you for your contributions.

Created by the issue and PR lifecycle manager.

@istio-policy-bot istio-policy-bot added the lifecycle/automatically-closed Indicates a PR or issue that has been closed automatically. label Sep 30, 2021