This repository has been archived by the owner on Oct 24, 2023. It is now read-only.

The customSearchDomain is not functional #3859

Closed
0hlov3 opened this issue Sep 23, 2020 · 1 comment · Fixed by #3862
Labels
bug Something isn't working

Comments

@0hlov3

0hlov3 commented Sep 23, 2020

Describe the bug
I would like to use a custom DNS and search domain, so I set the following values in my API model:

            "customSearchDomain": {
                "name": "REDACTED.tld",
                "realmUser": "svc.K8sDNS",
                "realmPassword": "REDACTED"
            },
            "customNodesDNS": {
                "dnsServer": "$IPofcustomNodesDNS"
            }

I get the following error in the log:

  + grep -Fq '#EOF' /opt/azure/containers/setup-custom-search-domains.sh
  + '[' 3600 -eq 3600 ']'
  + return 1
  + exit 6

This is caused by wait_for_file() in ./provision_source.sh:

wait_for_file() {
  retries=$1; wait_sleep=$2; filepath=$3
  paved=/opt/azure/cloud-init-files.paved
  # Skip the wait if this file was already processed on an earlier run.
  grep -Fq "${filepath}" $paved && return 0
  # Poll until the file's #EOF end-marker shows up, i.e. the file has been fully written.
  for i in $(seq 1 $retries); do
    grep -Fq '#EOF' $filepath && break
    if [ $i -eq $retries ]; then
      return 1
    else
      sleep $wait_sleep
    fi
  done
  # Strip the marker and record the file as delivered.
  sed -i "/#EOF/d" $filepath
  echo $filepath >>$paved
}
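
For context, judging by the 3600 in the trace above, the failing call presumably looks roughly like this (the 1-second sleep interval is an assumption on my part, not copied from provision_source.sh):

  # Presumed call site: 3600 retries taken from the trace above; the sleep interval is a guess.
  wait_for_file 3600 1 /opt/azure/containers/setup-custom-search-domains.sh || exit 6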

wait_for_file() assumes that parts/k8s/cloud-init/artifacts/setup-custom-search-domains.sh ends with a #EOF marker, but that is not the case here. As a result the file is never recorded in /opt/azure/cloud-init-files.paved on the node, and provisioning aborts with exit code 6. If I run setup-custom-search-domains.sh manually, the nodes do join the realm.
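
As a stop-gap on an affected node, I would expect something like the following to unblock things (only a sketch of what I assume the fix needs to amount to, not the actual change from #3862):

  # Workaround sketch (assumption, not the merged fix): append the #EOF marker
  # that wait_for_file() greps for, then run the setup script by hand.
  grep -Fq '#EOF' /opt/azure/containers/setup-custom-search-domains.sh || \
    echo '#EOF' | sudo tee -a /opt/azure/containers/setup-custom-search-domains.sh >/dev/null
  sudo /opt/azure/containers/setup-custom-search-domains.sh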

Steps To Reproduce

{
    "apiVersion": "vlabs",
    "location": "",
    "properties": {
        "orchestratorProfile": {
            "orchestratorType": "Kubernetes",
            "orchestratorRelease": "1.17",
            "orchestratorVersion": "1.17.5",
            "kubernetesConfig": {
                "cloudProviderBackoff": true,
                "cloudProviderBackoffRetries": 1,
                "cloudProviderBackoffDuration": 30,
                "cloudProviderRateLimit": true,
                "cloudProviderRateLimitQPS": 3,
                "cloudProviderRateLimitBucket": 10,
                "cloudProviderRateLimitQPSWrite": 3,
                "cloudProviderRateLimitBucketWrite": 10,
                "kubernetesImageBase": "mcr.microsoft.com/k8s/azurestack/core/",
                "useInstanceMetadata": false,
                "networkPlugin": "kubenet",
                "kubeletConfig": {
                    "--node-status-update-frequency": "1m",
                    "--kube-reserved": "memory=500Mi",
                    "--system-reserved": "memory=500Mi",
                    "--eviction-hard": "memory.available<500Mi"
                },
                "controllerManagerConfig": {
                    "--node-monitor-grace-period": "5m",
                    "--pod-eviction-timeout": "5m",
                    "--route-reconciliation-period": "1m"
                },
                "privateCluster": {
                    "enabled": true
                },
                "etcdDiskSizeGB": "32"
            }
        },
        "customCloudProfile": {
            "portalURL": "https://azurestackhub/"
        },
        "featureFlags": {
            "enableTelemetry": true
        },
        "masterProfile": {
            "dnsPrefix": "mas",
            "distro": "aks-ubuntu-16.04",
            "count": 3,
            "vmSize": "Standard_DS2_v2",
            "vnetSubnetId": "/subscriptions/...REDACTED.../subnets/kubernetes-master-Subnet",
            "firstConsecutiveStaticIP": "..REDACTED...",
            "customVMTags": {
                "Application": "k8s Test"
            }
        },
        "agentPoolProfiles": [
            {
                "name": "worker",
                "count": 3,
                "vmSize": "Standard_DS2_v2",
                "distro": "aks-ubuntu-16.04",
                "availabilityProfile": "AvailabilitySet",
                "AcceleratedNetworkingEnabled": false,
                "vnetSubnetId": "/subscriptions/...REDACTED.../subnets/kubernetes-agent-Subnet",
                "customVMTags": {
                  "Application": "k8s Test"
                }
            }
        ],
        "linuxProfile": {
            "adminUsername": "...REDACTED...",
            "ssh": {
                "publicKeys": [
                    {
                        "keyData": "...REDACTED..."
                    }
                ]
            },
            "customSearchDomain": {
                "name": "...REDACTED...",
                "realmUser": "...REDACTED...",
                "realmPassword": "...REDACTED..."
            },
            "customNodesDNS": {
                "dnsServer": "...REDACTED..."
            }
        },
        "servicePrincipalProfile": {
            "clientId": "...REDACTED...",
            "secret": "...REDACTED..."
        }
    }
}

aks-engine-v0.51.1 deploy -f --azure-env AzureStackCloud --api-model kubernetes-azurestack.json --location REDACTED --resource-group RG-k8s-Test --client-id REDACTED --client-secret REDACTED --subscription-id REDACTED --output-directory RG-k8s-Test

Expected behavior
The script should run successfully so that the nodes register themselves in Active Directory.
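
A quick way to verify this on a node, assuming the setup script joins the domain via realmd (which the realmUser/realmPassword fields suggest):

  # Assumes realmd is what setup-custom-search-domains.sh uses under the hood.
  realm list                      # should list the joined REDACTED.tld realm
  grep search /etc/resolv.conf    # should contain the custom search domain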

AKS Engine version
aks-engine-v0.51.1
Kubernetes version
1.17.5

Additional context
Another thing: the node names are much too long for Active Directory computer objects. Because of the name-length limit they get truncated (e.g. to k8s-master-3500), so all nodes end up with the same name. Is there a way to get shorter names?

0hlov3 added the bug label Sep 23, 2020
@welcome

welcome bot commented Sep 23, 2020

👋 Thanks for opening your first issue here! If you're reporting a 🐞 bug, please make sure you include steps to reproduce it.
