
Launch kubemark with an existing Kubemark master #44394

Merged
merged 2 commits into kubernetes:master on Jul 6, 2017

Conversation

rthallisey

In order to expand the use of kubemark, allow developers to use kubemark with a pre-existing Kubernetes cluster.

Ref issue #44393

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Apr 12, 2017
@k8s-reviewable

This change is Reviewable

@k8s-ci-robot
Contributor

Hi @rthallisey. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with @k8s-bot ok to test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@k8s-github-robot k8s-github-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. release-note-label-needed labels Apr 12, 2017
@rthallisey
Author

@shyamjvs do you have any comments?

@shyamjvs
Member

shyamjvs commented May 9, 2017

@rthallisey Thank you for the PR. So I assume you intend to enable running kubemark on-premise without any particular cloud provider.
Before starting the review, what's the reason you haven't defined the skeleton functions "create-master-instance-with-resources", "delete-master-instance-and-resources", and "copy-files" (those here: https://github.com/kubernetes/kubernetes/blob/master/test/kubemark/skeleton/util.sh)?
Do you want to let those functions be defined by the dev based on their environment?

@rthallisey
Author

@shyamjvs Yes. The goal would be to let the developer manage their VM/bare-metal setup, but allow them to land a kubemark cluster on that infrastructure. So any infrastructure-related functions would be handled by the user managing their infrastructure.

@shyamjvs
Member

shyamjvs commented May 9, 2017

I see you've defined an ssh command but didn't define copy-files (which is basically scp). Any reason?

@rthallisey
Author

I don't move any files around. What I do there is copy the credentials from the running Kubernetes cluster that the developer provided and pass them around to be used by the kubemark scripts.

@shyamjvs
Member

@shyamjvs shyamjvs left a comment (May 9, 2017)

Clean and smooth @rthallisey ! Did a quick first pass. Also, have you verified if this works by running locally?


function detect-project() {
if [[ -z "${MASTER_IP-}" ]]; then
echo "Set 'MASTER_IP' to the instance assigned to be the Kubernetes master" 1>&2
Member

Fix indentation, here and below.

Author

done

if [[ -z "${SERVICE_CLUSTER_IP_RANGE-}" ]]; then
octets=($(echo "${MASTER_IP}" | sed -e 's|/.*||' -e 's/\./ /g'))
octets[3]=0
cluster_range=$(echo "${octets[*]}" | sed 's/ /./g')
Member

I guess you'd also have to add "/24" to make it a subnet.(https://github.com/kubernetes/kubernetes/blob/master/cluster/kubemark/gce/config-default.sh#L89)
Also why not assign a larger CIDR (like /16)?

Author

I can live with the larger one.


if [[ -z "${SERVICE_CLUSTER_IP_RANGE-}" ]]; then
octets=($(echo "${MASTER_IP}" | sed -e 's|/.*||' -e 's/\./ /g'))
octets[3]=0
Member

Maybe just cluster_range=$(echo "${MASTER_IP}" | awk -F '.' '{printf("%d.%d.%d.0", $1, $2, $3)}')

Author

nice use of awk
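
For illustration, a sketch of how the suggested derivation behaves once combined with the /16 mask discussed above (the MASTER_IP value is made up; the real scripts take it from the environment):

```bash
#!/bin/bash
MASTER_IP="192.168.132.131"   # hypothetical example value

# Derive a default service-cluster CIDR from the master's IP, as suggested above.
cluster_range=$(echo "${MASTER_IP}" | awk -F '.' '{printf("%d.%d.%d.0", $1, $2, $3)}')
SERVICE_CLUSTER_IP_RANGE="${SERVICE_CLUSTER_IP_RANGE:-$cluster_range/16}"

echo "${SERVICE_CLUSTER_IP_RANGE}"   # -> 192.168.132.0/16
```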

fi
}

function run-cmd-with-retries {
Member

Some bugs around this function's logic have been fixed recently. Please incorporate them.

Author

done

}

function execute-cmd-on-master-with-retries() {
run-cmd-with-retries ssh kubernetes@"${MASTER_IP}" $@
Member

I guess we should have RETRIES="${2:-1}" before the command (so we can allow setting RETRIES to a value different from 3). For example: https://github.com/kubernetes/kubernetes/blob/master/test/kubemark/start-kubemark.sh#L91
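
A minimal sketch of what this suggestion could look like (the default of 3 and the RETRIES env-prefix style are assumptions based on the linked script, not the PR's final code; the example command is hypothetical):

```bash
# Allow callers to override the retry count via an optional second argument.
function execute-cmd-on-master-with-retries() {
  RETRIES="${2:-3}" run-cmd-with-retries ssh kubernetes@"${MASTER_IP}" "$1"
}

# Usage (hypothetical): retry up to 5 times instead of the default 3.
# execute-cmd-on-master-with-retries "sudo systemctl status kubelet" 5
```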

Author

done

@rthallisey
Author

Ah, I see what you're referring to. I don't define copy-files because I don't need any of those scripts, since Kubernetes is already running in the developer's infra. So that function just gets passed over.

@rthallisey
Author

The way I test this is by creating a 3-node Kubernetes cluster with kubeadm and pointing MASTER_IP to the VM with the Kubernetes master.

I tested this before I pushed, but I need to rebase and retest this since it's >20 days old.

@@ -0,0 +1,79 @@
#!/bin/bash
Member

Is there any reason why you've placed this file here? Given that it defines kubemark's cloud provider interface (the one in test/kubemark/skeleton), test/kubemark/pre-existing/util.sh seems to be the right path to put this.

Author

Yes, the function create-certs in start-kubemark.sh needs to be overridden. Instead of going out to the common script for its definition, I define it here.

Member

Makes sense. You can keep create-certs and detect-project here, but move the other two functions (which define kubemark interface) to test/kubemark/pre-existing/util.sh?

Author

I'll test that out. Though this script only sources cluster scripts and not test/pre-existing/util.sh, so I'm not positive it will work since create-certs uses that function.

Member

We would have it included through https://github.com/kubernetes/kubernetes/blob/master/test/kubemark/start-kubemark.sh#L24 in start-kubemark script.

Author

Ok I'll add it to the next revision.

@shyamjvs
Member

The function won't get passed over. You'll reach here (https://github.com/rthallisey/kubernetes/blob/d25d6298965aae2414540707c74859965a4d9fb0/test/kubemark/skeleton/util.sh#L52) and return with an error. I guess you'd have to define it, at least as some empty function. And thanks for testing! But please make sure you've thoroughly done it against the new code too.

@rthallisey
Author

By passed over I mean it doesn't do anything. Sorry I should've been more specific. All it does is echo to stderr, which doesn't stop the script from running. But, I can add a blank function if that's an issue.
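
To illustrate the behaviour being described, a sketch of a skeleton-style no-op (the exact wording in test/kubemark/skeleton/util.sh may differ):

```bash
# The skeleton function only warns on stderr and does not exit,
# so start-kubemark.sh keeps running when the provider doesn't need it.
function copy-files() {
  echo "Function copy-files() not implemented for this provider" 1>&2
}
```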

@shyamjvs
Member

Sorry, I was expecting it to exit with an error when I defined the interface, but it seems like it just writes to stderr (similar to cluster/skeleton). That's fine then, no need to redefine it.

@rthallisey
Author

No problem. Just want to be sure we get it right.

@rthallisey
Author

I'll update this hopefully later today. At a conference this week :)

@shyamjvs
Member

Sure.. No problem.

@rthallisey
Author

@shyamjvs ok updated this.

When testing, I get this error from the hollow-node logs, which makes it seem like my local kubeconfig is busted. But I can confirm my kubeconfig works from the CLI. I'll investigate further:
hollow-node.go:100] Failed to create a ClientConfig: error while loading kubeconfig from file /kubeconfig/kubelet.kubeconfig: [pos 2811]: json: error decoding base64 binary ... illegal base64 data at input byte 1368


# Leave the skeleton definition of execute-cmd-on-master-with-retries
# so only the pre-existing provider functions will target this.
function execute-cmd-on-pre-existing-master-with-retries() {
Member

I don't understand this. Why have you named the function differently?
This would mean that the line calling start-kubemark-master.sh script on the master VM (inside the start-kubemark.sh function) would simply pass without doing anything.
How would we have our kubemark master setup then?

Author

The function execute-cmd-on-master-with-retries will be passed over because I don't want to write anything to the master. I only need to read the existing cluster credentials. I commented at the bottom to further explain.

Member

Got it, thanks. The assumption of pre-existing kubemark master wasn't clear to me earlier.

exit 1
fi

if [[ -z "${PROJECT-}" ]]; then
Member

You could replace this entire if statement with PROJECT="${PROJECT:-kubemark}"

Author

True. I'll add it to the pre-existing defaults.

Author

Since normally detect-project sets the value of PROJECT, I was imitating that here. But, adding PROJECT="${PROJECT:-kubemark}" to config-default.sh for the pre-existing provider will accomplish the same thing.

source "${KUBE_ROOT}/hack/lib/util.sh"

function detect-project() {
if [[ -z "${MASTER_IP-}" ]]; then
Member

Replace '-' with ':-' ? Doesn't really matter in this case. But in general we follow the convention of using latter instead of the former mostly. Their difference (as stated by bash reference):

When not performing substring expansion, using the form described below (e.g., ‘:-’), Bash tests for a parameter that is unset or null. Omitting the colon results in a test only for a parameter that is unset. Put another way, if the colon is included, the operator tests for both parameter’s existence and that its value is not null; if the colon is omitted, the operator tests only for existence.
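
For illustration, a small sketch of the difference (variable names are made up):

```bash
#!/bin/bash
unset FOO      # FOO is unset
BAR=""         # BAR is set but null (empty)

echo "${FOO-fallback}"    # -> fallback  ('-' substitutes only when unset)
echo "${BAR-fallback}"    # -> (empty)   (set but null, so '-' does not substitute)

echo "${FOO:-fallback}"   # -> fallback  (':-' substitutes when unset ...)
echo "${BAR:-fallback}"   # -> fallback  (... and also when set but null)
```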

Author

done

PROJECT="kubemark"
fi

if [[ -z "${SERVICE_CLUSTER_IP_RANGE-}" ]]; then
Member

Same as above. Replace with ':-'.

Author

done


# A library of helper functions for a pre-existing Kubernetes cluster
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/../..
source "${KUBE_ROOT}/cluster/common.sh"
Member

Add newline before this.

Author

done

# limitations under the License.

# A library of helper functions for a pre-existing Kubernetes cluster
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/../..
Member

Add newline before this.

Author

done

fi

if [[ -z "${PROJECT-}" ]]; then
PROJECT="kubemark"
Member

IIRC this is used later in start-kubemark.sh for pushing the kubemark image to ${REGISTRY}/${PROJECT}/kubemark. Assuming some default value might be a bit too vague.


# Pre-existing cluster expects a MASTER_IP
MASTER_IP="${MASTER_IP:-}"
MASTER_IP_AND_PORT="${MASTER_IP_AND_PORT:-$MASTER_IP:6443}"
Member

6443 is the default secure port anyway. Why do you want to add this port?
Unless you want to override the port, you can just let it stay as $MASTER_IP.

Author

I set it to 6443 with the assumption of an existing Kubernetes cluster running that's using that port. This could be solved with the user setting MASTER_IP to include the port. So I suppose documentation here could be an alternative.
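
In other words, with the defaults quoted above, either of these would point the scripts at an existing apiserver (the address is illustrative):

```bash
MASTER_IP="192.168.132.129"                 # MASTER_IP_AND_PORT then defaults to ${MASTER_IP}:6443
MASTER_IP_AND_PORT="192.168.132.129:6443"   # or spell the port out explicitly
```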

clusters:
- name: kubemark
  cluster:
    certificate-authority-data: "${CA_CERT_BASE64}"
    server: https://${MASTER_IP_AND_PORT}

Member

We didn't have the port as part of the kubeconfig earlier either. We just passed the IP address in the config and IIUC the port is defaulted. Not sure if hardcoding the default number (when it automatically happens) is a good idea. WDYT?

Member

Is it really 6443? Isn't it 443?

Member
@shyamjvs shyamjvs May 15, 2017

Seems like it says 6443 here (https://github.com/kubernetes/apiserver/blob/master/pkg/server/config.go#L157) which is what also shows up in the documentation:
--secure-port int The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all. (default 6443)

But actually the value being used is 443 (https://github.com/kubernetes/apiserver/blob/master/pkg/server/config.go#L199).
Guess it's a bug, will send a PR fixing this.

Member

Seems like I was looking at the wrong place. It's 6443 here (https://github.com/kubernetes/kubernetes/blob/master/pkg/kubeapiserver/options/serving.go#L39).
I'm not sure I understand why we have this difference across the repos.

Author

I believe 443 is the default. 6443 is the default secure port.

# See the License for the specific language governing permissions and
# limitations under the License.

function run-cmd-with-retries {
Member

It might be good to add a comment here saying where the logic of this function was copied from. So we can keep a track of it and also understand the weird-ish things we do inside it.

Author

done

Member

Why was it copied?
If you can't source the file where it is defined for some reason, we should refactor it to a separate file and source only that.

Author

I'll add a common script to source. 👍

@@ -370,6 +370,7 @@ echo -e "${color_yellow}STARTING SETUP FOR MASTER${color_norm}"
find-release-tars
create-master-environment-file
create-master-instance-with-resources
MASTER_IP_AND_PORT="${MASTER_IP_AND_PORT:-$MASTER_IP}"
Member

This is redundant. You're doing it inside the config-default.sh anyway.

Author

Since I changed a few variables from MASTER_IP to MASTER_IP_AND_PORT in start-kubemark.sh, MASTER_IP_AND_PORT will need to be set to MASTER_IP in the case that the dev is not using the pre-existing provider. That's what this does. But I could also do what I mentioned above. I could document when using the pre-existing provider for the dev to include the port into the MASTER_IP variable.

Member

Aah I see. Makes sense.

@shyamjvs
Member

shyamjvs commented May 12, 2017

@rthallisey Thanks for addressing the comments. One question I have: are you assuming your kubemark master VM is already fully set up (with the master components up and running)?

And regarding:

When testing I get the error from hollow-node logs, which makes it seem like my local kubeconfig is busted. But, I can confirm my kubeconfig works from the CLI. I'll investigate further:

You can try writing the kubelet's kubeconfig (https://github.com/rthallisey/kubernetes/blob/9214e9edb415f225102dea9f7ba65caa7a741c54/test/kubemark/start-kubemark.sh#L208) to a file and then see what's wrong with it. Seems like some error with parsing JSON.
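
For example, one quick way to check whether the CA blob in the rendered kubeconfig is the problem (the dumped file path is illustrative):

```bash
# Decode the certificate-authority-data field; base64 exits non-zero on illegal input.
grep 'certificate-authority-data' /tmp/kubelet.kubeconfig \
  | awk '{print $2}' | tr -d '"' | base64 -d > /dev/null \
  && echo "CA data decodes cleanly" \
  || echo "CA data is not valid base64" >&2
```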

@rthallisey
Author

@shyamjvs I'm assuming the developer has set up their infrastructure and deployed a Kubernetes cluster on it. That's why I'm not copying anything to the master and not ssh'ing to the master to do anything except gather the credentials of the existing Kubernetes cluster.

@shyamjvs
Member

shyamjvs commented May 15, 2017

@shyamjvs I'm assuming the developer has set up their infrastructure and deployed a Kubernetes cluster on it.

This assumption is fine if you're talking just about the base cluster. But assuming that you've somehow magically configured the kubemark-master I feel is not apt. IMO you should make only the simplest assumption which is you have an SSH'able machine (identified by MASTER_IP) available to start your "kubemark"-master on it. Then you call start-kubemark.sh which does all the work needed to be done on that VM.

@rthallisey
Author

rthallisey commented May 15, 2017

@shyamjvs I think I am making a pretty simple assumption: the Kubernetes master exists at MASTER_IP. The only requirement from the kubemark side is gathering those credentials from the master and setting them correctly. This change isn't large, but it adds a ton of flexibility to the kubemark scripts because it provides a DIY (do it yourself) option. Whereas the developer can build and test in an environment that's not currently supported without adding new code to the scripts.

@shyamjvs
Member

Whereas the developer can build and test in an environment that's not currently supported without adding new code to the scripts.

@rthallisey SG. That makes it clearer for me now :) However, the reason I suggested assuming just the VM is that it enables people not using any cloud provider to start up a kubemark cluster with just the VM available (e.g. on-prem setups). Besides, not everyone would have enough knowledge to start their own custom Kubernetes master, so this would do all the magic for them. Anyway, we can probably let this be the same for now, and make this assumption pluggable in the future.

Also @wojtek-t / @gmarek , if you have time could you give your opinion/feedback on this PR?

@rthallisey rthallisey changed the title Launch kubemark with an existing Kubernetes cluster Launch kubemark with an existing Kubemark master May 25, 2017
@@ -14,35 +14,13 @@
# See the License for the specific language governing permissions and
# limitations under the License.

source test/kubemark/common/util.sh
Member

If this is a new file that you added, then you didn't "git add" it to this PR.

@@ -14,35 +14,13 @@
# See the License for the specific language governing permissions and
# limitations under the License.

source test/kubemark/common/util.sh

Member

Another thing: please use the convention for sourcing files, e.g. like here:
https://github.com/kubernetes/kubernetes/pull/44394/files#diff-a705eb8c1cb0096615ba0ac989e939f1R21
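
The convention being referred to looks roughly like this (mirroring the later revision of this file in the thread):

```bash
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/../../..

source "${KUBE_ROOT}/test/kubemark/common/util.sh"
```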


**Kubernetes Cluster**
- A real Kubernetes Cluster that has master and minions. The hollow-node pods
are run in this cluster, but appear as nodes to the Kubemark Master
Member

s/Master/cluster/

Author

Wouldn't this be referred to as a cluster? Since it's likely you'll be running many hollow node pods, I'd expect that to require many real nodes. Is the purpose of your comment to make the syntax match the definition above it?

# Kubemark Pre-existing Provider Guide

**Kubemark Master**
- A Kubernetes master running in a VM that will be used as the kubemark master
Member

A set of control plane components running in a VM ....

Author

👍


## Use Case

The goal of the pre-existing provider is to use the kubemark tools with a
Member

s/a/an/

Author

👍

KUBEMARK_IMAGE_MAKE_TARGET="push"
CONTAINER_REGISTRY=docker.io
PROJECT="rthallisey"
KUBEMARK_IMAGE_MAKE_TARGET="push"
Member

What's that?

Author

looks like I had a duplicate

The all in one Kubemark cluster looks like the following:
1) A running Kubernetes cluster pointed to by the local kubeconfig
2) Some hollow-nodes that run on the Kubernetes Cluster from #1
3) The hollow-nodes are configured to talk with the master at #1
Member

I don't think this setup makes any sense.
The setup I was trying to refer to in one of my comments was:
"kubemark master components (kubemark apiserver, kubemark scheduler, ...) are also running as pods in the base cluster".

The setup you describe is not only discouraged - it just doesn't make any sense to me...

Author

I agree this setup may not be 100% practical for testing, but it's possible to do. Therefore, I figured I would document it anyway but provide a warning message.

Member

In my opinion it's not that it's not practical for testing. It's just wrong.

In this setup you don't even have control over how many nodes will be in the cluster. Because depending on whether your hollow-node is scheduled on a real node or on a different "hollow node", it will either be a real node or a not-running pod.

This is just bad and asking for trouble. We should really point out that it's wrong, not just discourage it.

@gmarek FYI

Member

+1 to @wojtek-t.
Reusing the real master as the kubemark master is not a good idea. In my earlier comments I was seeking clarity about it and thought we agreed on the same thing. (#44394 (comment), #44394 (comment))

Author

Sure I agree. I'll make the comment a stronger warning.

Member

@rthallisey - it's not about making a comment. We shouldn't even mention such a thing. We should remove this second setup entirely.

If you want to mention a different setup, the only other reasonable one is that we don't create an additional VM for the master control plane components, but run them as pods on this base Kubernetes cluster.

Author

@wojtek-t 👍 That works for me. I misunderstood your last comment. Removing...

# See the License for the specific language governing permissions and
# limitations under the License.

source test/kubemark/common/util.sh
Member

#
MASTER_IP="${MASTER_IP:-}"

# The container name and registry given to the kubemark container:
Member

There is no container name below - it's just project and container registry.

Author

Agreed, it should be project name.


NUM_NODES="${NUM_NODES:-1}"

SERVICE_CLUSTER_IP_RANGE="${SERVICE_CLUSTER_IP_RANGE:-}"
Member

Sorry, I still don't understand why we need to define those vars.
Either we are sourcing another config farther in the flow (e.g. source cluster/kubemark/gce/config-default.sh), and then none of those should be needed here. Or we are not sourcing it (and then we should source it here instead).

Author

  if [[ -z "${SERVICE_CLUSTER_IP_RANGE:-}" ]]; then
    cluster_range=$(echo "${MASTER_IP}" | awk -F '.' '{printf("%d.%d.%d.0", $1, $2, $3)}')
    SERVICE_CLUSTER_IP_RANGE="${SERVICE_CLUSTER_IP_RANGE:-$cluster_range/16}"
  fi

I can get rid of SERVICE_CLUSTER_IP_RANGE and DNS_DOMAIN. The remaining vars will never be defined if I don't set them.

Author

The file cluster/kubemark/gce/config-default.sh does the exact same thing for some of these variables. For instance TEST_CLUSTER_API_CONTENT_TYPE="${TEST_CLUSTER_API_CONTENT_TYPE:-}" is in there. If we want to have a common place to set these instead of provider config files, it might make it a little more confusing IMO.

Member

We still don't understand each other.
I know that we define those variables in this cluster/kubemark/gce/config-default.sh

And I really want to avoid having a second file that has basically exactly the same content.
My comment wasn't about this particular line - it's about the whole file starting from this line.

@wojtek-t
Member

@gmarek - FYI

@@ -14,35 +14,15 @@
# See the License for the specific language governing permissions and
# limitations under the License.

KUBE_ROOT=$(cd $(dirname "${BASH_SOURCE}")/../../.. && pwd)
Member

No pwd please. Really, the convention in all files is:

KUBE_ROOT=$(dirname "${BASH_SOURCE}")/../../..

Author

👍

4) The hollow-nodes are configured to talk with the kubemark master at #2

When using the pre-existing provider, the developer is responsible for creating
#2. Therefore, the kubemark scripts will not create any infrastructure or start
Member

In fact he is responsible for creating both 1 and 2.

Author

True. My assumption was the developer already knew that, but let's not assume.


@rthallisey
Author

Rebased. Any more comments from folks? @wojtek-t @shyamjvs

@k8s-github-robot k8s-github-robot added needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. and removed needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. labels Jun 6, 2017
@shyamjvs
Member

@rthallisey Sorry for the delay in response, I was busy with a few things important for the release. The PR looks in much better shape now. :) However, it'd be good to get it finally lgtm'd by @wojtek-t (he'll come back from vacation at the start of July).

@shyamjvs shyamjvs added this to the v1.8 milestone Jun 21, 2017
@rthallisey
Author

@shyamjvs Cool 👍. Thanks for the heads up.

Member

@wojtek-t wojtek-t left a comment

Just minor nits - other than that LGTM.

# $CONTAINER_REGISTRY/$PROJECT/kubemark
#
PROJECT="${PROJECT:-}"
CONTAINER_REGISTRY="${CONTAINER_REGISTRY:-}"
Member

Can you please swap the order of those two - to reflect the order in which they are used. I mean:

CONTAINER_REGISTRY="${CONTAINER_REGISTRY:-}"
PROJECT="${PROJECT:-}"

@@ -14,34 +14,15 @@
# See the License for the specific language governing permissions and
# limitations under the License.

KUBE_ROOT=$(dirname "${BASH_SOURCE}")/../../..

source ${KUBE_ROOT}/test/kubemark/common/util.sh
Member

Use " for instructions.

Ryan Hallisey added 2 commits July 5, 2017 09:14
In order to expand the use of kubemark, allow developers to
use kubemark with a pre-existing Kubemark master.
@rthallisey
Author

@wojtek-t rebased and addressed your comments

@wojtek-t
Member

wojtek-t commented Jul 6, 2017

/retest

@wojtek-t
Member

wojtek-t commented Jul 6, 2017

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jul 6, 2017
@wojtek-t
Member

wojtek-t commented Jul 6, 2017

/approve no-issue

@k8s-github-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: rthallisey, shyamjvs, wojtek-t

Associated issue: 44393

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these OWNERS Files:

You can indicate your approval by writing /approve in a comment
You can cancel your approval by writing /approve cancel in a comment

@k8s-github-robot k8s-github-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jul 6, 2017
@wojtek-t
Member

wojtek-t commented Jul 6, 2017

/test pull-kubernetes-e2e-gce-etcd3
/test pull-kubernetes-federation-e2e-gce

@k8s-github-robot

Automatic merge from submit-queue

@k8s-github-robot k8s-github-robot merged commit 20e629b into kubernetes:master Jul 6, 2017
@rthallisey rthallisey deleted the pre-existing-provider branch July 6, 2017 20:59
@Kevin-fqh

Kevin-fqh commented Apr 9, 2018

@rthallisey @shyamjvs how does your kubemark master start? Can the kubemark master and the base k8s cluster's master be the same VM? Could you give more detail about how to use the pre-existing provider? Thanks.

My steps are as follows, but it does not work. Could you give me some help?
1. I create 4 VMs.
2. I start etcd, kube-apiserver, kube-controller-manager, and scheduler on the 1st VM (IP 192.168.132.131), which plays the role of kubemark master.
3. I build a base k8s cluster on the 2nd, 3rd, and 4th VMs (not on any cloud provider); the master is the 2nd VM, with IP 192.168.132.129.
4. 192.168.132.129 can ssh kubernetes@192.168.132.131.
5. On 192.168.132.129, I change test/kubemark/cloud-provider-config.sh to the pre-existing provider with MASTER_IP="192.168.132.131:6443".
6. On 192.168.132.129, I execute k8s.io/kubernetes/test/kubemark/start-kubemark.sh, but some unexpected errors occur, such as:

    [root@f kubernetes]# test/kubemark/start-kubemark.sh
    STARTING SETUP FOR MASTER
    Created the environment file for master.
    MASTER_IP: 192.168.132.131:6443
    MASTER_NAME:
    kubernetes@192.168.132.131's password:
    Attempt 1 failed to ssh kubernetes@192.168.132.131 sudo. Retrying.
    Failed to ssh kubernetes@192.168.132.131 sudo.

But when I execute the cmd ssh kubernetes@192.168.132.131 on the shell directly, it is OK.

I use the binaries to build the base k8s cluster and kubemark master directly.
Must the k8s cluster or kubemark be started in a specific way, such as with kubeadm?

@gaorong
Contributor

gaorong commented Jun 29, 2019

@rthallisey could you please help clarify why we need a dedicated master (the #2 master in the readme file) for the hollow-nodes? It seems one master is enough.

Labels
- approved - Indicates a PR has been approved by an approver from all required OWNERS files.
- cncf-cla: yes - Indicates the PR's author has signed the CNCF CLA.
- lgtm - "Looks good to me", indicates that a PR is ready to be merged.
- release-note-none - Denotes a PR that doesn't merit a release note.
- size/L - Denotes a PR that changes 100-499 lines, ignoring generated files.