Launch kubemark with an existing Kubemark master #44394
Conversation
Hi @rthallisey. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@shyamjvs do you have any comments?
@rthallisey Thank you for the PR. So I assume you intend to enable running kubemark on-premise without any particular cloud provider.
@shyamjvs Yes. The goal would be to let the developer manage their vms/bm setup, but allow them to land a kubemark cluster on that infrastructure. So any infrastructure-related functions would be handled by the user managing their infrastructure.
I see you've defined the ssh command, but didn't define copy-files (which is basically scp). Any reason?
I don't move any files around. What I do there is copy the credentials from the running Kubernetes cluster that the developer provided and pass that around to be used by the kubemark scripts.
But we need a definition for this https://github.com/rthallisey/kubernetes/blob/d25d6298965aae2414540707c74859965a4d9fb0/test/kubemark/start-kubemark.sh#L115
Clean and smooth, @rthallisey! Did a quick first pass. Also, have you verified that this works by running locally?
cluster/pre-existing/util.sh
Outdated
```sh
function detect-project() {
if [[ -z "${MASTER_IP-}" ]]; then
echo "Set 'MASTER_IP' to the instance assigned to be the Kubernetes master" 1>&2
```
Fix indentation, here and below.
done
cluster/pre-existing/util.sh
Outdated
```sh
if [[ -z "${SERVICE_CLUSTER_IP_RANGE-}" ]]; then
octets=($(echo "${MASTER_IP}" | sed -e 's|/.*||' -e 's/\./ /g'))
octets[3]=0
cluster_range=$(echo "${octets[*]}" | sed 's/ /./g')
```
I guess you'd also have to add "/24" to make it a subnet (https://github.com/kubernetes/kubernetes/blob/master/cluster/kubemark/gce/config-default.sh#L89).
Also why not assign a larger CIDR (like /16)?
I can live with the larger one.
cluster/pre-existing/util.sh
Outdated
```sh
if [[ -z "${SERVICE_CLUSTER_IP_RANGE-}" ]]; then
octets=($(echo "${MASTER_IP}" | sed -e 's|/.*||' -e 's/\./ /g'))
octets[3]=0
```
Maybe just `cluster_range=$(echo "${MASTER_IP}" | awk -F '.' '{printf("%d.%d.%d.0", $1, $2, $3)}')`
nice use of awk
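For reference, the two approaches being compared can be sketched as follows; the example IP is purely hypothetical:

```shell
# Hypothetical MASTER_IP used only for illustration.
MASTER_IP="10.240.0.11"

# Original approach: strip any CIDR suffix, split into octets with sed,
# zero the last octet, then rejoin with dots.
octets=($(echo "${MASTER_IP}" | sed -e 's|/.*||' -e 's/\./ /g'))
octets[3]=0
cluster_range=$(echo "${octets[*]}" | sed 's/ /./g')
echo "${cluster_range}/24"   # 10.240.0.0/24

# Suggested one-liner: same result with a single awk invocation.
cluster_range=$(echo "${MASTER_IP}" | awk -F '.' '{printf("%d.%d.%d.0", $1, $2, $3)}')
echo "${cluster_range}/24"   # 10.240.0.0/24
```

Both derive the network address by zeroing the host octet; the awk form just avoids the intermediate array.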
cluster/pre-existing/util.sh
Outdated
```sh
fi
}

function run-cmd-with-retries {
```
Some bugs around this function's logic have been fixed recently. Please incorporate them.
done
cluster/pre-existing/util.sh
Outdated
```sh
}

function execute-cmd-on-master-with-retries() {
run-cmd-with-retries ssh kubernetes@"${MASTER_IP}" $@
```
I guess we have `RETRIES="${2:-1}"` before the command (so we can allow setting RETRIES to a value different from 3). E.g. https://github.com/kubernetes/kubernetes/blob/master/test/kubemark/start-kubemark.sh#L91
done
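The retry wrapper being discussed can be sketched like this; names and defaults here are illustrative, not the exact kubemark implementation:

```shell
# Illustrative retry wrapper: runs "$@" up to RETRIES times (default 3),
# returning as soon as the command succeeds.
# Override per call, e.g.: RETRIES=5 run-cmd-with-retries some-command
function run-cmd-with-retries {
  local retries="${RETRIES:-3}"
  local attempt
  for attempt in $(seq 1 "${retries}"); do
    if "$@"; then
      return 0
    fi
    echo "Attempt ${attempt}/${retries} failed: $*" 1>&2
    sleep 1
  done
  return 1
}
```

Reading the retry count from an environment variable keeps the call sites short while still letting a caller tighten or relax the policy.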
Ah, I see what you're referring to. I don't define copy-files because I don't need any of those scripts since kubernetes is already running in the developer's infra. So that function just gets passed over.
The way I test this is by creating a 3 node kubernetes cluster with Kubeadm and pointing master_ip to the vm with the Kube master. I tested this before I pushed, but I need to rebase and retest since it's >20 days old.
```sh
@@ -0,0 +1,79 @@
#!/bin/bash
```
Is there any reason why you've placed this file here? Given that it defines kubemark's cloud provider interface (the one in test/kubemark/skeleton), test/kubemark/pre-existing/util.sh seems to be the right path to put this.
Yes, the function create-certs in start-kubemark.sh needs to be overridden. Instead of going out to the common script for its definition, I define it here.
Makes sense. You can keep create-certs and detect-project here, but move the other two functions (which define kubemark interface) to test/kubemark/pre-existing/util.sh?
I'll test that out. Though this script only sources cluster scripts and not test/pre-existing/util.sh, so I'm not positive it will work since create-certs uses that func.
We would have it included through https://github.com/kubernetes/kubernetes/blob/master/test/kubemark/start-kubemark.sh#L24 in start-kubemark script.
Ok I'll add it to the next revision.
The function won't get passed over. You'd reach here (https://github.com/rthallisey/kubernetes/blob/d25d6298965aae2414540707c74859965a4d9fb0/test/kubemark/skeleton/util.sh#L52) and return with an error. I guess you'd have to define it, at least as some empty function. And thanks for testing! But please make sure you've thoroughly done it against the new code too.
By passed over I mean it doesn't do anything. Sorry, I should've been more specific. All it does is echo to stderr, which doesn't stop the script from running. But I can add a blank function if that's an issue.
Sorry, I was expecting it to exit with an error when I defined the interface, but it seems like it just writes to stderr (similar to cluster/skeleton). That's fine then, no need to redefine it.
No problem. Just want to be sure we get it right.
I'll update this hopefully later today. At a conference this week :)
Sure.. No problem.
@shyamjvs ok, updated this. When testing I get the error from hollow-node logs, which makes it seem like my local kubeconfig is busted. But I can confirm my kubeconfig works from the CLI. I'll investigate further:
test/kubemark/pre-existing/util.sh
Outdated
```sh
# Leave the skeleton definition of execute-cmd-on-master-with-retries
# so only the pre-existing provider functions will target this.
function execute-cmd-on-pre-existing-master-with-retries() {
```
I don't understand this. Why have you named the function differently?
This would mean that the line calling start-kubemark-master.sh script on the master VM (inside the start-kubemark.sh function) would simply pass without doing anything.
How would we have our kubemark master setup then?
The function `execute-cmd-on-master-with-retries` will be passed over because I don't want to write anything to the master. I only need to read the existing cluster credentials. I commented at the bottom to further explain.
Got it, thanks. The assumption of pre-existing kubemark master wasn't clear to me earlier.
cluster/pre-existing/util.sh
Outdated
```sh
exit 1
fi

if [[ -z "${PROJECT-}" ]]; then
```
You could replace this entire if statement with `PROJECT="${PROJECT:-kubemark}"`
True. I'll add it to the pre-existing defaults.
Since normally detect-project sets the value of PROJECT, I was imitating that here. But adding `PROJECT="${PROJECT:-kubemark}"` to config-default.sh for the pre-existing provider will accomplish the same thing.
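The two forms under discussion are equivalent; a quick sketch:

```shell
# Form 1: explicit if-statement (as in the original patch).
unset PROJECT
if [[ -z "${PROJECT-}" ]]; then
  PROJECT="kubemark"
fi
echo "${PROJECT}"   # kubemark

# Form 2: the suggested one-line default via parameter expansion.
unset PROJECT
PROJECT="${PROJECT:-kubemark}"
echo "${PROJECT}"   # kubemark

# Either way, a user-supplied value wins.
PROJECT="myproject"
PROJECT="${PROJECT:-kubemark}"
echo "${PROJECT}"   # myproject
```

The parameter-expansion form is terser and is the convention the review is asking for.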
cluster/pre-existing/util.sh
Outdated
```sh
source "${KUBE_ROOT}/hack/lib/util.sh"

function detect-project() {
if [[ -z "${MASTER_IP-}" ]]; then
```
Replace '-' with ':-'? It doesn't really matter in this case, but in general we mostly follow the convention of using the latter instead of the former. Their difference (as stated by the bash reference):
> When not performing substring expansion, using the form described below (e.g., ‘:-’), Bash tests for a parameter that is unset or null. Omitting the colon results in a test only for a parameter that is unset. Put another way, if the colon is included, the operator tests for both parameter’s existence and that its value is not null; if the colon is omitted, the operator tests only for existence.
done
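A quick illustration of the quoted distinction, using plain parameter expansion:

```shell
FOO=""                    # set, but null
echo "${FOO-default}"     # prints an empty line: '-' only substitutes when unset
echo "${FOO:-default}"    # prints "default": ':-' also substitutes when null

unset FOO                 # now unset
echo "${FOO-default}"     # prints "default"
echo "${FOO:-default}"    # prints "default"
```

So the two forms only diverge for a variable that is set to the empty string.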
cluster/pre-existing/util.sh
Outdated
```sh
PROJECT="kubemark"
fi

if [[ -z "${SERVICE_CLUSTER_IP_RANGE-}" ]]; then
```
Same as above. Replace with ':-'.
done
cluster/pre-existing/util.sh
Outdated
```sh
# A library of helper functions for a pre-existing Kubernetes cluster
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/../..
source "${KUBE_ROOT}/cluster/common.sh"
```
Add newline before this.
done
cluster/pre-existing/util.sh
Outdated
```sh
# limitations under the License.

# A library of helper functions for a pre-existing Kubernetes cluster
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/../..
```
Add newline before this.
done
cluster/pre-existing/util.sh
Outdated
```sh
fi

if [[ -z "${PROJECT-}" ]]; then
PROJECT="kubemark"
```
IIRC this is used later in start-kubemark.sh for pushing the kubemark image to `${REGISTRY}/${PROJECT}/kubemark`. Assuming some default value might be a bit too vague.
```sh
# Pre-existing cluster expects a MASTER_IP
MASTER_IP="${MASTER_IP:-}"
MASTER_IP_AND_PORT="${MASTER_IP_AND_PORT:-$MASTER_IP:6443}"
```
6443 is the default secure port anyway. Why do you want to add this port?
Unless you want to override the port, you can just let it stay as $MASTER_IP.
I set it to 6443 with the assumption of an existing Kubernetes cluster running that's using that port. This could be solved by the user setting MASTER_IP to include the port, so I suppose documentation here could be an alternative.
```yaml
clusters:
- name: kubemark
  cluster:
    certificate-authority-data: "${CA_CERT_BASE64}"
    server: https://${MASTER_IP_AND_PORT}
```
We weren't including the port in the kubeconfig earlier either. We just passed the IP address in the config and IIUC the port is defaulted. Not sure if hardcoding the default number (when it's applied automatically) is a good idea. WDYT?
Is it really 6443? Isn't it 443?
Seems like it says 6443 here (https://github.com/kubernetes/apiserver/blob/master/pkg/server/config.go#L157) which is what also shows up in the documentation:
```
--secure-port int   The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all. (default 6443)
```
But actually the value being used is 443 (https://github.com/kubernetes/apiserver/blob/master/pkg/server/config.go#L199).
Guess it's a bug, will send a PR fixing this.
Seems like I was looking at the wrong place. It's 6443 here (https://github.com/kubernetes/kubernetes/blob/master/pkg/kubeapiserver/options/serving.go#L39).
I'm not sure I understand why we have this difference across the repos.
I believe 443 is the default. 6443 is the default secure port.
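The defaulting being discussed can be sketched like this; the IP is hypothetical, and 6443 is assumed as the apiserver's default secure port:

```shell
MASTER_IP="10.240.0.11"                                        # hypothetical value
MASTER_IP_AND_PORT="${MASTER_IP_AND_PORT:-${MASTER_IP}:6443}"  # user may pre-set to override
echo "server: https://${MASTER_IP_AND_PORT}"                   # server: https://10.240.0.11:6443
```

A user who needs a different port (e.g. 443 behind a load balancer) can simply export MASTER_IP_AND_PORT before the script runs and the default never applies.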
test/kubemark/pre-existing/util.sh
Outdated
```sh
# See the License for the specific language governing permissions and
# limitations under the License.

function run-cmd-with-retries {
```
It might be good to add a comment here saying where the logic of this function was copied from, so we can keep track of it and also understand the weird-ish things we do inside it.
done
Why was it copied?
If you can't source the file where it is defined for some reason, we should refactor it to a separate file and source only that.
I'll add a common script to source. 👍
test/kubemark/start-kubemark.sh
Outdated
```sh
@@ -370,6 +370,7 @@ echo -e "${color_yellow}STARTING SETUP FOR MASTER${color_norm}"
find-release-tars
create-master-environment-file
create-master-instance-with-resources
MASTER_IP_AND_PORT="${MASTER_IP_AND_PORT:-$MASTER_IP}"
```
This is redundant. You're doing it inside the config-default.sh anyway.
Since I changed a few variables from MASTER_IP to MASTER_IP_AND_PORT in start-kubemark.sh, MASTER_IP_AND_PORT will need to be set to MASTER_IP in the case that the dev is not using the pre-existing provider. That's what this does. But I could also do what I mentioned above: document that, when using the pre-existing provider, the dev should include the port in the MASTER_IP variable.
Aah I see. Makes sense.
@rthallisey Thanks for addressing the comments. One question I have is: are you assuming your kubemark master VM is already all set up (with the master components up and running)? And regarding:
You can try writing the kubelet's kubeconfig (https://github.com/rthallisey/kubernetes/blob/9214e9edb415f225102dea9f7ba65caa7a741c54/test/kubemark/start-kubemark.sh#L208) to a file and then see what's wrong with it. Seems like some error with parsing json.
@shyamjvs I'm assuming the developer has set up their infrastructure and deployed a Kubernetes cluster on it. That's why I'm not copying anything to the master and not ssh'ing to the master to do anything except gather the credentials of the existing Kubernetes cluster.
This assumption is fine if you're talking just about the base cluster. But assuming that you've somehow magically configured the kubemark-master I feel is not apt. IMO you should make only the simplest assumption, which is that you have an SSH'able machine (identified by MASTER_IP) available to start your "kubemark"-master on it. Then you call start-kubemark.sh which does all the work needed to be done on that VM.
@shyamjvs I think I am making a pretty simple assumption. The Kubernetes Master exists at
@rthallisey SG. That makes it clearer for me now :) However, the reason I was suggesting assuming just the VM is that it enables people not using any cloud provider to start up a kubemark cluster with just the VM available (e.g. on-prem setups). Besides, not everyone would have enough knowledge to start their own custom kubernetes master, so this would do all the magic for them. Anyway, we can probably let this be the same for now, and make this assumption pluggable in future. Also @wojtek-t / @gmarek, if you have time could you give your opinion/feedback on this PR?
test/kubemark/gce/util.sh
Outdated
```sh
@@ -14,35 +14,13 @@
# See the License for the specific language governing permissions and
# limitations under the License.

source test/kubemark/common/util.sh
```
If this is a new file that you added, then you didn't "git add" it to this PR.
test/kubemark/gce/util.sh
Outdated
```sh
@@ -14,35 +14,13 @@
# See the License for the specific language governing permissions and
# limitations under the License.

source test/kubemark/common/util.sh
```
Another thing: please use the convention for sourcing files, e.g. like here:
https://github.com/kubernetes/kubernetes/pull/44394/files#diff-a705eb8c1cb0096615ba0ac989e939f1R21
test/kubemark/pre-existing/README.md
Outdated
```md
**Kubernetes Cluster**
- A real Kubernetes Cluster that has master and minions. The hollow-node pods
  are run in this cluster, but appear as nodes to the Kubemark Master
```
s/Master/cluster/
Wouldn't this be referred to as a cluster? Since it's likely you'll be running many hollow node pods, I'd expect that to require many real nodes. Is the purpose of your comment to make the syntax match the definition above it?
test/kubemark/pre-existing/README.md
Outdated
```md
# Kubemark Pre-existing Provider Guide

**Kubemark Master**
- A Kubernetes master running in a VM that will be used as the kubemark master
```
A set of control plane components running in a VM ....
👍
test/kubemark/pre-existing/README.md
Outdated
```md
## Use Case

The goal of the pre-existing provider is to use the kubemark tools with a
```
s/a/an/
👍
test/kubemark/pre-existing/README.md
Outdated
```sh
KUBEMARK_IMAGE_MAKE_TARGET="push"
CONTAINER_REGISTRY=docker.io
PROJECT="rthallisey"
KUBEMARK_IMAGE_MAKE_TARGET="push"
```
What's that?
looks like I had a duplicate
test/kubemark/pre-existing/README.md
Outdated
```md
The all in one Kubemark cluster looks like the following:
1) A running Kubernetes cluster pointed to by the local kubeconfig
2) Some hollow-nodes that run on the Kubernetes Cluster from #1
3) The hollow-nodes are configured to talk with the master at #1
```
I don't think this setup makes any sense.
The setup I was trying to refer to in one of my comments was:
"kubemark master components (kubemark apiserver, kubemark scheduler, ...) are also running as pods in the base cluster".
The setup you describe is not only discouraged - it just doesn't make any sense to me...
I agree this setup may not be 100% practical for testing, but it's possible to do. Therefore, I figured I would document it anyway but provide a warning message.
In my opinion it's not that it's not practical for testing. It's just wrong.
In this setup you don't even have control over how many nodes will be in the cluster, because depending on whether your hollow-node is scheduled on a real node or on a different "hollow node", it will either be a real node or a not-running pod.
This is just bad and asking for trouble. We should really point out that it's wrong, not just discourage it.
@gmarek FYI
+1 to @wojtek-t. Reusing the real master as kubemark master is not a good idea. In my earlier comments I was seeking clarity about it and thought we agreed upon the same thing (#44394 (comment), #44394 (comment)).
Sure I agree. I'll make the comment a stronger warning.
@rthallisey - it's not about making a comment. We shouldn't even mention such a thing. We should remove this second setup altogether.
If you want to mention a different setup, the only other reasonable one is that we don't create an additional VM for the master control plane components, but run them as pods on this base kubernetes cluster.
@wojtek-t 👍 That works for me. I misunderstood your last comment. Removing...
test/kubemark/pre-existing/util.sh
Outdated
```sh
# See the License for the specific language governing permissions and
# limitations under the License.

source test/kubemark/common/util.sh
```
No relative paths please - please source e.g. as here:
https://github.com/kubernetes/kubernetes/pull/44394/files#diff-a705eb8c1cb0096615ba0ac989e939f1R23
```sh
#
MASTER_IP="${MASTER_IP:-}"

# The container name and registry given to the kubemark container:
```
There is no container name below - it's just project and container registry.
Agreed, it should be project name.
```sh
NUM_NODES="${NUM_NODES:-1}"

SERVICE_CLUSTER_IP_RANGE="${SERVICE_CLUSTER_IP_RANGE:-}"
```
Sorry, I still don't understand why we need to define those vars. Either we are sourcing another config farther in the flow (e.g. source cluster/kubemark/gce/config-default.sh) and then none of those should be needed, or we are not sourcing it (and then we should source it here instead).
```sh
if [[ -z "${SERVICE_CLUSTER_IP_RANGE:-}" ]]; then
  cluster_range=$(echo "${MASTER_IP}" | awk -F '.' '{printf("%d.%d.%d.0", $1, $2, $3)}')
  SERVICE_CLUSTER_IP_RANGE="${SERVICE_CLUSTER_IP_RANGE:-$cluster_range/16}"
fi
```
I can get rid of SERVICE_CLUSTER_IP_RANGE and DNS_DOMAIN. The remaining vars will never be defined if I don't set them.
The file cluster/kubemark/gce/config-default.sh does the exact same thing for some of these variables. For instance, `TEST_CLUSTER_API_CONTENT_TYPE="${TEST_CLUSTER_API_CONTENT_TYPE:-}"` is in there. If we want a common place to set these instead of the provider config files, it might make it a little more confusing IMO.
We still don't understand each other.
I know that we define those variables in cluster/kubemark/gce/config-default.sh, and I really want to avoid having a second file with basically exactly the same content. My comment wasn't about this particular line - it's about the whole file starting from this line.
@gmarek - FYI
test/kubemark/gce/util.sh
Outdated
```sh
@@ -14,35 +14,15 @@
# See the License for the specific language governing permissions and
# limitations under the License.

KUBE_ROOT=$(cd $(dirname "${BASH_SOURCE}")/../../.. && pwd)
```
No pwd please. Really, the convention in all files is:
`KUBE_ROOT=$(dirname "${BASH_SOURCE}")/../../..`
👍
test/kubemark/pre-existing/README.md
Outdated
```md
4) The hollow-nodes are configured to talk with the kubemark master at #2

When using the pre-existing provider, the developer is responsible for creating
#2. Therefore, the kubemark scripts will not create any infrastructure or start
```
In fact he is responsible for creating both 1 and 2.
True. My assumption was the developer already knew that, but let's not assume.
test/kubemark/pre-existing/README.md
Outdated
The all in one Kubemark cluster looks like the following: | ||
1) A running Kubernetes cluster pointed to by the local kubeconfig | ||
2) Some hollow-nodes that run on the Kubernetes Cluster from #1 | ||
3) The hollow-nodes are configured to talk with the master at #1 |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
In my opinion it's not that it's not practical for testing. It's just wrong.
In this setup you don't even have a control over how many nodes will be in the cluster. Because depending whether your hollow-node will be scheduled on a real node or on a different "hollow node" it will either be a real node, or not-running pod.
This is just bad and asking for troubles. We should really point that it's wrong. Not just discourage.
@gmarek FYI
@rthallisey Sorry for the delay in response, I was busy with a few things important for the release. The PR looks in much better shape now. :) However, it'd be good to get it finally lgtm'd by @wojtek-t (he'll come back from vacation at the start of July).
@shyamjvs Cool 👍. Thanks for the heads up.
Just minor nits - other than that LGTM.
```sh
# $CONTAINER_REGISTRY/$PROJECT/kubemark
#
PROJECT="${PROJECT:-}"
CONTAINER_REGISTRY="${CONTAINER_REGISTRY:-}"
```
Can you please swap the order of those two - to reflect the order in which they are used. I mean:
```sh
CONTAINER_REGISTRY="${CONTAINER_REGISTRY:-}"
PROJECT="${PROJECT:-}"
```
test/kubemark/gce/util.sh
Outdated
```sh
@@ -14,34 +14,15 @@
# See the License for the specific language governing permissions and
# limitations under the License.

KUBE_ROOT=$(dirname "${BASH_SOURCE}")/../../..

source ${KUBE_ROOT}/test/kubemark/common/util.sh
```
Use `"` quotes around `${KUBE_ROOT}` when sourcing.
In order to expand the use of kubemark, allow developers to use kubemark with a pre-existing Kubemark master.
@wojtek-t rebased and addressed your comments
/retest
/lgtm
/approve no-issue
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: rthallisey, shyamjvs, wojtek-t Associated issue: 44393 The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these OWNERS Files:
You can indicate your approval by writing
/test pull-kubernetes-e2e-gce-etcd3
Automatic merge from submit-queue |
@rthallisey @shyamjvs how does your kubemark master start? Can the kubemark master and the base k8s cluster's master be the same VM? Could you give more detail about how to use the pre-existing provider? Thanks. My steps are as follows, but it doesn't work. Could you give me some help?
But when I execute the cmd, I use the binaries to build the base k8s cluster and kubemark master directly.
@rthallisey could you please help clarify why we need a dedicated master (the #2 master in the readme file) used for the hollow-nodes? It seems one master is enough.
In order to expand the use of kubemark, allow developers to use kubemark with a pre-existing Kubernetes cluster.
Ref issue #44393