Refactored kubemark code into provider-specific and provider-independent parts [Part-3] #41134

Merged: 1 commit merged on Feb 15, 2017.
1 change: 1 addition & 0 deletions cluster/kubemark/gce/config-default.sh
@@ -43,6 +43,7 @@ NETWORK=${KUBE_GCE_NETWORK:-default}
INSTANCE_PREFIX="${INSTANCE_PREFIX:-"default"}"
MASTER_NAME="${INSTANCE_PREFIX}-kubemark-master"
MASTER_TAG="kubemark-master"
EVENT_STORE_NAME="${INSTANCE_PREFIX}-event-store"
MASTER_IP_RANGE="${MASTER_IP_RANGE:-10.246.0.0/24}"
CLUSTER_IP_RANGE="${CLUSTER_IP_RANGE:-10.240.0.0/11}"
RUNTIME_CONFIG="${KUBE_RUNTIME_CONFIG:-}"
3 changes: 3 additions & 0 deletions test/kubemark/cloud-provider-config.sh
@@ -15,3 +15,6 @@
# limitations under the License.

CLOUD_PROVIDER="${CLOUD_PROVIDER:-gce}"
CONTAINER_REGISTRY="${CONTAINER_REGISTRY:-gcr.io}"
PROJECT="${PROJECT:-}"


I think we should have a default value for PROJECT. If PROJECT gets passed into make as '', then the image will be built as gcr.io//kubemark.

Member Author:

We can set the project if we want to, but it's fine even otherwise, as we fall back to detect-project setting $PROJECT.

KUBEMARK_IMAGE_MAKE_TARGET="${KUBEMARK_IMAGE_MAKE_TARGET:-gcloudpush}"
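The reviewer's concern can be reproduced in isolation. A minimal sketch (the `kubemark_image_name` helper is hypothetical, composing the image name the way the build would from these two variables):

```shell
#!/usr/bin/env bash
# Hypothetical helper: compose the kubemark image name from the two config
# variables. With PROJECT unset or empty, the name degenerates to
# "gcr.io//kubemark" (note the double slash).
kubemark_image_name() {
  local registry="${CONTAINER_REGISTRY:-gcr.io}"
  local project="${PROJECT:-}"
  echo "${registry}/${project}/kubemark"
}
```

Setting a non-empty default for PROJECT, or relying on detect-project to fill it in, avoids the malformed name.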
74 changes: 0 additions & 74 deletions test/kubemark/common.sh

This file was deleted.

59 changes: 29 additions & 30 deletions test/kubemark/gce/util.sh
@@ -14,6 +14,35 @@
# See the License for the specific language governing permissions and
# limitations under the License.

# Wrapper for gcloud compute, running it $RETRIES times in case of failures.
# Args:
#   $@: everything that goes after 'gcloud compute'
function run-gcloud-compute-with-retries {
  RETRIES="${RETRIES:-3}"
  local gcloud_result ret_val
  for attempt in $(seq 1 ${RETRIES}); do
    # Capture output and exit status in separate steps: combining 'local' with
    # the command substitution would make $? report the status of 'local'
    # itself, and 'local -r' would fail on redeclaration in later iterations.
    gcloud_result=$(gcloud compute "$@" 2>&1)
    ret_val="$?"
    echo "${gcloud_result}"
    if [[ "${ret_val}" -ne "0" ]]; then
      if [[ $(echo "${gcloud_result}" | grep -c "already exists") -gt 0 ]]; then
        if [[ "${attempt}" == 1 ]]; then
          echo -e "${color_red}Failed to $1 $2 $3 as the resource hasn't been deleted from a previous run.${color_norm}" >&2
          exit 1
        fi
        echo -e "${color_yellow}Succeeded to $1 $2 $3 in the previous attempt, but no status response was received.${color_norm}"
        return 0
      fi
      echo -e "${color_yellow}Attempt ${attempt} failed to $1 $2 $3. Retrying.${color_norm}" >&2
      sleep $((attempt * 5))
    else
      echo -e "${color_green}Succeeded to gcloud compute $1 $2 $3.${color_norm}"
      return 0
    fi
  done
  echo -e "${color_red}Failed to $1 $2 $3.${color_norm}" >&2
  exit 1
}
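The retry-with-backoff pattern above can be exercised without gcloud. A minimal, self-contained sketch (the `flaky` helper is hypothetical, simulating a command that fails twice before succeeding):

```shell
#!/usr/bin/env bash
# Generic form of the retry wrapper above: run "$@" up to RETRIES times,
# sleeping (attempt) seconds between failed attempts.
run_with_retries() {
  local retries="${RETRIES:-3}"
  local attempt
  for attempt in $(seq 1 "${retries}"); do
    if "$@"; then
      return 0
    fi
    sleep "${attempt}"  # linear backoff, analogous to (attempt * 5) above
  done
  return 1
}

# Hypothetical flaky command: fails on its first two calls, then succeeds.
flaky_calls=0
flaky() {
  flaky_calls=$((flaky_calls + 1))
  [ "${flaky_calls}" -ge 3 ]
}
```

With RETRIES at least 3, `run_with_retries flaky` succeeds on the third attempt.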

function create-master-instance-with-resources {
GCLOUD_COMMON_ARGS="--project ${PROJECT} --zone ${ZONE}"

Expand Down Expand Up @@ -103,33 +132,3 @@ function delete-master-instance-and-resources {
${GCLOUD_COMMON_ARGS} || true
fi
}

function delete-master-instance-and-resources {
  GCLOUD_COMMON_ARGS="--project ${PROJECT} --zone ${ZONE} --quiet"

  gcloud compute instances delete "${MASTER_NAME}" \
    ${GCLOUD_COMMON_ARGS} || true

  gcloud compute disks delete "${MASTER_NAME}-pd" \
    ${GCLOUD_COMMON_ARGS} || true

  gcloud compute disks delete "${MASTER_NAME}-event-pd" \
    ${GCLOUD_COMMON_ARGS} &> /dev/null || true

  gcloud compute addresses delete "${MASTER_NAME}-ip" \
    --project "${PROJECT}" \
    --region "${REGION}" \
    --quiet || true

  gcloud compute firewall-rules delete "${MASTER_NAME}-https" \
    --project "${PROJECT}" \
    --quiet || true

  if [ "${SEPARATE_EVENT_MACHINE:-false}" == "true" ]; then
    gcloud compute instances delete "${EVENT_STORE_NAME}" \
      ${GCLOUD_COMMON_ARGS} || true

    gcloud compute disks delete "${EVENT_STORE_NAME}-pd" \
      ${GCLOUD_COMMON_ARGS} || true
  fi
}
18 changes: 0 additions & 18 deletions test/kubemark/get-real-pod-for-hollow-node.sh

This file was deleted.

4 changes: 2 additions & 2 deletions test/kubemark/resources/hollow-node_template.json
@@ -52,7 +52,7 @@
"containers": [
{
"name": "hollow-kubelet",
"image": "gcr.io/{{project}}/kubemark:latest",
"image": "{{registry}}/{{project}}/kubemark:latest",
"ports": [
{"containerPort": 4194},
{"containerPort": 10250},
@@ -106,7 +106,7 @@
},
{
"name": "hollow-proxy",
"image": "gcr.io/{{project}}/kubemark:latest",
"image": "{{registry}}/{{project}}/kubemark:latest",
"env": [
{
"name": "CONTENT_TYPE",


Farther down, the container image gcr.io/google_containers/node-problem-detector should also be using {{registry}}/{{project}}.

Member Author:

I'm not sure about this, because node-problem-detector, just like other components (apiserver, scheduler, etc.), is part of proper versioned releases, unlike kubemark, which we just build and push locally to the project on the go (as we want it to be built against HEAD).

Member Author:

However, if we want to experiment with different images, we can push them to {{registry}}/{{project}} and use them within the project. So probably this is something we'd want to do in future.

4 changes: 1 addition & 3 deletions test/kubemark/run-e2e-tests.sh
@@ -22,9 +22,7 @@ KUBE_ROOT=$(dirname "${BASH_SOURCE}")/../..
# We need an absolute path to KUBE_ROOT
ABSOLUTE_ROOT=$(readlink -f ${KUBE_ROOT})

source "${KUBE_ROOT}/test/kubemark/cloud-provider-config.sh"
source "${KUBE_ROOT}/cluster/kubemark/util.sh"
source "${KUBE_ROOT}/cluster/kubemark/${CLOUD_PROVIDER}/config-default.sh"

echo "Kubemark master name: ${MASTER_NAME}"

@@ -35,7 +33,7 @@ export KUBECONFIG="${ABSOLUTE_ROOT}/test/kubemark/resources/kubeconfig.kubemark"
export E2E_MIN_STARTUP_PODS=0

if [[ -z "$@" ]]; then
ARGS='--ginkgo.focus=should\sallow\sstarting\s30\spods\sper\snode'
ARGS='--ginkgo.focus=\[Feature:performance\]'
else
ARGS=$@
fi
28 changes: 23 additions & 5 deletions test/kubemark/start-kubemark.sh
@@ -19,10 +19,27 @@
TMP_ROOT="$(dirname "${BASH_SOURCE}")/../.."
KUBE_ROOT=$(readlink -e ${TMP_ROOT} 2> /dev/null || perl -MCwd -e 'print Cwd::abs_path shift' ${TMP_ROOT})

source "${KUBE_ROOT}/test/kubemark/common.sh"
source "${KUBE_ROOT}/test/kubemark/skeleton/util.sh"
source "${KUBE_ROOT}/test/kubemark/cloud-provider-config.sh"
source "${KUBE_ROOT}/test/kubemark/${CLOUD_PROVIDER}/util.sh"
source "${KUBE_ROOT}/cluster/kubemark/${CLOUD_PROVIDER}/config-default.sh"
source "${KUBE_ROOT}/cluster/kubemark/util.sh"


Both run-e2e-tests.sh and start-kubemark.sh source multiple scripts that get sourced again by ${KUBE_ROOT}/cluster/kubemark/util.sh. We can use ${KUBE_ROOT}/cluster/kubemark/util.sh to source the common scripts between run-e2e-tests.sh and start-kubemark.sh.

Member Author:

Well believe me, the order of sourcing scripts is a very sensitive issue and after investing a few hours on this, I finally got it working. :)
It's a long answer to your question, but here's a crisp reply:

  • We need to source cluster/kubemark/${CLOUD_PROVIDER}/config-default.sh after cluster/kubemark/util.sh, as cluster/kubemark/util.sh calls gce's config-default.sh somewhere inside which sets MASTER_NAME to "kubernetes-master", while we want it to be set as "kubernetes-kubemark-master"
  • We also need to source cluster/kubemark/${CLOUD_PROVIDER}/config-default.sh before cluster/kubemark/util.sh, as otherwise some variables would be set by cluster/kubemark/util.sh that will not be overwritten by cluster/kubemark/${CLOUD_PROVIDER}/config-default.sh, due to assignments of the form VAR="${VAR:-"foobar"}".
  • And regarding the common scripts that you've mentioned, the only common script called (at the top-level) by run-e2e-tests.sh and start-kubemark.sh is cluster/kubemark/util.sh, which should be fine. On the other hand, scripts test/kubemark/skeleton/util.sh and test/kubemark/${CLOUD_PROVIDER}/util.sh (which overwrites the skeleton/util.sh) should only be called in start- and stop-kubemark scripts as those are the only places where we need those functions.

Hope that answers some questions about why I did stuff the way it is done :)
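The `VAR="${VAR:-default}"` behavior that forces this ordering can be shown in isolation (variable values here are illustrative, not taken verbatim from the scripts):

```shell
#!/usr/bin/env bash
# An assignment of the form VAR="${VAR:-default}" only takes effect when VAR
# is unset or empty, so whichever sourced script assigns first wins.
MASTER_NAME="${MASTER_NAME:-kubernetes-master}"          # as if set by util.sh
MASTER_NAME="${MASTER_NAME:-kubernetes-kubemark-master}" # later default: no-op
```

This is why config-default.sh has to run before util.sh for its values to stick, yet also after it for values util.sh sets unconditionally.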

Member Author:

We probably should make all this sourcing business cleaner, but that's for another day!


Thanks for the detailed answer. I suppose we can clean it another day :).


# hack/lib/init.sh will overwrite ETCD_VERSION, if it is unset, with the
# default from hack/lib/etcd.sh.
# To avoid that, if it is empty, we set it to 'avoid-overwrite' and
# clean it up afterwards.
if [ -z "${ETCD_VERSION:-}" ]; then
  ETCD_VERSION="avoid-overwrite"
fi
source "${KUBE_ROOT}/hack/lib/init.sh"
if [ "${ETCD_VERSION:-}" == "avoid-overwrite" ]; then
  ETCD_VERSION=""
fi
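The sentinel trick above generalizes to protecting any deliberately-empty variable from a sourced script's `${VAR:-default}` assignment. A sketch (`fake_init` is a hypothetical stand-in for hack/lib/init.sh, and the version number is illustrative):

```shell
#!/usr/bin/env bash
unset ETCD_VERSION  # start unset for the demonstration

# Stand-in for hack/lib/init.sh, which replaces an empty ETCD_VERSION
# with its own default.
fake_init() {
  ETCD_VERSION="${ETCD_VERSION:-3.0.14}"
}

# Sentinel pattern: mark the deliberately-empty value, source the script,
# then restore emptiness.
if [ -z "${ETCD_VERSION:-}" ]; then
  ETCD_VERSION="avoid-overwrite"
fi
fake_init
if [ "${ETCD_VERSION:-}" = "avoid-overwrite" ]; then
  ETCD_VERSION=""
fi
```

Without the sentinel, the sourced default would silently win over the intentional empty value.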

KUBECTL="${KUBE_ROOT}/cluster/kubectl.sh"
KUBEMARK_DIRECTORY="${KUBE_ROOT}/test/kubemark"
RESOURCE_DIRECTORY="${KUBEMARK_DIRECTORY}/resources"

# Write all environment variables that we need to pass to the kubemark master,
# locally to the file ${RESOURCE_DIRECTORY}/kubemark-master-env.sh.
@@ -153,7 +170,6 @@ EOF
# Finds the right kubemark binary for 'linux/amd64' platform and uses it to
# create a docker image for hollow-node and upload it to the appropriate
# docker container registry for the cloud provider.
# TODO(shyamjvs): Make the image upload URL and makefile variable w.r.t. provider.
function create-and-upload-hollow-node-image {
MAKE_DIR="${KUBE_ROOT}/cluster/images/kubemark"
KUBEMARK_BIN="$(kube::util::find-binary-for-platform kubemark linux/amd64)"
@@ -168,7 +184,7 @@ function create-and-upload-hollow-node-image {
cd "${MAKE_DIR}"
RETRIES=3
for attempt in $(seq 1 ${RETRIES}); do
if ! make; then
if ! REGISTRY="${CONTAINER_REGISTRY}" PROJECT="${PROJECT}" make "${KUBEMARK_IMAGE_MAKE_TARGET}"; then
if [[ $((attempt)) -eq "${RETRIES}" ]]; then
echo "${color_red}Make failed. Exiting.${color_norm}"
exit 1
@@ -281,7 +297,6 @@ current-context: kubemark-context")
--from-literal=npd.kubeconfig="${NPD_KUBECONFIG_CONTENTS}"

# Create addon pods.
# TODO(shyamjvs): Make path to docker image variable in heapster_template.json.
mkdir -p "${RESOURCE_DIRECTORY}/addons"
sed "s/{{MASTER_IP}}/${MASTER_IP}/g" "${RESOURCE_DIRECTORY}/heapster_template.json" > "${RESOURCE_DIRECTORY}/addons/heapster.json"
metrics_mem_per_node=4
@@ -293,8 +308,8 @@ current-context: kubemark-context")
"${KUBECTL}" create -f "${RESOURCE_DIRECTORY}/addons" --namespace="kubemark"

# Create the replication controller for hollow-nodes.
# TODO(shyamjvs): Make path to docker image variable in hollow-node_template.json.
sed "s/{{numreplicas}}/${NUM_NODES:-10}/g" "${RESOURCE_DIRECTORY}/hollow-node_template.json" > "${RESOURCE_DIRECTORY}/hollow-node.json"
sed -i'' -e "s/{{registry}}/${CONTAINER_REGISTRY}/g" "${RESOURCE_DIRECTORY}/hollow-node.json"
sed -i'' -e "s/{{project}}/${PROJECT}/g" "${RESOURCE_DIRECTORY}/hollow-node.json"
sed -i'' -e "s/{{master_ip}}/${MASTER_IP}/g" "${RESOURCE_DIRECTORY}/hollow-node.json"
"${KUBECTL}" create -f "${RESOURCE_DIRECTORY}/hollow-node.json" --namespace="kubemark"
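The placeholder substitution can be checked in isolation (the one-line template and the values here are illustrative, not the full hollow-node_template.json):

```shell
#!/usr/bin/env bash
# Render the {{registry}} and {{project}} placeholders the same way
# start-kubemark.sh rewrites the hollow-node template.
CONTAINER_REGISTRY="gcr.io"
PROJECT="my-project"
template='"image": "{{registry}}/{{project}}/kubemark:latest"'
rendered=$(echo "${template}" \
  | sed -e "s/{{registry}}/${CONTAINER_REGISTRY}/g" \
        -e "s/{{project}}/${PROJECT}/g")
echo "${rendered}"
```

Note that this simple `s///` form only works while the substituted values contain no slashes; a registry path with a `/` would need a different sed delimiter.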
@@ -338,8 +353,11 @@ function wait-for-hollow-nodes-to-run-or-timeout {
}

############################### Main Function ########################################
detect-project &> /dev/null


Could add this as a skeleton function. An external provider would use detect-project to search for an existing Kubernetes master, but that can also be handled in another patch.

Member Author:

I'm not really convinced that we should have this function as part of the kubemark skeleton, because it should be defined upstream, in the cloud provider's util, like how we currently have it defined in cluster/$CLOUD_PROVIDER/util.sh (for gce, aws and gke).


# Setup for master.
echo -e "${color_yellow}STARTING SETUP FOR MASTER${color_norm}"
find-release-tars
create-master-environment-file
create-master-instance-with-resources
generate-pki-config
9 changes: 8 additions & 1 deletion test/kubemark/stop-kubemark.sh
@@ -18,10 +18,17 @@

KUBE_ROOT=$(dirname "${BASH_SOURCE}")/../..

source "${KUBE_ROOT}/test/kubemark/common.sh"
source "${KUBE_ROOT}/test/kubemark/skeleton/util.sh"
source "${KUBE_ROOT}/test/kubemark/cloud-provider-config.sh"
source "${KUBE_ROOT}/test/kubemark/${CLOUD_PROVIDER}/util.sh"
source "${KUBE_ROOT}/cluster/kubemark/${CLOUD_PROVIDER}/config-default.sh"
source "${KUBE_ROOT}/cluster/kubemark/util.sh"

KUBECTL="${KUBE_ROOT}/cluster/kubectl.sh"
KUBEMARK_DIRECTORY="${KUBE_ROOT}/test/kubemark"
RESOURCE_DIRECTORY="${KUBEMARK_DIRECTORY}/resources"

detect-project &> /dev/null

"${KUBECTL}" delete -f "${RESOURCE_DIRECTORY}/addons" &> /dev/null || true
"${KUBECTL}" delete -f "${RESOURCE_DIRECTORY}/hollow-node.json" &> /dev/null || true