improvement(k8s): use deploymentRegistry with in-cluster building
Previously we would always deploy and use the in-cluster registry
when building in-cluster. Now we allow using the configured
`deploymentRegistry`, which is often preferable to (and more
scalable than) the simpler in-cluster registry.

Closes #1034
edvald committed Feb 14, 2020
1 parent def652c commit ef2ab15
Showing 18 changed files with 357 additions and 102 deletions.
2 changes: 1 addition & 1 deletion .circleci/config.yml
@@ -446,7 +446,7 @@ jobs:
environment:
K8S_VERSION: <<parameters.kubernetesVersion>>
MINIKUBE_VERSION: v1.5.2
GARDEN_LOG_LEVEL: silly
GARDEN_LOG_LEVEL: debug
GARDEN_LOGGER_TYPE: basic
steps:
- checkout
31 changes: 24 additions & 7 deletions docs/guides/in-cluster-building.md
@@ -25,10 +25,8 @@ DigitalOcean (track [issue #877](https://github.com/garden-io/garden/issues/877)

Specifically, the clusters need the following:

- Support for `hostPort`, and for reaching `hostPort`s from the node/Kubelet. This should work out-of-the-box in most
standard setups, but clusters using Cilium for networking may need to configure this specifically, for example.
- At least 2GB of RAM _on top of your own service requirements_. More RAM is strongly recommended if you have many
concurrent developers or CI builds.
- Support for `hostPort`, and for reaching `hostPort`s from the node/Kubelet. This should work out-of-the-box in most standard setups, but clusters using Cilium for networking may need to configure this specifically, for example.
- At least 2GB of RAM _on top of your own service requirements_. More RAM is strongly recommended if you have many concurrent developers or CI builds.
- Support for `PersistentVolumeClaim`s and enough disk space for layer caches and the in-cluster image registry.

You can—_and should_—adjust the allocated resources and storage in the provider configuration, under
@@ -69,9 +67,7 @@ In this mode, builds are executed as follows:
After enabling this mode (we currently still default to the `local-docker` mode), you will need to run `garden plugins kubernetes cluster-init --env=<env-name>` for each applicable environment, in order to install the required cluster-wide services. Those services include the Docker daemon itself, as well as an image registry, a sync service for receiving build contexts, two persistent volumes, an NFS volume provisioner for one of those volumes, and a couple of small utility services.
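
For reference, enabling one of the in-cluster build modes is done via the provider's `buildMode` field. A minimal sketch, with placeholder project and value names, following the same style as the examples further down:

```yaml
kind: Project
name: my-project
...
providers:
  - name: kubernetes
    buildMode: cluster-docker # or "kaniko"
    ...
```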

Make sure your cluster has enough resources and storage to support the required services, and keep in mind that these
services are shared across all users of the cluster. Please look at the
[resources](../providers/kubernetes.md#providersresources) and
[storage](../providers/kubernetes.md#providersstorage) sections in the provider reference for
services are shared across all users of the cluster. Please look at the [resources](../providers/kubernetes.md#providersresources) and [storage](../providers/kubernetes.md#providersstorage) sections in the provider reference for
details.
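
As a rough sketch, such overrides sit under the provider's `resources` and `storage` keys; the `builder` sub-keys and the millicpu/megabyte units shown below are assumptions for illustration and should be checked against the linked reference sections:

```yaml
providers:
  - name: kubernetes
    ...
    resources:
      builder:
        limits:
          cpu: 4000     # millicpu (assumed unit, see the resources reference)
          memory: 8192  # megabytes (assumed unit)
    storage:
      builder:
        size: 20480     # megabytes (assumed key and unit, see the storage reference)
```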

### Kaniko
@@ -164,3 +160,24 @@ providers:
This registry auth secret will then be copied and passed to the in-cluster builder. You can specify as many as you like, and they will be merged together.

> Note: Any time you add or modify imagePullSecrets after first initializing your cluster, you need to run `garden plugins kubernetes cluster-init` again for them to work when pulling base images!

## Using private registries for deployments

You can also use your private registry to store images after building and for deployment. If you've completed the steps above for configuring your `imagePullSecrets`, you can also configure a `deploymentRegistry` in your provider configuration:

```yaml
kind: Project
name: my-project
...
providers:
- name: kubernetes
...
imagePullSecrets:
- name: my-registry-secret
namespace: default
deploymentRegistry:
hostname: my-private-registry.com
namespace: my-project # <--- make sure your configured imagePullSecrets can write to repos in this namespace
```

This is often more scalable than using the default in-cluster registry, and may fit better with existing deployment pipelines. Just make sure the configured `imagePullSecrets` have the privileges to push to repos in the configured namespace.
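
For completeness, the secret referenced by `imagePullSecrets` above is a regular Kubernetes `kubernetes.io/dockerconfigjson` secret. A minimal sketch, with placeholder registry hostname and credentials:

```yaml
apiVersion: v1
kind: Secret
type: kubernetes.io/dockerconfigjson
metadata:
  name: my-registry-secret   # referenced by the imagePullSecrets entry above
  namespace: default
stringData:
  .dockerconfigjson: |
    {
      "auths": {
        "my-private-registry.com": { "auth": "<base64-encoded username:password>" }
      }
    }
```

With `cluster-docker` or `kaniko` build modes, this registry auth is copied and passed to the in-cluster builder, as described above.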
6 changes: 6 additions & 0 deletions docs/providers/kubernetes.md
@@ -277,6 +277,10 @@ providers:
context:

# The registry where built containers should be pushed to, and then pulled to the cluster when deploying services.
#
# Important: If you specify this in combination with `buildMode: cluster-docker` or `buildMode: kaniko`, you must
# make sure `imagePullSecrets` includes authentication for the specified deployment registry, with the
# appropriate write privileges (usually full write access to the configured `deploymentRegistry.namespace`).
deploymentRegistry:
# The hostname (and optionally port, if not the default port) of the registry.
hostname:
@@ -1312,6 +1316,8 @@ providers:

The registry where built containers should be pushed to, and then pulled to the cluster when deploying services.

Important: If you specify this in combination with `buildMode: cluster-docker` or `buildMode: kaniko`, you must make sure `imagePullSecrets` includes authentication for the specified deployment registry, with the appropriate write privileges (usually full write access to the configured `deploymentRegistry.namespace`).

| Type | Required |
| -------- | -------- |
| `object` | No |
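
For illustration, a value using a registry on a non-default port might look like this (hostname and namespace are placeholders):

```yaml
deploymentRegistry:
  hostname: registry.example.com:5000 # port only needed when not using the default
  namespace: my-project
```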
7 changes: 4 additions & 3 deletions garden-service/src/plugins/container/config.ts
@@ -407,9 +407,10 @@ export const containerRegistryConfigSchema = joi.object().keys({
.default("_")
.description("The namespace in the registry where images should be pushed.")
.example("my-project"),
}).description(deline`
The registry where built containers should be pushed to, and then pulled to the cluster when deploying
services.
}).description(dedent`
The registry where built containers should be pushed to, and then pulled to the cluster when deploying services.
Important: If you specify this in combination with \`buildMode: cluster-docker\` or \`buildMode: kaniko\`, you must make sure \`imagePullSecrets\` includes authentication for the specified deployment registry, with the appropriate write privileges (usually full write access to the configured \`deploymentRegistry.namespace\`).
`)

export interface ContainerService extends Service<ContainerModule> {}
1 change: 1 addition & 0 deletions garden-service/src/plugins/kubernetes/constants.ts
@@ -14,3 +14,4 @@ export const MAX_RUN_RESULT_OUTPUT_LENGTH = 900 * 1024 // max ConfigMap data siz

export const dockerAuthSecretName = "builder-docker-config"
export const dockerAuthSecretKey = ".dockerconfigjson"
export const inClusterRegistryHostname = "127.0.0.1:5000"
12 changes: 8 additions & 4 deletions garden-service/src/plugins/kubernetes/container/build.ts
@@ -14,7 +14,7 @@ import { buildContainerModule, getContainerBuildStatus, getDockerBuildFlags } fr
import { GetBuildStatusParams, BuildStatus } from "../../../types/plugin/module/getBuildStatus"
import { BuildModuleParams, BuildResult } from "../../../types/plugin/module/build"
import { millicpuToString, megabytesToString, getRunningPodInDeployment, makePodName } from "../util"
import { RSYNC_PORT, dockerAuthSecretName, dockerAuthSecretKey } from "../constants"
import { RSYNC_PORT, dockerAuthSecretName, dockerAuthSecretKey, inClusterRegistryHostname } from "../constants"
import { posix, resolve } from "path"
import { KubeApi } from "../api"
import { kubectl } from "../kubectl"
@@ -246,11 +246,15 @@ const remoteBuild: BuildHandler = async (params) => {
"--destination",
deploymentImageId,
"--cache=true",
"--insecure", // The in-cluster registry is not exposed, so we don't configure TLS on it.
// "--verbosity", "debug",
...getDockerBuildFlags(module),
]

if (provider.config.deploymentRegistry?.hostname === inClusterRegistryHostname) {
// The in-cluster registry is not exposed, so we don't configure TLS on it.
args.push("--insecure")
}

args.push(...getDockerBuildFlags(module))

// Execute the build
const buildRes = await runKaniko({ provider, log, module, args, outputStream: stdout })
buildLog = buildRes.log
21 changes: 12 additions & 9 deletions garden-service/src/plugins/kubernetes/kubernetes.ts
@@ -35,6 +35,7 @@ import pluralize from "pluralize"
import { getSystemMetadataNamespaceName } from "./system"
import { removeTillerCmd } from "./commands/remove-tiller"
import { DOCS_BASE_URL } from "../../constants"
import { inClusterRegistryHostname } from "./constants"

export async function configureProvider({
projectName,
@@ -56,17 +57,19 @@ export async function configureProvider({
}

if (config.buildMode === "cluster-docker" || config.buildMode === "kaniko") {
// TODO: support external registry
// This is a special configuration, used in combination with the registry-proxy service,
// to make sure every node in the cluster can resolve the image from the registry we deploy in-cluster.
config.deploymentRegistry = {
hostname: `127.0.0.1:5000`,
namespace: config.namespace,
config._systemServices.push("build-sync")

if (!config.deploymentRegistry || config.deploymentRegistry.hostname === inClusterRegistryHostname) {
// Deploy an in-cluster registry, unless otherwise specified.
// This is a special configuration, used in combination with the registry-proxy service,
// to make sure every node in the cluster can resolve the image from the registry we deploy in-cluster.
config.deploymentRegistry = {
hostname: inClusterRegistryHostname,
namespace: config.namespace,
}
config._systemServices.push("docker-registry", "registry-proxy")
}

// Deploy build services on init
config._systemServices.push("build-sync", "docker-registry", "registry-proxy")

if (config.buildMode === "cluster-docker") {
config._systemServices.push("docker-daemon")
}
8 changes: 5 additions & 3 deletions garden-service/src/plugins/kubernetes/local/config.ts
@@ -131,9 +131,11 @@ export async function configureProvider(params: ConfigureProviderParams<LocalKub
await configureMicrok8sAddons(log, addons)

// Need to push to the built-in registry
config.deploymentRegistry = {
hostname: "localhost:32000",
namespace,
if (config.buildMode === "local-docker") {
config.deploymentRegistry = {
hostname: "localhost:32000",
namespace,
}
}
}

15 changes: 14 additions & 1 deletion garden-service/test/data/test-projects/container/garden.yml
@@ -5,7 +5,9 @@ environments:
- name: cluster-docker
- name: cluster-docker-buildkit
- name: cluster-docker-auth
- name: cluster-docker-remote-registry
- name: kaniko
- name: kaniko-remote-registry
providers:
- name: local-kubernetes
environments: [local]
@@ -22,6 +24,17 @@ providers:
enableBuildKit: true
- <<: *clusterDocker
environments: [cluster-docker-auth]
- <<: *clusterDocker
environments: [cluster-docker-remote-registry]
deploymentRegistry:
hostname: index.docker.io
namespace: gardendev
- <<: *clusterDocker
environments: [kaniko]
buildMode: kaniko
buildMode: kaniko
- <<: *clusterDocker
environments: [kaniko-remote-registry]
buildMode: kaniko
deploymentRegistry:
hostname: index.docker.io
namespace: gardendev
@@ -1,3 +1,3 @@
FROM busybox
FROM busybox:1.31.1

RUN rm -f /bin/tar
@@ -0,0 +1,3 @@
FROM busybox:1.31.1

ADD foo.txt /foo.txt
Empty file.
@@ -0,0 +1,10 @@
kind: Module
name: remote-registry-test
description: Test module for pushing to private registry
type: container
services:
- name: remote-registry-test
command: [sh, -c, "nc -l -p 8080"]
ports:
- name: http
containerPort: 8080
102 changes: 48 additions & 54 deletions garden-service/test/integ/src/plugins/kubernetes/container/build.ts
@@ -6,83 +6,32 @@
* file, You can obtain one at http://mozilla.org/MPL/2.0/.
*/

import { getDataDir, makeTestGarden, expectError } from "../../../../../helpers"
import { expectError } from "../../../../../helpers"
import { Garden } from "../../../../../../src/garden"
import { ConfigGraph } from "../../../../../../src/config-graph"
import { k8sBuildContainer } from "../../../../../../src/plugins/kubernetes/container/build"
import { PluginContext } from "../../../../../../src/plugin-context"
import { clusterInit } from "../../../../../../src/plugins/kubernetes/commands/cluster-init"
import { KubernetesProvider } from "../../../../../../src/plugins/kubernetes/config"
import { decryptSecretFile } from "../../../../helpers"
import { GARDEN_SERVICE_ROOT } from "../../../../../../src/constants"
import { resolve } from "path"
import { KubeApi } from "../../../../../../src/plugins/kubernetes/api"
import { expect } from "chai"
import { V1Secret } from "@kubernetes/client-node"
import { KubernetesResource } from "../../../../../../src/plugins/kubernetes/types"
import { getContainerTestGarden } from "./container"

describe("k8sBuildContainer", () => {
let garden: Garden
let graph: ConfigGraph
let provider: KubernetesProvider
let ctx: PluginContext

let initialized = false

const root = getDataDir("test-projects", "container")

before(async () => {
garden = await makeTestGarden(root, { environmentName: "local" })
provider = <KubernetesProvider>await garden.resolveProvider("local-kubernetes")
})

after(async () => {
if (garden) {
await garden.close()
}
})

const init = async (environmentName: string) => {
garden = await makeTestGarden(root, { environmentName })

if (!initialized && environmentName !== "local") {
// Load the test authentication for private registries
const api = await KubeApi.factory(garden.log, provider)
try {
const authSecret = JSON.parse(
(await decryptSecretFile(resolve(GARDEN_SERVICE_ROOT, "..", "secrets", "test-docker-auth.json"))).toString()
)
await api.upsert({ kind: "Secret", namespace: "default", obj: authSecret, log: garden.log })
} catch (err) {
// This is expected when running without access to gcloud (e.g. in minikube tests)
// tslint:disable-next-line: no-console
console.log("Warning: Unable to decrypt docker auth secret")
const authSecret: KubernetesResource<V1Secret> = {
apiVersion: "v1",
kind: "Secret",
type: "kubernetes.io/dockerconfigjson",
metadata: {
name: "test-docker-auth",
namespace: "default",
},
stringData: {
".dockerconfigjson": JSON.stringify({ auths: {} }),
},
}
await api.upsert({ kind: "Secret", namespace: "default", obj: authSecret, log: garden.log })
}
}

garden = await getContainerTestGarden(environmentName)
graph = await garden.getConfigGraph(garden.log)
provider = <KubernetesProvider>await garden.resolveProvider("local-kubernetes")
ctx = garden.getPluginContext(provider)

// We only need to run the cluster-init flow once, because the configurations are compatible
if (!initialized && environmentName !== "local") {
// Run cluster-init
await clusterInit.handler({ ctx, log: garden.log })
initialized = true
}
}

context("local mode", () => {
@@ -145,6 +94,34 @@ describe("k8sBuildContainer", () => {
}
)
})

it("should push to configured deploymentRegistry if specified (remote only)", async () => {
const module = await graph.getModule("private-base")
await garden.buildDir.syncFromSrc(module, garden.log)

await k8sBuildContainer({
ctx,
log: garden.log,
module,
})
})
})

context("cluster-docker-remote-registry mode", () => {
before(async () => {
await init("cluster-docker-remote-registry")
})

it("should push to configured deploymentRegistry if specified (remote only)", async () => {
const module = await graph.getModule("remote-registry-test")
await garden.buildDir.syncFromSrc(module, garden.log)

await k8sBuildContainer({
ctx,
log: garden.log,
module,
})
})
})

context("cluster-docker mode with BuildKit", () => {
@@ -239,4 +216,21 @@
)
})
})

context("kaniko-remote-registry mode", () => {
before(async () => {
await init("kaniko-remote-registry")
})

it("should push to configured deploymentRegistry if specified (remote only)", async () => {
const module = await graph.getModule("remote-registry-test")
await garden.buildDir.syncFromSrc(module, garden.log)

await k8sBuildContainer({
ctx,
log: garden.log,
module,
})
})
})
})
