**assets/js/main.js** (4 additions & 2 deletions):

```javascript
// Create a mini TOC if desired.
generateMiniToc();

// Mobile menu toggles.
$(".nav-header-toggle").click(function () {
    $(".nav-header-items").toggleClass("hidden");
});

$(".blog-sidebar-toggle").click(function () {
    $(".blog-sidebar-content").toggleClass("hidden");
});
}(jQuery));
```
---

combined with, for example, the Pulumi Azure provider to deploy and
manage both the cluster and Kubernetes resources that should be
installed into the cluster.

```typescript
import * as azure from "@pulumi/azure";
import * as k8s from "@pulumi/kubernetes";
import * as helm from "@pulumi/kubernetes/helm";

// Create an Azure Kubernetes Service cluster
const resourceGroup = new azure.core.ResourceGroup("aks", { location: "West US" });
const kubernetesService = new azure.containerservice.KubernetesCluster("kubernetes", {
    /* ... */
});

// Create a Pulumi Kubernetes provider configured to deploy to the AKS cluster above
export const azk8s = new k8s.Provider("azk8s", {
    kubeconfig: kubernetesService.kubeConfigRaw,
});

// Deploy a Helm chart into the cluster
const kibana = new helm.v2.Chart("kibana", {
    repo: "stable",
    chart: "kibana",
    version: "0.8.0",
    values: { service: { type: "LoadBalancer" } },
}, { providers: { kubernetes: azk8s } });
```
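
The same provider handle works for individual Kubernetes objects too, not just Helm charts. As a minimal sketch (assuming the imports and `azk8s` provider above; the namespace is an illustrative addition, not from the post):

```typescript
// Deploy a plain Kubernetes object through the same AKS-backed provider.
// The "apps" namespace here is a hypothetical example.
const ns = new k8s.core.v1.Namespace("apps", {
    metadata: { name: "apps" },
}, { provider: azk8s });
```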

Check out the [Kubernetes
overview]({{< ref "/docs/quickstart/kubernetes" >}}) docs, the [API

For example, the code below uses the `axios` NPM package to make an HTTP
request inside an AWS Lambda invoked by an AWS API Gateway (in just a
few lines of code!).

```typescript
import * as axios from "axios";
import * as cloud from "@pulumi/cloud-aws";

const api = new cloud.API("api");
api.get("/", async (req, res) => {
    const statusText = (await axios.default.get("https://pulumi.io")).statusText;
    res.write(`GET https://pulumi.io/ == ${statusText}`).end();
});

export const url = api.publish().url;
```

## OpenStack

infrastructure deployments.

For example, a VM can be deployed to OVH with just the following:

```typescript
const os = require("@pulumi/openstack");
const instance = new os.compute.Instance("test", {
    flavorName: "s1-2",
    imageName: "Ubuntu 16.04",
});

exports.instanceIP = instance.accessIpV4;
```

Check out the [API documentation]({{< ref "/docs/reference/pkg/nodejs/pulumi/openstack" >}})
and the [pulumi-openstack](https://github.com/pulumi/pulumi-openstack) repo.

On Azure, the new `@pulumi/azure-serverless` package makes it easy to
work with serverless functions, and has initial support for hooking up
to Blob storage event sources:

```typescript
import * as azure from "@pulumi/azure";
import * as serverless from "@pulumi/azure-serverless";

const storageAccount = new azure.storage.Account("images-container", { /* ... */ });
serverless.storage.onBlobEvent("newImage", storageAccount, (context, blob) => {
    context.log(context);
    context.log(blob);
    context.done();
}, { containerName: "folder", filterSuffix: ".png" });

export let storageAccountName = storageAccount.name;
```

On Google Cloud, the new `gcp.serverless.Function` provides an easy way
to create a Google Cloud Function from a JavaScript callback in a Pulumi
program. Thanks to Mikhail Shilkov
([@mikhailshilkov](https://github.com/mikhailshilkov)) for contributing
this feature!

```typescript
import * as gcp from "@pulumi/gcp";
let f = new gcp.serverless.Function("f", {}, (req, res) => {
    res.send(`Hello ${req.body.name || 'World'}!`);
});

export let url = f.function.httpsTriggerUrl;
```

## GitHub App for CI/CD Integration

---

This creates a new project in the `hello-colada` directory.

Second, replace the contents of `index.js` with the following:

```javascript
const cloud = require("@pulumi/cloud-aws");

// A bucket to store videos and thumbnails.
const bucket = new cloud.Bucket("bucket");
const bucketName = bucket.bucket.id;

// A task which runs a containerized FFMPEG job to extract a thumbnail image.
const ffmpegThumbnailTask = new cloud.Task("ffmpegThumbTask", {
    build: "./", // folder containing the Dockerfile
    memoryReservation: 128,
});

// When a new video is uploaded, run the FFMPEG task on the video file.
// Use the time index specified in the filename (e.g. cat_00-01.mp4 uses timestamp 00:01)
bucket.onPut("onNewVideo", async (bucketArgs) => {
    console.log(`*** New video: file ${bucketArgs.key} was uploaded at ${bucketArgs.eventTime}.`);
    const file = bucketArgs.key;

    const thumbnailFile = file.substring(0, file.indexOf('_')) + '.jpg';
    const framePos = file.substring(file.indexOf('_') + 1, file.indexOf('.')).replace('-', ':');

    await ffmpegThumbnailTask.run({
        environment: {
            "S3_BUCKET": bucketName.get(),
            "INPUT_VIDEO": file,
            "TIME_OFFSET": framePos,
            "OUTPUT_FILE": thumbnailFile,
        },
    });
    console.log(`Running thumbnailer task.`);
}, { keySuffix: ".mp4" });

// When a new thumbnail is created, log a message.
bucket.onPut("onNewThumbnail", async (bucketArgs) => {
    console.log(`*** New thumbnail: file ${bucketArgs.key} was saved at ${bucketArgs.eventTime}.`);
}, { keySuffix: ".jpg" });

// Export the bucket name.
exports.bucketName = bucketName;
```

This code uses `cloud.Task`, a high-level, convenient component for
working with containers. The component automatically provisions a
---

simple Pulumi program that creates a load balanced Fargate Service that is
accessible to the internet, but uses the *public* `nginx` container
image from the Docker Hub:

```typescript
// A simple NGINX service, scaled out over two containers.
// Create the listener first so that its endpoint can be exported below.
const nginxListener = new awsx.elasticloadbalancingv2.ApplicationListener("nginx", { port: 80 });

const nginx = new awsx.ecs.FargateService("nginx", {
    cluster,
    desiredCount: 2,
    taskDefinitionArgs: {
        containers: {
            nginx: {
                image: "nginx",
                memory: 128,
                portMappings: [nginxListener],
            },
        },
    },
});

export const nginxEndpoint = nginxListener.endpoint;
```

Running this gives us:

Docker image, push it to ECR, get the resulting image name, and
reference it from our ECS service (it works the same in both ECS and
EKS):

```typescript
// common code from before trimmed out
const repository = new awsx.ecr.Repository("repo");

// Invoke 'docker' to actually build the Dockerfile that is in the 'app' folder relative to
// this program. Once built, push that image up to our personal ECR repo.
const image = repository.buildAndPushImage("./app");

const service = new awsx.ecs.FargateService("service", {
    // ... common code from before trimmed out
    taskDefinitionArgs: {
        containers: {
            service: {
                image: image,
                ...
```

So let's see what happens when we actually try to run this:

images based on flexible criteria to meet your needs.
As a simple example, here's a way to remove any untagged images that are
older than one week:

```typescript
// common code from before trimmed out
const repository = new awsx.ecr.Repository("repo", {
    lifeCyclePolicyArgs: {
        rules: [{
            selection: "untagged",
            maximumAgeLimit: 7,
        }],
    },
});
```
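
If you want a longer window, the same rule shape presumably scales; here's a minimal sketch, assuming `maximumAgeLimit` is a count of days as in the one-week example above:

```typescript
// Hypothetical two-week variant of the rule above (assumes maximumAgeLimit
// is expressed in days, matching the one-week example).
const repository = new awsx.ecr.Repository("repo", {
    lifeCyclePolicyArgs: {
        rules: [{
            selection: "untagged",
            maximumAgeLimit: 14, // remove untagged images older than two weeks
        }],
    },
});
```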

Now you can keep your last two weeks of images around if you want, with
all tagged images (like `'latest'`) being preserved. You can