Merge pull request #829 from tf/external-services-doc
Import external services doc from wiki
tf committed Aug 10, 2017
2 parents 1368780 + 88e879d commit 3d34806
Showing 3 changed files with 278 additions and 2 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -124,8 +124,8 @@ things do not look too blank in development mode.
 ## Configuration
 
 Pageflow stores files in S3 buckets also in development
-mode. Otherwise there's no way to have Zencoder encode them. See the
-wiki page [Setting up external services](https://github.com/codevise/pageflow/wiki/Setting-up-External-Services).
+mode. Otherwise there's no way to have Zencoder encode them. See
+[setting up external services](./doc/setting_up_external_services.md).
 
 The host application can utilize environment variables to configure the API keys for S3 and Zencoder. The variables can be found in the generated Pageflow initializer.
181 changes: 181 additions & 0 deletions doc/setting_up_external_services.md
@@ -0,0 +1,181 @@
# Setting up External Services

Pageflow uses some external services to process and store media files:

* [Amazon S3](http://aws.amazon.com/s3/) to host its videos, images
  and audio files.
* [Zencoder](https://zencoder.com/) to encode the content to fit the
  different formats needed.
* (In production environments) A
  [CDN](http://en.wikipedia.org/wiki/Content_delivery_network) to
  speed up access to the content
  (e.g. [Amazon CloudFront](http://aws.amazon.com/cloudfront/)).

Please create accounts for [Amazon AWS](http://aws.amazon.com/) and
[Zencoder](https://zencoder.com/).

Files uploaded to Pageflow are stored in an S3 bucket. Zencoder reads
them from there, encodes them and stores the results in a second
bucket. The website delivers the content from this second bucket via
the CDN.

## Amazon S3

Two Amazon S3 buckets are needed for each environment:

* Main bucket for Paperclip attachments and for Zencoder to read from.
* Output bucket for Zencoder to write encoded files to.

For example:

* `de-mycompany-pageflow-production`
* `de-mycompany-pageflow-production-out`
* `de-mycompany-pageflow-development`
* `de-mycompany-pageflow-development-out`

You can add buckets for `staging` or other deployment environments.
Multiple developers can share the same development buckets because
files in these buckets are namespaced by the hostname of the
developer's machine.
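
If you prefer to script the bucket setup instead of clicking through
the console, here is a minimal sketch using the
[aws-sdk](https://github.com/aws/aws-sdk-ruby) Ruby gem. The gem,
region and bucket names are assumptions taken from the examples above;
adjust them to your setup.

```ruby
require 'aws-sdk' # assumption: the aws-sdk v2 gem

s3 = Aws::S3::Client.new(region: 'eu-west-1')

%w[
  de-mycompany-pageflow-production
  de-mycompany-pageflow-production-out
  de-mycompany-pageflow-development
  de-mycompany-pageflow-development-out
].each do |name|
  # Buckets outside us-east-1 need an explicit location constraint.
  s3.create_bucket(
    bucket: name,
    create_bucket_configuration: {location_constraint: 'eu-west-1'}
  )
end
```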

Hint: It's better to use dashes (`-`) than dots (`.`) in bucket
names. Dots break wildcard SSL certificate matching and can therefore
cause trouble when using HTTPS.

### Bucket Configuration

* Configure static website hosting for each bucket.

* Go to Properties -> Static Website Hosting
* Click "Enable Static Website Hosting".
  * Enter an arbitrary string into the field "Index Document"
    (e.g. "foo"; you can't save without one).
* Click "Save"

* Grant access to the buckets by adding
[bucket policies](./setting_up_s3_bucket_policies.md).

  * Our policies also enable public read access to your bucket, which
    is required for serving published entries directly from S3.
* Go to Properties -> Permissions.
* Click "Bucket Policy".
* Copy the correct JSON snippet from
[bucket policies](./setting_up_s3_bucket_policies.md) (main or
output)
* Replace the \<main bucket\> or \<output bucket\> placeholders with
the full bucket name.
* Click "Save"

* You need to enable Cross-Origin Resource Sharing (CORS).

  * Still in the Permissions screen, click _CORS Configuration_.
  * Paste the code below. It allows GET requests from any origin,
    which is what Pageflow needs.

```xml
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>28800</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
```

You can tweak these rules if you want to. See the
[Amazon Developer Guide](https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html#how-do-i-enable-cors).
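
The console steps above can also be scripted. A sketch using the
aws-sdk Ruby gem, assuming the example bucket name and region from
above:

```ruby
require 'aws-sdk' # assumption: the aws-sdk v2 gem

s3 = Aws::S3::Client.new(region: 'eu-west-1')
bucket = 'de-mycompany-pageflow-production' # adjust to your bucket

# Enable static website hosting; the index document suffix is arbitrary.
s3.put_bucket_website(
  bucket: bucket,
  website_configuration: {index_document: {suffix: 'foo'}}
)

# The same CORS rules as the XML snippet above, expressed via the SDK.
s3.put_bucket_cors(
  bucket: bucket,
  cors_configuration: {
    cors_rules: [{allowed_origins: ['*'],
                  allowed_methods: ['GET'],
                  allowed_headers: ['*'],
                  max_age_seconds: 28800}]
  }
)
```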

### Bandwidth Detection

Pageflow measures the download times of some static files to detect
the bandwidth of the client.

The following files have to be placed in the Output ("-out") bucket:

* [app/assets/images/pageflow/bandwidth_probe_large.png](https://raw.githubusercontent.com/codevise/pageflow/master/app/assets/images/pageflow/bandwidth_probe_large.png)
* [app/assets/images/pageflow/bandwidth_probe_small.png](https://raw.githubusercontent.com/codevise/pageflow/master/app/assets/images/pageflow/bandwidth_probe_small.png)
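
A sketch for uploading the probes with the aws-sdk Ruby gem from a
local checkout of Pageflow. The bucket name, region and target keys
are assumptions; check where your Pageflow version expects the probe
files before relying on them.

```ruby
require 'aws-sdk' # assumption: the aws-sdk v2 gem

s3 = Aws::S3::Client.new(region: 'eu-west-1')
output_bucket = 'de-mycompany-pageflow-production-out' # adjust to your bucket

%w[bandwidth_probe_large.png bandwidth_probe_small.png].each do |file|
  # Assumption: the probes are expected at the bucket root.
  s3.put_object(bucket: output_bucket,
                key: file,
                body: File.binread("app/assets/images/pageflow/#{file}"),
                acl: 'public-read')
end
```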

### Configuring Paperclip

Pageflow uses the
[Paperclip](https://github.com/thoughtbot/paperclip/) gem to upload
files to S3. Edit the `paperclip_s3_default_options` settings in
`config/initializers/pageflow.rb`:

```ruby
config.paperclip_s3_default_options.merge!(
  s3_credentials: {
    bucket: 'com-example-pageflow-development',
    access_key_id: 'xxx',
    secret_access_key: 'xxx',
    s3_host_name: 's3-eu-west-1.amazonaws.com'
  },
  s3_host_alias: 'com-example-pageflow.s3-website-eu-west-1.amazonaws.com',
  s3_protocol: 'http'
)
```

The required options are:

* `bucket`: The name of the main S3 bucket you chose above.

* `access_key_id`/`secret_access_key`:
[IAM](http://aws.amazon.com/de/iam/) credentials that grant write
access to your main bucket.

* `s3_host_name`: The host name that shall be used on the server side
to connect to the S3 REST API to upload files. The correct value
depends on the region of your bucket. See the list of
[S3 AWS endpoints](http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region)
  for more information. **Important**: Be sure NOT to enter an S3
  Website endpoint (`s3-website-*.amazonaws.com`) here. Those endpoints
  can only be used for read access to files from your buckets.

* `s3_host_alias`: The host name that shall be used for image URLs in
your published entries. This can be the hostname of some CDN you
have configured to deliver files from your bucket. Even though use
of a CDN is strongly recommended, you can also use an S3 Website
endpoint here, as in the example above.

* `s3_protocol`: The protocol to use for public image URLs. Note that
S3 Website endpoints only support `http`.

Refer to the
[Paperclip documentation](http://www.rubydoc.info/gems/paperclip/Paperclip/Storage/S3)
for a full list of supported options.
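
If you prefer not to hard-code credentials, the same options can be
filled from environment variables, as mentioned in the README. A
sketch for the same initializer; the variable names below are
placeholders, so check the generated initializer of your host
application for the actual ones:

```ruby
# Placeholder variable names; your generated pageflow.rb may differ.
config.paperclip_s3_default_options.merge!(
  s3_credentials: {
    bucket: ENV['S3_BUCKET'],
    access_key_id: ENV['S3_ACCESS_KEY'],
    secret_access_key: ENV['S3_SECRET_KEY'],
    s3_host_name: ENV['S3_HOST_NAME']
  },
  s3_host_alias: ENV['S3_HOST_ALIAS'],
  s3_protocol: 'http'
)
```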

## Zencoder

Get your API Key from [Zencoder](https://zencoder.com) and make sure
it is used in the `zencoder_options` in
`config/initializers/pageflow.rb`:

```ruby
config.zencoder_options.merge!(
  api_key: 'xxx',
  output_bucket: 'com-example-pageflow-out',
  s3_host_alias: 'com-example-pageflow-out.s3-website-eu-west-1.amazonaws.com',
  s3_protocol: 'http',
  attachments_version: 'v1'
)
```

Just like in the previous section, the `s3_host_alias` and
`s3_protocol` settings are used to build video and audio URLs inside
your published entries.

Note that Zencoder offers a special "Integration API Key" that can be
used free of charge during development. Encoded files are then cropped
to five seconds.

## Amazon CloudFront

Create three distributions:

- one for each of the two S3 buckets
- one that has the Rails app itself as its origin

Configure CNAMEs for the distributions and make sure they are used as
`s3_host_alias` in `config/initializers/pageflow.rb` in production
mode. Please see the inline docs and examples in
`config/initializers/pageflow.rb`.
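
For illustration, a sketch of what the production part of the
initializer might look like once the CNAMEs exist. The host names
`files.example.com` and `out.example.com` are placeholders for your
CloudFront CNAMEs:

```ruby
if Rails.env.production?
  config.paperclip_s3_default_options.merge!(
    s3_host_alias: 'files.example.com', # CNAME of the main bucket's distribution
    s3_protocol: 'https'
  )

  config.zencoder_options.merge!(
    s3_host_alias: 'out.example.com',   # CNAME of the output bucket's distribution
    s3_protocol: 'https'
  )
end
```
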
95 changes: 95 additions & 0 deletions doc/setting_up_s3_bucket_policies.md
@@ -0,0 +1,95 @@
# Setting up S3 Bucket Policies

For Zencoder to be able to access the S3 buckets of your AWS account,
the following bucket policies have to be configured. In addition, we
grant public read access to the entire bucket, which is needed before
we can start using S3 as a web server.

Note that `<main bucket>` and `<output bucket>` have to be replaced
with the correct bucket names below.

Grant read access to the main bucket:

    {
      "Version": "2012-10-17",
      "Id": "PageflowMainBucketPolicy",
      "Statement": [
        {
          "Sid": "Stmt1497951043738",
          "Action": [
            "s3:GetObject"
          ],
          "Effect": "Allow",
          "Resource": "arn:aws:s3:::<main bucket>/*",
          "Principal": "*"
        },
        {
          "Sid": "Stmt1295042087538",
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::395540211253:root"
          },
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::<main bucket>/*"
        },
        {
          "Sid": "Stmt1295042087539",
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::395540211253:root"
          },
          "Action": [
            "s3:ListBucketMultipartUploads",
            "s3:GetBucketLocation"
          ],
          "Resource": "arn:aws:s3:::<main bucket>"
        }
      ]
    }

Grant full access to the output bucket:

    {
      "Version": "2012-10-17",
      "Id": "PageflowOutputBucketPolicy",
      "Statement": [
        {
          "Sid": "Stmt1497951043738",
          "Action": [
            "s3:GetObject"
          ],
          "Effect": "Allow",
          "Resource": "arn:aws:s3:::<output bucket>/*",
          "Principal": "*"
        },
        {
          "Sid": "Stmt1295042087538",
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::395540211253:root"
          },
          "Action": [
            "s3:GetObject",
            "s3:PutObjectAcl",
            "s3:ListMultipartUploadParts",
            "s3:PutObject"
          ],
          "Resource": "arn:aws:s3:::<output bucket>/*"
        },
        {
          "Sid": "Stmt1295042087539",
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::395540211253:root"
          },
          "Action": [
            "s3:ListBucketMultipartUploads",
            "s3:GetBucketLocation"
          ],
          "Resource": "arn:aws:s3:::<output bucket>"
        }
      ]
    }
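
Instead of pasting the snippets into the S3 console, the policies can
also be applied programmatically. A sketch with the aws-sdk Ruby gem,
assuming you saved the two documents above, with the placeholders
substituted, as local JSON files (the bucket and file names here are
assumptions):

```ruby
require 'aws-sdk' # assumption: the aws-sdk v2 gem

s3 = Aws::S3::Client.new(region: 'eu-west-1')

{
  'de-mycompany-pageflow-production'     => 'main_bucket_policy.json',
  'de-mycompany-pageflow-production-out' => 'output_bucket_policy.json'
}.each do |bucket, policy_file|
  # put_bucket_policy expects the policy document as a JSON string.
  s3.put_bucket_policy(bucket: bucket, policy: File.read(policy_file))
end
```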

See also
[Using Zencoder with S3](https://app.zencoder.com/docs/guides/getting-started/working-with-s3).
