
feathers-s3


feathers-s3 lets you work with AWS S3 API compatible storage to manage file uploads and downloads in a FeathersJS application.

Unlike feathers-blob, which provides a store abstraction, feathers-s3 only works with stores that expose an S3-compatible API. In return, it takes advantage of that API by using presigned URLs to manage (upload, share) objects on a store in a more reliable and secure way.

Using presigned URLs has its pros and cons:

  • Pros
    • It reduces the resources required on the server because the transfer happens directly between the client and the S3 service.
    • It speeds up the transfer because it can easily be parallelized.
    • It reduces the risk of your server becoming a bottleneck.
    • It supports multipart upload by design.
    • It is inherently secure.
  • Cons
    • It adds extra complexity on the client side.
    • It requires your S3 bucket to have CORS enabled.
    • It requires your provider to support S3 Signature Version 4.
    • Access to the object is limited to a short time window.

To address these drawbacks, feathers-s3 provides:

  • Helper functions that simplify usage from a client application.
  • An Express middleware to access objects directly by URL without using presigned URLs. There is no time constraint, unlike with presigned URLs, and you can also access only a portion of an object using range requests.
  • A proxy mode that lets you use service methods that don't rely on presigned URLs, in case your S3 provider doesn't support CORS settings or you'd like to process data on your backend. In this mode the objects are always transferred through your backend.

Principle

The following sections illustrate the different processes implemented by feathers-s3:

Upload

The upload can be a singlepart or a multipart upload depending on the size of the object to be uploaded. If the size exceeds chunkSize (5MB by default), feathers-s3 performs a multipart upload; otherwise it performs a singlepart upload. For example, with the default chunkSize a 3MB file is sent in a single part, while a 12MB file is split into three parts.

Singlepart upload

Upload principle

Multipart upload

Multipart upload principle

Download

Download principle

Usage

Installation

npm install @kalisio/feathers-s3 --save

or

yarn add @kalisio/feathers-s3

Example

The provided example illustrates how to set up:

  • a server app

import { feathers } from '@feathersjs/feathers'
import express from '@feathersjs/express'
import socketio from '@feathersjs/socketio'
import { Service } from '../lib/Service.js'

const port = process.env.PORT || 3333
// Create the Feathers app
const app = express(feathers())
// Configure Express
app.use(express.json())
app.use(express.urlencoded({ extended: true }))
// Host the public folder
app.use('/', express.static('./public'))
// Serve feathers-s3
app.use('/feathers-s3', express.static('../lib'))
// Configure Socket.io
app.configure(socketio({
  cors: { origin: '*' },
  maxHttpBufferSize: 1e8
}))
// Define the options used to instantiate the S3 service
const options = {
  s3Client: {
    credentials: {
      accessKeyId: process.env.S3_ACCESS_KEY_ID,
      secretAccessKey: process.env.S3_SECRET_ACCESS_KEY
    },
    endpoint: process.env.S3_ENDPOINT,
    region: process.env.S3_REGION,
    signatureVersion: 'v4'
  },
  bucket: process.env.S3_BUCKET,
  prefix: 'feathers-s3-example'
}
// Register the S3 service on the Feathers application
// /!\ do not forget to declare the custom methods
app.use('s3', new Service(options), {
  methods: ['create', 'get', 'find', 'remove', 'createMultipartUpload', 'completeMultipartUpload', 'uploadPart', 'putObject']
})
// Start the server
app.listen(port).then(() => {
  console.log(`Feathers server listening on localhost:${port}`)
})

  • a browser client app

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <title>feathers-s3 example</title>
</head>
<body>
  <main id="main" class="container">
    <h1>Welcome to feathers-s3</h1>
    <input id="file" type="file" />
    <button id="upload" onclick="uploadFile()" disabled>Upload file</button>
    <button id="download" onclick="downloadFile()" disabled>Download file</button>
  </main>
  <script src="https://unpkg.com/@feathersjs/client@^5.0.0-pre.28/dist/feathers.js"></script>
  <script src="https://unpkg.com/socket.io-client@4.5.3/dist/socket.io.min.js"></script>
  <!--script type="module" src="/feathers-s3/client.js"></script-->
  <script type="module">
    import { getClientService } from '/feathers-s3/client.js'
    // Create the client Feathers app
    const app = feathers()
    // Configure the transport using socket.io
    const socket = io('http://localhost:3333')
    const transport = feathers.socketio(socket)
    app.configure(transport)
    // Configure and get the s3 service
    const s3Service = getClientService(app, {
      transport,
      fetch: window.fetch.bind(window),
      useProxy: true
    })
    // Listen to the file input in order to enable/disable the buttons
    document.getElementById('file').addEventListener('input', (evt) => {
      const files = document.getElementById('file').files
      if (files.length > 0) document.getElementById('upload').disabled = false
      else document.getElementById('upload').disabled = true
      document.getElementById('download').disabled = true
    })
    // Upload file
    window.uploadFile = async () => {
      const files = document.getElementById('file').files
      if (files.length === 0) return
      try {
        // Upload the selected file
        const file = files[0]
        const response = await s3Service.upload(file.name, file)
        document.getElementById('download').disabled = false
      } catch (error) {
        console.error(error)
      }
    }
    // Download file
    window.downloadFile = async () => {
      try {
        // Download the selected file
        const file = document.getElementById('file').files[0]
        const response = await s3Service.download(file.name)
        // Set up an anchor to download the file from the browser
        const anchor = document.createElement('a')
        anchor.href = URL.createObjectURL(new Blob([ response.buffer ], { type: response.type }))
        anchor.download = file.name
        document.body.appendChild(anchor)
        anchor.click()
      } catch (error) {
        console.error(error)
      }
    }
  </script>
</body>
</html>

You can also have a look at the tests to see different use cases of the library.

Data processing

Some use cases might require you to process the data directly on your server before sending it to the object storage, e.g. if you'd like to resize an image. You can do that by:

  1. registering a before hook on the putObject custom method to process the data before it is sent to the object storage,
  2. using the proxy mode on the client-side service to send the data to your server instead of the object storage,
  3. setting an appropriate chunkSize on the client service so that multipart upload is not used, as processing usually requires the whole content to be available at once.

Here is a simple example relying on sharp to resize an image:

import sharp from 'sharp'

// Resize the image carried in the hook payload before it is stored
async function resizeImage (hook) {
  hook.data.buffer = await sharp(hook.data.buffer)
    .resize(128, 48, { fit: 'contain', background: '#00000000' }).toBuffer()
}

// Register the hook on the putObject custom method
app.service('s3').hooks({
  before: {
    putObject: [resizeImage]
  }
})

// Here you can proceed as usual from server side
service.putObject({ id, buffer, type })
// or client side
clientService.upload(id, blob, options)

API

feathers-s3 consists of three parts:

  • Service provides the basic methods for using the S3 API.
  • Middlewares provides an Express middleware to access an object from the store.
  • Client provides helper functions that simplify the upload and download logic on the client side.

Service

constructor (options)

Create an instance of the service with the given options:

| Parameter | Description | Required |
|-----------|-------------|----------|
| s3Client | the s3Client configuration. | yes |
| bucket | the bucket to use. | yes |
| prefix | an optional prefix to use when computing the final Key. | no |
| btoa | the binary to ASCII function used to transform sent data into a string. Default is to transform to base64. | no |
| atob | the ASCII to binary function used to transform received data into a Buffer. Default is to transform back from base64. | no |

find (params)

Lists objects in a bucket according to the criteria provided in the params.query object.

Check the ListObjectsCommandInput documentation for the list of supported properties.
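
For instance, assuming the service is registered under the s3 path as in the example above, you could list the objects whose keys start with a given prefix (Prefix and MaxKeys are standard ListObjectsCommandInput properties; the values are illustrative):

// List up to 10 objects whose keys start with 'images/'
const response = await app.service('s3').find({
  query: { Prefix: 'images/', MaxKeys: 10 }
})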

create (data, params)

Generates a presigned URL for the GetObject, PutObject or UploadPart commands.

The payload data must contain the following properties:

| Property | Description |
|----------|-------------|
| command | the command for which the presigned URL should be created. The possible values are GetObject, PutObject and UploadPart. |
| id | the object key. Note that the final computed Key takes into account the prefix option of the service. |
| UploadId | the UploadId generated when calling the createMultipartUpload method. It is required if the command is UploadPart. |
| PartNumber | the PartNumber of the part to be uploaded. It is required if the command is UploadPart. |
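
As a sketch, here is how you could request a presigned PutObject URL and upload content directly to it with fetch. The SignedUrl property name used below is an assumption, so check it against the client implementation or the tests:

// Ask the service for a presigned URL allowing a PUT on the key 'my-file.png'
// (the SignedUrl property name of the response is assumed here)
const { SignedUrl } = await app.service('s3').create({ command: 'PutObject', id: 'my-file.png' })
// Send the content directly to the storage through the presigned URL
await fetch(SignedUrl, {
  method: 'PUT',
  body: file, // a Blob (browser) or Buffer (Node.js)
  headers: { 'Content-Type': file.type }
})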

get (id, params)

Get the content of an object from the bucket.

| Parameter | Description |
|-----------|-------------|
| id | the object key. Note that the final computed Key takes into account the prefix option of the service. |

Note

The object is entirely read and transferred to the client; for large files, consider using presigned URLs instead.

remove (id, params)

Remove an object from the bucket.

| Parameter | Description |
|-----------|-------------|
| id | the object key. Note that the final computed Key takes into account the prefix option of the service. |
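
For example:

// Remove the object stored under the key 'my-file.png'
await app.service('s3').remove('my-file.png')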

createMultipartUpload (data, params)

Initiate a multipart upload.

It wraps the CreateMultipartUploadCommand.

The payload data must contain the following properties:

| Property | Description |
|----------|-------------|
| id | the object key. Note that the final computed Key takes into account the prefix option of the service. |
| type | the content type to be uploaded. |

Any additional properties are forwarded as parameters to the underlying CreateMultipartUploadCommand.

completeMultipartUpload (data, params)

Finalize a multipart upload.

It wraps the CompleteMultipartUploadCommand.

The payload data must contain the following properties:

| Property | Description |
|----------|-------------|
| id | the object key. Note that the final computed Key takes into account the prefix option of the service. |
| UploadId | the UploadId generated when calling the createMultipartUpload method. |
| parts | the uploaded parts. It consists of an array of objects following the schema { PartNumber: <number>, ETag: <etag> }. |

Any additional properties are forwarded as parameters to the underlying CompleteMultipartUploadCommand.

uploadPart (data, params)

Upload a part to a bucket.

The payload data must contain the following properties:

| Property | Description |
|----------|-------------|
| id | the object key. Note that the final computed Key takes into account the prefix option of the service. |
| UploadId | the UploadId generated when calling the createMultipartUpload method. |
| PartNumber | the part number. |
| buffer | the content to be uploaded. |
| type | the content type to be uploaded. |
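
Putting these methods together, a raw multipart upload could look like the following sketch. The chunks array is a hypothetical list of Buffer parts, and the response shapes (UploadId, ETag) are assumed to mirror the underlying AWS command outputs; in practice the client upload helper described below performs these steps for you:

// chunks is a hypothetical array of Buffer parts (each at least 5MB except the last one)
const service = app.service('s3')
const { UploadId } = await service.createMultipartUpload({ id: 'big-file.zip', type: 'application/zip' })
const parts = []
for (let PartNumber = 1; PartNumber <= chunks.length; PartNumber++) {
  const { ETag } = await service.uploadPart({
    id: 'big-file.zip', UploadId, PartNumber, buffer: chunks[PartNumber - 1], type: 'application/zip'
  })
  parts.push({ PartNumber, ETag })
}
await service.completeMultipartUpload({ id: 'big-file.zip', UploadId, parts })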

putObject (data, params)

Upload an object to a bucket.

The payload data must contain the following properties:

| Property | Description |
|----------|-------------|
| id | the object key. Note that the final computed Key takes into account the prefix option of the service. |
| buffer | the content to be uploaded. |
| type | the content type to be uploaded. |

getObjectCommand (data, params)

Execute the GetObjectCommand and return the response.

Note

This method is not declared on the client side.

The payload data must contain the following properties:

| Property | Description |
|----------|-------------|
| id | the object key. Note that the final computed Key takes into account the prefix option of the service. |

uploadFile (data, params)

Convenience method to upload a file.

Note

This method is not declared on the client side.

The payload data must contain the following properties:

| Property | Description |
|----------|-------------|
| filePath | the path to the file to be uploaded. The basename is used for computing the object key. |
| contentType | the content type of the file to be uploaded. |

Note

You can also provide an id property to override the computed object key.
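
For example (the file path and content type are illustrative):

// Upload ./images/photo.png; the object key defaults to the basename 'photo.png'
await app.service('s3').uploadFile({
  filePath: './images/photo.png',
  contentType: 'image/png'
})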

downloadFile (data, params)

Convenience method to download a file.

Note

This method is not declared on the client side.

The payload data must contain the following properties:

| Property | Description |
|----------|-------------|
| id | the file key. Note that the final computed Key takes into account the prefix option of the service. |
| filePath | the path to the downloaded file. |
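
For example (the paths are illustrative):

// Download the object stored under the key 'photo.png' to a local file
await app.service('s3').downloadFile({
  id: 'photo.png',
  filePath: './downloads/photo.png'
})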

Middlewares

getObject (service)

It expects to be set up on a route path like path/to/get/*, where the wildcard part is the path to the object in the target bucket. It is associated with an S3 service so that it uses the same configuration (s3Client, bucket, etc.).

| Argument | Description | Required |
|----------|-------------|----------|
| service | the service to be associated with this middleware. | yes |

// How to set up the route
app.get('/s3-objects/*', getObject(service))
// How to use it with any request agent like superagent
const response = await superagent.get('your.domain.com/s3-objects/my-file.png')

You can also simply use the target object URL in your HTML: <img src="your.domain.com/s3-objects/my-file.png">.
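
Since the middleware supports range requests, you can also fetch only a portion of an object. A sketch with superagent:

// Fetch only the first 1024 bytes of the object using an HTTP Range header
const partial = await superagent
  .get('your.domain.com/s3-objects/my-file.png')
  .set('Range', 'bytes=0-1023')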

If you'd like to authenticate the route on your backend you will have to do something like this:

import { authenticate } from '@feathersjs/express'

app.get('/s3-objects/*', authenticate('jwt'), getObject(service))

In this case, if you need to reference the object by its URL, you will have to add the JWT as a query parameter like this: <img src="your.domain.com/s3-objects/my-file.png?jwt=TOKEN">. The JWT can then be extracted by a middleware:

import { authenticate } from '@feathersjs/express'

app.get('/s3-objects/*', (req, res, next) => {
  req.feathers.authentication = {
    strategy: 'jwt',
    accessToken: req.query.jwt
  }
  next()
}, authenticate('jwt'), getObject(service))

Client

getClientService (app, options)

Returns the client service interface. The client service exposes the custom methods defined in the Service and is also decorated with two helper functions that simplify the logic when implementing a client application, notably for multipart uploads.

| Argument | Description | Required |
|----------|-------------|----------|
| app | the Feathers client application. | yes |
| options | the options to pass to the client service. | no |

The options are:

| Option | Description | Default |
|--------|-------------|---------|
| transport | the transport layer used by the Feathers client application. For now it is required. | |
| servicePath | the path to the service. | s3 |
| chunkSize | the size of the chunks used to perform multipart uploads. | 5MB |
| useProxy | whether to use the backend as a proxy for the custom methods. | false |
| fetch | the fetch function. | browser fetch function |
| btoa | the binary to ASCII function used to transform sent data into a string. | transform to base64 |
| atob | the ASCII to binary function used to transform received data into a Buffer. | transform from base64 |
| debug | the debug function. | null |
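
For instance, to force all transfers through your backend and use larger parts for multipart uploads (the values are illustrative):

// Create a client service that proxies transfers through the backend
// and splits uploads into 10MB parts
const s3Service = getClientService(app, {
  transport,
  useProxy: true,
  chunkSize: 10 * 1024 * 1024
})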

upload (id, blob, options)

Upload a Blob object to the bucket with the given key id.

Depending on the chunkSize you set when instantiating the client service and the size of the blob, the method automatically performs a singlepart or a multipart upload.

If the useProxy option is not set, the client performs the upload directly using fetch. Otherwise, it uses the proxyUpload custom method.

| Argument | Description | Required |
|----------|-------------|----------|
| id | the object key. Note that the final computed Key takes into account the prefix option of the service. | yes |
| blob | the content of the object to be uploaded, defined as a Blob. | yes |
| options | options to be forwarded to the underlying service methods. | no |
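
For example, using the s3Service instance created above with a Blob built in memory:

// Upload a text file; multipart upload is used automatically
// if the blob exceeds the configured chunkSize
const blob = new Blob(['Hello feathers-s3'], { type: 'text/plain' })
await s3Service.upload('hello.txt', blob)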

download (key, type, options)

Download an object from the bucket with the given key id.

If the useProxy option is not set, the client performs the download directly using fetch. Otherwise, it uses the getObject custom method.

| Argument | Description | Required |
|----------|-------------|----------|
| id | the object key. Note that the final computed Key takes into account the prefix option of the service. | yes |
| type | the type of the content to be downloaded. | yes |
| options | options to be forwarded to the underlying service methods. | no |
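
For example, in a browser (mirroring the client example above):

// Download an object and turn it into a Blob for further use in the browser
const response = await s3Service.download('my-file.png', 'image/png')
const blob = new Blob([ response.buffer ], { type: response.type })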

License

Copyright (c) 2017-20xx Kalisio

Licensed under the MIT license.

Authors

This project is sponsored by

Kalisio