# Fully async REST and static file serving backend based on Vert.x
## Installation
### Building
```
mvn clean package
```

### Running at port 443

```
mkdir -p /opt/nitor/backend
cp -a target/backend-1.0.0-fat.jar /opt/nitor/backend
cp -a certs /opt/nitor/backend
cp src/systemd/* /etc/systemd/system
systemctl daemon-reload
systemctl start nitor-backend.socket
```

### Health check
The URL `/healthCheck` is always available, unauthenticated and without client certificates.
## Configuration
Configuration is done with a `config.json` file in the current working directory. Most options can also be overridden with system properties.
The only exception is the listen port, which is specified either by the system property `port` (default value 8443) or by the file handle passed in by the systemd/xinetd socket listener.
Using the systemd socket listener (as is done in the prepackaged rpm/deb packages) allows running the service with limited permissions.
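As an illustration, a minimal `config.json` could combine a few of the options documented below (the values are taken from the examples in this document, not defaults):

```
{
  "idleTimeout": 3600,
  "http2": true,
  "tls": {
    "serverKey": "certs/localhost.key.clear",
    "serverCert": "certs/localhost.crt"
  },
  "services": [
    {
      "type": "static",
      "route": "/static/*",
      "dir": "/my/web/root"
    }
  ]
}
```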
### Server options
"idleTimeout": 3600,
"http2": trueEnabling TLS
"tls": {
"serverKey": "certs/localhost.key.clear",
"serverCert": "certs/localhost.crt"
}The serverCert should include the whole concatenated certificate chain.
### Requiring client certificates
"clientAuth": {
"route": "/*",
"clientChain": "certs/client.chain"
}The route specifies which urls require the certificate.
The url /certCheck will always be available and will respond back with plain text information about the cliente certificate that server received.
### Requiring basic authentication
"basicAuth": {
"route": "/*",
"realm": "nitor",
"users": {
"test": "test",
"admin": "nimda"
}
}The route specifies which urls require the basic auth.
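For reference, a client authenticates against this with a standard `Authorization: Basic` header; a small Node.js sketch using the example `test`/`test` user from the config above:

```javascript
// Build the Authorization header value for HTTP basic auth.
// Credentials are the example "test"/"test" user from the config above.
var credentials = Buffer.from('test:test').toString('base64');
var header = 'Basic ' + credentials;
console.log('Authorization: ' + header); // Authorization: Basic dGVzdDp0ZXN0
```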
### Customizing the outgoing proxy request or outgoing response
A list of customization scripts can be provided.
"customize": [{
"route": "/*",
"jsFile": "custom.js"
}]The route specifies which requests are processed by the customization script.
The jsFile specifies which javascript file that customize the operation. The script can mostly only customize the request and response headers, not the body.
#### Example script
```
var api = {};
api.handleRequest = function(request, context) {
  console.log('REQ:', request.headers());
  context.put('variable', 'value');
};
api.handleResponse = function(response, context) {
  console.log('RES:', response.headers(), context.get('variable'));
};
console.log('js customizations loaded');
module.exports = api;
```

### Azure AD authentication and user account information forwarding
"adAuth": {
"route": "/*",
"clientId": "",
"clientSecret": "",
"configurationURI": "https://login.microsoftonline.com/organizations/v2.0/.well-known/openid-configuration",
"scope": "openid profile https://graph.microsoft.com/user.read",
"customParam": {
"domain_hint": "organizations"
},
"graphQueries": [
{
"graphQueryURI": "https://graph.microsoft.com/beta/me",
"headerMappings": {
"x-auth-name": ".displayName",
"x-auth-mail": ".mail",
"x-auth-phone": ".mobilePhone"
},
{
"graphQueryURI": "https://graph.microsoft.com/beta/me/memberOf?$top=100",
"headerMappings": {
"x-auth-groups": ".value[].mailNickname"
}
}
],
"requiredHeaders": {
"x-auth-groups": "(^|.*,)admin(,.*|$)"
}
},
"session": {
"serverName": "my-service.domain.com",
"secretFile": ".secret",
"secretGenerator": ["/bin/bash", "-c", "echo hello $USER"],
"sessionAge": 14
}The route specifies which urls require the Azure AD auth. The microsoft side of the application is configured in https://apps.dev.microsoft.com.
The `graphQueries` is an array of objects with `graphQueryURI` and `headerMappings` keys.
The `graphQueryURI` specifies the extra REST query done to the Microsoft Graph API to fetch account details ([interactive practice site](https://developer.microsoft.com/en-us/graph/graph-explorer)).
The `headerMappings` maps the JSON data to headers using JSON pointer syntax. The keys are used as header names and the values are pointers into the JSON result of the queries.
The `requiredHeaders` runs regexp validation on the headers to decide whether the account has access.
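To illustrate how such a `requiredHeaders` rule behaves, here is a small JavaScript sketch of the regexp semantics (not the server's actual implementation):

```javascript
// Check each required header value against its configured regexp.
function headersAllowAccess(requiredHeaders, headers) {
  return Object.keys(requiredHeaders).every(function (name) {
    var value = headers[name];
    return typeof value === 'string' && new RegExp(requiredHeaders[name]).test(value);
  });
}

var required = { 'x-auth-groups': '(^|.*,)admin(,.*|$)' };
console.log(headersAllowAccess(required, { 'x-auth-groups': 'dev,admin,ops' })); // true
console.log(headersAllowAccess(required, { 'x-auth-groups': 'administrator' })); // false
```

The pattern requires `admin` to appear as a complete comma-separated item, so a group named `administrator` does not grant access.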
The `session` section configures the stateless cookie that is generated for the client. Note that in a cluster of servers the `serverName` and the contents of the `secretFile` must match on every node. The `sessionAge` is given in days. If sharing the `secretFile` is hard, `secretGenerator` can be used to invoke a system command that creates the same output (unpredictable to outsiders) on all nodes.
### Differing authentication by path
Add the `defaultAuthType` option with NONE, BASIC or AD as the value.
Then add the following element to any service element. Note that `authType` can be empty and the optional parameters are interpreted by the authenticator.
"auth": {
"authType": "ad",
"x-auth-groups": "(^|.*,)(admin|superadmin)(,.*|$)"
}Currently only the ad authenticator reacts to the custom parameters and interprets them as overrides for requiredHeaders.
### Virtual hosting
Support for virtual hosts can be enabled with the following option. If authentication is required, the user is redirected to the `publicURI` and then back to the original location.
```
{
  "virtualHost": true,
  "publicURI": "https://my.auth.domain/"
}
```

Note: there is a service of type `virtualHost` that allows limiting services to specific virtual hosts.
### Configuring services
The list of services to provide for requested paths or hosts is defined in the `services` element array. The `type` field of each service defines the action taken for matching requests.
The services are matched in the order given.
"services": [
{
"type": "<type>",
"route": "/srv1/*"
},
...
]Matching to virtual host
```
{
  "type": "virtualHost",
  "host": "auth.localdev.nitor.zone",
  "services": [
    {
    }
  ]
}
```

The `host` specifies the virtual host to match.
The `services` lists the services that are active when a request comes in for the matching virtual host.
#### Serving static files
```
{
  "type": "static",
  "route": "/static/*",
  "dir": "/my/web/root",
  "readOnly": false,
  "cacheTimeout": 1800
}
```

The `route` specifies where the static files are exposed.
The `dir` specifies where the static files are loaded from.
Setting `readOnly` to `false` stops the service from assuming that files do not change during its lifetime.
#### Serving files from S3
```
{
  "type": "s3",
  "route": "/s3/*",
  "bucket": "webroot",
  "path": "pictures/big",
  "indexFile": "index.html",
  "region": "eu-central-1",
  "accessKey": "xyz",
  "secretKey": "123"
}
```

The `route` specifies where the S3 bucket contents are exposed.
The `bucket` specifies the S3 bucket.
The optional `path` specifies the base path inside the bucket (no access outside this path is allowed). You can use variables with `${variable}` syntax in the path.
The optional `indexFile` specifies what file to fetch when a directory is requested. The value is appended to the path.
The optional `region` specifies the AWS region.
The optional `accessKey` specifies the AWS access key.
The optional `secretKey` specifies the AWS secret key.
If the `region` or the `accessKey`/`secretKey` pair is not given, the standard AWS SDK mechanisms are used to detect/fetch the values from the environment or from the AWS instance profile.
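As an illustration of how these options combine, the sketch below derives an S3 object key from the configured `path`, the request path and `indexFile` (a hypothetical helper, not the actual implementation):

```javascript
// Resolve ${variable} placeholders in the configured path, append the
// request path, and add the index file for directory requests.
function buildS3Key(path, requestPath, indexFile, variables) {
  var resolved = path.replace(/\$\{(\w+)\}/g, function (match, name) {
    return variables[name] || '';
  });
  var key = resolved + requestPath;
  if (indexFile && key.charAt(key.length - 1) === '/') {
    key += indexFile;
  }
  return key;
}

console.log(buildS3Key('pictures/big', '/2024/', 'index.html', {})); // pictures/big/2024/index.html
console.log(buildS3Key('pictures/${user}', '/cat.jpg', 'index.html', { user: 'alice' })); // pictures/alice/cat.jpg
```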
#### DynamoDB access
```
{
  "type": "dynamo",
  "route": "/dynamo/:table/:operation",
  "table": "${table}",
  "operations": "GetItem,PutItem,UpdateItem",
  "region": "eu-central-1",
  "accessKey": "xyz",
  "secretKey": "123"
}
```

The `route` specifies where the low level DynamoDB API is exposed. You must use `:operation` as one of the path elements.
The `table` specifies how the table name is constructed. You can use variables with `${variable}` syntax.
The `operations` specifies the list of DynamoDB operations allowed.
The optional `region` specifies the AWS region.
The optional `accessKey` specifies the AWS access key.
The optional `secretKey` specifies the AWS secret key.
If the `region` or the `accessKey`/`secretKey` pair is not given, the standard AWS SDK mechanisms are used to detect/fetch the values from the environment or from the AWS instance profile.
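To make the variable substitution concrete, here is an illustrative sketch of how `${table}` could be resolved from the `:table` path parameter and the operation checked against the allow-list (hypothetical helpers, not the actual implementation):

```javascript
// Substitute ${variable} placeholders from the matched path parameters.
function resolveTableName(template, pathParams) {
  return template.replace(/\$\{(\w+)\}/g, function (match, name) {
    return pathParams[name];
  });
}

// Check the requested operation against the configured allow-list.
function isOperationAllowed(operations, operation) {
  return operations.split(',').indexOf(operation) !== -1;
}

console.log(resolveTableName('${table}', { table: 'users' })); // users
console.log(isOperationAllowed('GetItem,PutItem,UpdateItem', 'GetItem')); // true
console.log(isOperationAllowed('GetItem,PutItem,UpdateItem', 'DeleteItem')); // false
```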
#### Proxying to another HTTP service
```
{
  "type": "proxy",
  "route": "/proxy/*",
  "host": "example.org",
  "port": 80,
  "path": "/",
  "hostHeader": null,
  "receiveTimeout": 300
}
```

The `route` specifies where the proxied service is exposed. (Note: this differs from all other configuration sections.)
The `hostHeader` allows setting a Host header on the outgoing HTTP request; the original request information is available in the X-Host, X-Forwarded-For and X-Forwarded-Proto headers.
The `receiveTimeout` does not seem to work correctly yet.
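A sketch of the header handling described above (assumed behaviour for illustration; the header names come from the text, the helper itself is hypothetical):

```javascript
// Copy the incoming headers, record the original request information in
// X-* headers, and optionally override the Host header.
function buildProxyHeaders(incomingHeaders, scheme, remoteAddress, hostHeader) {
  var headers = {};
  Object.keys(incomingHeaders).forEach(function (name) {
    headers[name] = incomingHeaders[name];
  });
  headers['X-Host'] = incomingHeaders['Host'];
  headers['X-Forwarded-For'] = remoteAddress;
  headers['X-Forwarded-Proto'] = scheme;
  if (hostHeader) {
    headers['Host'] = hostHeader;
  }
  return headers;
}

var out = buildProxyHeaders({ Host: 'my.site' }, 'https', '10.0.0.7', 'example.org');
console.log(out['Host']);        // example.org
console.log(out['X-Host']);      // my.site
```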
#### Proxying to AWS Lambda
A list of path templates and matching lambda functions can be provided.
```
{
  "type": "lambda",
  "route": "/apis/*",
  "accessKey": "xyz",
  "secretKey": "123",
  "paths": [
    {
      "template": "/obj/{type}/{id}",
      "function": "api-dev-obj",
      "qualifier": "8"
    },
    {
      "template": "/ref/{type}/{id}",
      "function": "api-dev-ref"
    }
  ]
}
```

The `route` specifies where the lambda functions are exposed.
The optional `accessKey` specifies the AWS IAM access key for invoking lambda.
The optional `secretKey` specifies the AWS IAM secret key for invoking lambda.
The `paths` maps path templates to lambda functions.
The `template` is the path template to match against the request path.
The `function` is the function name or ARN to call.
The optional `qualifier` is the specific version tag of the function to call, `"$LATEST"` by default.
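The path templates can be matched with a simple pattern translation; an illustrative sketch (not the actual implementation):

```javascript
// Translate a template like "/obj/{type}/{id}" into a regexp and
// extract the named path parameters from a matching request path.
function matchTemplate(template, path) {
  var names = [];
  var pattern = template.replace(/\{(\w+)\}/g, function (match, name) {
    names.push(name);
    return '([^/]+)';
  });
  var result = new RegExp('^' + pattern + '$').exec(path);
  if (!result) {
    return null;
  }
  var params = {};
  names.forEach(function (name, i) {
    params[name] = result[i + 1];
  });
  return params;
}

console.log(matchTemplate('/obj/{type}/{id}', '/obj/user/42')); // { type: 'user', id: '42' }
console.log(matchTemplate('/obj/{type}/{id}', '/ref/user/42')); // null
```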
#### Caching content
Caches responses from supported services that follow this entry.
Unlike normal service entries, this lets the request through unless the cache is able to serve a cached copy of the response.
Currently only services of type `lambda` support caching.
The cache honors the response Cache-Control header. If it is missing or does not have either the max-age or s-maxage directive, the response will not be cached.
```
{
  "type": "cache",
  "route": "/apis/*"
}
```

The `route` specifies the paths to cache.
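The Cache-Control rule above can be sketched as follows (illustrative only, not the actual parser):

```javascript
// Return the cache TTL in seconds, or null when the response must not
// be cached (no header, or neither s-maxage nor max-age present).
function cacheTtlSeconds(cacheControl) {
  if (!cacheControl) {
    return null;
  }
  // For a shared cache s-maxage takes precedence over max-age.
  var match = /(?:^|[,\s])s-maxage=(\d+)/.exec(cacheControl) ||
              /(?:^|[,\s])max-age=(\d+)/.exec(cacheControl);
  return match ? parseInt(match[1], 10) : null;
}

console.log(cacheTtlSeconds('public, max-age=600'));       // 600
console.log(cacheTtlSeconds('s-maxage=300, max-age=600')); // 300
console.log(cacheTtlSeconds('no-store'));                  // null
```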
## Goals
- single runnable über jar
- no need for apache/nginx/varnish in front
- serves both static files and rest services
## Future goals
- make the stack more configurable so that it can be used in other projects too
Some features of apache/nginx/varnish that might be useful to reimplement:
- modifying/cleaning up of incoming requests
  - url, headers
- caching of dynamic content
  - cache to disk
  - on-the-fly compression
  - configurable cache key (query parameters, cookies, vary headers etc)
  - cache invalidation api
- rate limiting
## Ideas
- define request handling attributes in yaml file
  - match rules and action rules
  - match against path/header/query parameter etc
  - actions can be
    - cache (with settings x)
    - cors (with settings x)
- or is it better to have just a fluent api?