Configuration
The geofront-server command takes a configuration file as a required argument. The configuration file is an ordinary Python script that defines the following required and optional variables. Note that all variable names have to be uppercase.
TEAM
(geofront.team.Team) The backend implementation for team authentication. For example, in order to authorize members of a GitHub organization, use the ~geofront.backends.github.GitHubOrganization implementation:
    from geofront.backends.github import GitHubOrganization

    TEAM = GitHubOrganization(
        client_id='GitHub OAuth app client id goes here',
        client_secret='GitHub OAuth app client secret goes here',
        org_login='your_org_name'  # in https://github.com/your_org_name
    )
Or you can implement your own backend by subclassing ~geofront.team.Team.
- Module geofront.team --- Team authentication: the interface for team authentication.
- Class geofront.backends.github.GitHubOrganization: the ~geofront.team.Team implementation for GitHub organizations.
- Class geofront.backends.bitbucket.BitbucketTeam: the ~geofront.team.Team implementation for Bitbucket Cloud teams.
- Class geofront.backends.stash.StashTeam: the ~geofront.team.Team implementation for Atlassian's Bitbucket Server (formerly Stash).
REMOTE_SET
(~geofront.remote.RemoteSet) The set of remote servers to be managed by Geofront. It can be anything as long as it's a mapping object. For example, you can hard-code it using a Python dict:
    from geofront.remote import Remote

    REMOTE_SET = {
        'web-1': Remote('ubuntu', '192.168.0.5'),
        'web-2': Remote('ubuntu', '192.168.0.6'),
        'web-3': Remote('ubuntu', '192.168.0.7'),
        'worker-1': Remote('ubuntu', '192.168.0.25'),
        'worker-2': Remote('ubuntu', '192.168.0.26'),
        'db-1': Remote('ubuntu', '192.168.0.50'),
        'db-2': Remote('ubuntu', '192.168.0.51'),
    }
Every key has to be a string, and every value has to be an instance of ~geofront.remote.Remote. A ~geofront.remote.Remote consists of a user, a hostname, and the port for SSH. For example, if you've ssh-ed to a remote server with the following command:

    $ ssh -p 2222 ubuntu@192.168.0.50

the corresponding ~geofront.remote.Remote object should be:

    Remote('ubuntu', '192.168.0.50', 2222)
You can add more dynamism by providing a custom dict-like mapping object; collections.abc.Mapping could help you implement it. For example, ~geofront.backends.cloud.CloudRemoteSet is a subtype of ~collections.abc.Mapping, and it dynamically loads the list of available instance nodes in the cloud, e.g. AWS EC2. Thanks to Apache Libcloud, it can work with more than 20 cloud providers like AWS, Azure, or Rackspace:
    from geofront.backends.cloud import CloudRemoteSet
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    driver_cls = get_driver(Provider.EC2_US_WEST)
    driver = driver_cls('access id', 'secret key')
    REMOTE_SET = CloudRemoteSet(driver)
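To illustrate what writing your own mapping involves, here is a minimal sketch of a custom remote set backed by a JSON file. The FileRemoteSet class and its file format are hypothetical, and Remote is a stand-in namedtuple so the snippet runs standalone; in a real configuration you would import ~geofront.remote.Remote instead:

```python
import json
from collections import namedtuple
from collections.abc import Mapping

# Stand-in for geofront.remote.Remote so this sketch runs standalone;
# in a real configuration, import Remote from geofront.remote instead.
Remote = namedtuple('Remote', 'user host port')


class FileRemoteSet(Mapping):
    """Hypothetical remote set that re-reads a JSON file shaped like
    {"web-1": ["ubuntu", "192.168.0.5", 22], ...} on every access,
    so edits to the file take effect without restarting the server."""

    def __init__(self, path):
        self.path = path

    def _load(self):
        with open(self.path) as f:
            return {name: Remote(*args)
                    for name, args in json.load(f).items()}

    # The three abstract methods of collections.abc.Mapping;
    # everything else (keys(), items(), __contains__, ...) comes free.
    def __getitem__(self, name):
        return self._load()[name]

    def __iter__(self):
        return iter(self._load())

    def __len__(self):
        return len(self._load())


REMOTE_SET = FileRemoteSet('/etc/geofront/remotes.json')
```

Because ~collections.abc.Mapping derives the rest of the mapping protocol from those three methods, this is usually all the code a read-only dynamic remote set needs.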
- Class geofront.remote.Remote: value type that represents a remote server to ssh.
- Class geofront.backends.cloud.CloudRemoteSet: the Libcloud-backed dynamic remote set.
- Module collections.abc --- Abstract Base Classes for Containers: this module provides abstract base classes that can be used to test whether a class provides a particular interface; for example, whether it is hashable or whether it is a mapping.
TOKEN_STORE
(werkzeug.contrib.cache.BaseCache) The store to save access tokens. It uses Werkzeug's cache interface, and Werkzeug provides several built-in implementations as well, e.g.:

- ~werkzeug.contrib.cache.MemcachedCache
- ~werkzeug.contrib.cache.RedisCache
- ~werkzeug.contrib.cache.FileSystemCache
For example, in order to store access tokens into Redis:
    from werkzeug.contrib.cache import RedisCache

    TOKEN_STORE = RedisCache(host='localhost', db=0)
Of course, you can implement your own backend by subclassing ~werkzeug.contrib.cache.BaseCache.
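To sketch what such a backend has to provide, here is a self-contained stand-in that mirrors the core method names of Werkzeug's cache interface (get/set/delete/clear). It does not actually subclass ~werkzeug.contrib.cache.BaseCache — in a real configuration you would — but it illustrates the contract a token store is expected to fulfill:

```python
import time


class MemoryTokenStore:
    """Sketch of a cache backend with Werkzeug-style methods.
    A real implementation would subclass
    werkzeug.contrib.cache.BaseCache; this stand-in only
    illustrates the get/set/delete/clear contract."""

    def __init__(self, default_timeout=300):
        self.default_timeout = default_timeout
        self._data = {}  # key -> (expires_at, value)

    def set(self, key, value, timeout=None):
        timeout = self.default_timeout if timeout is None else timeout
        self._data[key] = (time.monotonic() + timeout, value)
        return True

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._data[key]  # lazily evict expired tokens
            return None
        return value

    def delete(self, key):
        return self._data.pop(key, None) is not None

    def clear(self):
        self._data.clear()
        return True


TOKEN_STORE = MemoryTokenStore()
```

Note that an in-process store like this only works for a single-process deployment; shared backends like Redis or Memcached are the safer choice behind multiple workers.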
Although it's a required configuration, when -d <geofront-server -d>/--debug <geofront-server --debug> is enabled, ~werkzeug.contrib.cache.SimpleCache (whose contents are all gone after the geofront-server process terminates) is used by default.
- Cache --- Werkzeug: cache backend interface and implementations provided by Werkzeug.
KEY_STORE
(geofront.keystore.KeyStore) The store to save public keys for each team member. (Not the master key; don't confuse it with MASTER_KEY_STORE.)
If TEAM is a ~geofront.backends.github.GitHubOrganization object, KEY_STORE can also be ~geofront.backends.github.GitHubKeyStore, an adapter class for GitHub's per-account public key list:
    from geofront.backends.github import GitHubKeyStore

    KEY_STORE = GitHubKeyStore()
You can also store public keys in a database like SQLite, PostgreSQL, or MySQL through ~geofront.backends.dbapi.DatabaseKeyStore:
    import sqlite3
    from geofront.backends.dbapi import DatabaseKeyStore

    KEY_STORE = DatabaseKeyStore(sqlite3,
                                 '/var/lib/geofront/public_keys.db')
Some cloud providers like Amazon EC2 and Rackspace (Next Gen) offer a key pair service. ~geofront.backends.cloud.CloudKeyStore helps you use such a service as a public key store:
    from geofront.backends.cloud import CloudKeyStore
    from libcloud.storage.types import Provider
    from libcloud.storage.providers import get_driver

    driver_cls = get_driver(Provider.EC2)
    driver = driver_cls('api key', 'api secret key')
    KEY_STORE = CloudKeyStore(driver)
Changed in version 0.2.0: added the ~geofront.backends.dbapi.DatabaseKeyStore class and the ~geofront.backends.cloud.CloudKeyStore class.
Changed in version 0.3.0: added the ~geofront.backends.stash.StashKeyStore class.
MASTER_KEY_STORE
(geofront.masterkey.MasterKeyStore) The store to save the master key. (Not public keys; don't confuse it with KEY_STORE.)

The master key store should be secure and, at the same time, make the key hard to lose. Geofront provides some built-in implementations:

- ~geofront.masterkey.FileSystemMasterKeyStore: it stores the master key in the file system, as the name suggests. You can set the path to save the key. Although it's not that secure, it might help you to try out Geofront.
- ~geofront.backends.cloud.CloudMasterKeyStore: it stores the master key in a cloud object storage like AWS S3. It supports more than 20 cloud providers through the efforts of Libcloud.

For example, to keep the master key in the file system:
    from geofront.masterkey import FileSystemMasterKeyStore

    MASTER_KEY_STORE = FileSystemMasterKeyStore('/var/lib/geofront/id_rsa')
PERMISSION_POLICY
(~geofront.remote.PermissionPolicy) The permission policy that determines which remotes are visible to each team member, and which ones they are allowed to SSH into.

The default is ~geofront.remote.DefaultPermissionPolicy, which allows everyone in the team to view and access all available remotes through SSH.

If your remote set has metadata for ACL, i.e. group identifiers to allow, you can utilize it through ~geofront.remote.GroupMetadataPermissionPolicy.

If you need subtler and more complex rules for ACL, you surely can implement your own policy by subclassing the ~geofront.remote.PermissionPolicy interface.
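A metadata-driven configuration might look like the sketch below. Note that both the metadata key name 'allowed-groups' and the assumption that the policy's constructor takes that key name are illustrative guesses, not confirmed API; check the ~geofront.remote.GroupMetadataPermissionPolicy reference for the actual signature before copying:

```python
from geofront.remote import GroupMetadataPermissionPolicy

# 'allowed-groups' is a hypothetical metadata key attached to each
# Remote, listing the team groups permitted to see and access it.
# The actual constructor signature may differ.
PERMISSION_POLICY = GroupMetadataPermissionPolicy('allowed-groups')
```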
New in version 0.2.0.
MASTER_KEY_TYPE
(~typing.Type[~paramiko.pkey.PKey]) The type of the master key to generate. It has to be a subclass of paramiko.pkey.PKey:

- RSA: paramiko.rsakey.RSAKey
- ECDSA: paramiko.ecdsakey.ECDSAKey
- DSA (DSS): paramiko.dsskey.DSSKey

~paramiko.rsakey.RSAKey by default.
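For example, to generate an ECDSA master key instead of the default RSA (using the paramiko class path listed above):

```python
from paramiko.ecdsakey import ECDSAKey

# Assign the class itself, not an instance; Geofront generates
# the key from this type.
MASTER_KEY_TYPE = ECDSAKey
```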
New in version 0.4.0.
MASTER_KEY_BITS
(~typing.Optional[int]) The number of bits the generated master key should consist of. None by default, which means to follow MASTER_KEY_TYPE's own appropriate default.
Changed in version 0.4.0: since the appropriate MASTER_KEY_BITS depends on its MASTER_KEY_TYPE, the default value of MASTER_KEY_BITS became None (from 2048). None means to follow MASTER_KEY_TYPE's own default (appropriate) bits.
New in version 0.2.0.
MASTER_KEY_RENEWAL
(~typing.Optional[datetime.timedelta]) The interval of master key renewal. None means never. For example, if you want to renew the master key every week:
    import datetime

    MASTER_KEY_RENEWAL = datetime.timedelta(days=7)
A day by default.
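Conversely, since None means never, automatic renewal can be switched off entirely:

```python
# Disable periodic master key renewal altogether.
MASTER_KEY_RENEWAL = None
```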
TOKEN_EXPIRE
(datetime.timedelta) The time after which each access token expires. The shorter it is, the more secure it becomes, but team members have to authenticate more frequently, so too short a time would interrupt them.

A week by default.
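For example, to trade a little convenience for security by expiring tokens after three days instead of the default week (three days here is just an illustrative middle ground):

```python
import datetime

# Tokens become invalid three days after issuance.
TOKEN_EXPIRE = datetime.timedelta(days=3)
```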