ft: ZENKO-404 service account support for lifecycle #299

Merged
15 changes: 0 additions & 15 deletions conf/config.joi.js
@@ -16,21 +16,6 @@ const joiSchema = {
},
transport: transportJoi,
s3: hostPortJoi.required(),
-    auth: joi.object({
-        type: joi.alternatives().try('account', 'vault').required(),
-        account: joi.string()
-            .when('type', { is: 'account', then: joi.required() }),
-        vault: joi.object({
-            host: joi.string().required(),
-            port: joi.number().greater(0).required(),
-            adminPort: joi.number().greater(0)
-                .when('adminCredentialsFile', {
-                    is: joi.exist(),
-                    then: joi.required(),
-                }),
-            adminCredentialsFile: joi.string().optional(),
-        }).when('type', { is: 'vault', then: joi.required() }),
-    }).required(),
queuePopulator: {
cronRule: joi.string().required(),
batchMaxRead: joi.number().default(10000),
18 changes: 9 additions & 9 deletions conf/config.json
@@ -10,15 +10,6 @@
"host": "127.0.0.1",
"port": 8000
},
"auth": {
"type": "account",
"account": "bart",
"vault": {
"host": "127.0.0.1",
"port": 8500,
"adminPort": 8600
}
},
"queuePopulator": {
"cronRule": "*/5 * * * * *",
"batchMaxRead": 10000,
@@ -93,6 +84,15 @@
}
},
"lifecycle": {
"auth": {
"type": "service",
"account": "service-lifecycle",
Contributor: service-lifecycle is actually the accountType attribute, not really the account.

Contributor Author: Oh OK, so multiple such accounts can coexist with different canonical IDs? I re-used the terminology we used for the "auth" config objects.

Contributor Author: With my latest commit I also removed the incrementing account ID, so if such a case can occur they would all have the same account ID in the array (but I doubt this can happen in practice?).

Contributor Author: Maybe if we regenerate a service account because one is compromised, there could be a window where two of them coexist before we remove the older one?

Contributor: If a key is compromised, Orbit can regenerate the key without changing the account canonical ID. I think it's safe to assume you'll have at most one account of type service-lifecycle.
"vault": {
"host": "127.0.0.1",
"port": 8500,
"adminPort": 8600
}
},
"zookeeperPath": "/lifecycle",
"bucketTasksTopic": "backbeat-lifecycle-bucket-tasks",
"objectTasksTopic": "backbeat-lifecycle-object-tasks",
25 changes: 21 additions & 4 deletions docker-entrypoint.sh
@@ -6,6 +6,15 @@ set -e
# modifying config.json
JQ_FILTERS_CONFIG="."

if [[ "$LOG_LEVEL" ]]; then
if [[ "$LOG_LEVEL" == "info" || "$LOG_LEVEL" == "debug" || "$LOG_LEVEL" == "trace" ]]; then
JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .log.logLevel=\"$LOG_LEVEL\""
echo "Log level has been modified to $LOG_LEVEL"
else
echo "The log level you provided is incorrect (info/debug/trace)"
fi
fi

if [[ "$ZOOKEEPER_AUTO_CREATE_NAMESPACE" ]]; then
JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .zookeeper.autoCreateNamespace=true"
fi
@@ -47,6 +56,14 @@ if [[ "$MONGODB_DATABASE" ]]; then
JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .queuePopulator.mongo.database=\"$MONGODB_DATABASE\""
fi

if [[ "$S3_HOST" ]]; then
JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .s3.host=\"$S3_HOST\""
Contributor: Is this used as a fallback if extensions are missing the S3 host/port?

Contributor Author: My understanding is that for extensions to access an S3 endpoint, they need those two environment variables, which get translated into config.json. We introduced this s3 config object for lifecycle. Is there any existing environment variable that is set automatically? I think NicolasT referred to kube setting some env vars on other services automatically in another PR, which could be used here.

Contributor: All Kubernetes pods get injected with environment variables that carry the updated IP and port information for all available services, which I believe is what NicolasT was referring to and could potentially be used 🙂

Contributor: Wouldn't a simple docker run impose the weird var-name format? It seems fine as an intermediary mapping step to inject, say, CACHE_SERVICE_HOST into REDIS_HOST (fictional names) in the pod template, but a name like CACHE_SERVICE_HOST is kind of dictated by the service names defined at runtime outside of the pod, whereas these variables in the entrypoint are defined at build time.

Contributor: Yeah, exactly: they would be dictated by the potentially different service and release names, so we wouldn't be able to hard-code the values. My point really is that while the idea seems good in general, it would require reworking the entire entrypoint script (at the very least) to take something like the release name as a parameter and assume the default service names in a Zenko-K8s-specific context, which seems like a lot of work for not much benefit.
+fi
+
+if [[ "$S3_PORT" ]]; then
+    JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .s3.port=\"$S3_PORT\""
+fi
+
if [[ "$EXTENSIONS_REPLICATION_SOURCE_S3_HOST" ]]; then
JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .extensions.replication.source.s3.host=\"$EXTENSIONS_REPLICATION_SOURCE_S3_HOST\""
fi
@@ -123,12 +140,12 @@ if [[ "$EXTENSIONS_LIFECYCLE_RULES_ABORT_INCOMPLETE_MPU_ENABLED" ]]; then
JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .extensions.lifecycle.rules.abortIncompleteMultipartUpload.enabled=\"$EXTENSIONS_LIFECYCLE_RULES_ABORT_INCOMPLETE_MPU_ENABLED\""
fi

if [[ "$AUTH_TYPE" ]]; then
JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .auth.type=\"$AUTH_TYPE\""
if [[ "$EXTENSIONS_LIFECYCLE_AUTH_TYPE" ]]; then
JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .extensions.lifecycle.auth.type=\"$EXTENSIONS_LIFECYCLE_AUTH_TYPE\""
fi

if [[ "$AUTH_ACCOUNT" ]]; then
JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .auth.account=\"$AUTH_ACCOUNT\""
if [[ "$EXTENSIONS_LIFECYCLE_AUTH_ACCOUNT" ]]; then
JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .extensions.lifecycle.auth.account=\"$EXTENSIONS_LIFECYCLE_AUTH_ACCOUNT\""
fi

if [[ $JQ_FILTERS_CONFIG != "." ]]; then
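For context on what these hunks extend: the entrypoint accumulates a single jq filter string from whichever environment variables are set, then applies it to config.json in one pass. A rough Node equivalent of that flow (a hypothetical helper for illustration, not part of this repo) could look like:

```js
// Sketch: map selected environment variables onto config.json paths,
// mirroring what docker-entrypoint.sh builds up in JQ_FILTERS_CONFIG.
const fs = require('fs');

const ENV_TO_PATH = {
    EXTENSIONS_LIFECYCLE_AUTH_TYPE: 'extensions.lifecycle.auth.type',
    EXTENSIONS_LIFECYCLE_AUTH_ACCOUNT: 'extensions.lifecycle.auth.account',
    S3_HOST: 's3.host',
    S3_PORT: 's3.port',
};

function applyEnvOverrides(configPath) {
    const config = JSON.parse(fs.readFileSync(configPath, 'utf8'));
    Object.entries(ENV_TO_PATH).forEach(([envVar, path]) => {
        if (process.env[envVar] === undefined) {
            return;
        }
        // Walk to the parent object, creating intermediate objects as needed.
        const keys = path.split('.');
        const leaf = keys.pop();
        let node = config;
        keys.forEach(k => {
            node[k] = node[k] || {};
            node = node[k];
        });
        node[leaf] = process.env[envVar];
    });
    fs.writeFileSync(configPath, JSON.stringify(config, null, 4));
}

applyEnvOverrides('conf/config.json');
```

As with the jq filters, every override lands as a JSON string; note that the real script only applies the accumulated filter when JQ_FILTERS_CONFIG differs from ".".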
17 changes: 17 additions & 0 deletions extensions/lifecycle/LifecycleConfigValidator.js
@@ -4,6 +4,23 @@ const joiSchema = {
zookeeperPath: joi.string().required(),
bucketTasksTopic: joi.string().required(),
objectTasksTopic: joi.string().required(),
+    auth: joi.object({
+        type: joi.alternatives().try('account', 'service', 'vault')
+            .required(),
+        account: joi.string()
+            .when('type', { is: 'account', then: joi.required() })
+            .when('type', { is: 'service', then: joi.required() }),
+        vault: joi.object({
+            host: joi.string().required(),
+            port: joi.number().greater(0).required(),
+            adminPort: joi.number().greater(0)
+                .when('adminCredentialsFile', {
+                    is: joi.exist(),
+                    then: joi.required(),
+                }),
+            adminCredentialsFile: joi.string().optional(),
+        }).when('type', { is: 'vault', then: joi.required() }),
+    }).required(),
backlogMetrics: {
zkPath: joi.string().default('/lifecycle/run/backlog-metrics'),
intervalS: joi.number().default(60),
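To make the conditional requirements concrete, here is a minimal sketch of which auth objects the schema above accepts. It assumes the classic joi.validate() API of the joi generation this codebase used; it is not the repo's test code.

```js
const joi = require('joi');

// Trimmed copy of the auth schema added above.
const authSchema = joi.object({
    type: joi.alternatives().try('account', 'service', 'vault').required(),
    account: joi.string()
        .when('type', { is: 'account', then: joi.required() })
        .when('type', { is: 'service', then: joi.required() }),
    vault: joi.object({
        host: joi.string().required(),
        port: joi.number().greater(0).required(),
        adminPort: joi.number().greater(0)
            .when('adminCredentialsFile', {
                is: joi.exist(),
                then: joi.required(),
            }),
        adminCredentialsFile: joi.string().optional(),
    }).when('type', { is: 'vault', then: joi.required() }),
}).required();

// Passes: a 'service' type only needs an account name.
console.log(joi.validate(
    { type: 'service', account: 'service-lifecycle' }, authSchema).error);
// => null

// Fails: type 'vault' requires the vault object.
console.log(joi.validate({ type: 'vault' }, authSchema).error !== null);
// => true
```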
6 changes: 3 additions & 3 deletions extensions/lifecycle/lifecycleConsumer/LifecycleConsumer.js
@@ -22,6 +22,7 @@ class LifecycleConsumer extends EventEmitter {
* @param {string} kafkaConfig.hosts - list of kafka brokers
* as "host:port[,host:port...]"
* @param {Object} lcConfig - lifecycle configuration object
+     * @param {Object} lcConfig.auth - authentication info
* @param {String} lcConfig.objectTasksTopic - lifecycle object topic name
* @param {Object} lcConfig.consumer - kafka consumer object
* @param {String} lcConfig.consumer.groupId - kafka consumer group id
@@ -35,18 +36,17 @@
* @param {Object} s3Config - S3 configuration
* @param {Object} s3Config.host - s3 endpoint host
* @param {Number} s3Config.port - s3 endpoint port
-     * @param {Object} authConfig - authentication info on source
* @param {String} [transport="http"] - transport method ("http"
* or "https")
*/
-    constructor(zkConfig, kafkaConfig, lcConfig, s3Config, authConfig,
+    constructor(zkConfig, kafkaConfig, lcConfig, s3Config,
transport = 'http') {
super();
this.zkConfig = zkConfig;
this.kafkaConfig = kafkaConfig;
this.lcConfig = lcConfig;
+        this.authConfig = lcConfig.auth;
this.s3Config = s3Config;
-        this.authConfig = authConfig;
this._transport = transport;
this._consumer = null;

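A minimal wiring sketch for the new constructor signature, with hypothetical endpoint values (the real ones come from conf/Config): the auth settings now ride inside lcConfig rather than a separate authConfig argument.

```js
const LifecycleConsumer = require('./LifecycleConsumer');

const lifecycleConsumer = new LifecycleConsumer(
    { connectionString: '127.0.0.1:2181' },             // zkConfig
    { hosts: '127.0.0.1:9092' },                        // kafkaConfig
    {                                                   // lcConfig
        auth: { type: 'service', account: 'service-lifecycle' },
        objectTasksTopic: 'backbeat-lifecycle-object-tasks',
        consumer: { groupId: 'backbeat-lifecycle-consumer-group' },
    },
    { host: '127.0.0.1', port: 8000 },                  // s3Config
    'http');                                            // transport

lifecycleConsumer.start();
```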
23 changes: 20 additions & 3 deletions extensions/lifecycle/lifecycleConsumer/task.js
@@ -1,25 +1,42 @@
'use strict'; // eslint-disable-line
const werelogs = require('werelogs');

+const { initManagement } = require('../../../lib/management');
const LifecycleConsumer = require('./LifecycleConsumer');

const config = require('../../../conf/Config');
const zkConfig = config.zookeeper;
const kafkaConfig = config.kafka;
const lcConfig = config.extensions.lifecycle;
const s3Config = config.s3;
-const authConfig = config.auth;
const transport = config.transport;

const log = new werelogs.Logger('Backbeat:Lifecycle:Consumer');

const lifecycleConsumer = new LifecycleConsumer(
-    zkConfig, kafkaConfig, lcConfig, s3Config, authConfig, transport);
+    zkConfig, kafkaConfig, lcConfig, s3Config, transport);

werelogs.configure({ level: config.log.logLevel,
dump: config.log.dumpLevel });

-lifecycleConsumer.start();
+function initAndStart() {
+    initManagement({
+        serviceName: 'lifecycle',
+        serviceAccount: lcConfig.auth.account,
+    }, error => {
+        if (error) {
+            log.error('could not load management db',
+                      { error: error.message });
+            setTimeout(initAndStart, 5000);
+            return;
+        }
+        log.info('management init done');
+
+        lifecycleConsumer.start();
+    });
+}
+
+initAndStart();

process.on('SIGTERM', () => {
log.info('received SIGTERM, exiting');
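The initAndStart wrapper above is a plain retry loop around an asynchronous init step. A generic form of the same pattern (illustrative only, not repo code):

```js
// Keep retrying an async init step until it succeeds, then start the service.
function retryInit(initFn, startFn, log, delayMs = 5000) {
    initFn(error => {
        if (error) {
            log.error('init failed, retrying', { error: error.message });
            return setTimeout(() => retryInit(initFn, startFn, log, delayMs),
                              delayMs);
        }
        return startFn();
    });
}
```

Both the consumer task here and the producer task below instantiate this shape with initManagement as the init step.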
45 changes: 34 additions & 11 deletions extensions/lifecycle/lifecycleProducer/LifecycleProducer.js
@@ -10,6 +10,8 @@ const { errors } = require('arsenal');
const BackbeatProducer = require('../../../lib/BackbeatProducer');
const BackbeatConsumer = require('../../../lib/BackbeatConsumer');
const LifecycleTask = require('../tasks/LifecycleTask');
+const { getAccountCredentials } =
+    require('../../../lib/credentials/AccountCredentials');
const VaultClientCache = require('../../../lib/clients/VaultClientCache');
const safeJsonParse = require('../util/safeJsonParse');

@@ -33,27 +35,20 @@ class LifecycleProducer {
* @param {string} kafkaConfig.hosts - list of kafka brokers
* as "host:port[,host:port...]"
* @param {Object} lcConfig - lifecycle config
+     * @param {Object} lcConfig.auth - authentication info
* @param {Object} [lcConfig.backlogMetrics] - param object to
* publish backlog metrics to zookeeper (see {@link
* BackbeatConsumer} constructor)
* @param {Object} s3Config - s3 config
* @param {String} s3Config.host - host ip
* @param {String} s3Config.port - port
-     * @param {Object} authConfig - auth config
-     * @param {String} [authConfig.account] - account name
-     * @param {Object} [authConfig.vault] - vault details
-     * @param {String} authConfig.vault.host - vault host ip
-     * @param {number} authConfig.vault.port - vault port
-     * @param {number} authConfig.vault.adminPort - vault admin port
* @param {String} transport - http or https
*/
-    constructor(zkConfig, kafkaConfig, lcConfig, s3Config, authConfig,
-                transport) {
+    constructor(zkConfig, kafkaConfig, lcConfig, s3Config, transport) {
this._log = new Logger('Backbeat:LifecycleProducer');
this._zkConfig = zkConfig;
this._kafkaConfig = kafkaConfig;
this._lcConfig = lcConfig;
-        this._authConfig = authConfig;
this._s3Endpoint = `${transport}://${s3Config.host}:${s3Config.port}`;
this._transport = transport;
this._bucketProducer = null;
@@ -284,12 +279,25 @@
});
}

+    /**
+     * Set up the credentials (service account credentials or provided
+     * by vault depending on config)
+     * @return {undefined}
+     */
+    _setupCredentials() {
+        const { type } = this._lcConfig.auth;
+        if (type === 'vault') {
+            return this._setupVaultClientCache();
+        }
+        return undefined;
+    }
+
/**
* Set up the vault client cache for making requests to vault.
* @return {undefined}
*/
_setupVaultClientCache() {
-        const { vault } = this._authConfig;
+        const { vault } = this._lcConfig.auth;
const { host, port, adminPort, adminCredentialsFile } = vault;
const adminCredsJSON = fs.readFileSync(adminCredentialsFile);
const adminCredsObj = JSON.parse(adminCredsJSON);
@@ -321,6 +329,21 @@
if (cachedAccountCreds) {
return process.nextTick(() => cb(null, cachedAccountCreds));
}
+        const credentials = getAccountCredentials(this._lcConfig.auth,
+                                                  this._log);
+        if (credentials) {
+            this.accountCredsCache[canonicalId] = credentials;
+            return process.nextTick(() => cb(null, credentials));
+        }
+        const { type } = this._lcConfig.auth;
+        if (type === 'vault') {
+            return this._generateVaultAdminCredentials(canonicalId, cb);
+        }
+        return cb(errors.InternalError.customizeDescription(
+            `invalid auth type ${type}`));
+    }
+
+    _generateVaultAdminCredentials(canonicalId, cb) {
const vaultClient = this._vaultClientCache.getClient('lifecycle:s3');
const vaultAdmin = this._vaultClientCache.getClient('lifecycle:admin');
return async.waterfall([
@@ -366,7 +389,7 @@
* @return {undefined}
*/
start() {
-        this._setupVaultClientCache();
+        this._setupCredentials();
return async.parallel([
// Set up producer to populate the lifecycle bucket task topic.
next => this._setupProducer(this._lcConfig.bucketTasksTopic,
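Pulling the pieces of _getCredentials together, the resolution order is: per-canonical-ID cache, then statically configured account or service credentials, then the vault admin path. A condensed sketch with simplified signatures (caching of vault-issued keys omitted):

```js
const { errors } = require('arsenal');
const { getAccountCredentials } =
    require('../../../lib/credentials/AccountCredentials');

function resolveCredentials(authConfig, canonicalId, cache, vaultPath, log, cb) {
    if (cache[canonicalId]) {                 // 1. cache hit
        return process.nextTick(() => cb(null, cache[canonicalId]));
    }
    const creds = getAccountCredentials(authConfig, log);
    if (creds) {                              // 2. 'account'/'service' types
        cache[canonicalId] = creds;
        return process.nextTick(() => cb(null, creds));
    }
    if (authConfig.type === 'vault') {        // 3. ask vault to issue keys
        return vaultPath(canonicalId, cb);    //    e.g. _generateVaultAdminCredentials
    }
    return cb(errors.InternalError.customizeDescription(
        `invalid auth type ${authConfig.type}`));
}
```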
26 changes: 22 additions & 4 deletions extensions/lifecycle/lifecycleProducer/task.js
@@ -1,8 +1,9 @@
'use strict'; // eslint-disable-line
const werelogs = require('werelogs');
+const { initManagement } = require('../../../lib/management');
const LifecycleProducer = require('./LifecycleProducer');
-const { zookeeper, kafka, extensions, s3, auth, transport, log } =
-    require('../../../conf/Config');
+const { zookeeper, kafka, extensions, s3, transport, log } =
+    require('../../../conf/Config');

werelogs.configure({ level: log.logLevel,
dump: log.dumpLevel });
Expand All @@ -11,9 +12,26 @@ const logger = new werelogs.Logger('Backbeat:Lifecycle:Producer');

const lifecycleProducer =
new LifecycleProducer(zookeeper, kafka, extensions.lifecycle,
-        s3, auth, transport);
+        s3, transport);

-lifecycleProducer.start();
+function initAndStart() {
+    initManagement({
+        serviceName: 'lifecycle',
+        serviceAccount: extensions.lifecycle.auth.account,
+    }, error => {
+        if (error) {
+            logger.error('could not load management db',
+                         { error: error.message });
+            setTimeout(initAndStart, 5000);
+            return;
+        }
+        logger.info('management init done');
+
+        lifecycleProducer.start();
+    });
+}
+
+initAndStart();

process.on('SIGTERM', () => {
logger.info('received SIGTERM, exiting');
25 changes: 20 additions & 5 deletions extensions/lifecycle/tasks/LifecycleObjectTask.js
@@ -1,7 +1,10 @@
+const async = require('async');
const AWS = require('aws-sdk');

+const errors = require('arsenal').errors;
const BackbeatTask = require('../../../lib/tasks/BackbeatTask');
-const async = require('async');
+const { getAccountCredentials } =
+    require('../../../lib/credentials/AccountCredentials');
const getVaultCredentials =
require('../../../lib/credentials/getVaultCredentials');
const { attachReqUids } = require('../../../lib/clients/utils');
@@ -23,17 +26,29 @@ class LifecycleObjectTask extends BackbeatTask {
this.accountCredsCache = {};
}

-    _getCredentials(canonicalId, cb) {
+    _getCredentials(canonicalId, log, cb) {
const cachedCreds = this.accountCredsCache[canonicalId];
if (cachedCreds) {
return process.nextTick(() => cb(null, cachedCreds));
}
-        return getVaultCredentials(this.authConfig, canonicalId, 'lifecycle',
-            (err, accountCreds) => cb(err, accountCreds));
+        const credentials = getAccountCredentials(this.lcConfig.auth, log);
+        if (credentials) {
+            this.accountCredsCache[canonicalId] = credentials;
+            return process.nextTick(() => cb(null, credentials));
+        }
+        const { type } = this.lcConfig.auth;
+        if (type === 'vault') {
+            return getVaultCredentials(
+                this.authConfig, canonicalId, 'lifecycle',
+                (err, accountCreds) => cb(err, accountCreds));
+        }
+        return process.nextTick(
+            () => cb(errors.InternalError.customizeDescription(
+                `invalid auth type ${type}`)));
}

_setupClients(canonicalId, log, done) {
-        this._getCredentials(canonicalId, (err, accountCreds) => {
+        this._getCredentials(canonicalId, log, (err, accountCreds) => {
if (err) {
log.error('error generating new access key', {
error: err.message,
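Once credentials are resolved, _setupClients (truncated above) can construct a per-account S3 client. A hedged sketch using the AWS SDK v2 that this file imports, assuming path-style addressing against a local CloudServer endpoint:

```js
const AWS = require('aws-sdk');

// Build an S3 client bound to one account's credentials.
function makeS3Client(endpoint, accountCreds) {
    return new AWS.S3({
        endpoint,                  // e.g. 'http://127.0.0.1:8000'
        credentials: accountCreds, // resolved by _getCredentials
        s3ForcePathStyle: true,    // path-style addressing for a local S3
        signatureVersion: 'v4',
        maxRetries: 0,             // assumption: retries handled at the task layer
    });
}
```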
3 changes: 2 additions & 1 deletion extensions/replication/ReplicationConfigValidator.js
@@ -16,7 +16,8 @@ const joiSchema = {
type: joi.alternatives().try('account', 'role', 'service').
required(),
account: joi.string()
-            .when('type', { is: 'account', then: joi.required() }),
+            .when('type', { is: 'account', then: joi.required() })
+            .when('type', { is: 'service', then: joi.required() }),
vault: joi.object({
host: joi.string().required(),
port: joi.number().greater(0).required(),