ft: ZENKO-404 service account support for lifecycle #299
Conversation
Force-pushed from de716eb to ad9bf88
PR has been updated. Reviewers, please be cautious.
nice cleanup 👍
@@ -90,6 +81,15 @@
      }
    },
    "lifecycle": {
      "auth": {
        "type": "service",
        "account": "service-lifecycle",
service-lifecycle is actually the accountType attribute, not really the account.
Oh OK, so multiple such accounts can coexist with different canonical IDs? I re-used the terminology we used for the "auth" config objects.
With my latest commit I also removed the incrementing account ID, so if such a case can occur they would all have the same account ID in the array (though I doubt this can occur in practice?)
Maybe in the case where we re-generate a new service account because one is compromised, there could be a time when two of them co-exist before we remove the older one?
In case a key is compromised, Orbit can regenerate the key without changing the account canonical ID.
I think it's safe to assume you'll have at most one account of type service-lifecycle.
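The "at most one account per accountType" assumption above can be made explicit when the consumer looks the account up. A minimal sketch, assuming a hypothetical accounts array and attribute names (accountType, canonicalId) rather than the actual Orbit/management schema:

```javascript
'use strict';

// Hypothetical helper: select the single service account of a given
// accountType from a management-config account list. Throws if the
// "at most one" assumption from the discussion above is violated
// (e.g. transiently, during an account rotation).
function findServiceAccount(accounts, accountType) {
    const matches = accounts.filter(a => a.accountType === accountType);
    if (matches.length > 1) {
        throw new Error(`multiple accounts of type ${accountType}`);
    }
    return matches.length === 1 ? matches[0] : null;
}

// Illustrative data only, not real account contents
const accounts = [
    { accountType: 'service-replication', canonicalId: 'abc' },
    { accountType: 'service-lifecycle', canonicalId: 'def' },
];
console.log(findServiceAccount(accounts, 'service-lifecycle').canonicalId);
// → def
```

Failing loudly on duplicates (rather than silently picking the first) makes a rotation window visible instead of causing the lifecycle processor to sign with an unexpected key.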
@@ -47,6 +56,14 @@ if [[ "$MONGODB_DATABASE" ]]; then
    JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .queuePopulator.mongo.database=\"$MONGODB_DATABASE\""
fi

if [[ "$S3_HOST" ]]; then
    JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .s3.host=\"$S3_HOST\""
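The entrypoint convention shown in the hunk is that each environment variable appends a jq filter, and the combined filter is applied to config.json in one pass at the end. A self-contained sketch of that pattern for S3_HOST/S3_PORT (default values here are illustrative only, not the project's real defaults):

```shell
#!/bin/bash
# Illustrative defaults so the sketch runs standalone
S3_HOST="${S3_HOST:-localhost}"
S3_PORT="${S3_PORT:-8000}"

# Start from the identity filter, as docker-entrypoint.sh does
JQ_FILTERS_CONFIG="."

# Each set env var appends one assignment filter
if [[ "$S3_HOST" ]]; then
    JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .s3.host=\"$S3_HOST\""
fi
if [[ "$S3_PORT" ]]; then
    JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .s3.port=$S3_PORT"
fi

# The real entrypoint then applies the whole chain once, roughly:
#   jq "$JQ_FILTERS_CONFIG" conf/config.json > config.json
echo "$JQ_FILTERS_CONFIG"
```

Accumulating filters into one string keeps the entrypoint to a single jq invocation regardless of how many variables are set.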
Is this used as a fallback if extensions are missing s3 host/port?
My understanding is that to allow extensions to access an S3 endpoint they need those two environment variables, which get translated into config.json. We introduced this s3 config object for lifecycle. Is there any existing environment variable that is set automatically? I think NicolasT referred to kube setting some env vars for other services automatically in another PR, which could be used here.
All Kubernetes pods get injected with environment variables carrying the up-to-date IP and port information for all available services, which I believe is what NicolasT was referring to and could potentially be used 🙂
Wouldn't a simple docker run impose the weird var name format? It seems fine as an intermediary mapping step to inject, say, CACHE_SERVICE_HOST into REDIS_HOST (fictional names) in the pod template, but a name like CACHE_SERVICE_HOST is kind of dictated by the service names defined at runtime outside of the pod, whereas these variables in the entrypoint are defined at build time.
Yeah exactly, they would be dictated by the potentially different service names and release names, so we wouldn't be able to hard-code the values. My point really is that while I think the idea is good in general, it would require reworking the entire entrypoint script (at the very least) to take something like the release name as a parameter and assume the default service names in the Zenko K8s-specific context, which seems like a lot of work for not much benefit.
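The intermediary mapping step discussed above can be sketched in a few lines. CACHE_SERVICE_HOST/REDIS_HOST are the fictional names from the thread, and the fallback defaults are illustrative assumptions, not the project's real configuration:

```shell
#!/bin/bash
# Hypothetical shim: translate a Kubernetes-injected service variable
# (name dictated at runtime by the Service/release name) into the
# build-time name the entrypoint expects, with a last-resort default.
REDIS_HOST="${CACHE_SERVICE_HOST:-${REDIS_HOST:-localhost}}"
REDIS_PORT="${CACHE_SERVICE_PORT:-${REDIS_PORT:-6379}}"

echo "redis endpoint: $REDIS_HOST:$REDIS_PORT"
```

This captures the objection: the left-hand side of each mapping would differ per release/service name, so it cannot be hard-coded at image build time.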
Force-pushed from c687ea2 to 5be190b
Do it again human slave!:point_right: :runner: (Oh and the pull request has been updated, by the way.)
Force-pushed from 5be190b to 5085081
Force-pushed from 5085081 to 915f30c
Support a service account 'service-lifecycle' for the lifecycle feature

* move code related to replication management from lib/management.js to extensions/replication/management.js
* add an optional callback function 'applyState' that modules can use to periodically apply and save their state to the management db
* change authentication support for lifecycle to use configuration in the extensions.lifecycle.auth config object, to be able to configure the service account name
* update docker-entrypoint.sh with the missing S3_HOST and S3_PORT env var checks
* update docker-entrypoint.sh to add log level support (LOG_LEVEL)