add GCP credentials deployment #2118
Conversation
force-pushed from 04c53f0 to a02f0f8
This pull request does not have a backport label. Could you fix it @orouz? 🙏
📊 Allure Report - 💚 No failures were reported.
force-pushed from a02f0f8 to 8febe94
force-pushed from 773ec4b to 3a94c87
This pull request is now in conflicts. Could you fix it? 🙏
force-pushed from 3a94c87 to e8309ef
# Test a GCP Deployment Manager deployment using Application Default Credentials
gcp_dm_adc:
  name: CSPM GCP with ADC
this job tests the code path that uses GCP's application default credentials
return p.getApplicationDefaultCredentials(ctx, cfg, log)
# Test a GCP Deployment Manager deployment using a Service Account
gcp_dm_sa:
this job tests the code path that uses a service account json
return p.getCustomCredentials(ctx, cfg, log)
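the two jobs exercise the two credential code paths in the provider. as a rough sketch (the real selection happens in the Go provider; the config field names below are assumptions, not the actual ones), the decision reduces to:

```python
# Hypothetical sketch of the credential selection, NOT the actual Go provider:
# prefer explicit service-account JSON credentials, fall back to Application
# Default Credentials (ADC) when none are configured.
def resolve_credentials(cfg: dict) -> str:
    """Return which code path a config would take: 'custom' or 'adc'."""
    if cfg.get("credentials_json") or cfg.get("credentials_file_path"):
        # a service-account JSON key was supplied by the integration
        return "custom"
    # nothing explicit configured: use Application Default Credentials
    return "adc"
```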
there is some repetitiveness in this workflow as the two jobs do mostly the same thing. we could abstract some of the steps to an action (like actions/init_env, actions/check_findings, actions/deploy_ec).
run: |
  # Deploys a GCP Service Account
  cd ${{ env.DEPLOYMENT_MANAGER_DIR }}
  export DEPLOYMENT_NAME="${{ env.GCP_SA_DEPLOYMENT_NAME }}"
  export SERVICE_ACCOUNT_NAME="${{ env.GCP_SA_DEPLOYMENT_NAME }}-sa"
  ./deploy_service_account.sh
  mv KEY_FILE.json ../../${{ env.INTEGRATIONS_SETUP_DIR }}

  # Installs CSPM GCP integration
  cd ../../${{ env.INTEGRATIONS_SETUP_DIR }}
  export SERVICE_ACCOUNT_JSON_PATH="KEY_FILE.json"
  export DEPLOYMENT_NAME="${{ env.GCP_AGENT_DEPLOYMENT_NAME }}"
  poetry install
  poetry run python ./install_cspm_gcp_integration.py

  # Deploys the agent using an existing service account (SERVICE_ACCOUNT_NAME)
  cd ../../${{ env.DEPLOYMENT_MANAGER_DIR }}
  . ./set_env.sh && ./deploy.sh
there's some trickery going on here:

- we first deploy a service account using the new deployment manager template, which generates a KEY_FILE.json for the said service account
- we then install a CSPM GCP integration that uses the gcp.credentials.json variable to set the JSON content
- then we deploy the agent, but because we've defined a service account name, it won't deploy a service account and will instead use the one we deployed earlier

this is all combined into a single step because otherwise we wouldn't be able to share the JSON file
@@ -52,7 +51,7 @@ def generate_config(context):
             ),
             "serviceAccounts": [
                 {
-                    "email": f"$(ref.{sa_name}.email)",
+                    "email": get_service_account_email(sa_name, project),
we can't use ref because we might be using an already-deployed service account
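a helper like the one in the diff above presumably builds the service-account email directly from its name and project, which works whether or not the SA is part of the current deployment. the body here is an assumption based on GCP's documented service-account address format, not the PR's actual implementation:

```python
def get_service_account_email(sa_name: str, project: str) -> str:
    """Build the canonical GCP service-account email (assumed sketch).

    GCP service accounts use the documented address format
    NAME@PROJECT.iam.gserviceaccount.com, so no $(ref...) lookup into the
    deployment's own resources is needed.
    """
    return f"{sa_name}@{project}.iam.gserviceaccount.com"
```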
this is mostly copied from the original ./deploy.sh script
why not create a common.sh file?
# pylint: disable=duplicate-code
service_account = {
it'd be nice to define a ./common.py file and import it into both deployment templates. not sure why that didn't work tbh
I didn't manage to do it either. It's because we're using Deployment Manager templates, and other .py files cannot be imported into them.
@@ -115,7 +116,7 @@ fi
 # Apply the deployment manager templates
 run_command "gcloud deployment-manager deployments create --automatic-rollback-on-error ${DEPLOYMENT_NAME} --project ${PROJECT_NAME} \
     --template compute_engine.py \
-    --properties elasticAgentVersion:${STACK_VERSION},fleetUrl:${FLEET_URL},enrollmentToken:${ENROLLMENT_TOKEN},allowSSH:${ALLOW_SSH},zone:${ZONE},elasticArtifactServer:${ELASTIC_ARTIFACT_SERVER},scope:${SCOPE},parentId:${PARENT_ID}"
+    --properties elasticAgentVersion:${STACK_VERSION},fleetUrl:${FLEET_URL},enrollmentToken:${ENROLLMENT_TOKEN},allowSSH:${ALLOW_SSH},zone:${ZONE},elasticArtifactServer:${ELASTIC_ARTIFACT_SERVER},scope:${SCOPE},parentId:${PARENT_ID},serviceAccountName:${SERVICE_ACCOUNT_NAME}"
Will the SERVICE_ACCOUNT_NAME param be propagated from the UI, and if not, will we create one?

In case the user provides a wrong SA name, the deployment will not succeed and an error will be printed to the user. Are we ok with that? Do we want to add validation that the SA exists, and create one if it doesn't?
> Will the SERVICE_ACCOUNT_NAME param be propagated from the UI and if not we'll create one?

no, SERVICE_ACCOUNT_NAME is not user-facing. the addition of SERVICE_ACCOUNT_NAME to the deploy.sh script is done purely for testing the deployment of an agent with an existing service account. it simulates the new agentless flow, where a user will deploy a service account and we'd use its key in the integration credentials input.

to test that the service account deployment works properly, i've changed the existing agent deployment script so that it can operate either by deploying the full resources (VM + SA) or just a VM using an existing SA, which is currently passed only during tests.

> In case the user provides a wrong SA name, the deployment will not succeed and an error will be printed to the user. Are we ok with it? do we want to add validation that the SA exists and if not to create one?

as mentioned, SERVICE_ACCOUNT_NAME is only used in tests, so we don't want to deploy a new service account if SERVICE_ACCOUNT_NAME doesn't exist. if for some reason SERVICE_ACCOUNT_NAME is provided but doesn't exist, failing the deployment would be the right outcome.
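the template-side branching described here can be sketched roughly as follows. this is a simplified illustration of a Deployment Manager template, not the actual compute_engine.py; everything except the serviceAccountName property is an assumption:

```python
def generate_config(context):
    """Simplified Deployment Manager template sketch: only create a new
    service account resource when no existing SA name was passed in."""
    properties = context["properties"]
    sa_name = properties.get("serviceAccountName")
    resources = []
    if not sa_name:
        # no existing SA supplied: deploy a new one alongside the VM
        sa_name = f"{context['env']['deployment']}-sa"
        resources.append({"name": sa_name, "type": "iam.v1.serviceAccount"})
    # the agent VM is deployed either way (instance properties omitted here)
    resources.append({"name": "agent-vm", "type": "compute.v1.instance"})
    return {"resources": resources}
```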
def read_json_file(file_path):
    """Reads a json file and returns its content"""
    try:
        with open(file_path, "r") as json_file:
            return json_file.read()
    except FileNotFoundError:
        logger.error(f"Error: File '{file_path}' not found.")
        sys.exit(1)
    except IOError as e:
        logger.error(f"Error reading file '{file_path}': {e}")
        sys.exit(1)
@gurevichdmitry i would prefer using read_json, but it returns an empty dict when it fails (and is also missing a decode exception), and i don't want to check for an empty dict in my use case and then exit, as it's kinda dirty. i'd prefer updating read_json to remove the try and let consumers handle and act on errors; in my case, i would sys.exit(1) if reading the file failed. can we do the same in load_data()?
@orouz considering that read_json is only used in load_data, we can modify read_json to exit on error. This approach fulfills your requirement, and the reuse will be valid in your case.
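a sketch of what that modified read_json could look like (the behavior here reflects the proposal in this thread, not the merged code): parse the JSON and exit on any read or decode failure instead of returning an empty dict.

```python
import json
import logging
import sys

logger = logging.getLogger(__name__)


def read_json(file_path):
    """Read and parse a JSON file, exiting on any failure.

    Proposed-behavior sketch: instead of swallowing errors and returning
    an empty dict, fail loudly so callers like load_data() stop early.
    """
    try:
        with open(file_path, "r") as json_file:
            return json.load(json_file)
    except FileNotFoundError:
        logger.error(f"Error: File '{file_path}' not found.")
        sys.exit(1)
    except IOError as e:
        logger.error(f"Error reading file '{file_path}': {e}")
        sys.exit(1)
    except json.JSONDecodeError as e:
        logger.error(f"Error: File '{file_path}' is not valid JSON: {e}")
        sys.exit(1)
```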
Approving deployment manager files
force-pushed from 99f9a77 to 551662d
force-pushed from a3e7ea5 to cef8f43
@orouz Workflow and tests are approved. One more question: did you test the environment destruction when the service account is used?
yes, deletion of the GCP env works
- adds a Deployment Manager template to deploy a new service account and key
- adds a CI test to verify the deployed service account is usable

this is how a successful deployment looks from a user's POV after running the service account deployment script in a GCP shell: (screenshot)

tested that we get findings using the generated key by providing it as a JSON blob in the credentials option of the manual deployment flow: (screenshot)

closes [GCP] Automated credentials creation flow for agentless #2005