
Internal Error: Received RST_STREAM with error code 2 #156

Open

onetwopunch opened this issue Dec 7, 2020 · 18 comments

@onetwopunch
I've just migrated to use Config Validator using Terraform. I have a few custom Rego files, all of which pass tests locally and are fairly simple. The config-validator service is running just fine on the server (i.e. sudo systemctl status config-validator). When I run forseti using the command in the crontab I get a 500 error: Received RST_STREAM with error code 2, which, I'm assuming, is why none of my config validator constraints are being executed. I've tried this multiple times with the same error so it's not an ephemeral error. At the time of running it https://status.cloud.google.com/ is all green too so I'm not sure what's going on.

Steps to reproduce (from the forseti-server)

$ sudo su - ubuntu
$ (/usr/bin/flock -n /home/ubuntu/forseti-security/forseti_cron_runner.lock /home/ubuntu/forseti-security/install/gcp/scripts/run_forseti.sh -b forseti-server-d09b6fba || echo '[forseti-security] Warning: New Forseti cron job will not be started, because previous Forseti job is still running.') 2>&1

Full Error from Cloud Logging

[forseti-security][2.25.1] google.cloud.forseti.scanner.scanner(run): Error running scanner: ConfigValidatorScanner: 'Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/forseti_security-2.25.1-py3.6.egg/google/cloud/forseti/scanner/scanners/config_validator_util/validator_client.py", line 176, in review
    return self.stub.Review(review_request).violations
  File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 565, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking
    raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
	status = StatusCode.INTERNAL
	details = "Received RST_STREAM with error code 2"
	debug_error_string = "{"created":"@1607376144.569555686","description":"Error received from peer ipv6:[::1]:50052","file":"src/core/lib/surface/call.cc","file_line":1052,"grpc_message":"Received RST_STREAM with error code 2","grpc_status":
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/forseti_security-2.25.1-py3.6.egg/google/cloud/forseti/scanner/scanners/config_validator_util/validator_client.py", line 176, in review
    return self.stub.Review(review_request).violations
  File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 565, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking
    raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
	status = StatusCode.INTERNAL
	details = "Received RST_STREAM with error code 2"
	debug_error_string = "{"created":"@1607376144.569555686","description":"Error received from peer ipv6:[::1]:50052","file":"src/core/lib/surface/call.cc","file_line":1052,"grpc_message":"Received RST_STREAM with error code 2","grpc_status":13}"
>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/forseti_security-2.25.1-py3.6.egg/google/cloud/forseti/scanner/scanner.py", line 119, in run
    scanner.run()
  File "/usr/local/lib/python3.6/dist-packages/forseti_security-2.25.1-py3.6.egg/google/cloud/forseti/scanner/scanners/config_validator_scanner.py", line 203, in run
    for flattened_violations in self._retrieve_flattened_violations():
  File "/usr/local/lib/python3.6/dist-packages/forseti_security-2.25.1-py3.6.egg/google/cloud/forseti/scanner/scanners/config_validator_scanner.py", line 183, in _retrieve_flattened_violations
    for violations in self.validator_client.paged_review(cv_assets):
  File "/usr/local/lib/python3.6/dist-packages/forseti_security-2.25.1-py3.6.egg/google/cloud/forseti/scanner/scanners/config_validator_util/validator_client.py", line 122, in paged_review
    violations = self.review(paged_assets)
  File "/home/ubuntu/forseti-security/.eggs/retrying-1.3.3-py3.6.egg/retrying.py", line 49, in wrapped_f
    return Retrying(*dargs, **dkw).call(f, *args, **kw)
  File "/home/ubuntu/forseti-security/.eggs/retrying-1.3.3-py3.6.egg/retrying.py", line 206, in call
    return attempt.get(self._wrap_exception)
  File "/home/ubuntu/forseti-security/.eggs/retrying-1.3.3-py3.6.egg/retrying.py", line 247, in get
    six.reraise(self.value[0], self.value[1], self.value[2])
  File "/usr/local/lib/python3.6/dist-packages/six.py", line 703, in reraise
    raise value
  File "/home/ubuntu/forseti-security/.eggs/retrying-1.3.3-py3.6.egg/retrying.py", line 200, in call
    attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
  File "/usr/local/lib/python3.6/dist-packages/forseti_security-2.25.1-py3.6.egg/google/cloud/forseti/scanner/scanners/config_validator_util/validator_client.py", line 183, in review
    raise errors.ConfigValidatorAuditError(e)
google.cloud.forseti.scanner.scanners.config_validator_util.errors.ConfigValidatorAuditError: <_Rendezvous of RPC that terminated with:
	status = StatusCode.INTERNAL
	details = "Received RST_STREAM with error code 2"
	debug_error_string = "{"created":"@1607376144.569555686","description":"Error received from peer ipv6:[::1]:50052","file":"src/core/lib/surface/call.cc","file_line":1052,"grpc_message":"Received RST_STREAM with error code 2","grpc_status":13}"
>
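(Editorial note, not from the thread: RST_STREAM with error code 2 is the HTTP/2 INTERNAL_ERROR code, which usually means the peer tore the stream down mid-call, e.g. because the server process crashed or restarted. A minimal stdlib-only sketch to check whether anything is still accepting connections on the validator port from the debug string above:)

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Per the debug_error_string above, the validator should be listening on
# localhost:50052, e.g. port_open("localhost", 50052).
```

If this returns False right after a failed scan, the validator process likely died mid-request rather than returning an application error.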
@debakkerb

debakkerb commented Dec 9, 2020

I have the exact same issue in my environment. I only added the always_violates policy, so I didn't customise any of the Config Validator constraints. I tested the rules to see if that made a difference, and they worked fine. It's Config Validator that causes the issues.

These are the roles assigned to the Service Account attached to the VM that runs Forseti:

  forseti_identity_org_roles = [
    "roles/appengine.appViewer",
    "roles/bigquery.metadataViewer",
    "roles/browser",
    "roles/cloudasset.viewer",
    "roles/cloudsql.viewer",
    "roles/compute.networkViewer",
    "roles/iam.securityReviewer",
    "roles/orgpolicy.policyViewer",
    "roles/servicemanagement.quotaViewer",
    "roles/serviceusage.serviceUsageConsumer"
  ]

  forseti_identity_project_roles = [
    "roles/cloudsql.client",
    "roles/logging.logWriter",
    "roles/monitoring.metricWriter",
    "roles/storage.objectViewer",
    "roles/storage.objectCreator"
  ]

This is my terraform configuration for Forseti:

module "forseti" {
  source             = "terraform-google-modules/forseti/google"
  version            = "~> 5.2.2"
  gsuite_admin_email = "XYZ@XYZ.co.uk"
  domain             = "XXXYYYZZZ"
  project_id         = module.forseti_project.project_id

  org_id = local.organization_id
  composite_root_resources = [
    "organizations/${local.organization_id}"
  ]

  server_private   = true
  client_enabled   = false
  cloudsql_private = true

  server_region           = "europe-west2"
  cloudsql_region         = "europe-west2"
  storage_bucket_location = "europe-west2"
  bucket_cai_location     = "europe-west2"

  network                  = google_compute_network.forseti_network.name
  subnetwork               = google_compute_subnetwork.forseti_subnetwork.id
  server_service_account   = google_service_account.forseti_identity.email
  cscc_violations_enabled  = true
  cscc_source_id           = local.forseti_source_id
  server_type              = "n1-standard-4"
  admin_disable_polling    = true
  server_grpc_allow_ranges = []

  # Scanners
  enabled_apis_enabled               = false
  blacklist_enabled                  = false
  bigquery_enabled                   = false
  bucket_acl_enabled                 = false
  cloudsql_acl_enabled               = false
  audit_logging_enabled              = false
  firewall_rule_enabled              = false
  forwarding_rule_enabled            = false
  group_enabled                      = false
  groups_settings_enabled            = false
  iam_policy_enabled                 = false
  iap_enabled                        = false
  instance_network_interface_enabled = false
  ke_scanner_enabled                 = false
  ke_version_scanner_enabled         = false
  kms_scanner_enabled                = false
  lien_enabled                       = false
  location_enabled                   = false
  log_sink_enabled                   = false
  resource_enabled                   = false
  role_enabled                       = false
  service_account_key_enabled        = false

  cloudbilling_disable_polling      = true
  compute_disable_polling           = true
  container_disable_polling         = true
  crm_disable_polling               = true
  groups_settings_disable_polling   = true
  iam_disable_polling               = true
  logging_disable_polling           = true
  servicemanagement_disable_polling = true
  serviceusage_disable_polling      = true
  sqladmin_disable_polling          = true
  appengine_disable_polling         = true
  bigquery_disable_polling          = true
  storage_disable_polling           = true

  config_validator_enabled                  = true
  config_validator_violations_should_notify = true
}

I've stepped through the run_forseti.sh script line by line and checked the logs. The error is thrown when the script runs the scanner command, forseti scanner run.

@debakkerb

Ok, that was quick. I solved it with the help of this bug. I removed entries like - "organizations/**" and replaced ** with the actual organization ID. That resolved the gRPC error.

What puzzles me is that I have another organization where this worked absolutely fine a few months ago (Forseti TF module v5.2.0).
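(Editorial note: if you have many constraint files, the replacement above can be scripted. A hypothetical helper; the directory path and org ID are placeholders for your own values:)

```python
import pathlib

ORG_ID = "123456789012"  # placeholder: your numeric organization ID
CONSTRAINTS_DIR = "policy-library/policies/constraints"  # placeholder path

def pin_org_targets(root, org_id):
    """Replace the organizations/** wildcard with a concrete org ID in
    every constraint YAML under root; return the list of files changed."""
    changed = []
    for path in pathlib.Path(root).rglob("*.yaml"):
        text = path.read_text()
        fixed = text.replace("organizations/**", f"organizations/{org_id}")
        if fixed != text:
            path.write_text(fixed)
            changed.append(str(path))
    return changed
```

Run it once, e.g. pin_org_targets(CONSTRAINTS_DIR, ORG_ID), and diff the result before committing.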

@onetwopunch
Author

Thanks @debakkerb, I made this change, which fixed the error... but the constraint is not being executed now. Are you facing a similar issue?

@debakkerb

Unfortunately not, I was a bit too trigger-happy, I'm afraid. Btw, you can also replace ** with *, instead of the actual organization ID.

However, that didn't fire off any policy violations, even though I added the always_violates constraint. The problem as well is that it's impossible to debug config-validator, as there are no clear logs or anything.

@krab-skunk

krab-skunk commented Dec 14, 2020

Did you guys find any workaround? The same happens to me in versions 2.23.2 and 2.25.2 :(

  • I replaced ** with my org ID
  • I tried running Config Validator manually by invoking
    /home/ubuntu/forseti-security/external-dependencies/config-validator/ConfigValidatorRPCServer --policyPath='/home/ubuntu/policy-library/policy-library/policies' --policyLibraryPath='/home/ubuntu/policy-library/policy-library/lib' -port=50052

The forseti scanner run command then works without error, but when I run:
forseti notifier run --scanner_index_id $ID

no violations are found :/

{
  "serverMessage": "Resource 'config_validator_violations' has no violations"
}

NB: I was wondering why rules in the constraints folder have an organizations/** field, while we already have this scoping in the Forseti server itself. It's double work to define the scope in each rule when we could simply do it at the Forseti config level.

@onetwopunch
Author

@krab-skunk or @debakkerb was there an older version of Forseti that worked for you? Did this happen after an upgrade?

@krab-skunk

krab-skunk commented Dec 15, 2020

@onetwopunch It never worked for me; I'm fairly new to Forseti (1 month), so I was focusing on trying out the Python scanners (which work perfectly fine). For both versions (2.23.2 and 2.25.2), I installed them from scratch using Terraform. If someone could tell me which combination of Forseti release + Config Validator commit works well, I'd be happy to try it out; there's a 7k+ people IT company that everyone knows very well counting on it, I'm sure, as we want to get rid of RedLock ;P

@krab-skunk

Worth adding that YAML rules that do not contain an organizations/... target, like this one, iam_audit_log_all.yaml:

apiVersion: constraints.gatekeeper.sh/v1alpha1
kind: GCPIAMAuditLogConstraintV1
metadata:
  name: audit_log_all
  annotations:
    description: Checks that all services have all types of audit logs enabled.
    bundles.validator.forsetisecurity.org/healthcare-baseline-v1: security
spec:
  parameters:
    services: [allServices]
    log_types: [DATA_READ, DATA_WRITE, ADMIN_READ]

Works like a charm, just got 14000 notifications

@krab-skunk

krab-skunk commented Dec 18, 2020

@onetwopunch @debakkerb
Ok, it WORKS!! Even with the organizations/** syntax left in the YAML policies in the constraints folder, it finds all the issues via Config Validator!!! :)

The trick is to use Forseti from the master branch along with Config Validator running in Docker ;)

Here is my Terraform file:

module "forseti" {
  source             = "git::github.com/forseti-security/terraform-google-forseti"
  forseti_version    = "master"
  
  gsuite_admin_email = "$EMAIL"
  domain             = "$DOMAIN"
  project_id         = "$PROJECT"
  org_id             = "$ORG_ID"

#   composite_root_resources = [
#     "organizations/${local.organization_id}"
#   ]

#   server_private   = true
  client_enabled   = false
#   cloudsql_private = true

  server_region           = "us-east1"
  cloudsql_region         = "us-east1"
  storage_bucket_location = "us-east1"
  bucket_cai_location     = "us-east1"

#   network                  = google_compute_network.forseti_network.name
#   subnetwork               = google_compute_subnetwork.forseti_subnetwork.id
  server_service_account   = ""
  cscc_violations_enabled  = true
  cscc_source_id           = "organizations/$ORG_ID"
  server_type              = "n1-standard-4"
#   admin_disable_polling    = true
  server_grpc_allow_ranges = []

  # Scanners
  enabled_apis_enabled               = false
  blacklist_enabled                  = false
  bigquery_enabled                   = false
  bucket_acl_enabled                 = false
  cloudsql_acl_enabled               = false
  audit_logging_enabled              = false
  firewall_rule_enabled              = false
  forwarding_rule_enabled            = false
  group_enabled                      = false
  groups_settings_enabled            = false
  iam_policy_enabled                 = false
  iap_enabled                        = false
  instance_network_interface_enabled = false
  ke_scanner_enabled                 = false
  ke_version_scanner_enabled         = false
  kms_scanner_enabled                = false
  lien_enabled                       = false
  location_enabled                   = false
  log_sink_enabled                   = false
  resource_enabled                   = false
  role_enabled                       = false
  service_account_key_enabled        = false

  cloudbilling_disable_polling      = true
#   compute_disable_polling           = true
#   container_disable_polling         = true
#   crm_disable_polling               = true
#   groups_settings_disable_polling   = true
#   iam_disable_polling               = true
#   logging_disable_polling           = true
#   servicemanagement_disable_polling = true
#   serviceusage_disable_polling      = true
#   sqladmin_disable_polling          = true
#   appengine_disable_polling         = true
#   bigquery_disable_polling          = true
#   storage_disable_polling           = true

  config_validator_enabled                  = true
  config_validator_violations_should_notify = true
}

And this is how I run Config Validator:

sudo docker run --rm \
  -v /home/ubuntu/policy-library/policy-library/policies:/tmp/policies \
  -v /home/ubuntu/policy-library/policy-library/lib:/tmp/lib \
  -p50052:50052 \
  gcr.io/forseti-containers/config-validator \
  -policyPath=/tmp/policies \
  -policyLibraryPath=/tmp/lib \
  -port=50052

Thanks again @gkowalski-google for the tip to try using master branch instead :)

@nkaravias

nkaravias commented Dec 18, 2020

@krab-skunk I'm having the exact issue right now.

Can you reply with the image version you're using?

I'm using the following off of the charts

    image: gcr.io/forseti-containers/config-validator:572e207
    image: gcr.io/forseti-containers/forseti:v2.25.0
    = no errors but no violations are detected

    image: gcr.io/forseti-containers/config-validator
    image: gcr.io/forseti-containers/forseti:v2.25.0
    = Internal Error: Received RST_STREAM with error code 2

I have a single constraint I'm trying to test, without any wildcards, as per the previous comments:

apiVersion: constraints.gatekeeper.sh/v1alpha1
kind: GCPAlwaysViolatesConstraintV1
metadata:
  name: always_violates_all
  annotations:
    description: Testing policy, will always violate.
spec:
  constraintVersion: 0.1.0
  severity: high
  match:
    target: # {"$ref":"#/definitions/io.k8s.cli.setters.target"}
    - "organizations/myorgid"
  parameters: {}

Both are ancient images, but I've only had bad luck trying newer ones. If you can point me to something that will get the validator to not break ("RST_STREAM with error code 2") and also detect constraint violations you'll make my weekend much much better 👍

Edit: this is the scanner output I get from the orchestrator:

{
  "serverMessage": "Error running scanner: ConfigValidatorScanner: 'Traceback (most recent call last):
  File "/home/forseti/.local/lib/python3.6/site-packages/forseti_security-2.25.0-py3.6.egg/google/cloud/forseti/scanner/scanners/config_validator_util/validator_client.py", line 196, in reset
    self.stub.Reset(validator_pb2.ResetRequest())
  File "/home/forseti/.local/lib/python3.6/site-packages/grpc/_channel.py", line 565, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/home/forseti/.local/lib/python3.6/site-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking
    raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
	status = StatusCode.INTERNAL
	details = "Not supported"
	debug_error_string = "{"created":"@1608321431.434476587","description":"Error received from peer ipv4:10.118.253.219:50052","file":"src/core/lib/surface/call.cc","file_line":1052,"grpc_message":"Not supported","grpc_status":13}"
>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/forseti/.local/lib/python3.6/site-packages/forseti_security-2.25.0-py3.6.egg/google/cloud/forseti/scanner/scanner.py", line 119, in run
    scanner.run()
  File "/home/forseti/.local/lib/python3.6/site-packages/forseti_security-2.25.0-py3.6.egg/google/cloud/forseti/scanner/scanners/config_validator_scanner.py", line 203, in run
    for flattened_violations in self._retrieve_flattened_violations():
  File "/home/forseti/.local/lib/python3.6/site-packages/forseti_security-2.25.0-py3.6.egg/google/cloud/forseti/scanner/scanners/config_validator_scanner.py", line 178, in _retrieve_flattened_violations
    self.validator_client.reset()
  File "/home/forseti/forseti-security/.eggs/retrying-1.3.3-py3.6.egg/retrying.py", line 49, in wrapped_f
    return Retrying(*dargs, **dkw).call(f, *args, **kw)
  File "/home/forseti/forseti-security/.eggs/retrying-1.3.3-py3.6.egg/retrying.py", line 206, in call
    return attempt.get(self._wrap_exception)
  File "/home/forseti/forseti-security/.eggs/retrying-1.3.3-py3.6.egg/retrying.py", line 247, in get
    six.reraise(self.value[0], self.value[1], self.value[2])
  File "/home/forseti/.local/lib/python3.6/site-packages/six.py", line 703, in reraise
    raise value
  File "/home/forseti/forseti-security/.eggs/retrying-1.3.3-py3.6.egg/retrying.py", line 200, in call
    attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
  File "/home/forseti/.local/lib/python3.6/site-packages/forseti_security-2.25.0-py3.6.egg/google/cloud/forseti/scanner/scanners/config_validator_util/validator_client.py", line 203, in reset
    raise errors.ConfigValidatorResetError(e)
google.cloud.forseti.scanner.scanners.config_validator_util.errors.ConfigValidatorResetError: <_Rendezvous of RPC that terminated with:
	status = StatusCode.INTERNAL
	details = "Not supported"
	debug_error_string = "{"created":"@1608321431.434476587","description":"Error received from peer ipv4:10.118.253.219:50052","file":"src/core/lib/surface/call.cc","file_line":1052,"grpc_message":"Not supported","grpc_status":13}"
>'"
}

10.118.253.219:50052 is the IP for the config-validator service I'm using

@krab-skunk

krab-skunk commented Dec 18, 2020

I'm running the latest; basically all I do on the Forseti server is run this Docker container, which pulls the latest image:

docker images
REPOSITORY                                   TAG       IMAGE ID       CREATED        SIZE
gcr.io/forseti-containers/config-validator   latest    ba0f2009e549   23 hours ago   55.2MB

And with this image, you don't have to care about - "organizations/myorgid"; you can leave it as - "organizations/**".

As a starter, you can try this very simple policy: restrict_fw_rules_world_open.yaml

@nkaravias

nkaravias commented Dec 18, 2020

Ah, so you're running forseti on a VM and simply running the validator in a container?

What version of Forseti are you using? My problem is that the orchestrator, Config Validator, and the server all run in containers, so I'm struggling to figure out which versions of the server and config-validator work well with each other. At this point I either get the RST_STREAM error or simply no violations are recorded.

@krab-skunk

krab-skunk commented Dec 18, 2020

Yes, the GKE version is some kind of alpha stuff, and I plan to put this in prod with hundreds of GKE clusters that I want to monitor, so for now, as a simple POC, I'm trying not to go too wild. I went the VM way for now: Forseti on a VM, and Config Validator in a container on that same VM.

As you can see in my Terraform file, I run the latest Forseti from the master branch, because it's the only one that worked for me, as you can read in my posts above ;)

module "forseti" {
  source             = "git::github.com/forseti-security/terraform-google-forseti"
  forseti_version    = "master"
  

@nkaravias

Yeah, so I just tried again with the following images:

  • for the server image: gcr.io/forseti-containers/forseti:master
  • for the config validator image: gcr.io/forseti-containers/config-validator

I can see that the validator is properly copying the constraint I want to test from GCS:

/ $ cat /policy-library/policies/constraints/always_violates.yaml
apiVersion: constraints.gatekeeper.sh/v1alpha1
kind: GCPAlwaysViolatesConstraintV1
metadata:
  name: always_violates_all
  annotations:
    description: Testing policy, will always violate.
spec:
  constraintVersion: 0.1.0
  severity: high
  match:
    target: # {"$ref":"#/definitions/io.k8s.cli.setters.target"}
    - "organizations/<my org id>"
  parameters: {}

There are no errors thrown anywhere; however, no violations are detected by the config_validator:

2020-12-18 21:35:25,462 INFO google.cloud.forseti.notifier.notifier(run): Resource 'config_validator_violations' has no violations

I couldn't help but notice this message, but I doubt it's an issue:

2020-12-18 21:35:12,742 DEBUG google.cloud.forseti.scanner.scanners.config_validator_scanner(_retrieve): Resource type composite_root is not currently supported in Config Validator scanner.

I guess I won't end my week with a win. Thanks anyway.

@krab-skunk

krab-skunk commented Dec 18, 2020

Well, for sure some others here are far more expert than me in this matter, but if you set up composite_root in your Forseti server config file, it looks pretty clear that the Config Validator scanner will not work with it :/

Personally, I'm not using it; instead I'm using

root_resource_id: organizations/$MY_ORG_ID

along with lots of excluded_resources: [...]

@nkaravias

nkaravias commented Dec 18, 2020

Yeah, I switched from it to root and retried. The reason for using the composite root is that I can't use a folder as the root (and an inventory of my whole org would take too long). Based on the docs, something like root_resource_id: folders/<folder id> should work, but it doesn't.

Regardless, I can now see config_validator recording violations:

"serverMessage": "Retrieved 2 violations for resource 'config_validator_violations'"

However, nothing is recorded by the notifier output to GCS. Here's my config for the notifier. I'm using the enabled_apis rule to make sure the Forseti server rules work properly (they do).

        resources:
            - resource: enabled_apis_violations
              should_notify: true
              notifiers:
                - name: gcs_violations
                  configuration:
                    data_format: csv
                    gcs_path: gs://<my secret bucket>/scanner/scanner_violations
            - resource: config_validator_violations
              should_notify: true
              notifiers:
                - name: gcs_violations
                  configuration:
                    data_format: csv
                    gcs_path: gs://<my secret bucket>/scanner_violations

The content of scanner_violations only shows the enabled_apis rule violations, but not the two config_validator_violations that were reported.

Seeing 2 config_validator_violations captured is progress :) - However nothing is captured in the output (nor sent to Security Command Center)

Edit: I think I see the problem 👍 !!!!

@krab-skunk

and an inventory for my whole org would take too long

Totally understand that; that's why I put plenty of excluded folders and projects in my POC, so I can have a quick scan for testing ;)

Make sure, when you run the notifier, that the ID you use is indeed the one returned by your scanner:

forseti notifier run --scanner_index_id SCANNER_ID

and also that the source_id for your violation/cscc is well configured ;)

Personally, each time I run a test, I launch the following commands in order:

source /home/ubuntu/forseti_env.sh
sudo rm -rf /tmp/forseti-cai-*
forseti config format json
forseti config delete model
forseti inventory purge 0

MODEL_ID=$(/bin/date -u +%Y%m%dT%H%M%S)
forseti inventory create --import_as ${MODEL_ID}
forseti model use ${MODEL_ID}

forseti scanner run
forseti notifier run --scanner_index_id $ID_FROM_THE_SCANNER_COMMAND
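(Editorial note: if you script this loop, the scanner index id can be scraped from the scanner output instead of copied by hand. A sketch only; the exact wording `forseti scanner run` prints varies by version, so the pattern below is an assumption to adjust:)

```python
import re

def extract_index_id(scanner_output):
    """Pull the scanner index id (a long numeric timestamp) out of the
    text printed by `forseti scanner run`. The pattern is illustrative;
    adapt it to whatever your Forseti version actually prints."""
    match = re.search(r"\b(\d{14,})\b", scanner_output)
    return match.group(1) if match else None
```

The extracted value can then be passed straight to forseti notifier run --scanner_index_id.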

Good luck

@ralsu091

If you are using the terraform module to install Forseti with CV, make sure your policy files use the singular form of organization, folder etc.

I was facing this issue with always_violates and changed spec.match.target to organization/* to make it work; note the singular "organization" and the single * as well.
See GoogleCloudPlatform/policy-library#385
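(Editorial note: a small grep-style script can audit a policy library for the plural form described above. A sketch; it assumes targets are written as list items like - "organizations/...":)

```python
import pathlib
import re

# Matches list-item targets that use the plural resource names.
PLURAL_TARGET = re.compile(r'-\s*"?(organizations|folders|projects)/')

def find_plural_targets(root):
    """Return (file, line number, line) for every constraint target that
    uses the plural form; these may need the singular form instead."""
    hits = []
    for path in pathlib.Path(root).rglob("*.yaml"):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if PLURAL_TARGET.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

Point it at your policy-library checkout and review each hit against the linked issue before editing.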
