
databrew jobs and awscc provider #887

Open
dbbc96 opened this issue Mar 22, 2023 · 3 comments

Comments


dbbc96 commented Mar 22, 2023

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
  • The resources and data sources in this provider are generated from the CloudFormation schema, so they can only support the actions that the underlying schema supports. For this reason submitted bugs should be limited to defects in the generation and runtime code of the provider. Customizing behavior of the resource, or noting a gap in behavior are not valid bugs and should be submitted as enhancements to AWS via the CloudFormation Open Coverage Roadmap.

Terraform CLI and Terraform AWS Cloud Control Provider Version

Terraform v1.3.6 with provider registry.terraform.io/hashicorp/awscc v0.48.0

Affected Resource(s)

  • awscc_databrew_job

Terraform Configuration Files

resource "awscc_databrew_job" "databrew" {
  name     = "${var.agency}-${var.project}-${var.environment}-job"
  role_arn = aws_iam_role.databrew_role.arn
  type     = "RECIPE"

  output_location = {
    bucket = "tempbucket"
  }

  outputs = [
    {
      format = "CSV"
      format_options = {
        csv = {
          delimiter = ","
        }
      }
      location = {
        bucket = "tempbucket"
      }
      overwrite = true
    },
  ]

  project_name = "tempprojectname"
}

Debug Output

Expected Behavior

Terraform should report no changes.

Actual Behavior

  ~ resource "awscc_databrew_job" "databrew_BA" {
      + data_catalog_outputs      = [
        ] -> (known after apply)
      + database_outputs          = [
        ] -> (known after apply)
      + dataset_name              = (known after apply)
      + encryption_key_arn        = (known after apply)
      + encryption_mode           = (known after apply)
        id                        = "dor-dmp-fs-budget-t-job"
      + job_sample                = {
          + mode = (known after apply)
          + size = (known after apply)
        } -> (known after apply)
        name                      = "test-job"
      + output_location           = {
          + bucket       = "tempbucket"
          + bucket_owner = (known after apply)
          + key          = (known after apply)
        }
      ~ outputs                   = [
          ~ {
              + compression_format = (known after apply)
              + max_output_files   = (known after apply)
              + partition_columns  = (known after apply)
                # (4 unchanged attributes hidden)
            },
        ]
      + profile_configuration     = {
          + column_statistics_configurations = [
            ] -> (known after apply)
          + dataset_statistics_configuration = {
              + included_statistics = (known after apply)
              + overrides           = [
                ] -> (known after apply)
            } -> (known after apply)
          + entity_detector_configuration    = {
              + allowed_statistics = {
                  + statistics = (known after apply)
                } -> (known after apply)
              + entity_types       = (known after apply)
            } -> (known after apply)
          + profile_columns                  = [
            ] -> (known after apply)
        } -> (known after apply)
      + recipe                    = {
          + name    = (known after apply)
          + version = (known after apply)
        } -> (known after apply)
        tags                      = [
            # (4 unchanged elements hidden)
        ]
      + validation_configurations = [
        ] -> (known after apply)
        # (7 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Steps to Reproduce

  1. Run terraform apply again after the resource has been created by the same Terraform configuration.

Important Factoids

This setup is just a job with one CSV output; it does not use profile jobs. I also tried lifecycle ignore_changes, but that doesn't seem to work. If I comment out the outputs block and output_location, the plan shows no changes are needed.
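For reference, a minimal sketch of the lifecycle workaround mentioned above, using standard Terraform syntax with the attribute names from the plan output (the reporter notes this did not actually suppress the drift in their case):

```hcl
resource "awscc_databrew_job" "databrew" {
  # ... same arguments as in the configuration above ...

  lifecycle {
    # Attempt to suppress the perpetual diff on the attributes
    # that show as "(known after apply)" in every plan.
    ignore_changes = [
      outputs,
      output_location,
    ]
  }
}
```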

References

@breathingdust
Member

To confirm @dbbc96 you are seeing unexpected drift after a successful apply?

@dbbc96
Author

dbbc96 commented Mar 22, 2023

Correct. It still wants to add these items, which would be null since they belong to profile jobs, but the drift never goes away.

@dbbc96
Author

dbbc96 commented Sep 27, 2023

Just wanting to check on the status of this.
