
New Splat operator not working in Terraform 0.12 alpha1 #19181

Closed
rberlind opened this issue Oct 23, 2018 · 10 comments

@rberlind
Contributor

rberlind commented Oct 23, 2018

The new Splat operator does not seem to be working in Terraform 0.12 alpha1, at least not in the example below.

Terraform Version

terraform -v
Terraform v0.12.0-alpha1
+ provider.aws v1.40.0-5-g4f24bc8b7

Terraform Configuration Files

terraform {
  required_version = ">= 0.12.0"
}

provider "aws" {
  region = var.aws_region
}

variable "aws_region" {
  description = "AWS region"
  default = "us-east-1"
}

variable "azs" {
  description = "AWS availability zones to use"
  default = ["a", "b", "c"]
}

resource "aws_instance" "ubuntu" {
  count = 3
  ami           = "ami-2e1ef954"
  instance_type = "t2.micro"
  availability_zone = format("%s%s", var.aws_region, var.azs[count.index])
  associate_public_ip_address = true
  tags = {
    Name  = format("terraform-0.12-for-demo-%d", count.index)
  }
  credit_specification {
    cpu_credits = "unlimited"
  }
}

# This uses the old splat expression
output "public_addresses_old" {
  value = [aws_instance.ubuntu.*.public_dns ]
}

# This uses the new full splat operator (*)
output "public_addresses_full_splat" {
  value = [ aws_instance.ubuntu[*].public_dns ]
}

# This uses the new for expression
output "public_addresses_new" {
  value = [
    for instance in aws_instance.ubuntu:
    instance.public_dns
  ]
}

Debug Output

$ terraform apply
Error: Invalid expression
  on main.tf line 40:
  40:   value = [ aws_instance.ubuntu[*].id ]
Expected the start of an expression, but found an invalid expression token.

Expected Behavior

Expected to get list of public_dns attributes from all 3 instances.

Actual Behavior

Got error indicating that my syntax was invalid.

Steps to Reproduce

  1. terraform init
  2. terraform apply

Additional Context

I set this up based on https://github.com/hashicorp/terraform/blob/v0.12-alpha/website/docs/configuration/expressions.html.md#splat-expressions
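As context for the three output styles above: per those 0.12 docs, the full splat form is defined as shorthand for the equivalent for expression, so all three outputs should yield the same list. A minimal standalone illustration of that equivalence, using hypothetical local values rather than the config above:

```hcl
locals {
  instances = [
    { public_dns = "ec2-1.compute.amazonaws.com" },
    { public_dns = "ec2-2.compute.amazonaws.com" },
  ]

  # The full splat and its equivalent for expression should agree:
  dns_splat = local.instances[*].public_dns
  dns_for   = [for i in local.instances : i.public_dns]
}
```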

@apparentlymart
Member

Thanks for reporting this, @rberlind!

I think this must be a regression over in the HCL2 repository. Once I've investigated further to confirm, we'll probably open an issue over there to track it, but we'll use this issue to track the fix being included in Terraform.

@rberlind
Contributor Author

Thanks @apparentlymart

@apparentlymart apparentlymart added this to the v0.12.0 milestone Oct 29, 2018
@rberlind
Contributor Author

rberlind commented Nov 2, 2018

By the way, it is only the full splat operator that is not working. While aws_instance.ubuntu[*].public_dns gives the error I previously mentioned, I was able to use the following example and get the expected output:

resource "aws_security_group" "allow_some_ingress" {
  name        = "allow_some_ingress"
  description = "Allow some inbound traffic"
  vpc_id      = "vpc-0e56931573507c9dd"

  ingress {
    from_port   = 8200
    to_port     = 8200
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 8500
    to_port     = 8500
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

output "ports" {
  value = aws_security_group.allow_some_ingress.ingress.*.from_port
}

This gave me:

Outputs:

ports = [
  8500,
  8200,
]

@Constantin07

I'm getting a similar error with the latest 0.12-dev, though it works fine with 0.11.10:

21:03:43 Upgrading modules...
21:03:43 - core in ../modules/vpc
21:03:43 - key_pairs in ../modules/key_pairs
21:03:43 
21:03:43 Initializing the backend...
21:03:45 
21:03:45 Successfully configured the backend "s3"! Terraform will automatically
21:03:45 use this backend unless the backend configuration changes.
21:03:46 Error refreshing state: 7 problems:
21:03:46 
21:03:46 - Attribute name required: Splat expressions (.*) may not be used here.
21:03:46 - Attribute name required: Splat expressions (.*) may not be used here.
21:03:46 - Attribute name required: Splat expressions (.*) may not be used here.
21:03:46 - Attribute name required: Splat expressions (.*) may not be used here.
21:03:46 - Attribute name required: Splat expressions (.*) may not be used here.
21:03:46 - Attribute name required: Splat expressions (.*) may not be used here.
21:03:46 - Attribute name required: Splat expressions (.*) may not be used here.

@apparentlymart
Member

Hi @Constantin07! That seems like a different problem than what Roger was describing. Would you mind opening a new issue for that and completing the issue template so we can understand better what's going on there? Thanks!

@Constantin07

@apparentlymart sure, will do that.

apparentlymart added a commit that referenced this issue Dec 14, 2018
This includes a number of upstream bug fixes, which in turn fix a number
of issues here in Terraform:

- New-style "full splat" operator now working correctly (#19181)
- The weird HCL1-ish single-line block syntax is now supported (#19153)
- Formatting of single-line blocks adds spaces around the braces (#19154)

This also includes a number of other upstream fixes that were not tracked
as issues in the Terraform repository. The highlights of those are:

- A for expression with the "for" keyword wrapped onto a newline after its
  opening bracket now parses correctly.
- In JSON syntax, interpolation sequences in properties of objects that
  are representing expressions now have their variables properly detected.
- The "flush" heredoc variant is now functional again after being broken
  in some (much-)earlier rework of the template parser.
apparentlymart added a commit that referenced this issue Dec 14, 2018
apparentlymart added a commit that referenced this issue Dec 15, 2018
@apparentlymart
Member

This was fixed by the HCL2 upgrade in the commits referenced above, and the fix will be included in the next release.

@rberlind
Contributor Author

Thanks @apparentlymart

xiaozhu36 added a commit to xiaozhu36/terraform that referenced this issue Jan 7, 2019
* configs: Reserve various names for future use

We want the forthcoming v0.12.0 release to be the last significant
breaking change to our main configuration constructs for a long time, but
not everything could be implemented in that release.

As a compromise then, we reserve various names we have some intent of
using in a future release so that such future uses will not be a further
breaking change later.

Some of these names are associated with specific short-term plans, while
others are reserved conservatively for possible later work and may be
"un-reserved" in a later release if we don't end up using them. The ones
that we expect to use in the near future were already being handled, so
we'll continue to decode them at the config layer but also produce an
error so that we don't get weird behavior downstream where the
corresponding features don't work yet.

* core: Remove GraphSemanticChecker, etc

These overly-general interfaces are no longer used anywhere, and their
presence in the important-sounding semantics.go file was a distracting
red herring.

We'd previously replaced the one checker in here with a simple helper
function for checking input variables, and that's arguably more at home
with all of the other InputValue functionality in variables.go, and that
allows us to remove semantics.go (and its associated test file) altogether
and make room for some forthcoming new files for static validation.

* configs/configschema: Block.StaticValidateTraversal method

This allows basic static validation of a traversal against a schema, to
verify that it represents a valid path through the structural parts of
the schema.

The main purpose of this is to produce better error messages (using our
knowledge of the schema) than we'd be able to achieve by just relying
on HCL expression evaluation errors. This is particularly important for
nested blocks because it may not be obvious whether one is represented
internally by a set or a list, and incorrect usage would otherwise produce
a confusing HCL-oriented error message.

* core: Static-validate resource references against schemas

In the initial move to HCL2 we started relying only on full expression
evaluation to catch attribute errors, but that's not sufficient for
resource attributes in practice because during validation we can't know
yet whether a resource reference evaluates to a single object or to a
list of objects (if count is set).

To address this, here we reinstate some static validation of resource
references by analyzing directly the reference objects, disregarding any
instance index if present, and produce errors if the remaining subsequent
traversal steps do not correspond to items within the resource type
schema.

This also allows us to produce some more specialized error messages for
certain situations. In particular, we can recognize a reference like
aws_instance.foo.count, which in 0.11 and prior was a weird special case
for determining the count value of a resource block, and offer a helpful
error showing the new length(aws_instance.foo) usage pattern.
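The migration described above can be sketched as follows (hypothetical resource name; not from this repository's fixtures):

```hcl
# 0.11 and earlier: "count" was a weird special-case attribute
#   value = "${aws_instance.foo.count}"

# 0.12: the resource's instances form a list, so use length()
output "instance_count" {
  value = length(aws_instance.foo)
}
```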

This eventually delegates to the static traversal validation logic that
was added to the configschema package in a previous commit, which also
includes some specialized error messages that distinguish between
attributes and block types in the schema so that the errors relate more
directly to constructs the user can see in the configuration.

In future we could potentially move more of the checks from the dynamic
schema construction step to the static validation step, but resources
are the reference type that most needs this immediately due to the
ambiguity caused by the instance indexing syntax. We can safely refactor
other reference types to be statically validated in later releases.

This is verified by two pre-existing context validate tests which we
temporarily disabled during earlier work (now re-enabled) and also by a
new validate test aimed specifically at the special case for the "count"
attribute.

* backend/remote: implement the Local interface

* formatting edits

* vendor: Upgrade hashicorp/hcl2 to latest

See https://github.com/hashicorp/hcl2/pull/58

* command/format: Fix rendering of attribute-agnostic diagnostics

* Fix tests after upgrading hcl

* Update CHANGELOG.md

* helper/schema: Tell Core attribute is optional if set conditionally

The SDK has a mechanism that effectively makes it possible to declare an
attribute as being _conditionally_ required, which is not a concept that
Terraform Core is aware of.

Since this mechanism is in practice only used for a small UX improvement
in prompting for these values interactively when the environment variable
is not set, we avoid here introducing all of this complexity into the
plugin protocol by just having the provider selectively modify its schema
if it detects that such an attribute might be set dynamically.

This then prevents Terraform Core from validating the presence of the
argument or prompting for a new value for it, allowing the null value to
pass through into the provider so that the default value can be generated
again dynamically.

This is a kinda-kludgey solution which we're accepting here because the
alternative would be a much-more-complex two-pass decode operation within
Core itself, and that doesn't seem worth it.

This fixes #19139.

* helper/schema: Better mimic some undocumented behaviors in Core schema

Since the SDK's schema system conflates attributes and nested blocks, it's
possible to state some nonsensical schema situations such as:

- A nested block is both optional but has MinItems > 0
- A nested block is entirely computed but has MinItems or MaxItems set

Both of these weird situations are handled here in the same way that the
existing helper/schema validation code would've handled them: by
effectively disabling the MinItems/MaxItems checks where they would've
been ignored before.


* helper/schema: Update docs for PromoteSingle

This is no longer effective and should not be used in any new schema.

* preserve possible zero values when normalizing

When normalizing flatmapped containers, compare the attributes to the
prior state and preserve pre-existing zero-length or unknown values. A
zero-length value that was previously unknown is preserved as a
zero-length value, as that may have been computed as such by the
provider.

* return early when comparing Null values

* computed value wasn't being set

* remove unnecessary computed flag

The "with_list" attr wasn't actually computed.

Make sure we read with the correct function.

* add test for computed map value

This ensures that a computed map can be correctly applied.

* core: Make resource type schema versions visible to callers

Previously we were fetching these from the provider but then immediately
discarding the version numbers because the schema API had nowhere to put
them.

To avoid a late-breaking change to the internal structure of
terraform.ProviderSchema (which is constructed directly all over the
tests) we're retaining the resource type schemas in a new map alongside
the existing one with the same keys, rather than just switching to
using the providers.Schema struct directly there.

The methods that return resource type schemas now return two arguments,
intentionally creating a little API friction here so each new caller can
be reminded to think about whether they need to do something with the
schema version, though it can be ignored by many callers.

Since this was a breaking change to the Schemas API anyway, this also
fixes another API wart where there was a separate method for fetching
managed vs. data resource types and thus every caller ended up having a
switch statement on "mode". Now we just accept mode as an argument and
do the switch statement within the single SchemaForResourceType method.

* Update CHANGELOG.md

* Update CHANGELOG.md

* Update CHANGELOG.md

* lang/funcs: zipmap accepts tuple of values and produces object

Now that our language supports tuple/object types in addition to list/map
types, it's convenient for zipmap to be able to produce an object type
given a tuple, since this makes it symmetrical with "keys" and "values"
such that the following identity holds for any map or object value "a":

    a == zipmap(keys(a), values(a))

When the values sequence is a tuple, the result has an object type whose
attribute types correspond to the given tuple types.

Since an object type has attribute names as part of its definition, there
is the additional constraint here that the result has an unknown type
(represented by the dynamic pseudo-type) if the given values is a tuple
and the given keys contains any unknown values. This isn't true for values
as a list because we can predict the resulting map element type using
just the list element type.
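The identity above can be seen with a concrete object value (a hypothetical example, not from the source):

```hcl
locals {
  a = { name = "web", port = 8080 }

  # keys(a) yields a list of attribute names and values(a) a tuple of
  # (string, number), so zipmap returns an object that round-trips to a:
  roundtrip = zipmap(keys(local.a), values(local.a))
}
```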

* website: Fix redundant "be" in workspaces documentation

* Use registry alias to fetch providers

* Update CHANGELOG.md

* Update CHANGELOG.md

* core: Validate module references

Previously we were making an invalid assumption in evaluating module call
references (like module.foo) that the module must exist, which is
incorrect for that particular case because it's a reference to a child
module, not to an object within the current module.

However, now that we have the mechanism for static validation of
references, we'll deal with this one there so it can be caught sooner.
That then makes the original assumption valid, though for a different
reason.

This is verified by two new context tests for validation:
  - TestContext2Validate_invalidModuleRef
  - TestContext2Validate_invalidModuleOutputRef

* selvpc -> selectel

* test a nil computed value within an expression

Comparing a nil-computed value was returning unknown, preventing the
data source from being evaluated.

* Update CHANGELOG.md

* Updates path name

* update go.mod and vendor

* backend/remote: support the new force-unlock API

Add support for the new `force-unlock` API and at the same time improve
performance a bit by reducing the amount of API calls made when using
the remote backend for state storage only.

* go-mod: update go-tfe dependency

* Update CHANGELOG.md

* core: Reject provider schemas with version < 0

There's no reason for a negative version, so by blocking it now we'll
ensure that none creep in.

The more practical short-term motivation for this is that we're still
using uint64 for these internally in some cases and so this restriction
ensures that we won't run into rough edges when converting from int64 to
uint64 at those boundaries until we later fix everything to use int64
consistently.

* core: NodePlanDeposedResourceInstanceObject populate EvalReadStateDeposed

The Provider field is required and will cause a panic if not populated.

* providers: Consistently use int64 for schema versions

The rest of Terraform is still using uint64 for this in various spots, but
we'll update that gradually later. We use int64 here because that matches
what's used in our protobuf definition, and unsigned integers are not
portable across all of the protobuf target languages anyway.

* core: Remove some leftover dead code in ProviderSchema

* core: Automatically upgrade resource instance states on read

If an instance object in state has an earlier schema version number then
it is likely that the schema we're holding won't be able to decode the
raw data that is stored. Instead, we must ask the provider to upgrade it
for us first, which might also include translating it from flatmap form
if it was last updated with a Terraform version earlier than v0.12.

This ends up being a "seam" between our use of int64 for schema versions
in the providers package and uint64 everywhere else. We intend to
standardize on int64 everywhere eventually, but for now this remains
consistent with existing usage in each layer to keep the type conversion
noise contained here and avoid mass-updates to other Terraform components
at this time.

This also includes a minor change to the test helpers for the
backend/local package, which were inexplicably setting a SchemaVersion of
1 on the basic test state but setting the mock schema version to zero,
creating an invalid situation where the state would need to be downgraded.

* catch conversion errors in PrepareProviderConfig

Errors were being ignored with the intention that they would be caught
later in validation, but it turns out we need to catch them earlier.

The legacy schemas also allowed providers to set an empty string for a
bool value, which we need to handle here, since it's not being handled
from user input like a normal config value.

* backend/local: decode variables with cty.DynamicPseudoType

Variables values are marshalled with an explicit type of
cty.DynamicPseudoType, but were being decoded using `Implied Type` to
try and guess the type. This was causing errors because `Implied Type`
does not expect to find a late-bound value.

* StateFunc tests

* Handle StateFuncs in provider shim

Any state modifying functions can only be run once during the plan-apply
cycle. When regenerating the Diff during ApplyResourceChange, strip out
all StateFunc and CustomizeDiff functions from the schema.

The NewExtra diff field is where config data that was modified by a
StateFunc is stored, and it needs to be maintained between plan and apply.

During PlanResourceChange, store any NewExtra data from the Diff in the
PlannedPrivate data, and re-insert the NewExtra data into the Diff
generated during ApplyResourceChange.

* vendor: update github.com/kardianos/osext

This adds support for some additional Go platforms.

* format/state: added missing newline in the `outputs` output (#19542)

* configs/configupgrade: Basic migration of provider blocks

This involved some refactoring of how block bodies are migrated, which
still needs some additional work to deal with meta-arguments but is now
at least partially generalized to support both resource and provider
blocks.

* configs/configupgrade: Generalize migration of block bodies

The main area of interest in upgrading is dealing with special cases for
individual block items, so this generalization allows us to use the same
overall body-processing logic for everything but to specialize just how
individual items are dealt with, which we match by their names as given
in the original input source code.

* configs/configupgrade: Pass through diagnostics from body upgrades

* configs/configupgrade: Upgrade rules for the "terraform" block type

This includes the backend configuration, so we need to load the requested
backend and migrate the included configuration per that schema.

* configs/configupgrade: Populate the test provider schema

This will allow us to test some schema-sensitive migration rules.

* configs/configupgrade: Add some logging and enable it for tests

* configs/configupgrade: Test that map attrs as blocks are fixed

The old parser was forgiving in allowing the use of block syntax where a
map attribute was expected, but the new parser is not (in order to allow
for dynamic map keys, for expressions, etc) and so the upgrade tool must
fix these to use attribute syntax.

* configs/configupgrade: Decide on blank lines by ends of items

Previously we were using the line count difference between the start of
one item and the next to decide whether to insert a blank line between
two items, but that is incorrect for multi-line items.

Instead, we'll now count the difference from the final line of the
previous item to the first line of the next, as best we can with the
limited position info recorded by the HCL1 parser.

* configs/configupgrade: Upgrade expressions nested in HCL list/object

In the old world, lists and maps could be created either using functions
in HIL or list/object constructs in HCL. Here we ensure that in the HCL
case we'll apply any required expression transformations to the individual
items within HCL's compound constructs.

* configs/configupgrade: Replace calls to list() and map()

We now have native language features for declaring tuples and objects,
which are the idiomatic way to construct sequence and mapping values that
can then be converted to list, set, and map types as needed.

* configs/configupgrade: Replace lookup(...) with index syntax

If lookup is being used with only two arguments then it is equivalent to
index syntax and more readable that way, so we'll replace it.

Ideally we'd do similarly for element(...) here but sadly we cannot
because we can't prove in static analysis that the user is not relying
on the modulo wraparound behavior of that function.
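As an example of the rewrite described (hypothetical variable name), the two-argument lookup form and the index syntax it becomes:

```hcl
# Before upgrade:
#   value = "${lookup(var.instance_amis, var.region)}"

# After upgrade, the equivalent and more readable index syntax:
output "ami" {
  value = var.instance_amis[var.region]
}
```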

* configs/configupgrade: Use break to cancel default function rendering

We're using break elsewhere in here so it was weird to have a small set
of situations that return instead, which could then cause confusion for
future maintenance if a reader doesn't notice that control doesn't always
leave the outer switch statement.

* configs/configupgrade: Upgrading of simple nested blocks

* configs/configupgrade: Convert block-as-attr to dynamic blocks

Users discovered that they could exploit some missing validation in
Terraform v0.11 and prior to treat block types as if they were attributes
and assign dynamic expressions to them, with some significant caveats and
gotchas resulting from the fact that this was never intended to work.

However, since such patterns are in use in the wild we'll convert them
to a dynamic block during upgrade. With only static analysis we must
unfortunately generate a very conservative, ugly dynamic block with
every possible argument set. Users ought to then clean up the generated
configuration after confirming which arguments are actually required.
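A sketch of the kind of rewrite described above (hypothetical names; the real tool conservatively emits every possible argument, abbreviated here):

```hcl
# 0.11-era misuse: a block type assigned as if it were an attribute
#   ingress = "${var.ingress_rules}"

# Conservative dynamic block generated on upgrade:
dynamic "ingress" {
  for_each = var.ingress_rules
  content {
    from_port   = ingress.value.from_port
    to_port     = ingress.value.to_port
    protocol    = ingress.value.protocol
    cidr_blocks = ingress.value.cidr_blocks
  }
}
```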

* configs/configupgrade: Distinguish data resources in analysis

Early on it looked like we wouldn't need to distinguish these since we
were only analyzing for provider types, but we're now leaning directly
on the resource addresses later on and so we need to make sure we produce
valid ones when data resources are present.

* configs/configupgrade: Print trailing comments inside blocks

Previously we were erroneously moving these out of their original block
into the surrounding body. Now we'll make sure to collect up any remaining
ad-hoc comments inside a nested block body before closing it.

* configs/configupgrade: Normalize and upgrade reference expressions

The reference syntax is not significantly changed, but there are some
minor additional restrictions on identifiers in HCL2 and as a special case
we need to rewrite references to data.terraform_remote_state .

Along with those mandatory upgrades, we will also switch references to
using normal index syntax where it's safe to do so, as part of
de-emphasizing the old strange integer attribute syntax (like foo.0.bar).
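For instance (hypothetical resource name), the old integer attribute syntax becomes normal index syntax:

```hcl
# Before: "${aws_instance.web.0.private_ip}"
# After:
output "first_ip" {
  value = aws_instance.web[0].private_ip
}
```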

* configs/configupgrade: Fix up references to counted/uncounted resources

Prior to v0.12 Terraform was liberal about these and allowed them to
mismatch, but now it's important to get this right so that resources
and resource instances can be used directly as object values, and so
we'll fix up any sloppy existing references so things keep working as
expected.

This is particularly important for the pattern of using count to create
conditional resources, since previously the "true" case would create one
instance and Terraform would accept an unindexed reference to that.
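A sketch of that conditional-resource pattern (hypothetical names):

```hcl
resource "aws_eip" "maybe" {
  count = var.create_eip ? 1 : 0
}

# 0.11 accepted the unindexed reference aws_eip.maybe.public_ip even
# with count set; after upgrade the reference needs an explicit index
# or a splat, which safely yields an empty list when count is 0:
output "eip" {
  value = aws_eip.maybe[*].public_ip
}
```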

* vendor: upgrade github.com/hashicorp/hcl2

This includes a fix to the HCL scanner to properly parse multi-line
comments.

* configs/configupgrade: Retain any .tf.json files unchanged

We don't change JSON files at all and instead just emit a warning about
them since JSON files are usually generated rather than hand-written and
so any updates need to happen in the generator program rather than in its
output.

However, we do still need to copy them verbatim into the output map so
that we can keep track of them through any subsequent steps.

* export ShimLegacyState to use it in resource tests

The helper/resource test harness needs to shim the legacy states as
well. Export this so we can get an error to check too.

* add implied providers during import

The CLI adds the provider references during import, but tests may not
have them specified.

* remove empty flatmap containers from test states

Don't compare attributes with zero-length flatmap containers in tests.

* handle shim errors in provider tests

These should be rare, and even though it's likely a shim bug, the error
is probably easier for provider developers to deal with than the
panic.

* update CHANGELOG.md

* helper/schema: Always propagate NewComputed for previously zero value primitive type attributes

When the following conditions were met:
* Schema attribute with a primitive type (e.g. Type: TypeString) and Computed: true
* Old state of attribute set to zero value for type (e.g. "")
* Old state ID of resource set to non-empty (e.g. existing resource)

Attempting to use CustomizeDiff with SetNewComputed() would result in the difference previously being discarded. This update ensures that previous zero values or resource existence do not influence the propagation of the computed update.

Previously:

```
--- FAIL: TestSetNewComputed (0.00s)
    --- FAIL: TestSetNewComputed/NewComputed_should_always_propagate (0.00s)
        resource_diff_test.go:684: Expected (*terraform.InstanceDiff)(0xc00051cea0)({
             mu: (sync.Mutex) {
              state: (int32) 0,
              sema: (uint32) 0
             },
             Attributes: (map[string]*terraform.ResourceAttrDiff) (len=1) {
              (string) (len=3) "foo": (*terraform.ResourceAttrDiff)(0xc0003dcec0)({
               Old: (string) "",
               New: (string) "",
               NewComputed: (bool) true,
               NewRemoved: (bool) false,
               NewExtra: (interface {}) <nil>,
               RequiresNew: (bool) false,
               Sensitive: (bool) false,
               Type: (terraform.DiffAttrType) 0
              })
             },
             Destroy: (bool) false,
             DestroyDeposed: (bool) false,
             DestroyTainted: (bool) false,
             Meta: (map[string]interface {}) <nil>
            })
            , got (*terraform.InstanceDiff)(0xc00051ce80)({
             mu: (sync.Mutex) {
              state: (int32) 0,
              sema: (uint32) 0
             },
             Attributes: (map[string]*terraform.ResourceAttrDiff) {
             },
             Destroy: (bool) false,
             DestroyDeposed: (bool) false,
             DestroyTainted: (bool) false,
             Meta: (map[string]interface {}) <nil>
            })

--- FAIL: TestSchemaMap_Diff (0.01s)
    --- FAIL: TestSchemaMap_Diff/79-NewComputed_should_always_propagate_with_CustomizeDiff (0.00s)
        schema_test.go:3289: expected:
            *terraform.InstanceDiff{mu:sync.Mutex{state:0, sema:0x0}, Attributes:map[string]*terraform.ResourceAttrDiff{"foo":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}}, Destroy:false, DestroyDeposed:false, DestroyTainted:false, Meta:map[string]interface {}(nil)}

            got:
            <nil>

FAIL
FAIL  github.com/hashicorp/terraform/helper/schema  0.825s
```

* helper/schema: Fix setting a set in a list

The added test in this commit, without the fix, will make d.Set return
the following error:

`Invalid address to set: []string{"ports", "0", "set"}`

This was due to the fact that setSet in field_writer_map tried to
convert a slice into a set by creating a temporary set schema and calling
writeField on it with the full address (`[]string{"ports", "0", "set"}`
in this case). However, the temporary schema covered only the set, not
the whole schema implied by the address, so the address should have been
`[]string{"set"}` to align with that schema.

This commit adds another variable there (tempAddr) that contains only the
last entry of the address, which is the set key, so that it matches the
created schema.

This commit potentially fixes the problem described in #16331

* go-mod: update the `go-tfe` dependency

* backend/remote: use entitlements to select backends

Use the entitlements a) to determine whether the organization exists, and b) to select which backend to use (the local backend with remote state, or the remote backend).

* configs/configupgrade: Rules-based upgrade for "locals" block

Previously we were handling this one as a special case, effectively
duplicating most of the logic from upgradeBlockBody.

By doing some prior analysis of the block we can produce a ruleset that
just passes through all of the attributes as-is, allowing us to reuse
upgradeBlockBody. This is a little weird for the locals block since
everything in it is user-selected names, but this facility will also be
useful in a future commit for dealing with module blocks, which contain
a mixture of user-chosen and reserved argument names.

* configs/configupgrade: Initial passthrough mapping for module blocks

Some further rules are required here to deal with the meta-arguments we
accept inside these blocks, but this is good enough to pass through most
module blocks using the standard attribute-expression-based mapping.

* configs/configupgrade: Upgrade provider addresses

Both resource blocks and module blocks contain references to providers
that are expressed as short-form provider addresses ("aws.foo" rather than
"provider.aws.foo").

These rules call for those to be unwrapped as naked identifiers during
upgrade, rather than appearing as quoted strings. This also introduces
some further rules for other simpler meta-arguments that are required
for the test fixtures for this feature.
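
As a sketch of the rewrite described above (resource and alias names hypothetical), a quoted short-form provider reference becomes a naked identifier:

```hcl
# Before (0.11): provider references were quoted strings
resource "aws_instance" "example" {
  provider = "aws.west"
}

# After the upgrade (0.12): the same reference as a naked identifier
resource "aws_instance" "example" {
  provider = aws.west
}
```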

* configs/configupgrade: Upgrade the resource "lifecycle" nested block

The main tricky thing here is ignore_changes, which contains strings that
are better given as naked traversals in 0.12. We also handle here mapping
the old special case ["*"] value to the new "all" keyword.
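
A minimal sketch of this mapping (the attribute name is hypothetical):

```hcl
# Before (0.11): quoted strings; ["*"] was a special "ignore everything" value
lifecycle {
  ignore_changes = ["ami"]
}
lifecycle {
  ignore_changes = ["*"]
}

# After the upgrade (0.12): naked traversals, and the "all" keyword replaces ["*"]
lifecycle {
  ignore_changes = [ami]
}
lifecycle {
  ignore_changes = all
}
```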

* configs/configupgrade: Upgrade depends_on in resources and outputs

* configs/configupgrade: Pass through connection and provisioner blocks

This is a temporary implementation of these rules just so that these can
be passed through verbatim (rather than generating an error) while we
do testing of other features.

A subsequent commit will finish these with their own custom rulesets.

* don't add numeric indexes to resources with a count of 0

* remove approximate release date

"Later this summer" is long-gone.

* Update CHANGELOG for #19548

* Add skytap links

* vendor: Fix dependency on github.com/kardianos/osext

In 98c8ac0862e3 I merged a change to the vendored code for this module but
didn't spot that it didn't also update the dependency metadata to match.
Here we just catch up the metadata to match the vendored version, with
no change to the vendored code itself.

* update go-plugin to our provisional alpha branch

* update vendor from go.mod

* enable Auto mTLS in go-plugin

* v0.12.0-alpha3

* release: clean up after v0.12.0-alpha3

* states/statemgr: use -mod=vendor to run the state locking helper

This ensures that we test using the same source as we're using everywhere
else, and more tactically also ensures that when running in Travis-CI we
won't try to download all of the dependencies of Terraform during this
test.

In the long run we will look for a more global solution to this, rather
than adding this to all of our embedded "go" command calls directly, but
this is intended as a low-risk solution to get the build working again in
the mean time.

* backend/s3: Support DynamoDB, IAM, and STS endpoint configurations

This change enables a few related use cases:
* AWS has partitions outside Commercial, GovCloud (US), and China, which are the only endpoints automatically handled by the AWS Go SDK. DynamoDB locking and credential verification can not currently be enabled in those regions.
* Allows usage of any DynamoDB-compatible API for state locking
* Allows usage of any IAM/STS-compatible API for credential verification
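
A configuration sketch of the new options (argument names per the 0.12-era s3 backend documentation; bucket, table, and endpoint values hypothetical):

```hcl
terraform {
  backend "s3" {
    bucket         = "example-state-bucket"
    key            = "global/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"

    # Endpoint overrides for non-standard partitions or compatible APIs
    dynamodb_endpoint = "https://dynamodb.example.internal"
    iam_endpoint      = "https://iam.example.internal"
    sts_endpoint      = "https://sts.example.internal"
  }
}
```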

* helper/resource: Create a separate provider instance each call

Previously we were running the factory function only once when
constructing the provider resolver, which means that all contexts created
from that resolver share the same provider instance.

Instead now we will call the given factory function once for each
instantiation, ensuring that each caller ends up with a separate object
as would be the case in real-world use.

* helper/resource: Get schemas from Terraform context

Previously the test harness was preloading schemas from the providers
before running any test steps.

Since terraform.NewContext already deals with loading provider schemas,
we can instead just use the schemas it loaded for our shimming needs,
avoiding the need to reimplement the schema lookup behavior and thus
the need to create a throwaway provider instance with which to do it.

* v0.12.0-alpha4

* release: clean up after v0.12.0-alpha4

* vendor: upgrade github.com/hashicorp/hcl2

This includes a change to improve the precision of types returned from
splat expressions when a source value is unknown.

* lang: EvalExpr only convert if wantType is not dynamic

This actually seems to be a bug in the underlying cty Convert function
since converting to cty.DynamicPseudoType should always just return the
input verbatim, but it seems like it's actually converting unknown values
of any type to be cty.DynamicVal, losing the type information.

We should eventually fix this in cty too, but having this extra check in
the Terraform layer is harmless and allows us to make progress without
context-switching.

* configs/configupgrade: Expression type inference

Although we can't do fully-precise type inference with access only to a
single module's configuration, we can do some approximate inference using
some clues within the module along with our resource type schemas.

This depends on HCL's ability to pass through type information even if the
input values are unknown, mapping our partial input type information into
partial output type information by evaluating the same expressions.

This will allow us to do some upgrades that require dynamic analysis to
fully decide, by giving us three outcomes: needed, not needed, or unknown.
If it's unknown then that'll be our prompt to emit a warning for the user
to make a decision.

* configs/configupgrade: Do type inference with input variables

By collecting information about the input variables during analysis, we
can return approximate type information for any references to those
variables in expressions.

Since Terraform 0.11 allowed maps of maps and lists of lists in certain
circumstances even though this was documented as forbidden, we
conservatively return collection types whose element types are unknown
here, which allows us to do shallow inference on them but will cause
us to get an incomplete result if any operations are performed on
elements of the list or map value.

* configs/configupgrade: Remove redundant list brackets

In early versions of Terraform where the interpolation language didn't
have any real list support, list brackets around a single string was the
signal to split the string on a special uuid separator to produce a list
just in time for processing, giving expressions like this:

    foo = ["${test_instance.foo.*.id}"]

Logically this is weird because it looks like it should produce a list
of lists of strings. When we added real list support in Terraform 0.7 we
retained support for this behavior by trimming off extra levels of list
during evaluation, and inadvertently continued relying on this notation
for correct type checking.

During the Terraform 0.10 line we fixed the type checker bugs (a few
remaining issues notwithstanding) so that it was finally possible to
use the more intuitive form:

    foo = "${test_instance.foo.*.id}"

...but we continued trimming off extra levels of list for backward
compatibility.

Terraform 0.12 finally removes that compatibility shim, causing redundant
list brackets to be interpreted as a list of lists.

This upgrade rule attempts to identify situations that are relying on the
old compatibility behavior and trim off the redundant extra brackets. It's
not possible to do this fully-generally using only static analysis, but
we can gather enough information through our partial type inference
mechanism here to deal with the most common situations automatically and
produce a TF-UPGRADE-TODO comment for more complex scenarios where the
user intent isn't decidable with only static analysis.

In particular, this handles by far the most common situation of wrapping
list brackets around a splat expression like the first example above.
After this and the other upgrade rules are applied, the first example
above will become:

    foo = test_instance.foo.*.id

* plugin: Fix incorrect trace log message in provider Close

Was incorrectly printing out "PlanResourceChange" instead of "Close".

* helper/resource: print full diagnostics for operation errors in tests

This causes the output to include additional helpful context such as
the values of variables referenced in the config, etc. The output is in
the same format as normal Terraform CLI error output, though we don't
retain a source code cache in this codepath so it will not include a
source code snippet.

* core: Attach schemas to nodes created by ResourceCountTransformer

Previously we were only doing this in the case where count wasn't set at
all.

* don't modify argument slices

There were a couple spots where argument slices weren't being copied
before `append` was called, which could possibly modify the caller's
slice data.

* update CHANGELOG.md

* command/format: Add tests for ResourceChange renderer

* six new community providers

* core: enhance service discovery

This PR improves the error handling so we can provide better feedback about any service discovery errors that occurred.

Additionally it adds logic to test for specific versions when discovering a service using `service.vN`. This will enable more informational errors which can indicate any version incompatibilities.

* Update CHANGELOG.md

* backend/azurerm: Support for authenticating using the Azure CLI (#19465)

* Upgrading to 2.0.0 of github.com/hashicorp/go-azure-helpers

* Support for authenticating using Azure CLI

* backend/azurerm: support for authenticating using the Azure CLI

* Updating to include #19465

* update go mock

* generate new plugin proto mocks

* remove govendor

* remove vendor-status

* Update CHANGELOG for #19571

* command/format: Restructure tests

* command/format: Add more tests to cover non-primitive fields

* backend/local: Fix incorrect destroy/update count on apply

* command/format: Fix rendering of nested blocks during update

* Update CHANGELOG.md

* Update CHANGELOG.md

* command/format: Fix rendering of force-new updates

* command/format: Fix tests

* Update CHANGELOG.md

* update go-plugin

All our changes have been merged, so this moves the dependency back to
master.

* backend/local: Avoid rendering data sources on destroy

* update CHANGELOG.md

* Fix winrm default ssl port (#19540)

* Update provisioner.go

Changed the default port used for winrm when https is specified

* update CHANGELOG.md

* Update CHANGELOG.md

* go-mod: update go-version

* deps: github.com/aws/aws-sdk-go@v1.16.4 and github.com/terraform-providers/terraform-provider-aws@v1.52.0

Notable changes:

* backend/s3: Automatic validation of `eu-north-1` region
* backend/s3: Support for `credential_process` handling in AWS configuration file

Updated via:

```
go get github.com/aws/aws-sdk-go@v1.16.4
go get github.com/terraform-providers/terraform-provider-aws@v1.52.0
go mod tidy
go mod vendor
```

* build: Use Go 1.11.3 in Travis-CI and in environments using "goenv"

* Add a method to retrieve version constraints

* Update CHANGELOG.md

* backend/local: Render CBD replacement (+/-) correctly (#19642)

* backend/local: Render CBD replacement (+/-) correctly

* command/format: Use IsReplace helper function

* Update CHANGELOG.md

* backend/remote: return detailed incompatibility info

* Update CHANGELOG.md

* Add the v0.11 branch to Travis so we can test builds

* vendor: upgrade github.com/hashicorp/hcl2

This includes a number of upstream bug fixes, which in turn fix a number
of issues here in Terraform:

- New-style "full splat" operator now working correctly (#19181)
- The weird HCL1-ish single-line block syntax is now supported (#19153)
- Formatting of single-line blocks adds spaces around the braces (#19154)

This also includes a number of other upstream fixes that were not tracked
as issues in the Terraform repository. The highlights of those are:

- A for expression with the "for" keyword wrapped onto a newline after its
  opening bracket now parses correctly.
- In JSON syntax, interpolation sequences in properties of objects that
  are representing expressions now have their variables properly detected.
- The "flush" heredoc variant is now functional again after being broken
  in some (much-)earlier rework of the template parser.

* Update CHANGELOG.md

* backend/remote: fix an error that prevents checking constraints

* Update CHANGELOG.md

* vendor: update github.com/hashicorp/hcl2

This includes a fix to hcl.RelTraversalForExpr where it would
inadvertently modify the internals of a traversal AST node as part of
relativizing the traversal in order to return it.

* core: Validate depends_on and ignore_changes traversals

Both depends_on and ignore_changes contain references to objects that we
can validate.

Historically Terraform has not validated these, instead just ignoring
references to non-existent objects. Since there is no reason to refer to
something that doesn't exist, we'll now verify this and return errors so
that users get explicit feedback on any typos they may have made, rather
than just wondering why what they added seems to have no effect.

This is particularly important for ignore_changes because users have
historically used strange values here to try to exploit the fact that
Terraform was resolving ignore_changes against a flatmap. This will give
them explicit feedback for any odd constructs that the configuration
upgrade tool doesn't know how to detect and fix.

* failing tests when using resources with count

Two different tests were failing around resources with count.

* don't expand EachMode from state during validation

Validate should not require state or changes to be present. Break out
early when using evaluationStateData during walkValidate before checking
state or changes, to prevent errors when indexing resources that haven't
been expanded.

* command: allow -no-color option on "providers" command

* Update CHANGELOG.md

* Update CHANGELOG for #19651

* ResourceInstanceObject needs to return a copy

* attach a deep copy of ResourceState

* ensure NodeDestroyableDataResource has provider

Make sure that NodeDestroyableDataResource has a ResolvedProvider to
call EvalWriteState. This entails setting the ResolvedProvider in
concreteResourceDestroyable, as well as calling EvalGetProvider in
NodeDestroyableDataResource to load the provider schema.

Even though writing the state for a data destroy node should just be
removing the instance, every instance written sets the Provider for the
entire resource. This means that when scaling back a counted data
source, if the removed instances are written last, the data source will
be missing the provider in the state.

* don't allow EvalWriteState without a provider

* rename NodeDestroyableDataResourceInstance

Make this node's naming consistent with that of the other instances.

* update CHANGELOG.md

* decode backend hash as uint64

Older versions of terraform could save the backend hash number in a
value larger than an int.

While we could conditionally decode the state into an intermediary data
structure for upgrade, or detect the specific decode error and modify
the json, it seems simpler to just decode into the most flexible value
for now, which is a uint64.

* missing commits from 19688

* vendor: upgrade github.com/zclconf/go-cty

This includes:
- An additional check in the format stdlib function to fail if there are
  too many arguments given, rather than silently ignoring.
- Refinements for the type unification behavior to allow unification of
  object/tuple types into weaker map/list types when no other unification
  is possible.
- Improvements to the error messages for failed type conversions on
  collection and structural types to talk about mismatching element types
  where possible, rather than the outer value.

* backend/remote: compare versions without the prerelease

* json output of terraform plan (#19687)

* command/show: adding functions to aid refactoring

The planfile -> statefile -> state logic path was getting hard to follow
with blurry human eyes. The getPlan... and getState... functions were
added to help streamline the logic flow. Continued refactoring may follow.

* command/show: use ctx.Config() instead of a config snapshot

As originally written, the jsonconfig marshaller was getting an error
when loading configs that included one or more modules. It's not clear
if that was an error in the function call or in the configloader itself,
but as a simpler solution existed I did not dig too far.

* command/jsonplan: implement jsonplan.Marshal

Split the `config` portion into a discrete package to aid in naming
sanity (so we could have for example jsonconfig.Resource instead of
jsonplan.ConfigResource) and to enable marshaling the config on its
own.

* Update CHANGELOG.md

* make connection host Required

And provide the connection config for validation

* don't evaluate an empty connection body

There's a required field now, so evaluating an empty block will always
fail.

* fix provisioner tests

Add host where required in the test configs, and fix the mock to check
for a null connection.

* links for ucloud provider

* update CHANGELOG.md

* core: Detect and reject self-referencing local values

We already catch indirect cycles through the normal cycle detector, but
we never create self-edges in the graph so we need to handle a direct
self-reference separately here.

The prior behavior was simply to produce an incorrect result (since the
local value wasn't assigned a new value yet).

This fixes #18503.

* command: Always normalize config path before operations

Previously we were doing this rather inconsistently: some commands would
do it and others would not. By doing it here we ensure we always apply the
same normalization, regardless of which operation we're running.

This normalization is mostly for cosmetic purposes in error messages, but
it also ends up being used to populate path.module and path.root and so
it's important that we always produce consistent results here so that
we don't produce flappy changes as users work with different commands.

The fact that this mutates a data structure as a side-effect is not ideal
but this is the best place to ensure it always gets applied without doing
any significant refactoring, since everything after this point happens in
the backend package where the normalizePath method is not available.

* core: path.module, path.root, path.cwd use fwd slashes on all platforms

Previously we used the native slash type for the host platform, but that
leads to issues if the same configuration is applied on both Windows and
non-Windows systems.

Since Windows supports slashes and backslashes, we can safely return
always slashes here and require that users combine the result with
subsequent path parts using slashes, like:

    "${path.module}/foo/bar"

Previously the above would lead to an error on Windows if path.module
contained any backslashes.

This is not really possible to unit test directly right now since we
always run our tests on Unix systems and filepath.ToSlash is a no-op on
Unix. However, this does include some tests for the basic behavior to
verify that it's not regressed as a result of this change.

This will need to be reported in the changelog as a potential breaking
change, since anyone who was using Terraform _exclusively_ on Windows may
have been using expressions like "${path.module}foo\\bar" which they will
now need to update.

This fixes #14986.

* vendor: upgrade github.com/hashicorp/hcl2

This includes a fix to prevent the creation of an invalid block object
when the configuration contains invalid template sequences in block
labels.

* configs: New test cases for invalid interpolations in block labels

The parent commit fixes an issue where this would previously have led to
a crash. These new test cases verify that parsing is now able to complete
without crashing, though the result is still invalid.

* vendor: upgrade github.com/hashicorp/hcl2

This includes a change to accept and ignore a UTF-8 BOM at the start of
any given native syntax configuration.

Although a BOM is redundant in UTF-8, we learned in #18618 that several
software products on Windows will produce a BOM whenever they save as
UTF-8, so accepting it avoids friction when using those tools to author
or generate Terraform configuration files.

This fixes #18618.

* Update CHANGELOG.md

* don't convert empty DynamicValue to nil

* don't attempt to decode empty changes values

An empty DynamicValue can't be decoded but indicates no state, so just
return a NullVal.

* add hedvig provider links

* test for destroy plan round trip

* update CHANGELOG.md

* fix spacing in website docs

* core: Specialized errors for incorrect indexes in resource reference

In prior versions of Terraform we permitted inconsistent use of indexes
in resource references, but as of 0.12 index usage must correlate
properly with whether "count" is set on the resource.

Since users are likely to have existing configurations with incorrect
usage, here we introduce some specialized error messages for situations
where we can detect such issues statically. This seems to cover all of the
common patterns we've seen in practice.

Some usage patterns will fall back on a less-helpful dynamic error here,
but no configurations coming from 0.11 can end up that way because 0.11
did not permit forms such as aws_instance.no_count[count.index].bar that
this validation would not be able to "see".

Our configuration upgrade tool also contains a fix for this already, but
it takes a more conservative approach of adding the index [1] rather than
[count.index] because it can't be sure (without human help) if correlation
of indices is what was intended.
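
For illustration (resource names hypothetical), the two usage patterns the new messages target look like this:

```hcl
resource "aws_instance" "no_count" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"
}

resource "aws_instance" "with_count" {
  count         = 2
  ami           = "ami-12345678"
  instance_type = "t2.micro"
}

# Invalid in 0.12: indexing a resource that has no "count"
#   aws_instance.no_count[0].id
# Invalid in 0.12: referencing a counted resource without an index
#   aws_instance.with_count.id

output "ids" {
  # Correct: no index without count; an index (or splat) with count
  value = [aws_instance.no_count.id, aws_instance.with_count[0].id]
}
```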

* there is always an implied workspace of "default"

Fetching a state requires a workspace name, which should default to
"default".

* website: Structural edit of configuration language docs

This commit is a wide-ranging set of edits to the pages under
/docs/configuration. Among other things, it

- Separates style conventions out into their own page.
- Separates type constraints and conversion info into their own page.
- Conflates similar complex types a little more freely, since the distinction is
  only relevant when restricting inputs for a reusable module or resource.
- Clarifies several concepts that confused me during edits.

* website: Update configuration language section page titles

* website: Update functions section page titles

* vendor: upgrade github.com/hashicorp/hcl2

This includes a fix to the dynamic block extension so that nested dynamic
blocks can iterate over the temporary variables created by their ancestors.

* provider/test: Test for nested dynamic blocks

This is a HCL feature rather than a Terraform feature really, but we want
to make sure it keeps working consistently in future versions of Terraform
so this is a Terraform-flavored test for the block expansion behavior.

In particular, it tests that a nested dynamic block can access the parent
iterator, so that we won't regress #19543 in future.
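
A sketch of the behavior under test, using a hypothetical resource type; the inner dynamic block refers back to the parent iterator symbol `group`:

```hcl
variable "groups" {
  default = [
    { name = "public", ports = [80, 443] },
  ]
}

resource "example_thing" "main" {
  dynamic "group" {
    for_each = var.groups
    content {
      name = group.value.name

      # The nested dynamic block iterates over a value from the parent iterator
      dynamic "listener" {
        for_each = group.value.ports
        content {
          port = listener.value
        }
      }
    }
  }
}
```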

* jsonplan: remove "proposed_unknown" structure in favor of "after_unknown" field in "change" (#19709)

* jsonplan: remove "proposed_unknown" structure in favor of
"after_unknown" field in "change"

* Update CHANGELOG.md

* lang/funcs: templatefile function

This function is similar to the template_file data source offered by the
template provider, but having it built in to the language makes it more
convenient to use, allowing templates to be rendered from files anywhere
an inline template would normally be allowed:

    user_data = templatefile("${path.module}/userdata.tmpl", {
      hostname = format("petserver%02d", count.index)
    })

Unlike the template_file data source, this function allows values of any
type in its variables map, passing them through verbatim to the template.
Its tighter integration with Terraform also allows it to return better
error messages with source location information from the template itself.

The template_file data source was originally created to work around the
fact that HIL didn't have any support for map values at the time, and
even once map support was added it wasn't very usable. With HCL2
expressions, there's little reason left to use a data source to render
a template; the only remaining reason left to use template_file is to
render a template that is constructed dynamically during the Terraform
run, which is a very rare need.
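
To make the inline example above concrete, a hypothetical `userdata.tmpl` might look like this, with `hostname` supplied through the function's vars map:

```
#!/bin/bash
# userdata.tmpl (hypothetical): "hostname" comes from the templatefile() vars map
hostnamectl set-hostname ${hostname}
```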

* command/jsonstate: marshal state into a machine readable blob o'json (#19911)

* command/jsonstate: marshal state into a machine readable blob o'json
* consistent json tags are nice

* restore (via copypaste) terraform.State.Remove

* helper/schema: Skip validation of unknown values

With the introduction of explicit "null" in 0.12 it's possible for a value
that is unknown during plan to become a known null during apply, so we
need to slightly weaken our validation rules to accommodate that, in
particular skipping the validation of conflicting attributes if the result
could potentially be valid after the unknown values become known.

This change is in the codepath that is common to both 0.12 and 0.11
callers, but that's safe because 0.11 re-runs validation during the apply
step and so will still catch problems here, albeit in the apply step
rather than in the plan step, thus matching the 0.12 behavior. This new
behavior is a superset of the old in the sense that everything that was
valid before is still valid.

The implementation here also causes us to skip all other validation for
an attribute whose value is unknown. Most of the downstream validation
functions handle this directly anyway, but again this doesn't add any new
failure cases, and should clean up some of the rough edges we've seen with
unknown values in 0.11 once people upgrade to 0.12-compatible providers.
Any issues we now short-circuit during planning will still be caught
during apply.

While working on this I found that the existing "Not a list" test was not
actually testing the correct behavior, so this also includes a tweak to
that to ensure that it really is checking the "should be a list" path
rather than the "cannot be set" codepath it was inadvertently testing
before.

* helper/resource: Shim back to old state must preserve schema version

We use a shim to convert from the new state model back to the old because
the provider test API is still using the old API throughout. However, the
shim was not preserving the schema version recorded in the new-style state
and so a round-trip through this shim would cause the schema versions to
all revert to zero.

This can cause trouble with the destroy phase of provider tests because
(for API legacy reasons) we round-trip from old state back to new again
before the destroy phase and thus causing the providers to try to upgrade
from state version zero even though the data was already latest, which
can cause errors because state upgrades are generally not idempotent.

* add endpoint and change location
@rberlind
Contributor Author

rberlind commented Mar 11, 2019

I confirmed that this works with terraform 0.12 beta 1. Specifically, this now works:

```hcl
output "public_addresses_full_splat" {
  value = [ aws_instance.ubuntu[*].public_dns ]
}
```

@ghost

ghost commented Mar 29, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Mar 29, 2020