Terragrunt fails to get outputs of dependency #1330
If you want to use relative paths, you need to remove the prefix |
No, it's not relative paths - it's three dots in the example above, not two. I just removed my long path since it's not relevant. Ah, I see there was one with only two - I'll update the post! |
Ah gotcha, sorry for the confusion. Can you share how you are configuring your remote state? Does accessing the remote state depend on any special processing? See the docs on dependency optimization for more info. |
The dependency "bbb" is actually used when I define the remote state... This is from the included terragrunt.hcl:
Will check the dependency opt tomorrow! Thanks! |
Oh hmm that should work automatically... I'll need to investigate this to see what might have caused the regression. For now, disabling the optimization should work. |
I tried adding "disable_dependency_optimization = false" to all remote_state blocks; no change! |
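For reference, the flag only disables the optimization when set to true; false is the default and leaves it enabled. A minimal sketch (assuming an S3 backend; bucket name and region are placeholders):

```hcl
remote_state {
  backend = "s3"

  # true disables the dependency optimization, forcing a full init
  # (including backend configuration) before terragrunt fetches outputs.
  disable_dependency_optimization = true

  config = {
    bucket = "my-state-bucket"
    key    = "${path_relative_to_include()}/terraform.tfstate"
    region = "eu-west-1"
  }
}
```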
I spent some time digging into this, but I was unable to reproduce the particular issue you are running into. Can you take a look at my test module and see if you can identify some differences between your setup and mine? https://github.com/gruntwork-io/terragrunt/tree/yori-investigate-dependency-regression/test/fixture-get-output/nested-optimization-dependency Note how I am using outputs from |
I fixed a test case and invited you @yorinasub17 |
Thanks for providing a test case. I tried it out with tg 0.24.0, but I was unable to reproduce the issue. However, I noticed that you were using workspaces, and I believe this may be the issue. In particular, the dependency optimization will fail if the dependency is using a workspace for the state, because it doesn't switch to that workspace. So in your example, I was able to produce an issue if I used a different workspace for the S3 and dynamodb modules but kept the default for the debug environment code that depends on them. I was also able to successfully work around it when I added.

As a side note, the dependency fetching will also naturally fail if you wipe the terragrunt cache, as terragrunt doesn't know to switch to that workspace when it reinitializes the cache. So if you are wiping the cache between runs, that can also cause this issue. |
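One plausible shape for such a workspace workaround (a hypothetical sketch, not the commenter's actual fix; the workspace name and command list are placeholders) is pinning the workspace for every terraform invocation via the TF_WORKSPACE environment variable:

```hcl
terraform {
  extra_arguments "pin_workspace" {
    commands = ["init", "plan", "apply", "output"]

    env_vars = {
      # Terraform selects this workspace instead of "default", so
      # `terraform output -json` reads the state terragrunt expects.
      TF_WORKSPACE = "staging"
    }
  }
}
```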
I was adding a docker image to the repo, to make sure we were running the same thing. But I couldn't recreate the issue in that new image (it still exists in our "real" docker image). So this issue must be because of some other config we're doing somewhere... I will look into this further when I have time, and get back to you! |
Hi, my directory layout is:

xxx
├── aks
│   └── terragrunt.hcl
└── aks-addon
    └── terragrunt.hcl

In my "aks-addon" module's terragrunt.hcl I defined a dependency:

dependency "aks" {
  config_path = "../aks"
}

But when I try to run any terragrunt command inside the aks-addon directory I get:

[terragrunt] [/home/adamplaczek/xxx/aks] 2020/11/26 08:16:35 Running command: terraform init -get=false -get-plugins=false
[terragrunt] [/home/adamplaczek/xxx/aks] 2020/11/26 08:16:38 Running command: terraform output -json
[terragrunt] 2020/11/26 08:16:40 /home/adamplaczek/xxx/aks/terragrunt.hcl is a dependency of /home/adamplaczek/xxx/aks-addons/terragrunt.hcl but detected no outputs. Either the target module has not been applied yet, or the module has no outputs. If this is expected, set the skip_outputs flag to true on the dependency block.

But when I enter the dependency manually and run:

terraform init -get=false -get-plugins=false
terraform output -json

I can see the output. If I set |
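As the error message suggests, when a dependency is only needed for ordering and its outputs are never referenced, output fetching can be skipped entirely. A minimal sketch:

```hcl
dependency "aks" {
  config_path = "../aks"

  # Don't run `terraform output` against this dependency at all; use this
  # only when no dependency.aks.outputs.* references exist in this file.
  skip_outputs = true
}
```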
Sadly I get the same problem but for me the |
I thought https://terragrunt.gruntwork.io/docs/features/caching/ meant we could safely get rid of these or are you talking about something else? |
We triggered the issue when we migrated our remote state files (S3, Dynamo for locking) to another location. Before that, relative dependencies were working fine, but now they fail. terraform = 0.12.29 |
Still reproducing this. Setting disable_dependency_optimization = true does not help. My modules were in the plan stage. I stripped the config down to only the following, and it started to work.

// module
include {
path = find_in_parent_folders("account.hcl")
}
locals {
root_vars = read_terragrunt_config(find_in_parent_folders())
region = local.root_vars.locals.default_region
}
terraform {
source = "../../../../../modules//module"
}
generate "provider" {
path = "provider.tf"
if_exists = "overwrite"
contents = <<-EOF
provider "aws" {
region = "${local.region}"
}
EOF
}

// deepmodule
include {
path = find_in_parent_folders("account.hcl")
}
locals {
root_vars = read_terragrunt_config(find_in_parent_folders())
region = local.root_vars.locals.default_region
}
terraform {
source = "../../../../../modules//deepmodule"
}
dependency "vpc" {
config_path = "../vpc"
// mock_outputs_allowed_terraform_commands = ["validate"] <- not working
mock_outputs = {
internal_vpc = { id = "MOCK"}
internal_public_subnets = { ids = ["subnet-MOCK"] }
internal_private_subnets = { ids = ["subnet-MOCK"] }
}
}
inputs = {
vpc_id = dependency.vpc.outputs.internal_vpc.id
network = {
public_subnets = tolist(dependency.vpc.outputs.internal_public_subnets.ids)
private_subnets = tolist(dependency.vpc.outputs.internal_private_subnets.ids)
}
}
generate "provider" {
path = "provider.tf"
if_exists = "overwrite"
contents = <<-EOF
provider "aws" {
region = "${local.region}"
}
EOF
} |
The issue occurred for us after updating terraform. 👉 We could fix the issue by deleting the terragrunt cache. This might also explain why @yorinasub17 could not reproduce the issue. |
I had the same issue; the following configuration caused this bug for me:

config = {
key = "${path_relative_to_include()}/terraform.tfstate"
resource_group_name = get_env("REMOTE_STATE_RESOURCE_GROUP", "rg-terragrunt-backend-state")
storage_account_name = get_env("REMOTE_STATE_STORAGE_ACCOUNT", "stterragruntstate")
container_name = get_env("REMOTE_STATE_STORAGE_CONTAINER", "terragrunt")
}

This means the backend configuration is dynamic and determined by environment variables. However, I needed to run terragrunt from an automation pipeline (in our case Azure Pipelines), where the same backend configuration is used for the modules and all the dependencies. Replacing the get_env() defaults with static values fixed it for me:

config = {
key = "${path_relative_to_include()}/terraform.tfstate"
resource_group_name = "rg-terragrunt-backend-state"
storage_account_name = "stterragruntstate"
container_name = "deployment-stamp-eu1-dev"
}

Hope this helps if someone finds themselves in the same situation. |
Same issue as the original post: TG = v0.29.10 |
@khushil the reason the cache is no longer ephemeral when you are using workspaces is that the cache now contains environment context (the selected workspace) that is recorded on disk. |
@valdestron when you say
Do you mean the modules were in a clean slate with no deployed infrastructure? In that case the dependency fetching is properly giving you an error because there are no outputs (because the dependent module hasn't been applied) when your configuration expects it. Using |
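The pattern being described can be sketched like this (output names and mock values are illustrative only):

```hcl
dependency "vpc" {
  config_path = "../vpc"

  # Placeholder values returned while the dependency has no real outputs yet.
  mock_outputs = {
    vpc_id = "vpc-mock"
  }

  # Only fall back to mocks for commands where fake values are acceptable;
  # `apply` still requires real outputs from an applied dependency.
  mock_outputs_allowed_terraform_commands = ["validate", "plan"]
}
```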
Problem continues in 2022 latest version...
|
This was the solution for me. I was trying to test changes to infrastructure using a custom prefix, and those resources did not exist in AWS. Running init and apply on the modules supplying the outputs for the dependent modules solved this issue. |
Is there any movement on this? This is a classic case of terraform plan needing to identify dependencies. Mocks are a way out, but they are very misleading in plan vs apply. |
Hello everyone. I'm using the latest version (v0.38.7) and if I use Has anyone solved this? Thanks. |
We experienced the same issue when importing resources into terragrunt. |
Even in my environment, running However, if the terraform project is large, for example, with many dependencies, |
Have the same problem; tried everything but didn't get through it. For me it was a reason to not use terragrunt anymore. |
after |
Getting same error, I am trying to run Full error
|
Yeah, this error still persists even 2 years after the original post. To get this to work, unfortunately I had to generate the outputs directly in the CI/CD environment:
And then after that, the dependency block is able to retrieve the outputs. It's a terrible hack, but it is clear the underlying issue here will not be resolved for some time. |
mock_outputs is also a terrible solution for this. The output of a plan using mock outputs does not reflect the actual plan, e.g. when renaming a resource app-test -> app-fake. If all I wanted was to change the name of one resource, the mock_outputs plan would show several additional changes, since the real resources are suffixed with test rather than fake. |
There is also an issue with using mock_outputs. If you use mock_outputs, such as a vpc_id and the source module uses a terraform data source, e.g:
Then the plan will fail with "no matching vpc id" found, if you use a mock vpc id string, which by the name "mock_outputs" suggests you should be able to use mock strings. |
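To illustrate the failure mode described above (variable and data source names are hypothetical): if the consuming module resolves the ID through a data source, the provider looks the value up in the real API, so a mocked ID cannot match anything.

```hcl
variable "vpc_id" {
  type = string
}

# With mock_outputs feeding var.vpc_id = "vpc-mock", this lookup queries the
# AWS API and fails with "no matching VPC found", because no VPC with that
# ID actually exists.
data "aws_vpc" "selected" {
  id = var.vpc_id
}
```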
Mocking variables is a terrible solution. |
Why does it still try to resolve the inputs while the skip flag is set to true and skip_outputs is set to true? |
Has there been any progress on this? This is currently blocking a refactor of our infrastructure, as it will not allow an |
I'm facing the same issue. My project will ditch Terragrunt and go for plain Terraform. For context, to try this out, we set up a simple example repo with literally two Azure resources, a resource group and a storage account, as two different modules with a dependency on the resource group from the storage account. While I was originally excited by the prospect of having less repeated code, when fundamental functionality like this isn't in place, what's even the point? None of the workarounds work for me. |
@snorrea can you share the code of that sample repository? In my experience terragrunt works very well, and often it is really just a configuration issue. |
I have worked with Terragrunt and Terraform for a long time. Because this issue hasn't been fixed for years, I am going to move on from Terragrunt. The positive effects of Terragrunt do not outweigh the negative side effects, for example the flawed dependency resolution. For larger projects, I have also noted that breaking the state up doesn't really benefit the build time in terms of caching and speed, and dividing the state into different substates was exactly one of the most valuable things Terragrunt had to offer.

Then comes the usual answer that you can mock outputs. I don't want to mock outputs every time I add a new module or attribute to a module. I only want to mock when I test something; this is not something I want to put inside my production configuration files. A mistake is easily made, and when a resource is renamed, it could be destroyed. |
We experimented with examining the plan output and generating locals stubs in the terraform code using a special provider, marking the corresponding variables of the outputs in the dependent module as not yet known during plan time. This way you can still glance at the impact and have your changes plannable to some extent, though with limitations. It has probably been mentioned a couple of times before, but things are complex. What if, for example, you are using the kubernetes provider in the dependent module and configure it using a data block with the cluster name obtained from the first module? The cluster does not exist yet, and the data lookup will fail. Terragrunt cannot fix this. Instead, the concept should be solved at a higher level, in terraform itself. Tools like terragrunt help in splitting things up in an attempt to fix a flaw or unconsidered use case in the design of how terraform works... |
I just ran into this; however, it was an error on my part. This was the layout I had (the gke module depends on vpc):
$ terragrunt plan
ERRO[0002] /tmp/dev/vpc/terragrunt.hcl is a dependency of /tmp/dev/gke/terragrunt.hcl but detected no outputs. Either the target module has not been applied yet, or the module has no outputs. If this is expected, set the skip_outputs flag to true on the dependency block.

To fix this, I added the missing output to the vpc module and applied it:

$ cd /tmp/dev/vpc
$ terraform apply # (to add the output)
$ cd /tmp/dev/gke
$ terragrunt plan
No changes. Your infrastructure matches the configuration.

So the final layout:
with:
...
dependency "vpc" {
config_path = "../vpc"
}
inputs = {
vpc_id = dependency.vpc.outputs.vpc_id
}
variable "vpc_id" {
type = string
}
module "gke" {
source = "../../modules/gke"
....
network = var.vpc_id
...
}
module "vpc" {
source = "../../modules/vpc"
...
}
output "vpc_id" {
value = module.vpc.vpc_id
}
resource "google_compute_network" "vpc_network" {
...
}
output "vpc_id" {
value = google_compute_network.vpc_network.id
} |
How is this a bug? Imagine not being able to create multiple resources at the same time just because they have dependencies on each other. |
I get the following error when running "terragrunt apply" in module "bbb":
There are, however, outputs. My code worked with terragrunt 0.23.36, but broke with 0.23.37. It doesn't work with 0.24.0 either.
Using terraform version 0.13.2 during all runs.
The setup is as follows: