Plugin Crashes #44

Open
PatriQ1414 opened this issue May 23, 2022 · 12 comments

@PatriQ1414

I ran this successfully when I was testing it with a few lines of code, but as soon as I started customizing the code using `for_each` statements, the plugin would just crash.
See the error below:

```
│ Error: Plugin did not respond
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadResource call. The plugin logs may contain more details.

│ Error: Plugin did not respond
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadResource call. The plugin logs may contain more details.

│ Error: Plugin did not respond
│
│ with data.logicmonitor_data_resource_aws_external_id.my_external_id,
│ on LM_demo.tf line 28, in data "logicmonitor_data_resource_aws_external_id" "my_external_id":
│ 28: data "logicmonitor_data_resource_aws_external_id" "my_external_id" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadDataSource call. The plugin logs may contain more details.

Stack trace from the terraform-provider-logicmonitor_v2.0.1.exe plugin:

panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xc0000005 code=0x0 addr=0x0 pc=0x7110ec]

goroutine 47 [running]:
terraform-provider-logicmonitor/client/device_group.(*GetDeviceGroupByIDOK).GetPayload(...)
    terraform-provider-logicmonitor/client/device_group/get_device_group_by_id_responses.go:61
terraform-provider-logicmonitor/logicmonitor/resources.getDeviceGroupById(0xaf40cc, 0x124044c0, 0x1246e780, 0x92ada0, 0x121fa6a0, 0x124044c0, 0x122836b8, 0x19bafc)
    terraform-provider-logicmonitor/logicmonitor/resources/device_group_resource.go:128 +0x2ac
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).read(0x124b75e0, 0xaf408c, 0x1209c1a0, 0x1246e780, 0x92ada0, 0x121fa6a0, 0x0, 0x0, 0x0)
    github.com/hashicorp/terraform-plugin-sdk/v2@v2.6.1/helper/schema/resource.go:347 +0x11f
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).RefreshWithoutUpgrade(0x124b75e0, 0xaf408c, 0x1209c1a0, 0x12404100, 0x92ada0, 0x121fa6a0, 0x122836a8, 0x0, 0x0, 0x0)
    github.com/hashicorp/terraform-plugin-sdk/v2@v2.6.1/helper/schema/resource.go:624 +0x158
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ReadResource(0x1226aa70, 0xaf408c, 0x1209c1a0, 0x1209c1e0, 0x1209c1a0, 0x199726, 0x9a2640)
    github.com/hashicorp/terraform-plugin-sdk/v2@v2.6.1/helper/schema/grpc_provider.go:575 +0x352
github.com/hashicorp/terraform-plugin-go/tfprotov5/server.(*server).ReadResource(0x124020d8, 0xaf40ec, 0x1209c1a0, 0x1266e1b0, 0x124020d8, 0x12559401, 0x126701e0)
    github.com/hashicorp/terraform-plugin-go@v0.3.0/tfprotov5/server/server.go:298 +0xd4
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ReadResource_Handler(0x9c0e80, 0x124020d8, 0xaf40ec, 0x126701e0, 0x1266e180, 0x0, 0xaf40ec, 0x126701e0, 0x12559400, 0x4b6)
    github.com/hashicorp/terraform-plugin-go@v0.3.0/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:344 +0x199
google.golang.org/grpc.(*Server).processUnaryRPC(0x1210c120, 0xaf8d34, 0x12134100, 0x125d8090, 0x124020f0, 0xf4bda8, 0x0, 0x0, 0x0)
    google.golang.org/grpc@v1.32.0/server.go:1194 +0x4ea
google.golang.org/grpc.(*Server).handleStream(0x1210c120, 0xaf8d34, 0x12134100, 0x125d8090, 0x0)
    google.golang.org/grpc@v1.32.0/server.go:1517 +0xa71
google.golang.org/grpc.(*Server).serveStreams.func1.2(0x12400110, 0x1210c120, 0xaf8d34, 0x12134100, 0x125d8090)
    google.golang.org/grpc@v1.32.0/server.go:859 +0x92
created by google.golang.org/grpc.(*Server).serveStreams.func1
    google.golang.org/grpc@v1.32.0/server.go:857 +0x1b0

Error: The terraform-provider-logicmonitor_v2.0.1.exe plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.
```

[screenshot: lm_plugin]
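
The kind of `for_each` usage involved looks like this (simplified from the full configuration posted further down; `var.parent_id` stands in for the real parent group):

```hcl
# Simplified sketch of the for_each pattern in use: one
# logicmonitor_device_group instance is created per environment, and each
# instance is read back (a ReadResource call) on every plan/refresh.
variable "lm_environment" {
  type    = list(string)
  default = ["Prod", "Dev", "NonProd"]
}

resource "logicmonitor_device_group" "environment_device_group" {
  for_each         = toset(var.lm_environment)
  name             = each.value
  group_type       = "Normal"
  disable_alerting = false
  enable_netflow   = true
  parent_id        = var.parent_id
}
```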

@kabeladev

We are currently having the same issue. It seems to happen due to the large number of devices/groups managed by Terraform. We are also hitting the API limits: https://www.logicmonitor.com/support/rest-api-developers-guide/overview/rate-limiting

@PatriQ1414 (Author)

> We are currently having the same issue. It seems to happen due to the large number of devices/groups managed by Terraform. We are also hitting the API limits: https://www.logicmonitor.com/support/rest-api-developers-guide/overview/rate-limiting

Were you able to find a workaround?

@kabeladev

Not yet, no. We are still not 100% sure whether this started to manifest due to hitting API limits. Just to confirm, can you check from the portal whether you are being throttled? You can find this under Audit Logs:

Throttled API request: API token XXXXXXX attempted to access path '/santaba/rest/device/devices/859' with Method: GET

@PatriQ1414 (Author)

> Not yet, no. We are still not 100% sure whether this started to manifest due to hitting API limits. Just to confirm, can you check from the portal whether you are being throttled? You can find this under Audit Logs:
>
> Throttled API request: API token XXXXXXX attempted to access path '/santaba/rest/device/devices/859' with Method: GET

The TF code doesn't even start applying, so I have nothing in the Audit Logs.
See the image:

[screenshot: lm2]

@kabeladev

The audit logs are from the LM portal:

[screenshot]

@gagansingh355 (Contributor)

@PatriQ1414 Can you provide the .tf file that you are using for this?

@PatriQ1414 (Author)

> The audit logs are from the LM portal:
>
> [screenshot]

Yes, there's no info there.

@PatriQ1414 (Author)

> @PatriQ1414 Can you provide the .tf file that you are using for this?

```hcl
terraform {
  required_version = ">= 0.13.0"
  #required_version = ">= 1.2.0"
  required_providers {
    logicmonitor = {
      source  = "logicmonitor/logicmonitor"
      version = "2.0.1"
    }
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "logicmonitor" {
  api_id  = var.api_id
  api_key = var.api_key
  company = "lmsandbox"
}

provider "aws" {
  access_key = var.access_key
  secret_key = var.secret_key
  region     = var.aws_region
}

data "logicmonitor_data_resource_aws_external_id" "my_external_id" {
}

data "aws_iam_policy_document" "assume_role" {
  statement {
    actions = [
      "sts:AssumeRole"
    ]
    condition {
      test = "StringEquals"
      values = [
        data.logicmonitor_data_resource_aws_external_id.my_external_id.external_id
      ]
      variable = "sts:ExternalId"
    }
    effect = "Allow"
    principals {
      identifiers = [
        "282028653949"
      ]
      type = "AWS"
    }
  }
}

resource "aws_iam_role" "lm" {
  assume_role_policy = data.aws_iam_policy_document.assume_role.json
  name               = "TF-Integration-Role"
}

resource "aws_iam_role_policy_attachment" "read_only_access" {
  policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
  role       = aws_iam_role.lm.name
}

resource "aws_iam_role_policy_attachment" "aws_support_access" {
  policy_arn = "arn:aws:iam::aws:policy/AWSSupportAccess"
  role       = aws_iam_role.lm.name
}

resource "null_resource" "wait" {
  triggers = {
    always_run = timestamp()
  }
  provisioner "local-exec" {
    command = "sleep 10s"
  }
  depends_on = [
    aws_iam_role.lm,
    aws_iam_role_policy_attachment.read_only_access,
    aws_iam_role_policy_attachment.aws_support_access,
  ]
}

resource "logicmonitor_device_group" "parent_device_group" {
  description      = "Customer Parent Device Group"
  disable_alerting = false
  enable_netflow   = true
  group_type       = "Normal"
  name             = "TEST-DEVICE-GROUP"
  parent_id        = var.parent_id
}

variable "lm_environment" {
  type    = list(string)
  default = ["Prod", "Dev", "NonProd"]
}

resource "logicmonitor_device_group" "environment_device_group" {
  for_each         = toset(var.lm_environment)
  disable_alerting = false
  enable_netflow   = true
  group_type       = "Normal"
  name             = each.value
  parent_id        = logicmonitor_device_group.parent_device_group.id
}

resource "logicmonitor_device_group" "management_device_group" {
  disable_alerting = false
  enable_netflow   = true
  group_type       = "Normal"
  name             = "Management"
  parent_id        = logicmonitor_device_group.parent_device_group.id
}

variable "mgmt_environment" {
  type    = list(string)
  default = ["mgt1", "mgt2"]
}

resource "logicmonitor_device_group" "mgmt_dymanic_group" {
  for_each         = toset(var.mgmt_environment)
  description      = "Management"
  disable_alerting = true
  enable_netflow   = true
  group_type       = "Normal"
  name             = each.value
  parent_id        = logicmonitor_device_group.management_device_group.id
  applies_to       = "environment_type == xxxx and environment_id == xxxxxx"
}

resource "logicmonitor_device_group" "batch_dymanic_group" {
  for_each         = logicmonitor_device_group.environment_device_group
  description      = "Batch"
  disable_alerting = true
  enable_netflow   = true
  group_type       = "Normal"
  name             = "Batch"
  parent_id        = logicmonitor_device_group.environment_device_group[each.key].id
  applies_to       = "environment_type == xxxx and environment_id == xxxxxx"
}

resource "logicmonitor_device_group" "client_dymanic_group" {
  for_each         = logicmonitor_device_group.environment_device_group
  description      = "Client"
  disable_alerting = true
  enable_netflow   = true
  group_type       = "Normal"
  name             = "Client"
  parent_id        = logicmonitor_device_group.environment_device_group[each.key].id
  applies_to       = "environment_type == xxxx and environment_id == xxxxxx"
}

resource "logicmonitor_device_group" "database_dymanic_group" {
  for_each         = logicmonitor_device_group.environment_device_group
  description      = "Database"
  disable_alerting = true
  enable_netflow   = true
  group_type       = "Normal"
  name             = "Database"
  parent_id        = logicmonitor_device_group.environment_device_group[each.key].id
  applies_to       = "environment_type == xxxx and environment_id == xxxxxx"
}

resource "logicmonitor_device_group" "grp1_dymanic_group" {
  for_each         = logicmonitor_device_group.environment_device_group
  description      = "grp1"
  disable_alerting = true
  enable_netflow   = true
  group_type       = "Normal"
  name             = "Grp1"
  parent_id        = logicmonitor_device_group.environment_device_group[each.key].id
  applies_to       = "environment_type == xxxx and environment_id == xxxxxx"
}

resource "logicmonitor_device_group" "grp2_dymanic_group" {
  for_each         = logicmonitor_device_group.environment_device_group
  description      = "Grp2"
  disable_alerting = true
  enable_netflow   = true
  group_type       = "Normal"
  name             = "Grp2"
  parent_id        = logicmonitor_device_group.environment_device_group[each.key].id
  applies_to       = "environment_type == xxxx and environment_id == xxxxxx"
}

resource "logicmonitor_device_group" "grp3_dymanic_group" {
  for_each         = logicmonitor_device_group.environment_device_group
  description      = "Grp3"
  disable_alerting = true
  enable_netflow   = true
  group_type       = "Normal"
  name             = "Grp3"
  parent_id        = logicmonitor_device_group.environment_device_group[each.key].id
  applies_to       = "environment_type == xxxx and environment_id == xxxxxx"
}

resource "logicmonitor_device_group" "grp4_dymanic_group" {
  for_each         = logicmonitor_device_group.environment_device_group
  description      = "Grp4"
  disable_alerting = true
  enable_netflow   = true
  group_type       = "Normal"
  name             = "Grp4"
  parent_id        = logicmonitor_device_group.environment_device_group[each.key].id
  applies_to       = "environment_type == xxxx and environment_id == xxxxxx"
}

resource "logicmonitor_device_group" "grp5_dymanic_group" {
  for_each         = logicmonitor_device_group.environment_device_group
  description      = "Grp5"
  disable_alerting = true
  enable_netflow   = true
  group_type       = "Normal"
  name             = "Grp5"
  parent_id        = logicmonitor_device_group.environment_device_group[each.key].id
  applies_to       = "environment_type == xxxx and environment_id == xxxxxx"
}

resource "logicmonitor_device_group" "my_aws_device_group" {
  for_each         = logicmonitor_device_group.environment_device_group
  description      = "test description"
  disable_alerting = false
  enable_netflow   = false
  extra {
    account {
      assumed_role_arn = aws_iam_role.lm.arn
      external_id      = data.logicmonitor_data_resource_aws_external_id.my_external_id.external_id
    }
    default {
      disable_terminated_host_alerting = true
      select_all                       = false
      monitoring_region_infos          = ["EU_WEST_1"]
      dead_operation                   = "MANUALLY"
      use_default                      = true
      name_filter                      = []
    }
    services {
      a_p_p_l_i_c_a_t_i_o_n_e_l_b {
        use_default = true
      }
      a_p_p_s_t_r_e_a_m {
        use_default = true
      }
      a_u_t_o_s_c_a_l_i_n_g {
        use_default = true
      }
      e_b_s {
        use_default = true
      }
      e_c2 {
        use_default = true
      }
      e_f_s {
        use_default = true
      }
      n_e_t_w_o_r_k_e_l_b {
        use_default = true
      }
      r_d_s {
        use_default = true
      }
      r_e_d_s_h_i_f_t {
        use_default = true
      }
      r_o_u_t_e53 {
        use_default = true
      }
      r_o_u_t_e53_r_e_s_o_l_v_e_r {
        use_default = true
      }
      s3 {
        use_default = true
      }
      s_a_g_e_m_a_k_e_r {
        use_default = true
      }
      s_e_s {
        use_default = true
      }
      s_n_s {
        use_default = true
      }
      s_q_s {
        use_default = true
      }
      s_t_e_p_f_u_n_c_t_i_o_n_s {
        use_default = true
      }
      s_w_f_a_c_t_i_v_i_t_y {
        use_default = true
      }
      s_w_f_w_o_r_k_f_l_o_w {
        use_default = true
      }
      t_r_a_n_s_i_t_g_a_t_e_w_a_y {
        use_default = true
      }
      v_p_n {
        use_default = true
      }
      w_o_r_k_s_p_a_c_e {
        use_default = true
      }
      w_o_r_k_s_p_a_c_e_d_i_r_e_c_t_o_r_y {
        use_default = true
      }
    }
  }
  group_type = "AWS/AwsRoot"
  name       = "PUTH"
  parent_id  = logicmonitor_device_group.environment_device_group[each.key].id
  depends_on = [
    null_resource.wait,
  ]
}
```
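
For scale: once the `for_each` loops expand, this configuration declares roughly 34 `logicmonitor_device_group` instances (1 parent + 3 environments + 1 management + 2 management children + 8 per-environment groups × 3 environments + 3 AWS groups), plus the external-ID data source, and every plan/refresh issues a ReadResource or ReadDataSource call per instance — which lines up with the rate-limiting theory discussed above.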

@PatriQ1414 (Author)

Hi Guys, any luck with this?

@kabeladev

We had to split our workspace so that each state file contains fewer than 500 resources, to avoid hitting the throttling limit, which in our case was causing the provider errors.
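
For anyone taking the same route, a hypothetical layout (assuming an S3 backend; the bucket name and state keys are made up) is one root module per environment, so each run refreshes well under 500 resources:

```hcl
# environments/prod/backend.tf — hypothetical per-environment root module,
# giving each environment its own state file and its own refresh budget.
terraform {
  backend "s3" {
    bucket = "example-tf-state"          # hypothetical bucket name
    key    = "logicmonitor/prod.tfstate" # one state key per environment
    region = "eu-west-1"
  }
}
```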

@PatriQ1414 (Author)

Changed my version from

```hcl
required_providers {
  logicmonitor = {
    source  = "logicmonitor/logicmonitor"
    version = "2.0.1"
```

to

```hcl
required_providers {
  logicmonitor = {
    source  = "logicmonitor/logicmonitor"
    version = ">2.0.1"
```

then ran `terraform init -upgrade`.

The LogicMonitor plugin upgraded to version 2.0.2.

That seems to have fixed the plugin issue.
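
A more conventional way to get the same effect is a pessimistic version constraint, which lets `terraform init -upgrade` pick up 2.0.2 and later fixes without floating onto a future breaking major release (a sketch, not from the thread):

```hcl
required_providers {
  logicmonitor = {
    source  = "logicmonitor/logicmonitor"
    version = "~> 2.0" # any 2.x release, e.g. 2.0.2
  }
}
```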

@lmswagatadutta (Contributor)

provider "logicmonitor" {
api_id = var.logicmonitor_api_id
api_key = var.logicmonitor_api_key
company = var.logicmonitor_company
bulk_resource = true //When working with bulk resources, this feature is optional to handle the Santaba API's rate limit.//
}

We have added this provider configuration to handle the rate limit.
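
For completeness, the variables referenced above would be declared roughly like this (the `sensitive` marking is an assumption on my part, but sensible for API credentials):

```hcl
variable "logicmonitor_api_id" {
  type      = string
  sensitive = true # assumption: keeps the credential out of plan output
}

variable "logicmonitor_api_key" {
  type      = string
  sensitive = true
}

variable "logicmonitor_company" {
  type = string
}
```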
