[Bug]: aws_msk_broker_nodes data source panics for the cluster with kraft mode enabled #38028

Open
alex-px opened this issue Jun 18, 2024 · 1 comment · May be fixed by #38042
Labels
bug Addresses a defect in current functionality. crash Results from or addresses a Terraform crash or kernel panic. service/kafka Issues and PRs that pertain to the kafka service.


alex-px commented Jun 18, 2024

Terraform Core Version

1.5.5

AWS Provider Version

5.52.0

Affected Resource(s)

Data source: aws_msk_broker_nodes

Expected Behavior

The data source returns the list of the cluster's broker nodes.

Actual Behavior

panic: runtime error: invalid memory address or nil pointer dereference

Relevant Error/Panic Output Snippet

Stack trace from the terraform-provider-aws_v5.52.0_x5 plugin:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x100b717e]

goroutine 1538 [running]:
github.com/hashicorp/terraform-provider-aws/internal/service/kafka.dataSourceBrokerNodesRead.func1(0xc003914600?, 0xc001db2ed8?)
        github.com/hashicorp/terraform-provider-aws/internal/service/kafka/broker_nodes_data_source.go:92 +0x1e
sort.insertionSort_func({0xc001db3170?, 0xc0020f6b40?}, 0x0, 0x6)
        sort/zsortfunc.go:12 +0xa7
sort.pdqsort_func({0xc001db3170?, 0xc0020f6b40?}, 0xc003914600?, 0x0?, 0x0?)
        sort/zsortfunc.go:73 +0x31b
sort.Slice({0x13a746c0?, 0xc003914600?}, 0xc001db3170)
        sort/slice.go:29 +0xc5
github.com/hashicorp/terraform-provider-aws/internal/service/kafka.dataSourceBrokerNodesRead({0x167dea08, 0xc0039fe900}, 0xc0011cad00, {0x16573580?, 0xc002196340?})
        github.com/hashicorp/terraform-provider-aws/internal/service/kafka/broker_nodes_data_source.go:91 +0x318
github.com/hashicorp/terraform-provider-aws/internal/provider.New.(*wrappedDataSource).Read.interceptedHandler[...].func7(0xc0011cad00?, {0x16573580?, 0xc002196340})
        github.com/hashicorp/terraform-provider-aws/internal/provider/intercept.go:113 +0x283
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).read(0x167dea08?, {0x167dea08?, 0xc002221f50?}, 0xd?, {0x16573580?, 0xc002196340?})
        github.com/hashicorp/terraform-plugin-sdk/v2@v2.34.0/helper/schema/resource.go:818 +0x7a
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).ReadDataApply(0xc001257ce0, {0x167dea08, 0xc002221f50}, 0xc0011ca900, {0x16573580, 0xc002196340})
        github.com/hashicorp/terraform-plugin-sdk/v2@v2.34.0/helper/schema/resource.go:1043 +0x13a
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ReadDataSource(0xc00354a918, {0x167dea08?, 0xc002221e90?}, 0xc002221b60)
        github.com/hashicorp/terraform-plugin-sdk/v2@v2.34.0/helper/schema/grpc_provider.go:1434 +0x6b1
github.com/hashicorp/terraform-plugin-mux/tf5muxserver.(*muxServer).ReadDataSource(0xc00085d340, {0x167dea08?, 0xc002221bc0?}, 0xc002221b60)
        github.com/hashicorp/terraform-plugin-mux@v0.16.0/tf5muxserver/mux_server_ReadDataSource.go:36 +0x193
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ReadDataSource(0xc000823900, {0x167dea08?, 0xc0022210e0?}, 0xc004ee6c30)
        github.com/hashicorp/terraform-plugin-go@v0.23.0/tfprotov5/tf5server/server.go:688 +0x27d
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ReadDataSource_Handler({0x163413c0, 0xc000823900}, {0x167dea08, 0xc0022210e0}, 0xc0009bfe00, 0x0)
        github.com/hashicorp/terraform-plugin-go@v0.23.0/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:572 +0x1a6
google.golang.org/grpc.(*Server).processUnaryRPC(0xc000c7a200, {0x167dea08, 0xc002220f60}, {0x1682b400, 0xc00118d980}, 0xc001cf79e0, 0xc002220990, 0x1f52c9c0, 0x0)
        google.golang.org/grpc@v1.63.2/server.go:1369 +0xdf8
google.golang.org/grpc.(*Server).handleStream(0xc000c7a200, {0x1682b400, 0xc00118d980}, 0xc001cf79e0)
        google.golang.org/grpc@v1.63.2/server.go:1780 +0xe8b
google.golang.org/grpc.(*Server).serveStreams.func2.1()
        google.golang.org/grpc@v1.63.2/server.go:1019 +0x8b
created by google.golang.org/grpc.(*Server).serveStreams.func2 in goroutine 14
        google.golang.org/grpc@v1.63.2/server.go:1030 +0x125

Error: The terraform-provider-aws_v5.52.0_x5 plugin crashed!

Terraform Configuration Files

resource "aws_msk_cluster" "example" {
  cluster_name           = "example"
  kafka_version          = "3.7.x.kraft"
  ....
  # standart inputs
  ....
  
data "aws_msk_broker_nodes" "default" {
  cluster_arn = "here goes cluster ARN"
}
  

Steps to Reproduce

  1. Create an MSK provisioned cluster with KRaft mode enabled.
  2. Reference the cluster from the aws_msk_broker_nodes data source:

data "aws_msk_broker_nodes" "default" {
  cluster_arn = "here goes cluster ARN"
}

Debug Output

No response

Panic Output

No response

Important Factoids

At the moment, for KRaft clusters the AWS API returns nearly empty "CONTROLLER" entries (with no BrokerNodeInfo) in addition to the broker nodes:

$ aws kafka list-nodes --cluster-arn arn:aws:kafka:eu-west-1...
{
    "NodeInfoList": [
        {
            "NodeType": "CONTROLLER"
        },
        {
            "AddedToClusterTime": "2024-06-18T09:48:35.409Z",
            "BrokerNodeInfo": {
                "AttachedENIId": "eni-aaa",
                "BrokerId": 3,
                "ClientSubnet": "subnet-0000",
                "ClientVpcIpAddress": "10.10.10.10",
                "CurrentBrokerSoftwareInfo": {
                    "KafkaVersion": "3.7.x.kraft"
                },
                "Endpoints": [
                    "xxx.c9.kafka.eu-west-1.amazonaws.com"
                ]
            },
            "InstanceType": "m7g.large",
            "NodeARN": "arn:aws:kafka:eu-west-1...",
            "NodeType": "BROKER"
        },
        

But the data source code cannot handle these entries, even though it already checks brokerNodeInfo != nil elsewhere:

[Screenshot: the sort.Slice comparator at broker_nodes_data_source.go:91-92, which dereferences BrokerNodeInfo without a nil check]

For clusters using ZooKeeper, the AWS API returns only broker nodes, so the panic does not occur there.
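For illustration, here is a minimal, self-contained sketch of the failure mode and one possible guard, assuming the AWS SDK for Go v2 Kafka types. The helper filterAndSortBrokerNodes is hypothetical, and the comparator shape is inferred from the stack trace rather than copied from the provider.

package main

import (
	"fmt"
	"sort"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/kafka/types"
)

// filterAndSortBrokerNodes drops entries whose BrokerNodeInfo is nil (such as
// the CONTROLLER entries returned for KRaft clusters) before sorting by broker
// ID. Sorting the raw list instead would dereference a nil BrokerNodeInfo in
// the comparator and panic, as in the stack trace above.
func filterAndSortBrokerNodes(nodes []types.NodeInfo) []types.NodeInfo {
	brokers := make([]types.NodeInfo, 0, len(nodes))
	for _, n := range nodes {
		if n.BrokerNodeInfo != nil {
			brokers = append(brokers, n)
		}
	}
	sort.Slice(brokers, func(i, j int) bool {
		return aws.ToFloat64(brokers[i].BrokerNodeInfo.BrokerId) <
			aws.ToFloat64(brokers[j].BrokerNodeInfo.BrokerId)
	})
	return brokers
}

func main() {
	// Simulated list-nodes response for a KRaft cluster: a controller entry
	// with no BrokerNodeInfo alongside ordinary broker entries.
	nodes := []types.NodeInfo{
		{NodeType: "CONTROLLER"}, // BrokerNodeInfo is nil, as the API returns for KRaft controllers
		{NodeType: types.NodeTypeBroker, BrokerNodeInfo: &types.BrokerNodeInfo{BrokerId: aws.Float64(3)}},
		{NodeType: types.NodeTypeBroker, BrokerNodeInfo: &types.BrokerNodeInfo{BrokerId: aws.Float64(1)}},
	}
	for _, n := range filterAndSortBrokerNodes(nodes) {
		fmt.Printf("broker %v\n", aws.ToFloat64(n.BrokerNodeInfo.BrokerId))
	}
	// Output:
	// broker 1
	// broker 3
}

Filtering before sorting keeps the comparator total and side-effect free; an alternative would be a nil check inside the comparator itself, but that would leave non-broker entries in the result.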

References

No response

Would you like to implement a fix?

Yes

@alex-px alex-px added the bug Addresses a defect in current functionality. label Jun 18, 2024

Community Note

Voting for Prioritization

  • Please vote on this issue by adding a 👍 reaction to the original post to help the community and maintainers prioritize this request.
  • Please see our prioritization guide for information on how we prioritize.
  • Please do not leave "+1" or other comments that do not add relevant new information or questions; they generate extra noise for issue followers and do not help prioritize the request.

Volunteering to Work on This Issue

  • If you are interested in working on this issue, please leave a comment.
  • If this would be your first contribution, please review the contribution guide.

@github-actions github-actions bot added crash Results from or addresses a Terraform crash or kernel panic. service/kafka Issues and PRs that pertain to the kafka service. labels Jun 18, 2024
@terraform-aws-provider terraform-aws-provider bot added the needs-triage Waiting for first response or review from a maintainer. label Jun 18, 2024
@ewbankkit ewbankkit removed the needs-triage Waiting for first response or review from a maintainer. label Jun 18, 2024
@alex-px alex-px linked a pull request Jun 19, 2024 that will close this issue