confluent_kafka_cluster Data Source is missing attribute 'http_endpoint' #14
Comments
@nikolai-fra could you share more details about your cluster? Is it Basic or Peering / Private Linked? upd: oh wait, you're saying it's missing completely?
Just run the following for a Basic cluster:
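(The maintainer's snippet did not survive the thread export; what follows is a hedged reconstruction of that kind of check. The cluster and environment IDs are placeholders, not values from this thread.)

```hcl
# Hypothetical IDs -- replace with your own cluster and environment.
data "confluent_kafka_cluster" "basic" {
  id = "lkc-abc123"
  environment {
    id = "env-xyz456"
  }
}

# On a Basic cluster this output should be a non-empty HTTPS URL
# (something of the form "https://pkc-....confluent.cloud:443").
output "http_endpoint" {
  value = data.confluent_kafka_cluster.basic.http_endpoint
}
```

On a PL / Peering cluster in a region without the enablement, the same output was reportedly empty, which is what the rest of this thread is about.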
So I suspect your cluster is either PL / Peering; could you share its ID / org ID with the support team?
Thanks for your reply @linouk23. OK, that explains why the value was empty on our cluster, so there is no general problem with the http_endpoint attribute. But one issue still persists: the data source confluent_kafka_cluster does not list the http_endpoint attribute. It's missing from the Terraform docs -> https://github.com/confluentinc/terraform-provider-confluent/blob/master/docs/data-sources/confluent_kafka_cluster.md?plain=1#L52 That's why I assumed a general problem with this attribute on the data source and opened this ticket.
No worries at all! Opened #15 to track the doc update 👍 and will talk to our support team about updating your Kafka cluster so that http_endpoint gets populated.
Thank you very much :)
Let's keep it open for now for additional visibility until we get to a resolution.
btw qq: is it a test cluster or something? The simplest solution for you could be to recreate the cluster (all new PL clusters should have http_endpoint populated).
@nikolai-fra ⬆️ I talked to some engineers internally and it seems like right now we can't fix it from our side for existing PL clusters (the ETA is a bit unknown, but if I had to guess, we're blocked for about 2 weeks), so you might have to provision a new PL cluster.
@linouk23 as I was blocked by #18, as recommended I deleted the cluster, network service and private link. To ensure everything was fresh I even deleted my backend tfstate file, so there was no sign of any old configuration anywhere. When I provision anew via Terraform I hit the same issue: http_endpoint is missing. Is the workaround, in the short term, to manually create the cluster and then import it into Terraform?
@Marcus-James-Adams are you saying you created a new cluster and http_endpoint is still missing?
Our teams are working hard to enable all the regions for both existing and new clusters though. |
@linouk23 correct, our clusters are in Azure UK South, if that makes a difference.
@Marcus-James-Adams I see, unfortunately your region isn't enabled just yet so you may need to wait for a fix from our backend team 😕
Is there an ETA for when each region will have it enabled by default? Or in the short term, can it be manually set by support?
@Marcus-James-Adams
Can this issue be fixed manually on a cluster-by-cluster basis by the support teams? We are restricted to UK South for regulatory reasons and have a project looking to go live before June.
@Marcus-James-Adams they may be able to do a one-off "fix" for your cluster a little bit earlier but you might need to talk to them directly.
@linouk23 This issue also affects the
This effectively stops Terraform being usable in any region without the fix.
I understand the frustration, and we'll do our best to enable other regions as soon as possible, but regarding
I'd like to highlight that TF works for new (or "reprovisioned") PL clusters in the following regions already:
Hi @linouk23,
Now that our region has had the fix applied, this is working OK. Thank you.
@Marcus-James-Adams that's great to hear! We'll probably keep this ticket open until we receive the official confirmation that the enablement process is completed for all the clusters.
Hey @linouk23 @Marcus-James-Adams, do you guys know if this has been fixed in AWS?
@komal-SkyNET if the question is about the existing clusters, we're currently targeting the end of June to enable it.
Closing ⬆️ since our backend team rolled out a fix that should resolve the issue. Also check out our latest
Hello,
We've encountered the following problem:
The resource confluent_kafka_cluster exposes an attribute http_endpoint.
The corresponding data source confluent_kafka_cluster does not provide this attribute.
When using this attribute (it is defined in the schema and Terraform validation succeeds), it appears to be empty, as the following error shows during "terraform apply":
Error: Post "/kafka/v3/clusters/redacted/topics": POST /kafka/v3/clusters/redacted/topics giving up after 1 attempt(s): Post "/kafka/v3/clusters/redacted/topics": unsupported protocol scheme ""
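(For context, a config along these lines triggers that error: the topic resource builds its REST URL from the data source's http_endpoint, and an empty value leaves the URL with no scheme or host. This is a sketch against the provider version discussed in this thread, which still used the http_endpoint name; the IDs, topic name, and variables are hypothetical.)

```hcl
data "confluent_kafka_cluster" "this" {
  id = "lkc-redacted"    # hypothetical cluster ID
  environment {
    id = "env-redacted"  # hypothetical environment ID
  }
}

resource "confluent_kafka_topic" "example" {
  kafka_cluster = data.confluent_kafka_cluster.this.id
  topic_name    = "example"

  # If http_endpoint resolves to "", the provider POSTs to
  # "/kafka/v3/clusters/<id>/topics" with no scheme or host and fails
  # with: unsupported protocol scheme ""
  http_endpoint = data.confluent_kafka_cluster.this.http_endpoint

  credentials {
    key    = var.kafka_api_key     # hypothetical variables
    secret = var.kafka_api_secret
  }
}
```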
I think this is a bug?