
AKS: NetworkProfile / Advanced Networking #1479

Merged

Conversation

@lfshr commented Jul 3, 2018

Added network_profile to kubernetes_cluster resource.
#1389 #1434
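
For readers skimming the thread, here is a rough sketch of the new block on an `azurerm_kubernetes_cluster` resource. The attribute names (`network_plugin`, `service_cidr`, `dns_service_ip`, `docker_bridge_cidr`) follow the discussion in this PR, but the released schema may differ, so treat this as illustrative rather than authoritative; unrelated required blocks are omitted for brevity.

```
resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = "${azurerm_resource_group.example.location}"
  resource_group_name = "${azurerm_resource_group.example.name}"
  dns_prefix          = "exampleaks"

  # linux_profile, service_principal and agent_pool_profile omitted for brevity

  # The block added by this PR
  network_profile {
    network_plugin     = "azure"         # or "kubenet"
    service_cidr       = "10.0.0.0/16"
    dns_service_ip     = "10.0.0.10"     # conventionally an address within service_cidr
    docker_bridge_cidr = "172.17.0.1/16"
  }
}
```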

@lfshr changed the title from "Containerservices advancednetworking" to "Containerservices NetworkProfile / Advanced Networking" on Jul 3, 2018
@lfshr commented Jul 3, 2018

@tombuildsstuff I couldn't get the tests running properly (they wouldn't fail when they should have), so I may need some assistance here. I also think I should merge the two tests into one. This is just the resource; if you want me to include the data source here, let me know, otherwise I'll open a separate PR for it. 😄

Also, check my networking. It's not one of my strong points! 😛

@lfshr commented Jul 4, 2018

@tombuildsstuff the resource and data source are good to review. I'm just about to add an example to the examples.

Be harsh! 👍

@lfshr commented Jul 4, 2018

Result of pinging a node agent from another VM within the same vnet:

liam@test-networking:~$ ping 10.1.0.35
PING 10.1.0.35 (10.1.0.35) 56(84) bytes of data.
64 bytes from 10.1.0.35: icmp_seq=1 ttl=64 time=1.50 ms
64 bytes from 10.1.0.35: icmp_seq=2 ttl=64 time=0.853 ms
64 bytes from 10.1.0.35: icmp_seq=3 ttl=64 time=0.787 ms
64 bytes from 10.1.0.35: icmp_seq=4 ttl=64 time=2.14 ms
64 bytes from 10.1.0.35: icmp_seq=5 ttl=64 time=0.884 ms
64 bytes from 10.1.0.35: icmp_seq=6 ttl=64 time=0.752 ms
64 bytes from 10.1.0.35: icmp_seq=7 ttl=64 time=0.730 ms
64 bytes from 10.1.0.35: icmp_seq=8 ttl=64 time=0.692 ms
64 bytes from 10.1.0.35: icmp_seq=9 ttl=64 time=0.682 ms
64 bytes from 10.1.0.35: icmp_seq=10 ttl=64 time=0.893 ms
64 bytes from 10.1.0.35: icmp_seq=11 ttl=64 time=0.745 ms

Nothing more satisfying 😄

@tombuildsstuff (this comment has been minimized)

@lfshr commented Jul 25, 2018

@tombuildsstuff did you look at my comments above? The error you're receiving on the serviceCidr is non-descriptive; see Azure/AKS#506.
The ARM template above will not work: you need to supply the subnetId inside agentPoolProfile, and the serviceCidr must be within this subnet.
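
For Terraform users following along, a minimal sketch of what that requirement looks like in the resource discussed here, assuming the `agent_pool_profile` block exposes a `vnet_subnet_id` argument as in this PR; the subnet resource name is a placeholder:

```
  # Sketch only: the agent pool is attached to a subnet whenever a
  # network_profile block is used; "azurerm_subnet.aks" is a hypothetical name.
  agent_pool_profile {
    name           = "default"
    count          = 1
    vm_size        = "Standard_DS2_v2"
    os_type        = "Linux"
    vnet_subnet_id = "${azurerm_subnet.aks.id}"
  }
```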

@tombuildsstuff commented Jul 25, 2018
@lfshr I read the first two links but missed the third, apologies. I'd suggest we reply within those specific issues to work around the lack of threading here?

> You need to supply the subnetId inside agentPoolProfile and the serviceCidr must be within this subnet.

🤦‍♂️ good spot - I'll re-test that now

@tombuildsstuff self-assigned this Jul 25, 2018
The ip fields are now Computed too
… the network_profile blocks specified

Example:

```

Error: Error running plan: 1 error(s) occurred:

* azurerm_kubernetes_cluster.test: 1 error(s) occurred:

* azurerm_kubernetes_cluster.test: If a `network_profile` block is specified, the Agent Pools must be attached to a Subnet

```
@tombuildsstuff commented Jul 25, 2018

@lfshr cool, so I can confirm that setting that works. Given I've made those changes locally, I'm going to commit them (and then add a couple more commits with acceptance tests for both the Azure network profile and the Kubenet network profile, plus some validation to work around Azure/AKS#506) so that we can run the tests and ship this (I hope you don't mind!)

@metacpp thanks for the heads up :)

@tombuildsstuff removed the upstream/microsoft label ("Indicates that there's an upstream issue blocking this issue/PR") Jul 25, 2018
@tombuildsstuff force-pushed the containerservices-advancednetworking branch from 8e8f39e to ff3e206 on July 25, 2018 at 15:00
@lfshr commented Jul 25, 2018

@tombuildsstuff not at all. Do what you need to do. Anything you need from my end?

@metacpp commented Jul 25, 2018

@lfshr @tombuildsstuff you can see all my changes on top of @lfshr's branch:
https://github.com/lfshr/terraform-provider-azurerm/pull/2/files

  1. It bypasses the validation error.
  2. It resolves the issue when re-running terraform plan.

It's just a reference. :-)

@tombuildsstuff commented Jul 25, 2018
Resource tests pass: (screenshot, 2018-07-25 21:26:12)

Data source tests pass: (screenshot, 2018-07-25 21:26:06)

@tombuildsstuff commented Jul 25, 2018
@lfshr nope, all good - the tests pass :) Thanks again for this PR - apologies for the confusion here.

@metacpp commented Jul 25, 2018

@tombuildsstuff what're the conclusions here?

@giggio commented Jul 31, 2018

Do we know when 1.12 is expected?

@tombuildsstuff commented:
@giggio we don't give exact dates since things can change - but I have a feeling it'll probably be next week at this point.

katbyte pushed a commit that referenced this pull request Mar 21, 2019
This PR aims to add Calico network policy support to AKS as described in the [MS Docs](https://docs.microsoft.com/en-us/azure/aks/use-network-policies).

Added `network_policy` to `kubernetes_cluster` resource.

PR Inspiration: #1479
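
For context, a hedged sketch of how that follow-up surfaces in configuration, assuming `network_policy` sits inside the same `network_profile` block shown earlier in this thread; check the provider docs for the released syntax:

```
  network_profile {
    network_plugin = "azure"
    network_policy = "calico"   # added by the follow-up change referenced here
  }
```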
@ghost locked and limited conversation to collaborators on Mar 30, 2020