AKS RBAC #104

Closed
matelang opened this issue Jun 10, 2019 · 28 comments

@matelang

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Description

The AzureRM provider enables you to define a managed Kubernetes cluster (AKS) on Azure.
It is possible to enable RBAC (Role-Based Access Control), which tightly integrates Kubernetes' authentication and authorization with Azure Active Directory.

To enable it, you have to define the following block on the azurerm_kubernetes_cluster resource:

  role_based_access_control {
    enabled = true

    azure_active_directory {
      client_app_id     = ""
      server_app_id     = ""
      server_app_secret = ""
      tenant_id         = ""
    }
  }

After reading some documentation, I realized that there is no way to set this feature up end to end using plain Terraform.

The following blog post describes how you need to create a server application, update its manifest, and create and assign a client application in order to set RBAC up correctly:
https://blog.jcorioland.io/archives/2018/11/20/azure-aks-kubernetes-rbac-azure-active-directory-terraform.html

There is also a GitHub repository from the same author automating most of the above:
https://github.com/jcorioland/aks-rbac-azure-ad

Is it possible to add support for the AD-related steps from the above installation scenario?
Thanks.

New or Affected Resource(s)

TBD

Potential Terraform Configuration

TBD

References

@dbourcet

The links you provided are outdated. I managed to do almost everything stated in those links using Terraform, except the "Grant admin consent" part. It requires the latest version of the azuread provider (0.4.0).
Here is how to create both applications (client and server):

######################################################################### SERVER
resource "azuread_application" "server" {
  name                    = "k8s_server"
  reply_urls              = ["http://k8s_server"]
  type                    = "webapp/api"
  group_membership_claims = "All"

  required_resource_access {
    # Windows Azure Active Directory API
    resource_app_id = "00000002-0000-0000-c000-000000000000"

    resource_access {
      # DELEGATED PERMISSIONS: "Sign in and read user profile":
      # 311a71cc-e848-46a1-bdf8-97ff7156d8e6
      id   = "311a71cc-e848-46a1-bdf8-97ff7156d8e6"
      type = "Scope"
    }
  }

  required_resource_access {
    # MicrosoftGraph API
    resource_app_id = "00000003-0000-0000-c000-000000000000"

    # APPLICATION PERMISSIONS: "Read directory data":
    # 7ab1d382-f21e-4acd-a863-ba3e13f7da61
    resource_access {
      id   = "7ab1d382-f21e-4acd-a863-ba3e13f7da61"
      type = "Role"
    }

    # DELEGATED PERMISSIONS: "Sign in and read user profile":
    # e1fe6dd8-ba31-4d61-89e7-88639da4683d
    resource_access {
      id   = "e1fe6dd8-ba31-4d61-89e7-88639da4683d"
      type = "Scope"
    }

    # DELEGATED PERMISSIONS: "Read directory data":
    # 06da0dbc-49e2-44d2-8312-53f166ab848a
    resource_access {
      id   = "06da0dbc-49e2-44d2-8312-53f166ab848a"
      type = "Scope"
    }
  }
}

resource "azuread_service_principal" "server" {
  application_id = "${azuread_application.server.application_id}"
}

resource "azuread_service_principal_password" "server" {
  service_principal_id = "${azuread_service_principal.server.id}"
  value                = "${random_string.application_server_password.result}"
  end_date             = "${timeadd(timestamp(), "87600h")}" # 10 years

  # The end date will change at each run (terraform apply), causing a new password to 
  # be set. So we ignore changes on this field in the resource lifecycle to avoid this
  # behaviour.
  # If the desired behaviour is to change the end date, then the resource must be
  # manually tainted.
  lifecycle {
    ignore_changes = ["end_date"]
  }
}

resource "random_string" "application_server_password" {
  length  = 16
  special = true

  keepers = {
    service_principal = "${azuread_service_principal.server.id}"
  }
}

######################################################################### CLIENT

resource "azuread_application" "client" {
  name       = "k8s_client"
  reply_urls = ["http://k8s_client"]
  type       = "native"

  required_resource_access {
    # Windows Azure Active Directory API
    resource_app_id = "00000002-0000-0000-c000-000000000000"

    resource_access {
      # DELEGATED PERMISSIONS: "Sign in and read user profile":
      # 311a71cc-e848-46a1-bdf8-97ff7156d8e6
      id   = "311a71cc-e848-46a1-bdf8-97ff7156d8e6"
      type = "Scope"
    }
  }

  required_resource_access {
    # AKS ad application server
    resource_app_id = "${azuread_application.server.application_id}"

    resource_access {
      # Server app Oauth2 permissions id
      id   = "${lookup(azuread_application.server.oauth2_permissions[0], "id")}"
      type = "Scope"
    }
  }
}

resource "azuread_service_principal" "client" {
  application_id = "${azuread_application.client.application_id}"
}

resource "azuread_service_principal_password" "client" {
  service_principal_id = "${azuread_service_principal.client.id}"
  value                = "${random_string.application_client_password.result}"
  end_date             = "${timeadd(timestamp(), "87600h")}" # 10 years

  lifecycle {
    ignore_changes = ["end_date"]
  }
}

resource "random_string" "application_client_password" {
  length  = 16
  special = true

  keepers = {
    service_principal = "${azuread_service_principal.client.id}"
  }
}

Then comes the cluster creation part:

resource "azurerm_kubernetes_cluster" "this" {
  [...]
  service_principal {
    client_id     = "${azuread_application.client.application_id}"
    client_secret = "${azuread_service_principal_password.client.value}"
  }

  role_based_access_control {
    enabled = true

    azure_active_directory {
      client_app_id     = "${azuread_application.client.application_id}"
      server_app_id     = "${azuread_application.server.application_id}"
      server_app_secret = "${azuread_service_principal_password.server.value}"
    }
  }
}

Then, the apply must go in two parts. First, create only the server and client applications:

$ terraform apply -target azuread_service_principal.server -target azuread_service_principal.client

Now go to the Azure Portal and grant admin consent manually (click click!) on both applications (the server, then the client). I haven't yet found a way to Terraform that step. I will let you know if I do.

Then you can apply to create everything:

$ terraform apply

Please let me know if I wasn't clear on some points.

@matelang
Author

@dbourcet thank you so much for the detailed explanation! I am going to try to implement this right away!

If this works as expected, then the issue can be marked as resolved, and sorry for the disturbance!

@dbourcet

My pleasure. If it doesn't work for you let me know, as it works for me. Also, if in the future you find a way to Terraform the "Grant admin consent" part, please think of me!

@evenh
Contributor

evenh commented Jun 13, 2019

Can confirm that @dbourcet's approach works. Just found out the same configuration (and issue with "Grant admin consent") yesterday.

@katbyte
Collaborator

katbyte commented Jun 13, 2019

If you don't mind @dbourcet, I am going to add this to the examples folder. Please let me know if that's not OK!

@katbyte katbyte added this to the v0.5.0 milestone Jun 13, 2019
@matelang
Author

matelang commented Jun 13, 2019

@dbourcet correct me if I'm wrong, but I remember reading somewhere that it is best practice to have a third Service Principal (SP) for the cluster's own usage, separate from the RBAC AD client SP.

In this case there would be three SPs in total:

  • cluster - assumed by Kubernetes itself to operate on Azure resources
  • server - for AD RBAC auth
  • client - for AD RBAC auth

So in terraform terms add the following:

resource "azuread_application" "aks_cluster" {
  name = "aks-${var.identifier}"
}

resource "azuread_service_principal" "aks_cluster" {
  application_id = azuread_application.aks_cluster.application_id
}

resource "random_string" "aks_cluster_password" {
  length = 16
  special = false

  keepers {
    service_principal = azuread_service_principal.aks_cluster.id
  }
}

resource "azuread_service_principal_password" "aks_cluster_passwod" {
  service_principal_id = azuread_service_principal.aks_cluster.id
  value = random_string.aks_cluster_password.result

  end_date = timeadd(timestamp(), "87600h")

  lifecycle {
    ignore_changes = [
      "end_date"]
  }
}

I highlighted with comments what would be changed in this case:

resource "azurerm_kubernetes_cluster" "this" {
  [...]
  service_principal {
    client_id     = azuread_application.aks_cluster.application_id ############ HERE
    client_secret = random_string.aks_cluster_password.result ############ HERE
  }

  role_based_access_control {
    enabled = true

    azure_active_directory {
      client_app_id     = "${azuread_application.client.application_id}"
      server_app_id     = "${azuread_application.server.application_id}"
      server_app_secret = "${azuread_service_principal_password.server.value}"
    }
  }
}

What do you think?

@dbourcet

@katbyte: I'm OK with you adding it to the examples folder, but you should know that I copy/pasted those snippets and quickly stripped out some business-related naming, so it's possible that some variable/resource names don't match, or that my edits introduced small syntax errors here and there. Please make sure the code is valid, working Terraform and tweak it where needed before adding it, to avoid mistakes.

@dbourcet

@matelang I didn't read anywhere that this is a best practice, but it doesn't matter: I consider it best practice too, as it separates concerns and implements the principle of least privilege. I didn't implement it in my business because I was in a hurry, so you are on your own if you want to try, but I will surely try it this way one day, as I find it more proper and elegant.

@matelang
Author

@dbourcet I am going to try it since I'm implementing from scratch; if it works I'll confirm here!

@dbourcet

@katbyte I just created a project with Terraform files and some documentation: https://github.com/dbourcet/aks-rbac-azure-ad
I made it clean and tested it, so you can pick from it for the examples folder if you want.

@PirateBread

PirateBread commented Jul 10, 2019

Some good work here chaps. The problem is not so much automation as security in my opinion.

You have to have Global Admin to grant consent, which means that if you want to automate it, your pipeline needs god mode on your entire tenant. So for now there's still a manual step.

@dbourcet

I agree. As I don't want my pipeline to be in god mode, I am still stuck with the manual step of granting consent by clicking in the Azure Portal. My business needs allow me to include this manual step, but it nevertheless bothers me.

@jpreese
Contributor

jpreese commented Jul 10, 2019

@dbourcet we are dealing with this exact problem today, and are looking for a solution. What would even be the god mode solution?

It doesn't look like service principals can grant consent, only users can?

@mocofound

mocofound commented Jul 11, 2019

👋 I agree, great work here everyone. I have also been working on automating this workflow end-to-end using Terraform.

@dbourcet I have tested it, and the "Configure Kubernetes RBAC" section could also be implemented in Terraform using the kubernetes provider in a third run (see the sketch at the end of this comment): Terraform Kubernetes Provider Cluster Role Binding

@matelang I also have the same questions about that possible third service principal and I am interested in more info around the security of this.

@jpreese The admin consent can now be granted via the Azure CLI, as opposed to the Azure Portal UI, so I am investigating using that via local-exec, but there is a chance this is still an out-of-band step that comes with security considerations:

az ad app permission admin-consent --id $serverApplicationId
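
For the cluster role binding itself, here is a minimal sketch using the kubernetes provider (the variable and binding names are assumptions, not values from this thread):

variable "aad_admin_group_object_id" {}

# Bind an AAD group to the built-in cluster-admin role.
# Apply this in a separate run once the cluster exists.
resource "kubernetes_cluster_role_binding" "aks_aad_admins" {
  metadata {
    name = "aks-aad-admins"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }

  subject {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Group"
    name      = "${var.aad_admin_group_object_id}" # AAD group object ID
  }
}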

@jpreese
Contributor

jpreese commented Jul 11, 2019

@mocofound it can be done with the Azure CLI, yes, but can it be done when you are logged in as a service principal? We would like to use a service principal to grant consent, as this will be done in automation.

Azure/azure-cli#8912

@dbourcet

@mocofound Using @matelang's remark, we managed to configure RBAC with a third run: see this.

@jpreese The god mode solution is using local-exec and a CLI call, as suggested by @mocofound, since you are authenticated with your user account when Terraform runs, but I also don't yet understand whether this comes with security considerations.

@jpreese
Contributor

jpreese commented Jul 12, 2019

@dbourcet the issue is that we run Terraform in automation, in a pipeline, logged in as a service account. We're not logged in as a user.

So until Microsoft allows that to happen we'll most likely need to run the manual step.

@matelang
Author

It's indeed a bit weird having an extra manual step in the lifecycle of the project.
On the other hand, here is how I approached it for now:

  1. Run terraform and let it fail first.
  2. Authorize the app manually
  3. Re-run terraform and let it complete

I know it's not nice, but this way I do not introduce anything "extra" in the DSL or local-exec, and 99% of the time no intervention is required. Well, the remaining 1% is still ugly :)

Maybe it's off topic, but do you have a working example of a Terraform configuration that lets AKS access an ACR (container registry)?

@PirateBread

There are two ways to access ACR: using the AKS service principal, or with a Kubernetes secret.

https://docs.microsoft.com/en-us/azure/container-registry/container-registry-auth-aks

You can either use Terraform to apply the RBAC permissions on the ACR to allow the AKS SPN, or you can use the Terraform Kubernetes provider to inject the secret (a sketch of the secret approach follows below).
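
As a minimal sketch of the secret approach (0.12 syntax; the registry hostname and the two variables are placeholders I'm assuming, not values from this thread):

variable "acr_sp_client_id" {}
variable "acr_sp_client_secret" {}

# Image pull secret for ACR, injected with the kubernetes provider.
resource "kubernetes_secret" "acr_pull" {
  metadata {
    name      = "acr-pull-secret"
    namespace = "default"
  }

  type = "kubernetes.io/dockerconfigjson"

  data = {
    ".dockerconfigjson" = jsonencode({
      auths = {
        "myregistry.azurecr.io" = {
          username = var.acr_sp_client_id
          password = var.acr_sp_client_secret
          auth     = base64encode("${var.acr_sp_client_id}:${var.acr_sp_client_secret}")
        }
      }
    })
  }
}

Pods then reference the secret via imagePullSecrets.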

@matelang
Author

Thanks @PirateBread,

I'd prefer the solution that grants AKS access to pull containers from ACR.

I'm a bit confused about how the following script from the link you provided would translate into Terraform.

#!/bin/bash

AKS_RESOURCE_GROUP=myAKSResourceGroup
AKS_CLUSTER_NAME=myAKSCluster
ACR_RESOURCE_GROUP=myACRResourceGroup
ACR_NAME=myACRRegistry

# Get the id of the service principal configured for AKS
CLIENT_ID=$(az aks show --resource-group $AKS_RESOURCE_GROUP --name $AKS_CLUSTER_NAME --query "servicePrincipalProfile.clientId" --output tsv)

# Get the ACR registry resource id
ACR_ID=$(az acr show --name $ACR_NAME --resource-group $ACR_RESOURCE_GROUP --query "id" --output tsv)

# Create role assignment
az role assignment create --assignee $CLIENT_ID --role acrpull --scope $ACR_ID

@PirateBread

You would have to use this: https://www.terraform.io/docs/providers/azurerm/r/role_assignment.html

resource "azurerm_role_assignment" "Give_AKS_SPN_Access_To_ACR" {
  scope                = "Define_your_Scope"
  role_definition_name = "AcrPull"
  principal_id         = "your_aks_service_principal_id"
}

What this does is grant your AKS service principal the AcrPull role over your ACR container registry.

You can define the scope against just the individual ACR, the resource group, or the entire subscription, whichever best meets your requirements.
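
Translating the CLI script above into Terraform, a minimal sketch could look like this (it assumes the aks_cluster service principal from the earlier examples; the registry names are the same placeholders used in the script):

# Look up the existing registry to get its resource ID (replaces "az acr show").
data "azurerm_container_registry" "acr" {
  name                = "myACRRegistry"
  resource_group_name = "myACRResourceGroup"
}

# Grant the AKS service principal pull access on the registry
# (replaces "az role assignment create").
resource "azurerm_role_assignment" "aks_acr_pull" {
  scope                = "${data.azurerm_container_registry.acr.id}"
  role_definition_name = "AcrPull"
  principal_id         = "${azuread_service_principal.aks_cluster.id}"
}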

@matelang
Author

Thanks @PirateBread for the example.
Do you think we can close this issue or is there still something to be clarified?

I am now enlightened, so I consider it done, since I guess there is nothing more we can do about the manual step.

@katbyte katbyte modified the milestones: v0.5.0, v0.6.0 Jul 24, 2019
@lzadjsf

lzadjsf commented Aug 15, 2019

Hello,

What is the reason to do it like this:

  1. Run terraform and let it fail first.
  2. Authorize the app manually.
  3. Re-run terraform and let it complete

It seems like you just want to do it manually, nothing more. It does not improve security in fully automated pipelines. Why not allow admin consent to be granted by whoever runs the Terraform execution? If they are already allowed to deploy and run Terraform scripts, there is no extra security in waiting for a failure, granting consent manually, and then running again.

@tombuildsstuff tombuildsstuff modified the milestones: v0.6.0, v0.7.0 Aug 21, 2019
@matelang
Author

Hi @lzadjsf,

I am all for a fully automated solution, but in my opinion there is no point adding a workaround for something that you are probably only going to have to do once: the app authorization.

When proper support is added to Terraform, I guess it makes total sense to have it also authorize the app, but this depends highly on the organization and the authority of the teams in your environment.

I have seen orgs with privileged teams / pipelines taking care of IAM.

I hope I could clarify my point of view.

@tylersoren

tylersoren commented Aug 29, 2019

I was able to create a workaround for this by adding a provisioner to the "azuread_service_principal" resource to run the grant command. This assumes that your Terraform runner has the Azure CLI installed.

  provisioner "local-exec" {

    command = "az ad app permission admin-consent --id ${azuread_application.app_name.application_id}"

  }
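
For context, a minimal sketch of how that provisioner sits inside the full resource (using the server application from the earlier examples; it also assumes the runner is logged in to the Azure CLI as a user with rights to grant consent):

resource "azuread_service_principal" "server" {
  application_id = "${azuread_application.server.application_id}"

  # Grant admin consent for the server application once the SP exists.
  provisioner "local-exec" {
    command = "az ad app permission admin-consent --id ${azuread_application.server.application_id}"
  }
}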

@katbyte katbyte modified the milestones: v0.7.0, v0.8.0 Oct 11, 2019
@spanktar

spanktar commented Oct 25, 2019

After beating my head against this for some time, here is what I have that applies successfully, combining all the examples above. My apologies for not clearing out our variable conventions. After applying this, we can no longer use kubectl and I'm not sure why.

### AKS

resource "azurerm_kubernetes_cluster" "az_kubernetes_cluster" {
  dns_prefix          = "${var.env_name}"
  location            = "${azurerm_resource_group.aks_resource_group.location}"
  name                = "${var.account_name}-${var.env_name}-aks"
  resource_group_name = "${azurerm_resource_group.aks_resource_group.name}"

  agent_pool_profile {
    count               = 3
    enable_auto_scaling = true
    max_count           = 12
    min_count           = 3
    name                = "${var.env_name}"
    os_disk_size_gb     = "${var.aks_root_volume_size}"
    os_type             = "Linux"
    type                = "VirtualMachineScaleSets"
    vm_size             = "${var.vm_size}"
  }

  lifecycle {
    # This format will change in 0.12.x and is broken in 0.12.2
    ignore_changes = [
      "agent_pool_profile.0.count",
    ]
  }

  role_based_access_control {
    enabled = true

    azure_active_directory {
      client_app_id     = "${azuread_application.client.application_id}"
      server_app_id     = "${azuread_application.server.application_id}"
      server_app_secret = "${azuread_service_principal_password.server.value}"
    }
  }

  service_principal {
    client_id     = "${azuread_application.aks_cluster.application_id}"
    client_secret = "${random_string.aks_cluster_password.result}"
  }

  tags = {
    environment = "${var.env_name}"
  }
}

# Resource group
resource "azurerm_resource_group" "aks_resource_group" {
  location = "${var.location}"
  name     = "${var.account_name}-${var.env_name}-aks-rg"

  tags = {
    environment = "${var.env_name}"
  }
}

# Shamelessly plucked from:
# https://github.com/terraform-providers/terraform-provider-azuread/issues/104
# https://github.com/dbourcet/aks-rbac-azure-ad

### CLUSTER ####################################################################

resource "azuread_application" "aks_cluster" {
  name = "${var.account_name}-${var.env_name}-aks-cluster-sp"
}

resource "azuread_service_principal" "aks_cluster" {
  application_id = "${azuread_application.aks_cluster.application_id}"
}

resource "random_string" "aks_cluster_password" {
  length      = "${var.password_length}"
  min_lower   = 1
  min_numeric = 1
  min_special = 1
  min_upper   = 1
  special     = true

  keepers = {
    service_principal = "${azuread_service_principal.aks_cluster.id}"
  }
}

resource "azuread_service_principal_password" "aks_cluster_password" {
  service_principal_id = "${azuread_service_principal.aks_cluster.id}"
  value                = "${random_string.aks_cluster_password.result}"

  end_date = "${timeadd(timestamp(), "87600h")}"

  lifecycle {
    ignore_changes = ["end_date"]
  }
}

### SERVER ######################################################################

resource "azuread_application" "server" {
  name                    = "${var.account_name}-${var.env_name}-aks-server-sp"
  reply_urls              = ["http://${var.account_name}-${var.env_name}-aks-server-sp"]
  type                    = "webapp/api"
  group_membership_claims = "All"

  required_resource_access {
    # Windows Azure Active Directory API
    resource_app_id = "00000002-0000-0000-c000-000000000000"

    resource_access {
      # DELEGATED PERMISSIONS: "Sign in and read user profile":
      # 311a71cc-e848-46a1-bdf8-97ff7156d8e6
      id = "311a71cc-e848-46a1-bdf8-97ff7156d8e6"

      type = "Scope"
    }
  }

  required_resource_access {
    # MicrosoftGraph API
    resource_app_id = "00000003-0000-0000-c000-000000000000"

    # APPLICATION PERMISSIONS: "Read directory data":
    # 7ab1d382-f21e-4acd-a863-ba3e13f7da61
    resource_access {
      id   = "7ab1d382-f21e-4acd-a863-ba3e13f7da61"
      type = "Role"
    }

    # DELEGATED PERMISSIONS: "Sign in and read user profile":
    # e1fe6dd8-ba31-4d61-89e7-88639da4683d
    resource_access {
      id   = "e1fe6dd8-ba31-4d61-89e7-88639da4683d"
      type = "Scope"
    }

    # DELEGATED PERMISSIONS: "Read directory data":
    # 06da0dbc-49e2-44d2-8312-53f166ab848a
    resource_access {
      id   = "06da0dbc-49e2-44d2-8312-53f166ab848a"
      type = "Scope"
    }
  }
}

resource "azuread_service_principal" "server" {
  application_id = "${azuread_application.server.application_id}"
}

resource "azuread_service_principal_password" "server" {
  service_principal_id = "${azuread_service_principal.server.id}"
  value                = "${random_string.application_server_password.result}"
  end_date             = "${timeadd(timestamp(), "87600h")}"                   # 10 years

  # The end date will change at each run (terraform apply), causing a new password to
  # be set. So we ignore changes on this field in the resource lifecycle to avoid this
  # behaviour.
  # If the desired behaviour is to change the end date, then the resource must be
  # manually tainted.
  lifecycle {
    ignore_changes = ["end_date"]
  }
}

resource "random_string" "application_server_password" {
  length  = "${var.password_length}"
  special = true

  keepers = {
    service_principal = "${azuread_service_principal.server.id}"
  }
}

### CLIENT #######################################################################

resource "azuread_application" "client" {
  name       = "${var.account_name}-${var.env_name}-aks-client-sp"
  reply_urls = ["http://${var.account_name}-${var.env_name}-aks-client-sp"]
  type       = "native"

  required_resource_access {
    # Windows Azure Active Directory API
    resource_app_id = "00000002-0000-0000-c000-000000000000"

    resource_access {
      # DELEGATED PERMISSIONS: "Sign in and read user profile":
      # 311a71cc-e848-46a1-bdf8-97ff7156d8e6
      id = "311a71cc-e848-46a1-bdf8-97ff7156d8e6"

      type = "Scope"
    }
  }

  required_resource_access {
    # AKS ad application server
    resource_app_id = "${azuread_application.server.application_id}"

    resource_access {
      # Server app Oauth2 permissions id
      id   = "${lookup(azuread_application.server.oauth2_permissions[0], "id")}"
      type = "Scope"
    }
  }
}

resource "azuread_service_principal" "client" {
  application_id = "${azuread_application.client.application_id}"

  # Holy crap, shell out to do this?!
  provisioner "local-exec" {
    command = "az ad app permission admin-consent --id ${azuread_application.server.application_id}"
  }
}

resource "azuread_service_principal_password" "client" {
  service_principal_id = "${azuread_service_principal.client.id}"
  value                = "${random_string.application_client_password.result}"
  end_date             = "${timeadd(timestamp(), "87600h")}"                   # 10 years

  lifecycle {
    ignore_changes = ["end_date"]
  }
}

resource "random_string" "application_client_password" {
  length  = "${var.password_length}"
  special = true

  keepers = {
    service_principal = "${azuread_service_principal.client.id}"
  }
}

# AKS AD Users
resource "azuread_user" "aks_ad_user_admin" {
  account_enabled       = true
  display_name          = "${title(var.env_name)} ${title(var.company)} Admin"
  force_password_change = false
  password              = "${random_string.aks_ad_user_admin_password.result}"
  user_principal_name   = "${var.account_name}-${var.env_name}-admin@${var.company_ad_domain}"
}

resource "azuread_user" "aks_ad_user_developer" {
  account_enabled       = true
  display_name          = "${title(var.env_name)} ${title(var.company)} Dev"
  force_password_change = false
  password              = "${random_string.aks_ad_user_developer_password.result}"
  user_principal_name   = "${var.account_name}-${var.env_name}-dev@${var.company_ad_domain}"
}

# AKS AD User's Passwords
resource "random_string" "aks_ad_user_admin_password" {
  length      = 16   # Hard limit (in the UI at least)
  min_lower   = 1
  min_numeric = 1
  min_special = 1
  min_upper   = 1
  special     = true
}

resource "random_string" "aks_ad_user_developer_password" {
  length      = 16   # Hard limit (in the UI at least)
  min_lower   = 1
  min_numeric = 1
  min_special = 1
  min_upper   = 1
  special     = true
}

# AKS AD Group
resource "azuread_group" "aks_ad_group_all" {
  name = "${title(var.env_name)} AKS Group ALL"

  members = [
    "${azuread_user.aks_ad_user_admin.object_id}",
    "${azuread_user.aks_ad_user_developer.object_id}",
  ]
}

# AD Role Assignment
resource "azurerm_role_assignment" "aks_role_assignment" {
  principal_id         = "${azuread_group.aks_ad_group_all.id}"
  role_definition_name = "Azure Kubernetes Service Cluster User Role"
  scope                = "${azurerm_kubernetes_cluster.az_kubernetes_cluster.id}"
}

@katbyte katbyte removed this from the v0.8.0 milestone Mar 11, 2020
@manicminer
Member

manicminer commented May 19, 2020

As has been discussed, you are able to use Terraform to configure the necessary app registrations, service principals and related API permissions to enable AAD RBAC for AKS (thanks @dbourcet and @matelang for the config examples!). Granting admin consent is generally considered best practice to perform out of band, by a human operator (and to this end you can only do it when authenticated as a user, not as a service principal). See https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/grant-admin-consent and https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-admin-consent

This does present a workflow where manual steps are required, but there's not much more we can reasonably do here as it's by design. That said, I believe it's now possible to configure AAD integration using an AKS preview that doesn't require admin consent (caveat: I haven't tried it, and it does say you will need new clusters) - see https://docs.microsoft.com/en-us/azure/aks/managed-aad
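
For reference, a minimal sketch of what that managed integration looks like in newer azurerm provider versions (this assumes the 2.x azurerm_kubernetes_cluster schema with a managed azure_active_directory block; the admin group reference is borrowed from the earlier example and is an assumption):

resource "azurerm_kubernetes_cluster" "example" {
  [...]

  role_based_access_control {
    enabled = true

    azure_active_directory {
      # Managed AAD integration: no server/client app registrations,
      # so no admin consent step is needed.
      managed                = true
      admin_group_object_ids = [azuread_group.aks_ad_group_all.id]
    }
  }
}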

Accordingly, I'm going to close this issue as resolved, but please feel free to comment if I have missed something. Thanks!

@ghost

ghost commented Jun 18, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost ghost locked and limited conversation to collaborators Jun 18, 2020