
How do I get the admin endpoint and master key when running in a container #4147

Open

VenkateshSrini opened this issue Feb 27, 2019 · 22 comments

@VenkateshSrini
I'm running an Azure Function in a container. I want to retrieve the master key and hit the admin endpoint for the function running in the container. How do I do that?

Investigative information

Please provide the following:

  • Function App version (1.0 or 2.0): 2.0
  • Function App name: N/A
  • Function name(s) (as appropriate): N/A
  • Invocation ID: N/A
  • Region: Container

Repro steps

Provide the steps required to reproduce the problem:

  1. Create an Azure Function and create the Dockerfile for it
  2. Host the Docker container in Docker Desktop for Windows (CE)
  3. Access the admin endpoint. It is not accessible (see the example call below).
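
For example, a call like the following from the host machine fails (a sketch; the 8080:80 port mapping is an assumption):

# the container is assumed to map host port 8080 to container port 80
curl -i "http://localhost:8080/admin/host/status"
# typically returns 401 Unauthorized because no master key is known to the caller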

Expected behavior

I should be able to retrieve the master key and access the admin endpoint.

Actual behavior

Provide a description of the actual behavior observed.

The Azure Functions admin endpoint is not accessible inside the Docker container.

Known workarounds

None

@pragnagopa (Member)

@maiqbal11 - can you please help with this?

@VenkateshSrini (Author)

@maiqbal11,
Sorry for the urgency, but the query has been open for quite some time with no help. Any insights on this would help us a lot.

@maiqbal11 (Contributor)

@VenkateshSrini, what scenario are you looking to enable with this specifically? Running Functions in a Docker Desktop container is not one of our mainline scenarios, so we'd have to look more deeply into this.

We have documentation on how you can access keys via the admin endpoint (major changes to this are coming shortly):

  1. https://github.com/Azure/azure-functions-host/wiki/Key-management-API (you can use the scm/Kudu endpoint for admin access for apps running in our hosting environment)
  2. https://github.com/Azure/azure-functions-host/wiki/Changes-to-Key-Management-in-Functions-V2
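
For reference, once a master key is available, calls against the key management API look roughly like the following (a sketch; host, port, and function name are placeholders):

# check host status with the master key
curl "http://<function-host>/admin/host/status?code=<master-key>"

# list the keys of a specific function via the key management API
curl "http://<function-host>/admin/functions/<function-name>/keys?code=<master-key>"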

@pragnagopa (Member)

cc @mathewc @balag0
@VenkateshSrini - As @maiqbal11 mentioned, it would help to understand your scenario. Is this for local development?
Here is the information on using a custom Docker image:
https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-function-linux-custom-image

@VenkateshSrini (Author)

My scenario is very simple. I work for a service company that serves a big enterprise client. The client is still not 'all in' on public cloud; they will soon choose their public cloud vendor. They currently have Docker and Kubernetes in their in-house web farm. They want to go serverless and are exploring which serverless platform would give them identical benefits in both public cloud and private cloud (Kubernetes). At present the client likes the simple programming model of Azure Functions, the power of Durable Functions, and the extension points Azure provides in terms of custom triggers and bindings.

But at the same time, they see that a variety of security options are available in public cloud. By having access to the admin endpoint they also get to do more. Since it is the same runtime and the same code that runs in the cloud and locally, they would go for Azure Functions if they are able to reach the admin endpoints even in their locally hosted container.

To get access to the admin endpoint in Azure Functions (whether in the cloud or in a container), you need access to the master key. In Azure we can take this from the configuration blades. However, when running the same locally, the expectation is that we can set this key through the environment, so that using this key we can control the admin endpoint of functions hosted in a container. At present we are not able to do this. We do not have SCM or Kudu when accessing the functions on a container deployed locally (not on Azure).
Please let us know how to achieve this.

@ahmelsayed (Contributor) commented Mar 28, 2019

@VenkateshSrini Here are some ways you can achieve that today as a workaround. I understand that none of these are really good solutions and we should provide a better way to expose this. I'm working with @fabiocav to try to find a workflow that makes sense.

Option 1: Creating host.json secrets file on Blob storage

Prerequisite: having AzureWebJobsStorage defined.

By default the runtime stores all keys in the storage account defined in AzureWebJobsStorage as blobs under:

  • azure-webjobs-secrets/{AppName-Or-HostId}/{functionName}.json for functions
  • azure-webjobs-secrets/{AppName-Or-HostId}/host.json for the admin/master key

{AppName-Or-HostId} is defined by either of the following environment variables:

AzureFunctionsWebHost__hostid
# or 
WEBSITE_SITE_NAME

I think WEBSITE_SITE_NAME has precedence.

So you can upload a host.json file that looks like this:

{
  "masterKey": {
    "name": "master",
    "value": "MASTER_KEY",
    "encrypted": false
  },
  "functionKeys": [ ]
}                 

Then you can use MASTER_KEY as your masterkey.

To put the whole thing together, here is a bash script that automates the process:

# assume you have docker, az-cli, curl

# put whatever name you use to identify your deployment
HOST_ID=your_app_id
# the storage account backing AzureWebJobsStorage
StorageAccountName=your_storage_account_name
# what you'll use for a masterKey; it can be any string
MASTER_KEY=$(date | sha256sum | cut -d ' ' -f 1)

# Get the storage connection string
CS=$(az storage account show-connection-string --name "$StorageAccountName" --output json --query "connectionString" | tr -d '"')

# Create azure-webjobs-secrets container if it doesn't exist
az storage container create --name azure-webjobs-secrets --connection-string $CS

# Create a local host.json secrets file with a MASTER_KEY placeholder
cat > host.json <<'EOF'
{
  "masterKey": {
    "name": "master",
    "value": "MASTER_KEY",
    "encrypted": false
  },
  "functionKeys": [ ]
}
EOF

# Replace the MASTER_KEY placeholder in the file with your $MASTER_KEY
sed -i "s/MASTER_KEY/$MASTER_KEY/g" host.json

# Upload that file to the azure-webjobs-secrets container 
az storage blob upload --container-name azure-webjobs-secrets \
    --name $HOST_ID/host.json \
    --connection-string $CS \
    --file host.json

# start your container with the settings above
docker run --rm -d -e AzureWebJobsStorage="$CS" \
    -e AzureFunctionsWebHost__hostid="$HOST_ID" \
    -p 8080:80 \
    <yourImageName>

# This should work
curl "http://localhost:8080/admin/host/status?code=$MASTER_KEY"

Option 2: Creating the host.json secrets file in the container's file system

This doesn't require a storage account if you're not using one.

If you set the env var AzureWebJobsSecretStorageType=files, then you can put the same host.json file from above in /azure-functions-host/Secrets.

This is a bit cumbersome since it requires the secrets to be baked into your image, which is not very secure. You can also just mount the path instead. For example:

docker run -v /etc/my-secrets:/azure-functions-host/Secrets \
    -e AzureWebJobsSecretStorageType=files \
    -p 8080:80 \
    <yourImageName>

Then the host should read and store keys on your machine under /etc/my-secrets/.
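
To start with a known master key rather than a generated one, you can pre-seed the mounted folder before starting the container (a minimal sketch; the path and the MASTER_KEY value are just examples):

# create the folder that will be mounted at /azure-functions-host/Secrets
mkdir -p /etc/my-secrets

# write a host.json with a known, unencrypted master key
cat > /etc/my-secrets/host.json <<'EOF'
{
  "masterKey": { "name": "master", "value": "MASTER_KEY", "encrypted": false },
  "functionKeys": [ ]
}
EOF
chmod 600 /etc/my-secrets/host.json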

@ahmelsayed (Contributor)

If you're using Kubernetes, then you can do something like this

apiVersion: v1
kind: Secret
metadata:
  name: azure-functions-secrets
type: Opaque
stringData:
  host.json: |-
    {
      "masterKey": {
        "name": "master",
        "value": "MASTER_KEY",
        "encrypted": false
      },
      "functionKeys": [ ]
    }
  httpTrigger.json: |-
    {
      "keys": [
        {
          "name": "default",
          "value": "A_FUNCTION_KEY",
          "encrypted": false
        }
      ]
    }

Then you should be able to mount that as a volume by adding this to your pod spec:

spec:
  containers:
  - name: {azure-functions-container-name}
    image: {your-container-image}
    volumeMounts:
    - name: secrets
      mountPath: "/azure-functions-host/Secrets"
      readOnly: true
    env:
    - name: AzureWebJobsSecretStorageType
      value: files
  volumes:
  - name: secrets
    secret:
      secretName: azure-functions-secrets

I haven't tested this yet, but I'll do so tomorrow. It's really just a variation on Option 2 above, using the default Kubernetes mount-secrets-as-volumes mechanism.
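
For what it's worth, one way to verify that setup once the Secret and pod spec are applied (a sketch; the manifest file names and deployment name are assumptions, and the container is assumed to listen on port 80):

# apply the Secret and your deployment/pod manifests (file names are illustrative)
kubectl apply -f azure-functions-secrets.yaml
kubectl apply -f azure-functions-deployment.yaml

# in a second terminal: forward a local port to the pod and call the admin endpoint
kubectl port-forward deploy/<your-deployment-name> 8080:80
curl "http://localhost:8080/admin/host/status?code=MASTER_KEY"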

@VenkateshSrini (Author)

@maiqbal11 - can you please help with this?

No, real-time execution from an on-premise Kubernetes or Cloud Foundry cluster.

@VenkateshSrini (Author)

Thanks a ton. I tried the same locally by modifying the environment using the local.settings.json file. It worked. My next target is to add the same in Docker and test it.

@deyanp commented Oct 7, 2019

@VenkateshSrini, what do you mean by "I tried the same locally by modifying the environment using the local.settings.json file"?

Is there an easier way to inject function keys when testing a function locally in a docker container (Linux)?

And what about deploying the container to production in AKS: how should the secrets be configured there? Using a Kubernetes Secret and a mounted volume, or is there another, better way?

@VenkateshSrini (Author)

You can see all the options stated above. In AKS, to the best of my knowledge, use mounted volumes. Another approach that strikes me now, though I have yet to try it, is to use Azure Key Vault as the key store and mount it as a FlexVolume. Please see this blog:
https://dzone.com/articles/access-azure-key-vault-from-your-kubernetes-pods

@tanzencoder

I've used the approach above and it works great, except when I had an anonymous auth level on an HttpTrigger function that expected a query param of code=somevalue: the anonymous function would not work and threw a 500 error with no logging. If I don't mount the host keys, the anonymous functions work but my AuthorizationLevel.Function functions are unauthorized. Any ideas on how I can get around this?

@changeforan

Hi @ahmelsayed, is there any other option in 2022? I noticed that AzureWebJobsSecretStorageType can be set to keyvault now, but there is no doc showing how to store the master key in Key Vault for an Azure Function.
Can you take a look?

@v1ferrarij

Yes, what is the best way to achieve this in 2022?

We are trying to hit an admin endpoint on a Function App hosted in Kubernetes.

Thanks!

@DamolAAkinleye

Please, what is the best way to do this in 2023? There is no clear documentation on how to do this in the docs.

We are trying to hit the admin endpoint and we just get unauthorized responses.

@Insadem commented May 3, 2023

What is the best way to achieve this in {insert your year}, asked for the third time?
Probably we will get an answer in 2030.

@changeforan commented May 4, 2023

I have successfully stored/read the master key in Azure Key Vault, and I'd like to share my experience.

  1. Set the environment variable AzureWebJobsSecretStorageType=keyvault
  2. Set env var AzureWebJobsSecretStorageKeyVaultUri=https://{your-keyvault-uri}
  3. [Optional] Set env var FUNCTIONS_SECRETS_PATH=C:/approot/secrets (I am hosting the Azure Functions host runtime in a Windows container, and I found that if this env var is missing, the runtime raises an exception. Set this path to any empty folder you like.)
  4. Set other env variables depending on the managed identity type (ref: Secret repositories). I am using a user-assigned managed identity, so I set env var AzureWebJobsSecretStorageKeyVaultClientId=<my-client-id>
  5. Grant your identity access to Key Vault. I enabled Get/List/Create secret permissions for my identity.
  6. Bind your user-assigned managed identity to your pod. I am using aad-pod-identity.
  7. Start your host runtime. If you cannot find the master key in your Key Vault secrets, don't worry; this can happen when the function runs for the first time. Call the Admin API to trigger the host runtime to create the master key, e.g. GET http://{admin-end-point}/admin/host/status, and then you will find a new secret called "host--masterKey--master" in your Key Vault. (A minimal sketch of these settings follows below.)
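
For illustration, the settings from the steps above could look like this (shown as shell exports for brevity; in Kubernetes they would go in the pod spec's env section, and the Key Vault name, client ID, and admin endpoint are placeholders):

export AzureWebJobsSecretStorageType=keyvault
export AzureWebJobsSecretStorageKeyVaultUri="https://<your-keyvault-name>.vault.azure.net/"
export AzureWebJobsSecretStorageKeyVaultClientId="<your-client-id>"

# the first admin call triggers the host to create the master key secret if it does not exist yet
curl -i "http://<admin-end-point>/admin/host/status"

# afterwards, the secret should be visible in Key Vault
az keyvault secret show --vault-name "<your-keyvault-name>" --name "host--masterKey--master"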

@cmcconomyfwig

Here is the solution I synthesized from the EXTREMELY HELPFUL conversation above:

In your compose.yaml, include the following:

    environment:
      - AzureWebJobsSecretStorageType=files  # looks for secrets under /azure-functions-host/Secrets/

In your Dockerfile, include the following lines:

# for local run - create a known key ('test') for x-functions-key
RUN mkdir -p /azure-functions-host/Secrets/
RUN echo '{"masterKey":{"name":"master","value":"test","encrypted":false},"functionKeys":[]}' > /azure-functions-host/Secrets/host.json

This worked, so now I can use the value test for my x-functions-key. Hope this helps someone!
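
With that in place, a quick local check could look like this (a sketch; the 8080:80 port mapping and the function route are assumptions):

# admin endpoint with the known master key
curl "http://localhost:8080/admin/host/status?code=test"

# a function protected with AuthorizationLevel.Function, passing the key as a header
curl -H "x-functions-key: test" "http://localhost:8080/api/<your-function>"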

@SeanFeldman

Thank you, @cmcconomyfwig. Your post was extremely helpful.

I simplified this a little more and set the environment variable alongside the commands that produce host.json, to keep it all in a single place.

# Log to console and read secrets from the file system
ENV AzureFunctionsJobHost__Logging__Console__IsEnabled=true \
    AzureWebJobsSecretStorageType=files

# for local run - create a known key ('test') for x-functions-key
RUN mkdir -p /azure-functions-host/Secrets/
RUN echo '{"masterKey":{"name":"master","value":"test","encrypted":false},"functionKeys":[]}' > /azure-functions-host/Secrets/host.json

@nrjohnstone

@cmcconomyfwig your solution above means that your container isn't really suitable for production, only local testing.

If you instead use environment variables and mount a host.json with the master key, you can build the container once, run it locally using a known "test" master key, and then deploy the same container to Azure, where it will run correctly with the master key from the environment.
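
A minimal sketch of that approach (the image name and local path are illustrative): build the image once with no baked-in secrets, then mount a folder containing a host.json with a known test key only for local runs:

# local run only: mount a local folder with host.json (master key "test") into the secrets path
docker run --rm -p 8080:80 \
    -e AzureWebJobsSecretStorageType=files \
    -v "$(pwd)/local-secrets:/azure-functions-host/Secrets:ro" \
    <yourImageName>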

@harinarayanan-muthukumar commented Apr 10, 2024

(Quoting @changeforan's Key Vault steps from above.)

Hi @changeforan - thanks for the help with the Key Vault secret method, very useful. I will try it out. I do have a couple of quick questions based on your comments.

  1. Azure doesn't expose Windows-based images for Azure Functions. How did you achieve it? Can you shed some light so I can attempt the same? Would you recommend doing this for production (Windows)?
  2. Has anyone figured out a way to access Kudu while running on Kubernetes?

@usmankhawar22

Any leads on how to make an authenticated call via API key in Azure Container Apps? Basically the same Function App running in a container on Azure Container Apps.
