
How do I get admin end point and master key when running in container #4147

VenkateshSrini opened this issue Feb 27, 2019 · 12 comments

@VenkateshSrini VenkateshSrini commented Feb 27, 2019

I'm running Azure function in container. I want to retrieve the master key and hit the admin endpoint for the function running in container. How do I do that?

Investigative information

Please provide the following:

  • Function App version (1.0 or 2.0): 2.0
  • Function App name: N/A
  • Function name(s) (as appropriate): N/A
  • Invocation ID: N/A
  • Region: Container

Repro steps


  1. Create an Azure function and create the Dockerfile for it
  2. Host the Docker container in Docker Desktop for Windows CE
  3. Access the admin endpoint. It is not accessible

Expected behavior

I should be able to retrieve the master key and access the admin endpoint

Actual behavior


The Azure function admin endpoint is not accessible inside Docker container

Known workarounds

None

@pragnagopa pragnagopa commented Mar 25, 2019

@maiqbal11 - can you please help with this?

@VenkateshSrini VenkateshSrini commented Mar 26, 2019

@maiqbal11,
Sorry for the urgency, but this query has been open for quite some time with no response. Any insights on this would help us a lot.

@maiqbal11 maiqbal11 commented Mar 26, 2019

@VenkateshSrini, what scenario are you looking to enable with this specifically? Running Functions in a Docker Desktop container is not one of our mainline scenarios so we'd have to look more deeply into this.

We have documentation on how you can access keys via the admin endpoint (major changes to this are coming shortly):

  1. https://github.com/Azure/azure-functions-host/wiki/Key-management-API (you can use the scm/Kudu endpoint for admin access for apps running in our hosting environment)
  2. https://github.com/Azure/azure-functions-host/wiki/Changes-to-Key-Management-in-Functions-V2

@pragnagopa pragnagopa commented Mar 26, 2019

cc @mathewc @balag0
@VenkateshSrini - As @maiqbal11 mentioned it would help to understand your scenario. Is this for local development?
Here is the information on using custom docker image
https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-function-linux-custom-image

@VenkateshSrini VenkateshSrini commented Mar 27, 2019

My scenario is very simple. I work for a service company that serves a big enterprise client. The client is still not 'all' into public cloud; they will soon make a choice on their public cloud vendor. They currently have Docker and Kubernetes in their in-house web farm. They want to go serverless and are exploring which serverless platform would give them identical benefits in both public cloud and private cloud (Kubernetes). At present the client likes the simple programming model of Azure Functions, the power of Durable Functions, and the extension points (custom triggers and bindings) provided by Azure.

At the same time, they see that there are a variety of security options available in the public cloud, and access to the admin endpoint lets them do more. Since it is the same runtime and same code that runs in the cloud and locally, they would go for Azure Functions if they can reach the admin endpoints even in their locally hosted container.

To access the admin endpoint of an Azure function (whether in the cloud or in a container) you need the master key. In Azure we can take this from the configuration blades. However, when running the same locally, the expectation is that we can set this key through the environment, so that using this key we can control the admin endpoint of functions hosted in a container. At present we are not able to do this: there is no SCM or Kudu endpoint when accessing functions in a container deployed locally (not on Azure).
Please let us know how to achieve this.

@ahmelsayed ahmelsayed commented Mar 28, 2019

@VenkateshSrini Here are some ways you can achieve that today as a workaround. I understand that none of these are really good solutions and we should provide some better way to expose this. I'm working with @fabiocav to try to find a workflow that makes sense.

Option 1: Creating a host.json secrets file in Blob storage

Prerequisite: having AzureWebJobsStorage defined.

By default the runtime stores all keys in the storage account defined in AzureWebJobsStorage, as blobs under

  • azure-webjobs-secrets/{AppName-Or-HostId}/{functionName}.json for function keys
  • azure-webjobs-secrets/{AppName-Or-HostId}/host.json for the admin/master key

{AppName-Or-HostId} is defined by either of the following environment variables:

AzureFunctionsWebHost__hostid
# or 
WEBSITE_SITE_NAME

I think WEBSITE_SITE_NAME has precedence.

so you can upload a host.json file that looks like

{
  "masterKey": {
    "name": "master",
    "value": "MASTER_KEY",
    "encrypted": false
  },
  "functionKeys": [ ]
}

Then you can use MASTER_KEY as your master key.

To put the whole thing together, here is a bash script that automates the process:

# assume you have docker, az-cli, curl

# put whatever name you use to identify your deployment
HOST_ID=your_app_id
# what you'll use for a masterKey it can be any string
MASTER_KEY=$(date | sha256sum | cut -d ' ' -f 1 )

# Your storage account name
StorageAccountName=your_storage_account

# Get connection string
CS=$(az storage account show-connection-string --name $StorageAccountName --output json --query "connectionString" | tr -d '"')

# Create azure-webjobs-secrets container if it doesn't exist
az storage container create --name azure-webjobs-secrets --connection-string $CS

# Create a local host.json that contains:
# {
#   "masterKey": {
#     "name": "master",
#     "value": "MASTER_KEY",
#     "encrypted": false
#   },
#   "functionKeys": [ ]
# } 

# Replace MASTER_KEY in the file with your $MASTER_KEY
sed -i "s/MASTER_KEY/$MASTER_KEY/g" host.json

# Upload that file to the azure-webjobs-secrets container 
az storage blob upload --container-name azure-webjobs-secrets \
    --name $HOST_ID/host.json \
    --connection-string $CS \
    --file host.json

# start your container with the settings above
docker run --rm -d -e AzureWebJobsStorage="$CS" \
    -e AzureFunctionsWebHost__hostid="$HOST_ID" \
    -p 8080:80 \
    <yourImageName>

# This should work
curl "http://localhost:8080/admin/host/status?code=$MASTER_KEY"

Option 2: Creating a host.json secrets file in the container filesystem

This doesn't require a storage account if you're not using one.

If you set an env var AzureWebJobsSecretStorageType=files, then you can put the same host.json file from above in /azure-functions-host/Secrets

This is a bit cumbersome since it requires the secrets to be in your image, which is not very secure. You can also just mount the path if you want. So you can do:

docker run -v /etc/my-secrets:/azure-functions-host/Secrets \
    -e AzureWebJobsSecretStorageType=files \
    -p 8080:80 \
    <yourImageName>

Then the runtime will store and read keys on your machine under /etc/my-secrets/.

@ahmelsayed ahmelsayed commented Mar 28, 2019

If you're using Kubernetes, then you can do something like this

apiVersion: v1
kind: Secret
metadata:
  name: azure-functions-secrets
type: Opaque
stringData:
  host.json: |-
    {
      "masterKey": {
        "name": "master",
        "value": "MASTER_KEY",
        "encrypted": false
      },
      "functionKeys": [ ]
    }
  httpTrigger.json: |-
    {
      "keys": [
        {
          "name": "default",
          "value": "A_FUNCTION_KEY",
          "encrypted": false
        }
      ]
    }

Then you should be able to mount that as a volume by adding this to your pod spec:

spec:
  containers:
  - name: {azure-functions-container-name}
    image: {your-container-image}
    volumeMounts:
    - name: secrets
      mountPath: "/azure-functions-host/Secrets"
      readOnly: true
    env:
    - name: AzureWebJobsSecretStorageType
      value: files
  volumes:
  - name: secrets
    secret:
      secretName: azure-functions-secrets

I haven't tested this yet, but I'll do so tomorrow. It's really just a variation on Option 2 above, using Kubernetes' default mount-secrets-as-volumes mechanism.
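To try the manifests above, the rough workflow would be (the file names here are hypothetical):

```
# Save the Secret and pod spec above as, say, functions-secrets.yaml and functions-pod.yaml
kubectl apply -f functions-secrets.yaml
kubectl apply -f functions-pod.yaml

# Once the pod is running, port-forward to it and hit the admin endpoint
# with the master key value from the Secret
kubectl port-forward pod/<your-pod-name> 8080:80 &
curl "http://localhost:8080/admin/host/status?code=MASTER_KEY"
```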

@VenkateshSrini VenkateshSrini commented Mar 29, 2019

No real-time execution from an on-premise Kubernetes or Cloud Foundry cluster.

@VenkateshSrini VenkateshSrini commented Mar 29, 2019

Thanks a ton. I tried the same locally by modifying the environment through the local.settings.json file. It worked. My next target will be to add the same in Docker and test.
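Presumably that means setting the secret-storage environment variables via local.settings.json, whose Values are loaded as environment variables by the Core Tools host. A sketch of what that might look like (the values here are assumptions, not taken from this thread):

```
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsSecretStorageType": "files",
    "AzureWebJobsStorage": "UseDevelopmentStorage=true"
  }
}
```

With AzureWebJobsSecretStorageType set to files, the host would read and write keys from a local Secrets folder, mirroring Option 2 above.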

@deyanp deyanp commented Oct 7, 2019

@VenkateshSrini , what do you mean by "I tried the same in local by modifying the environment using local.settings.json file"?

Is there an easier way to inject function keys when testing a function locally in a docker container (Linux)?

And what about deploying the container to production in AKS - how should the secrets be configured there? Using a Kubernetes secret and a mounted volume, or is there another, better way?

@VenkateshSrini VenkateshSrini commented Oct 8, 2019

You can see all the options stated above. In AKS, to the best of my knowledge, please use mounted volumes. Another approach that strikes me now, but which I have yet to try: we can use Azure Key Vault as a key store and mount it as a FlexVolume. Please see this blog:
https://dzone.com/articles/access-azure-key-vault-from-your-kubernetes-pods

@tanzencoder tanzencoder commented Jul 16, 2021

I've used the approach above and it works great, except when I have an anonymous auth level on an HTTP-trigger function that expects a query param of code=somevalue: the anonymous function does not work and throws a 500 error with no logging. If I don't mount the host keys, the anonymous functions work but my AuthorizationLevel.Function functions are unauthorized. Any ideas on how I can get around this?
