
Roadmap: Multiple ports support #763

Open
torosent opened this issue May 30, 2023 · 56 comments
Labels: enhancement (New feature or request), roadmap (This feature is on the roadmap)

Comments

@torosent
Member

torosent commented May 30, 2023

8/30/2023 Public Preview: https://azure.microsoft.com/en-us/updates/public-preview-azure-container-apps-supports-additional-tcp-ports/
Docs: https://aka.ms/aca/additional-tcp-ports

@microsoft-github-policy-service bot added the Needs: triage 🔍 (Pending a first pass to read, tag, and assign) label on May 30, 2023
@torosent added the enhancement (New feature or request) and roadmap (This feature is on the roadmap) labels and removed the Needs: triage 🔍 label on May 30, 2023
@Justrebl

Justrebl commented May 31, 2023

Don't need more than just this title : Hyped 💯 😄

@dhilgarth

Awesome, thanks for listening!

@joaquinvacas

Waiting for this; I'm trying to migrate my current SMTP relay to Container Apps, but this breaks the whole process.

For now I've made it work by running two SMTP instances, one for port 25 and another for 465, sharing the same volume.
Not the best practice, but the one that works.
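
With the additional TCP ports preview linked in the top post, this could presumably collapse into a single app. A rough, hypothetical sketch of what the ingress section might look like (port values illustrative only, and subject to the TCP-ingress requirements discussed further down this thread):

ingress:
  external: true
  transport: tcp
  targetPort: 25
  exposedPort: 25
  additionalPortMappings:
    - external: true
      targetPort: 465
      exposedPort: 465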

@Phiph

Phiph commented Jun 19, 2023

Looking forward to this! Thank you team!

@riccardopinosio

This is currently a massive blocker for ACA usage, in my opinion, as there are a ton of services that require multiple exposed ports. Really looking forward to this being implemented.

@sebastian-hans-swm

I'm also really looking forward to being able to do remote debugging of my web applications.

@elruss

elruss commented Jul 21, 2023

Slightly confused here...are we talking about multiple ports exposed externally, or internally, or both?

My use case is Selenium Grid, where I need a "hub" container to have the only externally available ingress port for its management console. But separate node/worker Container Apps in the same environment need to be able to consume an event queue on the hub where the publish and subscribe ports are different.

So, need a container app with one ingress port and two "internal" ports.

@ahmelsayed
Member

Slightly confused here...are we talking about multiple ports exposed externally, or internally, or both?

Both. Each additional port mapping will have its own external/internal flag.

@dvdr00t

dvdr00t commented Jul 28, 2023

Very hyped for this! 🚀
Any updates on the work so far? Is there an estimated release date?

@torosent
Member Author

torosent commented Aug 7, 2023

We are working on the docs, but you can use it now with API version 2023-05-02-preview under the ingress section:

"additionalPortMappings": [
              {
                "external": true,
                "targetPort": 1234
              },
              {
                "external": false,
                "targetPort": 2345,
                "exposedPort": 3456
              }
            ]

@elglogins

What is the reason for having a custom VNET as a requirement to expose multiple external HTTP ports?

@drapka

drapka commented Aug 11, 2023

I'm sorry, I'm just starting with Azure and I'm completely lost about how to update an existing ACA to add the additionalPortMappings config using the CLI.
I am using az containerapp update -g groupName -n appName --yaml <yamlPath>

The yaml contains the following section:

    ingress:
      external: true
      transport: Tcp
      allowInsecure: false
      targetPort: 10000
      exposedPort: 10000
      additionalPortMappings:
        - external: true
          targetPort: 1001
          exposedPort: 1001

But it still gives me error: Bad Request({"type":"https://tools.ietf.org/html/rfc7231#section-6.5.1","title":"One or more validation errors occurred.","status":400,"traceId":"00-f4f96e97b9de893a2316e4f101410e53-685b10742c4f49c5-01","errors":{"$":["Unknown properties additionalPortMappings in Microsoft.ContainerApps.WebApi.Views.Version20230401Preview.ContainerAppIngress are not supported"]}})

I believe I have the latest version of the CLI extension; specifying the preview apiVersion at the top of the YAML file seems to have no effect.

When I check the container app's details via the az containerapp show command, I can already see the new additionalPortMappings property, which is of course set to null.

Thanks for any help.

@ahmelsayed
Member

What is the reason of having a custom VNET as a requirement? To expose multiple external http ports?

@elglogins additional ports are all TCP. External http ports are 80/443 only.

@ahmelsayed
Member

ahmelsayed commented Aug 11, 2023

@drapka the cli hasn't been updated yet to use that preview api version so it won't work there.

If you want to live dangerously :) you can put this in a patch.json file

{
  "properties": {
    "configuration": {
      "ingress": {
        "external": true,
        "transport": "Tcp",
        "targetPort": 10000,
        "exposedPort": 10000,
        "additionalPortMappings": [
          {
            "external": true,
            "targetPort": 1001,
            "exposedPort": 1001
          }
        ]
      }
    }
  }
}

then do

# get your app's full resource id
ID=$(az containerapp show -n appName -g groupName -o tsv --query id)

# patch the app using patch.json and api-version=2023-05-02-preview
az rest \
  --method patch \
  --body @patch.json \
  --url "${ID}?api-version=2023-05-02-preview"

# verify the property is there
az rest \
  --method get \
  --url "${ID}?api-version=2023-05-02-preview" | \
  jq -r '.properties.configuration.ingress'

@gcrockenberg

What if the transport for the second port is different, for example gRPC for integration between microservices?

@ahmelsayed
Member

What if the transport for the second port is different. For example, gRPC for integration between microservices?

@gcrockenberg All additional ports are TCP, so any TCP-based protocol (like HTTP/2 or gRPC) should work.

@tiwood

tiwood commented Aug 22, 2023

@ahmelsayed, if I understood this correctly, this won't allow us to expose UDP ports?
If so, are there any plans to introduce this?

We have a use case where we must expose a service over UDP.

@ahmelsayed
Member

Correct, we don't have UDP. Can't speak of any plans myself.

@simonkurtz-MSFT

Hi @torosent and @ahmelsayed,

First, thank you for this feature!

Is this still in development or is it in preview now as the documentation indicates?
Do you have a rough ETA for GA?

https://learn.microsoft.com/en-us/azure/container-apps/ingress-overview#additional-tcp-ports

@ahmelsayed
Member

It's in preview now. You can use it through ARM/bicep (api-version=2023-05-02-preview) or the cli with --yaml option. Here is a bicep sample https://github.com/ahmelsayed/bicep-templates/blob/main/aca-app-multiple-ports.bicep#L20-L36

With --yaml in the CLI, add the following to your ingress section:

additionalPortMappings:
  - external: false
    targetPort: 9090
    exposedPort: 9090
  - external: false
    targetPort: 22
    exposedPort: 22
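
For completeness, a rough sketch of applying such a YAML end to end with the CLI (hypothetical app, group, and file names; assumes a CLI/extension version that understands the preview ingress schema, see the discussion further down this thread):

# 1. Export the current app definition to YAML
az containerapp show -n my-app -g my-rg -o yaml > app.yaml

# 2. Edit app.yaml: add the additionalPortMappings entries shown above
#    under properties.configuration.ingress

# 3. Apply the updated definition
az containerapp update -n my-app -g my-rg --yaml app.yaml

# 4. Verify the mappings are present
az containerapp show -n my-app -g my-rg --query properties.configuration.ingress.additionalPortMappings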

@gcrockenberg

I just tried running a Bicep what-if with that preview API version and did not see the additionalPortMappings applied. I didn't see an error; I just didn't see the port mapping either.

@RocheVal

RocheVal commented Aug 29, 2023

Thanks for this feature!

I tried it with the CLI and the --yaml option, but it doesn't seem to work.

Like @gcrockenberg I didn't get any errors, but the additional ports are not accessible.
The additionalPortMappings section is actually displayed in the result of the CLI command, with the right values in it, but that's the only place I've seen it (which, given that it's in preview, doesn't surprise me).

For information, the "main" HTTP port is working correctly.

@simonkurtz-MSFT

@chadray, you just worked on this and got it working with external: true, right? I'm not sure I fully understand that property yet, but @gcrockenberg and @RocheVal, it may be worth exploring.

@jlkardas

jlkardas commented Sep 6, 2023

Happy to see the feature is in preview!

I'm running a few container apps with an additional port specified on each one so I can facilitate gRPC between containers. My container app env is also deployed to a custom vnet. When I execute a grpcurl request within one container to another, I receive a successful response when addressing the internal IP address of the container, (e.g. grpcurl -plaintext 10.5.0.37:443 list). However, I cannot get a successful response when addressing the container by its host name + DNS suffix (e.g. my-service.<unique-identifier>.<location>.azurecontainerapps.io:443).

The additional ports are all external. Is there something I am missing?

additionalPorts:
  - external: true
    targetPort: 443
    exposedPort: 443

@pizerg

pizerg commented Sep 21, 2023

Trying to deploy a container app with 4 external TCP ports using the CLI + YAML, the deployment completes just fine but the provisioning status is stuck at "Provisioning". The only information available is the following log from the System Logs stream:

{"TimeStamp":"2023-09-21T08:58:32Z","Type":"Normal","ContainerAppName":null,"RevisionName":null,"ReplicaName":null,"Msg":"Connecting to the events collector...","Reason":"StartingGettingEvents","EventSource":"ContainerAppController","Count":1}
{"TimeStamp":"2023-09-21T08:58:35Z","Type":"Normal","ContainerAppName":null,"RevisionName":null,"ReplicaName":null,"Msg":"Successfully connected to events server","Reason":"ConnectedToEventsServer","EventSource":"ContainerAppController","Count":1}
{"TimeStamp":"2023-09-21 08:56:47 \u002B0000 UTC","Type":"Normal","ContainerAppName":"[APP_NAME_HERE]","RevisionName":"","ReplicaName":"","Msg":"Deactivating old revisions for ContainerApp \u0027[APP_NAME_HERE]\u0027","Reason":"RevisionDeactivating","EventSource":"ContainerAppController","Count":2}
{"TimeStamp":"2023-09-21 08:56:48 \u002B0000 UTC","Type":"Normal","ContainerAppName":"[APP_NAME_HERE]","RevisionName":"[APP_NAME_HERE]--z00g9fu","ReplicaName":"","Msg":"Successfully provisioned revision \[APP_NAME_HERE]--z00g9fu\u0027","Reason":"RevisionReady","EventSource":"ContainerAppController","Count":3}
{"TimeStamp":"2023-09-21 08:56:47 \u002B0000 UTC","Type":"Normal","ContainerAppName":"[APP_NAME_HERE]","RevisionName":"","ReplicaName":"","Msg":"Successfully updated containerApp: [APP_NAME_HERE]","Reason":"ContainerAppReady","EventSource":"ContainerAppController","Count":2}
{"TimeStamp":"2023-09-21 08:56:48 \u002B0000 UTC","Type":"Normal","ContainerAppName":"[APP_NAME_HERE]","RevisionName":"","ReplicaName":"","Msg":"Updating containerApp: [APP_NAME_HERE]","Reason":"ContainerAppUpdate","EventSource":"ContainerAppController","Count":11}
{"TimeStamp":"2023-09-21 08:56:48 \u002B0000 UTC","Type":"Normal","ContainerAppName":"[APP_NAME_HERE]","RevisionName":"[APP_NAME_HERE]--z00g9fu","ReplicaName":"","Msg":"Updating revision : [APP_NAME_HERE]--z00g9fu","Reason":"RevisionUpdate","EventSource":"ContainerAppController","Count":10}
{"TimeStamp":"2023-09-21 08:56:48 \u002B0000 UTC","Type":"Normal","ContainerAppName":"[APP_NAME_HERE]","RevisionName":"","ReplicaName":"","Msg":"Setting traffic weight of \u0027100%\u0027 for revision \u0027[APP_NAME_HERE]--z00g9fu\u0027","Reason":"RevisionUpdate","EventSource":"ContainerAppController","Count":3}
{"TimeStamp":"2023-09-21 08:56:46 \u002B0000 UTC","Type":"Warning","ContainerAppName":"[APP_NAME_HERE]","RevisionName":"[APP_NAME_HERE]--z00g9fu","ReplicaName":"[APP_NAME_HERE]--z00g9fu-5f4b867ff9-rlhvm","Msg":"0/2 nodes are available: 2 node(s) didn\u0027t match Pod\u0027s node affinity/selector. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.","Reason":"AssigningReplicaFailed","EventSource":"ContainerAppController","Count":0}
{"TimeStamp":"2023-09-21 08:56:53 \u002B0000 UTC","Type":"Normal","ContainerAppName":"[APP_NAME_HERE]","RevisionName":"[APP_NAME_HERE]--z00g9fu","ReplicaName":"[APP_NAME_HERE]--z00g9fu-5f4b867ff9-rlhvm","Msg":"pod didn\u0027t trigger scale-up: 3 node(s) didn\u0027t match Pod\u0027s node affinity/selector","Reason":"NotTriggerScaleUp","EventSource":"ContainerAppController","Count":1}
{"TimeStamp":"2023-09-21T08:59:35Z","Type":"Normal","ContainerAppName":null,"RevisionName":null,"ReplicaName":null,"Msg":"No events since last 60 seconds","Reason":"NoNewEvents","EventSource":"ContainerAppController","Count":1}
{"TimeStamp":"2023-09-21T09:00:36Z","Type":"Normal","ContainerAppName":null,"RevisionName":null,"ReplicaName":null,"Msg":"No events since last 60 seconds","Reason":"NoNewEvents","EventSource":"ContainerAppController","Count":1}
{"TimeStamp":"2023-09-21T09:01:36Z","Type":"Normal","ContainerAppName":null,"RevisionName":null,"ReplicaName":null,"Msg":"No events since last 60 seconds","Reason":"NoNewEvents","EventSource":"ContainerAppController","Count":1}
{"TimeStamp":"2023-09-21 09:01:49 \u002B0000 UTC","Type":"Warning","ContainerAppName":"[APP_NAME_HERE]","RevisionName":"[APP_NAME_HERE]--z00g9fu","ReplicaName":"[APP_NAME_HERE]--z00g9fu-5f4b867ff9-rlhvm","Msg":"0/2 nodes are available: 2 node(s) didn\u0027t match Pod\u0027s node affinity/selector. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.","Reason":"AssigningReplicaFailed","EventSource":"ContainerAppController","Count":0}
{"TimeStamp":"2023-09-21 09:01:55 \u002B0000 UTC","Type":"Normal","ContainerAppName":"[APP_NAME_HERE]","RevisionName":"[APP_NAME_HERE]--z00g9fu","ReplicaName":"[APP_NAME_HERE]--z00g9fu-5f4b867ff9-rlhvm","Msg":"pod didn\u0027t trigger scale-up: 3 node(s) didn\u0027t match Pod\u0027s node affinity/selector","Reason":"NotTriggerScaleUp","EventSource":"ContainerAppController","Count":31}
{"TimeStamp":"2023-09-21T09:02:55Z","Type":"Normal","ContainerAppName":null,"RevisionName":null,"ReplicaName":null,"Msg":"No events since last 60 seconds","Reason":"NoNewEvents","EventSource":"ContainerAppController","Count":1}

The following YAML was used to create the app:

location: westeurope
name: [APP_NAME_HERE]
properties:
  configuration:
    activeRevisionsMode: Single
    secrets:
      [SOME_SECRETS_HERE]
    registries:
      [ACR_SETTINGS_HERE]
    ingress:
      transport: tcp
      allowInsecure: false
      exposedPort: 9100
      targetPort: 9100
      external: true
      additionalPortMappings:
      - exposedPort: 9200
        targetPort: 9200
        external: true
      - exposedPort: 9300
        targetPort: 9300
        external: true
      - exposedPort: 9400
        targetPort: 9400
        external: true
      traffic:
      - latestRevision: true
        weight: 100
  managedEnvironmentId: [ENV_ID_HERE]
  template:
    containers:
    - image: [ACR_NAME_HERE]/[IMAGE_NAME_HERE]:[IMAGE_REVISION_HERE]
      name: [IMAGE_NAME_HERE]
      resources:
        cpu: 0.25
        memory: 0.5Gi
      env:
     [SOME_ENV_REFERENCING_SECRETS]
    scale:
      maxReplicas: 1
      minReplicas: 1
  workloadProfileName: Consumption
type: Microsoft.App/containerApps

After a while, status changes to Provisioned but Running Status becomes "Degraded" and no replica is actually running

@pizerg

pizerg commented Sep 21, 2023

@jlkardas

Have you tried changing the port to something other than 443?

@roxana-muresan

Hello everyone!

When is this feature going to be generally available?

Best regards

@zhenqxuMSFT

zhenqxuMSFT commented Oct 11, 2023

(quoting @jlkardas's comment above: grpcurl succeeds against a replica's internal IP, e.g. grpcurl -plaintext 10.5.0.37:443 list, but not against the app's host name + DNS suffix)

@jlkardas The host name + DNS suffix only supports ports with HTTP transport. You need to use <app name>:<port> for additional ports.
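
For example, from another container app in the same environment (hypothetical app name; the port is taken from the snippet above):

# Dial the app by its name and the additional port instead of the public FQDN
grpcurl -plaintext my-service:443 list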

@zhenqxuMSFT

api-version=2023-05-02-preview

@ahmelsayed I tried the patching solution, in my case it fails with:

Bad Request({"error":{"code":"ContainerAppSecretInvalid","message":"Invalid Request: Container app secret(s) with name(s) 'reg-pswd-f60c7731-bf65' are invalid: value or keyVaultUrl and identity should be provided."}}).

@dummy-andra looks like your secret was not configured correctly in the payload. Make sure you have provided the secret value if it's not a Key Vault secret.

@zhenqxuMSFT

(quoting @pizerg's earlier comment: a container app with one main and three additional external TCP ports deploys, but stays stuck at "Provisioning" with AssigningReplicaFailed / node-affinity warnings in the system logs, and eventually shows "Degraded" with no running replica)

@pizerg this issue is related to additional ports. Are you still hitting the same issue now?

@dummy-andra

dummy-andra commented Oct 11, 2023

(quoting the exchange above about the ContainerAppSecretInvalid error hit when applying the patch with api-version 2023-05-02-preview)

This secret, reg-pswd-f60c7731-bf65, was NOT configured by me.

When deploying a container app with ACR, the ACR password is automatically added to the ACA secrets, and reg-pswd-xxxxxx is the name of the secret generated automatically when the app is created.

Anyway, I recreated the ACA via Bicep and that worked, since updating it with the patch did not work as expected.

@pizerg

pizerg commented Oct 11, 2023

(quoting the exchange above: the container app with additional TCP ports stuck at "Provisioning", and @zhenqxuMSFT's question about whether the issue is still occurring)

@zhenqxuMSFT I opened a support request with the Azure team and they are investigating this. As far as I know, the issue was still happening last week; however, I managed to find a workaround: after the initial deployment fails as described in my original message, I just create a new revision (using the portal, for example), and the new revision deploys correctly and is fully functional, including the 4 TCP ports defined in the original failed deployment.
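
For reference, the same workaround can be scripted rather than done in the portal; a sketch with hypothetical names:

# Force a new revision based on the latest one after the initial deployment gets stuck
az containerapp revision copy -n my-app -g my-rg

# Check revision state afterwards
az containerapp revision list -n my-app -g my-rg -o table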

@cforce

cforce commented Oct 27, 2023

@zhenqxuMSFT
Please support "transport": "http" for additional ports as well.
If we only get TCP, we have to start offloading TLS in the app, which is very ugly.
We want to be able to manage (custom) certs in Azure rather than in our app, also because of load balancing etc.

It works over an additional port (no SSL; we would need to set up extra SSL inside the app).
It does not work over the main port with SSL managed by ACA - is this expected, or is it a bug?

@pizerg

pizerg commented Nov 23, 2023

@zhenqxuMSFT

It seems that the official Azure DevOps pipeline is breaking the additional ports configuration of an existing app that uses this feature (no ingress settings specified in the pipeline stage "Azure Container Apps Deploy" version 1.*) and only keeping the main port active.

@jlkardas

(quoting @pizerg's comment above about the Azure DevOps pipeline breaking the additional ports configuration)

Also experiencing this issue

@zhenqxuMSFT

@cforce Thanks for the feedback, noted.

It does not work over main port with ssl managed by ACA - Is this expected to be a bug?

Did you mean custom domain is not working for you? Could you elaborate more?

@zhenqxuMSFT

@pizerg @jlkardas do you have any container app names and timeframes I can take a look at? Or could you provide some steps for me to repro the issue?

@RocheVal

RocheVal commented Dec 4, 2023

I had to put this subject aside, but now I'm back on it.
When I tried to reuse my "old" YAML file, the additional ports were not working (just like when I tried a few months ago). And if I look at the configuration details, I don't see any of the additional ports (unlike a few months ago, when I could see them in the config).

I used the workaround proposed by @ahmelsayed and updated my app through the API to add the additional ports, and it worked.

So it seems it's only working through the API.

I will continue to use it and let you know if I face other issues.

@zhenqxuMSFT

@RocheVal could you upgrade to the latest CLI and try with --yaml again?
If that still doesn't work, could you send the output of the CLI command with the --debug option to acasupport at microsoft dot com and we will take a look at the issue ASAP.

@RocheVal

RocheVal commented Dec 6, 2023

I updated the az CLI to 2.55.0 and the result is the same (app created but additional ports not working).

I sent an email to acasupport@microsoft.com with the output of cli with --debug.

@Juliehzl
Member

Juliehzl commented Dec 7, 2023

@RocheVal could you install the containerapp CLI extension with az extension add -n containerapp and then try again? Only GA features are in the Azure CLI core; all preview features are in the containerapp extension.
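
A sketch of that sequence (hypothetical app, group, and file names):

# Install the containerapp extension (or update it if already installed)
az extension add -n containerapp
az extension update -n containerapp

# Then retry the update with the YAML that contains additionalPortMappings
az containerapp update -n my-app -g my-rg --yaml app.yaml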

@RocheVal

RocheVal commented Dec 7, 2023

Thank you, it's working correctly with the containerapp extension.

I didn't see that in the docs; I don't know whether it wasn't mentioned or I just didn't read carefully.

But thanks again, it's working as expected now!

@pizerg

pizerg commented Dec 13, 2023

@zhenqxuMSFT

@pizerg @jlkardas do you have any container app names and timeframes I can take a look at? Or could you provide some steps for me to repro the issue?

If you contact me privately I can provide the required information. Otherwise, the repro steps are quite straightforward: deploy any container app with additional ports (in our case 1 main TCP port and 3 additional TCP ports, running in a consumption environment). After checking that all ports work correctly, deploy a new revision using the official Azure DevOps "Azure Container Apps Deploy" task (just leave the ingress setting empty) and you should see that only the main port works after that.

@jlkardas

(quoting @pizerg's reply above)

Similarly, if you could provide me with an email address I would be more than happy to provide some documentation for our ACA and release pipeline setup, or whatever required information you may need.

@pizerg

pizerg commented Jan 17, 2024

@zhenqxuMSFT Any update on the issue related to Azure Pipelines ?

@Wycliffe-nml

Wycliffe-nml commented Feb 26, 2024

Hi there,

We have a setup of 3 RabbitMQ container apps running the RabbitMQ Alpine image from Docker Hub.
We have added an extra port for AMQP, and the other port is used for health checks via TCP.
One of the containers is being used in our QA environment and the other 2 are not yet used.
All three are in different resource groups.

We have an issue where the containers keep creating new replicas for no apparent reason, and when they do, they all create new replicas on the same day. We have made sure the containers have 4 GB of RAM and that other resources are fine. The revisions stay the same, but why are new replicas getting created?
Scaling is set to 1 - 1.

We would like to use this setup in production but are not sure how to fix the automatic creation of replicas.
The health of the replicas is Running (at max).

Please assist.

@pizerg

pizerg commented Mar 25, 2024

@zhenqxuMSFT Now that this feature is not in preview anymore, could you give us an update on the issue related to Azure Pipelines and multiple ports?

@boncyrus

boncyrus commented Apr 16, 2024

Is this possible using Azure CLI without yaml config?

@Juliehzl
Member

Is this possible using Azure CLI without yaml config?

Not yet

@bosh-3shape

We use only TCP ports in our ACA, so we don't really need the default 443 and 80 ports open at all. We don't configure any ACA applications to have an ingress on those ports either. However, if I run nmap against the ACA environment's IP, it reports that 443 and 80 are open (in addition to the other TCP ports we expect).

Is there a possibility of not having 443 and 80 open at all? IMO ACA should not even open those ports if there are no ingress apps configured for them.

@jlkardas

Any update here?

@RP-TSB

RP-TSB commented May 30, 2024

Hello, are there any timeframes for supporting this in Terraform, and through the Azure CLI without the need to use YAML? Appreciated.

Projects
Status: Generally Available (Done)