
Improve admin upgrade page UX when upgrade service fails to update containers #7859

Closed · dianabarsan opened this issue Oct 21, 2022 · 12 comments
Labels: Type: Improvement (Make something better)

@dianabarsan (Member)

What feature do you want to improve?
If an upgrade goes well, we expect the API to be down for a couple of seconds while the new container boots up. This means that the admin upgrade page expects to receive HTTP errors while requesting upgrade progress.

At the same time, when we call the upgrade endpoint to finalize the upgrade and the request succeeds, we check whether the currently installed version is the version that was requested via the upgrade. If it is not, we display an error:

image
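
For illustration only, a rough JavaScript sketch of the flow described above. This is not the actual admin-app code: the two endpoint paths appear in the API logs below, but the polling interval, the deploy-info version field and all function names here are assumptions.

// Hypothetical sketch of the admin upgrade page behaviour described above.
// Only the endpoint paths (/api/v2/upgrade, /api/deploy-info) come from the
// logs in this issue; every other name and detail is illustrative.
const requestedVersion = '4.1.0'; // hypothetical version requested via the upgrade

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// While the new API container boots, requests fail - the page treats those
// HTTP errors as "upgrade still in progress" and keeps polling.
const waitForApi = async () => {
  for (;;) {
    try {
      const res = await fetch('/api/v2/upgrade');
      if (res.ok) {
        return;
      }
    } catch (err) {
      // expected for a couple of seconds while the API is down
    }
    await sleep(2000);
  }
};

// Once the upgrade call has been finalized, compare the deployed version with
// the one that was requested; a mismatch is what produces the error above.
const verifyUpgrade = async () => {
  await waitForApi();
  const deployInfo = await (await fetch('/api/deploy-info')).json();
  if (deployInfo.version !== requestedVersion) {
    console.error('Upgrade did not take effect, still running', deployInfo.version);
  }
};

verifyUpgrade();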

The browser logs will only display an error without any details:

image

API logs will not report an error:

2022-10-21 13:38:06 INFO: Saving ddocs for medic-users-meta 
2022-10-21 13:38:06 INFO: Staging complete 
2022-10-21 13:38:06 INFO: Indexing staged views 
2022-10-21 13:38:06 INFO: Indexing views complete 
2022-10-21 13:38:06 INFO: Completing install 
REQ 56bb7cba-9d82-46c0-aa1c-4c4e492ab491 192.168.176.1 - GET /api/v2/upgrade HTTP/1.0
RES 56bb7cba-9d82-46c0-aa1c-4c4e492ab491 192.168.176.1 - GET /api/v2/upgrade HTTP/1.0 200 906 10.413 ms
2022-10-21 13:38:08 INFO: Install complete 
2022-10-21 13:38:08 INFO: Finalizing install 
2022-10-21 13:38:08 INFO: Detected ddoc change - reloading 
2022-10-21 13:38:09 INFO: Deleting existent staged ddocs 
2022-10-21 13:38:09 INFO: Install finalized 
2022-10-21 13:38:09 INFO: Running DB compact and view cleanup for medic 
2022-10-21 13:38:09 INFO: Running DB compact and view cleanup for medic-sentinel 
2022-10-21 13:38:09 INFO: Running DB compact and view cleanup for medic-logs 
2022-10-21 13:38:09 INFO: Running DB compact and view cleanup for medic-users-meta 
REQ 1c127a1b-b657-4a7c-adbd-bb5c85b39d62 192.168.176.1 - GET /api/v2/upgrade HTTP/1.0
2022-10-21 13:38:09 INFO: Last upgrade log is already final. 
RES 1c127a1b-b657-4a7c-adbd-bb5c85b39d62 192.168.176.1 - GET /api/v2/upgrade HTTP/1.0 200 15 10.805 ms
REQ 4c8a6e9a-d25a-43bb-beb8-da36daea155b 192.168.176.1 - GET /api/deploy-info HTTP/1.0
RES 4c8a6e9a-d25a-43bb-beb8-da36daea155b 192.168.176.1 - GET /api/deploy-info HTTP/1.0 200 86 3.770 ms
2022-10-21 13:38:09 DEBUG: Checking for a configured outgoing message service 

Here, the upgrade call succeeding while the installed version stays the same actually indicates an error. The most common cause is that the docker compose files were not updated (because of a name mismatch - see medic/cht-upgrade-service#9).

Describe the improvement you'd like
Analyze the response from the upgrade call to cht-upgrade-service.
If the cht-core compose file was not updated, log an error in the API logs to indicate this.
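
As a hedged sketch of what this check could look like (not the implementation that was merged in #7863): the cht-upgrade-service reply is a map of compose file names to { ok, reason } objects, as seen in the error output captured during AT further down; the function and logger names below are made up for illustration.

// Rough sketch only - not the actual code merged for this issue.
// The cht-upgrade-service replies with a map of compose file names to
// { ok, reason } (see the AT logs below); if nothing was updated, log the
// details and fail loudly instead of reporting success.
const logger = console; // stand-in for the API's logger

const checkUpgradeServiceResponse = (response) => {
  const updated = Object.values(response || {}).filter((result) => result && result.ok);
  if (!updated.length) {
    logger.error('None of the docker-compose files or containers were updated:', response);
    logger.error('If deploying through docker-compose, please make sure that the CHT ' +
                 'docker-compose files that you wish to be updated match the naming convention.');
    throw new Error('No containers were updated');
  }
};

// Example with the kind of response seen during AT:
try {
  checkUpgradeServiceResponse({
    'cht-core.yml': { ok: false, reason: "Existing installation not found. Use '/install' API to install." },
  });
} catch (err) {
  // err.message === 'No containers were updated'
}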

Describe alternatives you've considered
None.

@dianabarsan (Member, Author)

This is ready for AT for self-hosted on: 7859-no-doc-error. Links to compose files are in the PR: #7863.

To AT:

  • download the docker compose files for this branch
  • save them in a folder with different names
  • start cht-upgrade-service pointing to the folder above
  • upgrade to a different branch
  • check errors in API

@ngaruko ngaruko self-assigned this Nov 3, 2022
@ngaruko (Contributor)

ngaruko commented Nov 3, 2022

Seeing these errors, @dianabarsan. Maybe we need another branch to upgrade to that includes this fix.
image

image

API logs (quite chatty):

[stack]: 'StatusCodeError: 500 - {"error":true,"reason":"Pulling haproxy     ... \\r\\nPulling healthcheck ... \\r\\nPulling api         ... \\r\\nPulling sentinel    ... \\r\\nPulling nginx       ... \\r\\nPulling haproxy     ... pulling from s5s3h4s7/cht-haproxy\\r\\nPulling haproxy     ... already exists\\r\\nPulling haproxy     ... already exists\\r\\nPulling haproxy     ... already exists\\r\\nPulling haproxy     ... already exists\\r\\nPulling haproxy     ... already exists\\r\\nPulling haproxy     ... pulling fs layer\\r\\nPulling haproxy     ... pulling fs layer\\r\\nPulling haproxy     ... pulling fs layer\\r\\nPulling haproxy     ... pulling fs layer\\r\\nPulling haproxy     ... pulling fs layer\\r\\nPulling haproxy     ... pulling fs layer\\r\\nPulling haproxy     ... pulling fs layer\\r\\nPulling haproxy     ... waiting\\r\\nPulling haproxy     ... waiting\\r\\nPulling haproxy     ... waiting\\r\\nPulling haproxy     ... waiting\\r\\nPulling nginx       ... pulling from s5s3h4s7/cht-nginx\\r\\nPulling nginx       ... already exists\\r\\nPulling api         ... pulling from s5s3h4s7/cht-api\\r\\nPulling api         ... already exists\\r\\nPulling nginx       ... already exists\\r\\nPulling sentinel    ... error\\r\\nPulling healthcheck ... error\\r\\nPulling nginx       ... already exists\\r\\nPulling api         ... pulling fs layer\\r\\nPulling api         ... pulling fs layer\\r\\nPulling api         ... pulling fs layer\\r\\nPulling api         ... pulling fs layer\\r\\nPulling api         ... waiting\\r\\nPulling api         ... waiting\\r\\nPulling api         ... waiting\\r\\nPulling api         ... waiting\\r\\nPulling nginx       ... already exists\\r\\nPulling nginx       ... already exists\\r\\nPulling nginx       ... pulling fs layer\\r\\nPulling nginx       ... pulling fs layer\\r\\nPulling nginx       ... pulling fs layer\\r\\nPulling nginx       ... pulling fs layer\\r\\nPulling nginx       ... pulling fs layer\\r\\nPulling nginx       ... waiting\\r\\nPulling nginx       ... waiting\\r\\nPulling nginx       ... waiting\\r\\nPulling nginx       ... waiting\\r\\nPulling nginx       ... waiting\\r\\nPulling haproxy     ... downloading (100.0%)\\r\\nPulling haproxy     ... verifying checksum\\r\\nPulling haproxy     ... download complete\\r\\nPulling haproxy     ... downloading (100.0%)\\r\\nPulling haproxy     ... verifying checksum\\r\\nPulling haproxy     ... download complete\\r\\nPulling haproxy     ... downloading (0.4%)\\r\\nPulling haproxy     ... downloading (0.8%)\\r\\nPulling haproxy     ... downloading (1.1%)\\r\\nPulling haproxy     ... downloading (1.9%)\\r\\nPulling haproxy     ... downloading (27.8%)\\r\\nPulling haproxy     ... downloading (100.0%)\\r\\nPulling haproxy     ... verifying checksum\\r\\nPulling haproxy     ... download complete\\r\\nPulling haproxy     ... downloading (2.7%)\\r\\nPulling haproxy     ... downloading (100.0%)\\r\\nPulling haproxy     ... verifying checksum\\r\\nPulling haproxy     ... download complete\\r\\nPulling haproxy     ... downloading (3.0%)\\r\\nPulling haproxy     ... downloading (3.8%)\\r\\nPulling haproxy     ... downloading (4.6%)\\r\\nPulling haproxy     ... downloading (4.9%)\\r\\nPulling haproxy     ... downloading (5.3%)\\r\\nPulling haproxy     ... downloading (6.1%)\\r\\nPulling haproxy     ... downloading (6.8%)\\r\\nPulling haproxy     ... downloading (7.6%)\\r\\nPulling haproxy     ... downloading (35.8%)\\r\\nPulling haproxy     ... downloading (100.0%)\\r\\nPulling haproxy     ... 
verifying checksum\\r\\nPulling haproxy     ... download complete\\r\\nPulling haproxy     ... downloading (100.0%)\\r\\nPulling haproxy     ... verifying checksum\\r\\nPulling haproxy     ... download complete\\r\\nPulling haproxy     ... downloading (8.4%)\\r\\nPulling haproxy     ... downloading (9.1%)\\r\\nPulling haproxy     ... downloading (9.9%)\\r\\nPulling haproxy     ... downloading (10.3%)\\r\\nPulling haproxy     ... downloading (11.0%)\\r\\nPulling haproxy     ... downloading (11.8%)\\r\\nPulling haproxy     ... downloading (12.5%)\\r\\nPulling haproxy     ... downloading (13.3%)\\r\\nPulling haproxy     ... downloading (13.7%)\\r\\nPulling haproxy     ... downloading (14.0%)\\r\\nPulling api         ... download complete\\r\\nPulling haproxy     ... downloading (14.4%)\\r\\nPulling haproxy     ... downloading (15.2%)\\r\\nPulling haproxy     ... downloading (16.0%)\\r\\nPulling api         ... downloading (0.6%)\\r\\nPulling haproxy     ... downloading (16.7%)\\r\\nPulling api         ... downloading (1.2%)\\r\\nPulling haproxy     ... downloading (17.1%)\\r\\nPulling api         ... downloading (1.8%)\\r\\nPulling haproxy     ... downloading (17.5%)\\r\\nPulling api         ... downloading (2.4%)\\r\\nPulling haproxy     ... downloading (17.9%)\\r\\nPulling api         ... downloading (3.0%)\\r\\nPulling haproxy     ... downloading (18.2%)\\r\\nPulling api         ... downloading (1.0%)\\r\\nPulling api         ... downloading (3.6%)\\r\\nPulling haproxy     ... downloading (18.6%)\\r\\nPulling api         ... downloading (2.1%)\\r\\nPulling api         ... downloading (3.2%)\\r\\nPulling api         ... downloading (4.2%)\\r\\nPulling api         ... downloading (4.3%)\\r\\nPulling haproxy     ... downloading (19.0%)\\r\\nPulling api         ... downloading (6.5%)\\r\\nPulling api         ... downloading (7.6%)\\r\\nPulling api         ... downloading (8.7%)\\r\\nPulling api         ... downloading (4.8%)\\r\\nPulling haproxy     ... downloading (19.4%)\\r\\nPulling api         ... downloading (10.8%)\\r\\nPulling api         ... downloading (11.9%)\\r\\nPulling api         ... downloading (5.4%)\\r\\nPulling api         ... downloading (14.0%)\\r\\nPulling haproxy     ... downloading (19.8%)\\r\\nPulling api         ... downloading (16.0%)\\r\\nPulling api         ... downloading (18.1%)\\r\\nPulling api         ... downloading (6.0%)\\r\\nPulling haproxy     ... downloading (20.1%)\\r\\nPulling api         ... downloading (20.2%)\\r\\nPulling api         ... downloading (22.2%)\\r\\nPulling api         ... downloading (6.6%)\\r\\nPulling api         ... downloading (24.4%)\\r\\nPulling haproxy     ... downloading (20.5%)\\r\\nPulling api         ... downloading (27.4%)\\r\\nPulling api         ... downloading (7.3%)\\r\\nPulling haproxy     ... downloading (20.9%)\\r\\nPulling api         ... downloading (30.5%)\\r\\nPulling api         ... downloading (7.9%)\\r\\nPulling api         ... downloading (32.5%)\\r\\nPulling haproxy     ... downloading (21.3%)\\r\\nPulling api         ... downloading (34.6%)\\r\\nPulling api         ... downloading (8.5%)\\r\\nPulling api         ... downloading (37.7%)\\r\\nPulling haproxy     ... downloading (21.7%)\\r\\nPulling api         ... downloading (39.7%)\\r\\nPulling api         ... downloading (41.7%)\\r\\nPulling api         ... downloading (9.1%)\\r\\nPulling haproxy     ... downloading (22.0%)\\r\\nPulling api         ... downloading (43.8%)\\r\\nPulling api         ... downloading (45.8%)\\r\\nPulling api         ... 
downloading (9.7%)\\r\\nPulling api         ... downloading (48.1%)\\r\\nPulling haproxy     ... downloading (22.4%)\\r\\nPulling api         ... downloading (50.2%)\\r\\nPulling api         ... downloading (10.3%)\\r\\nPulling api         ... downloading (52.2%)\\r\\nPulling haproxy     ... downloading (22.8%)\\r\\nPulling api         ... downloading (54.2%)\\r\\nPulling api         ... downloading (10.9%)\\r\\nPulling api         ... downloading (57.4%)\\r\\nPulling haproxy     ... downloading (23.2%)\\r\\nPulling api         ... downloading (59.5%)\\r\\nPulling api         ... downloading (11.5%)\\r\\nPulling haproxy     ... downloading (23.6%)\\r\\nPulling api         ... downloading (62.5%)\\r\\nPulling api         ... downloading (64.6%)\\r\\nPulling api         ... downloading (12.1%)\\r\\nPulling haproxy     ... downloading (23.9%)\\r\\nPulling api         ... downloading (67.7%)\\r\\nPulling api         ... downloading (69.7%)\\r\\nPulling api         ... downloading (12.7%)\\r\\nPulling api         ... downloading (71.8%)\\r\\nPulling haproxy     ... downloading (24.3%)\\r\\nPulling api         ... downloading (74.8%)\\r\\nPulling api         ... downloading (13.3%)\\r\\nPulling haproxy     ... downloading (24.7%)\\r\\nPulling api         ... downloading (76.8%)\\r\\nPulling api         ... downloading (78.9%)\\r\\nPulling api         ... downloading (13.9%)\\r\\nPulling api         ... downloading (81.0%)\\r\\nPulling haproxy     ... downloading (25.1%)\\r\\nPulling api         ... downloading (83.0%)\\r\\nPulling api         ... downloading (14.6%)\\r\\nPulling api         ... downloading (86.2%)\\r\\nPulling haproxy     ... downloading (25.5%)\\r\\nPulling api         ... downloading (88.4%)\\r\\nPulling api         ... downloading (15.2%)\\r\\nPulling api         ... downloading (90.5%)\\r\\nPulling haproxy     ... downloading (25.9%)\\r\\nPulling api         ... downloading (92.6%)\\r\\nPulling api         ... downloading (15.8%)\\r\\nPulling api         ... downloading (95.6%)\\r\\nPulling haproxy     ... downloading (26.2%)\\r\\nPulling api         ... downloading (97.6%)\\r\\nPulling api         ... downloading (16.4%)\\r\\nPulling api         ... downloading (98.7%)\\r\\nPulling api         ... verifying checksum\\r\\nPulling api         ... download complete\\r\\nPulling haproxy     ... downloading (26.6%)\\r\\nPulling api         ... downloading (17.0%)\\r\\nPulling haproxy     ... downloading (27.0%)\\r\\nPulling api         ... downloading (17.6%)\\r\\nPulling haproxy     ... downloading (27.4%)\\r\\nPulling api         ... downloading (18.2%)\\r\\nPulling haproxy     ... downloading (27.7%)\\r\\nPulling api         ... downloading (18.8%)\\r\\nPulling haproxy     ... downloading (28.1%)\\r\\nPulling api         ... downloading (19.4%)\\r\\nPulling haproxy     ... downloading (28.5%)\\r\\nPulling api         ... downloading (20.0%)\\r\\nPulling haproxy     ... downloading (28.9%)\\r\\nPulling api         ... downloading (20.7%)\\r\\nPulling haproxy     ... downloading (29.3%)\\r\\nPulling api         ... downloading (1.0%)\\r\\nPulling api         ... downloading (21.3%)\\r\\nPulling haproxy     ... downloading (29.7%)\\r\\nPulling api         ... downloading (2.0%)\\r\\nPulling api         ... downloading (3.1%)\\r\\nPulling api         ... downloading (21.9%)\\r\\nPulling api       '... 32109 more characters
} 

@ngaruko (Contributor)

ngaruko commented Nov 3, 2022

Files I am using:

cht-core.yml
version: '3.9'

services:
  haproxy:
    image: public.ecr.aws/s5s3h4s7/cht-haproxy:4.0.0-7859-no-doc-error
    hostname: haproxy
    environment:
      - "HAPROXY_IP=${HAPROXY_IP:-haproxy}"
      - "COUCHDB_USER=${COUCHDB_USER:-admin}"
      - "COUCHDB_PASSWORD=${COUCHDB_PASSWORD}"
      - "COUCHDB_SERVERS=${COUCHDB_SERVERS:-couchdb}"
      - "HAPROXY_PORT=${HAPROXY_PORT:-5984}"
      - "HEALTHCHECK_ADDR=${HEALTHCHECK_ADDR:-healthcheck}"
    logging:
      driver: "local"
      options:
        max-size: "${LOG_MAX_SIZE:-50m}"
        max-file: "${LOG_MAX_FILES:-20}"
    networks:
      - cht-net
    expose:
      - ${HAPROXY_PORT:-5984}

  healthcheck:
    image: public.ecr.aws/s5s3h4s7/cht-haproxy-healthcheck:4.0.0-7859-no-doc-error
    environment:
      - "COUCHDB_SERVERS=${COUCHDB_SERVERS:-couchdb}"
      - "COUCHDB_USER=${COUCHDB_USER:-admin}"
      - "COUCHDB_PASSWORD=${COUCHDB_PASSWORD}"
    logging:
      driver: "local"
      options:
        max-size: "${LOG_MAX_SIZE:-50m}"
        max-file: "${LOG_MAX_FILES:-20}"
    networks:
      - cht-net

  api:
    image: public.ecr.aws/s5s3h4s7/cht-api:4.0.0-7859-no-doc-error
    depends_on:
      - haproxy
    expose:
      - "${API_PORT:-5988}"
    environment:
      - COUCH_URL=http://${COUCHDB_USER:-admin}:${COUCHDB_PASSWORD:?COUCHDB_PASSWORD must be set}@haproxy:${HAPROXY_PORT:-5984}/medic
      - BUILDS_URL=${MARKET_URL_READ:-https://staging.dev.medicmobile.org}/${BUILDS_SERVER:-_couch/builds}
      - UPGRADE_SERVICE_URL=${UPGRADE_SERVICE_URL:-http://localhost:5100}
    logging:
      driver: "local"
      options:
        max-size: "${LOG_MAX_SIZE:-50m}"
        max-file: "${LOG_MAX_FILES:-20}"
    networks:
      - cht-net

  sentinel:
    image: public.ecr.aws/s5s3h4s7/cht-sentinel:4.0.0-7859-no-doc-error
    depends_on:
      - haproxy
    environment:
      - COUCH_URL=http://${COUCHDB_USER:-admin}:${COUCHDB_PASSWORD}@haproxy:${HAPROXY_PORT:-5984}/medic
      - API_HOST=api
    logging:
      driver: "local"
      options:
        max-size: "${LOG_MAX_SIZE:-50m}"
        max-file: "${LOG_MAX_FILES:-20}"
    networks:
      - cht-net

  nginx:
    image: public.ecr.aws/s5s3h4s7/cht-nginx:4.0.0-7859-no-doc-error
    depends_on:
      - api
      - haproxy
    ports:
      - "${NGINX_HTTP_PORT:-80}:80"
      - "${NGINX_HTTPS_PORT:-443}:443"
    volumes:
      - cht-ssl:${SSL_VOLUME_MOUNT_PATH:-/root/.acme.sh/}
    environment:
      - API_HOST=api
      - API_PORT=${API_PORT:-5988}
      - "CERTIFICATE_MODE=${CERTIFICATE_MODE:-SELF_SIGNED}"
      - "SSL_CERT_FILE_PATH=${SSL_CERT_FILE_PATH:-/etc/nginx/private/cert.pem}"
      - "SSL_KEY_FILE_PATH=${SSL_KEY_FILE_PATH:-/etc/nginx/private/key.pem}"
      - "COMMON_NAME=${COMMON_NAME:-test-nginx.dev.medicmobile.org}"
      - "EMAIL=${EMAIL:-domains@medic.org}"
      - "COUNTRY=${COUNTRY:-US}"
      - "STATE=${STATE:-California}"
      - "LOCALITY=${LOCALITY:-San_Francisco}"
      - "ORGANISATION=${ORGANISATION:-medic}"
      - "DEPARTMENT=${DEPARTMENT:-Information_Security}"
    logging:
      driver: "local"
      options:
        max-size: "${LOG_MAX_SIZE:-50m}"
        max-file: "${LOG_MAX_FILES:-20}"
    networks:
      - cht-net

networks:
  cht-net:
    name: ${CHT_NETWORK:-cht-net}

volumes:
    cht-ssl:

cht-couchdb.yml
version: '3.9'

services:
  couchdb:
    image: public.ecr.aws/s5s3h4s7/cht-couchdb:4.0.0-7859-no-doc-error
    volumes:
      - ${COUCHDB_DATA:-./srv}:/opt/couchdb/data
      - cht-credentials:/opt/couchdb/etc/local.d/
    environment:
      - "COUCHDB_USER=${COUCHDB_USER:-admin}"
      - "COUCHDB_PASSWORD=${COUCHDB_PASSWORD:?COUCHDB_PASSWORD must be set}"
      - "COUCHDB_SECRET=${COUCHDB_SECRET}"
      - "COUCHDB_UUID=${COUCHDB_UUID}"
      - "SVC_NAME=${SVC_NAME:-couchdb}"
      - "COUCHDB_LOG_LEVEL=${COUCHDB_LOG_LEVEL:-error}"
    restart: always
    logging:
      driver: "local"
      options:
        max-size: "${LOG_MAX_SIZE:-50m}"
        max-file: "${LOG_MAX_FILES:-20}"
    networks:
      cht-net:

volumes:
  cht-credentials:

networks:
  cht-net:
    name: ${CHT_NETWORK:-cht-net}

@ngaruko ngaruko removed their assignment Nov 4, 2022
@dianabarsan (Member, Author)

dianabarsan commented Nov 4, 2022

Hi @ngaruko

Seems like you're hitting the pull rate limit :) This is not related to this change.
This is fixed in the cht-upgrade-service and is also ready for AT: medic/cht-upgrade-service#20

@lorerod lorerod self-assigned this Nov 8, 2022
@lorerod (Contributor)

lorerod commented Nov 8, 2022

Environment
Instance: Local using compose files from PR
Browser: Chrome
Client platform: MacOS
App: Webapp
Version: 4.0.0-7859-no-doc-error

Steps to reproduce:

  1. Download the docker compose files from PR
  2. Save them in a folder with different names (cht-core-renamed.yml and cht-couchdb-renamed.yml)
  3. Download the docker compose file of cht-upgrade-service from main branch
  4. Start cht-upgrade-service pointing to the folder created in step 2.
  5. Try to upgrade to a different branch (save-couchdb-creds (~4.0.0))
  6. The upgrade page shows the "Error triggering update" error and the version doesn't change:

Captura de Pantalla 2022-11-08 a la(s) 11 43 01

  7. The API log displays the error:
2022-11-08 14:42:36 ERROR: None of the docker-compose files or containers were updated: {
  'cht-couchdb.yml': {
    ok: false,
    reason: "Existing installation not found. Use '/install' API to install."
  },
  'cht-core.yml': {
    ok: false,
    reason: "Existing installation not found. Use '/install' API to install."
  },
  'cht-couchdb-clustered.yml': {
    ok: false,
    reason: "Existing installation not found. Use '/install' API to install."
  }
}
2022-11-08 14:42:36 ERROR: If deploying through docker-compose, please make sure that the CHT docker-compose files that you wish to be updated match the naming convention.
2022-11-08 14:42:36 ERROR: If deploying through kubernetes, please make sure the containers you wish to be upgraded match the naming convention.
2022-11-08 14:42:36 INFO: Last upgrade log is already final.
2022-11-08 14:42:36 INFO: Valid Upgrade log tracking file was not found. Not updating.
2022-11-08 14:42:36 ERROR: Error thrown while installing: Error: No containers were updated
    at Object.makeUpgradeRequest (/api/src/services/setup/utils.js:338:11)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async Object.complete (/api/src/services/setup/upgrade-steps.js:121:20)
    at async complete (/api/src/services/setup/upgrade.js:60:20)
    at async safeInstall (/api/src/services/setup/upgrade.js:45:5) {
  [stack]: 'Error: No containers were updated\n' +
    '    at Object.makeUpgradeRequest (/api/src/services/setup/utils.js:338:11)\n' +
    '    at runMicrotasks (<anonymous>)\n' +
    '    at processTicksAndRejections (node:internal/process/task_queues:96:5)\n' +
    '    at async Object.complete (/api/src/services/setup/upgrade-steps.js:121:20)\n' +
    '    at async complete (/api/src/services/setup/upgrade.js:60:20)\n' +
    '    at async safeInstall (/api/src/services/setup/upgrade.js:45:5)',
  [message]: 'No containers were updated'
}
REQ 3f1094a3-0cfd-4296-80e5-0ca8921c2607 172.24.0.1 - GET /api/v2/upgrade HTTP/1.0
2022-11-08 14:42:38 INFO: Last upgrade log is already final.

@dianabarsan (Member, Author)

I'm a bit intrigued as to why you're seeing BOTH "deployment complete" and "error triggering update". I'll have to look into that.

@lorerod (Contributor)

lorerod commented Nov 8, 2022

Oh! You are right, good catch. Sorry about that. I focused on verifying the version and did not notice the successful green message. Let me know what you discover, and I will test it again.

@dianabarsan (Member, Author)

I'm not seeing the "deployment complete" locally.
I did manage to replicate it if I first tried to upgrade to the current version (which reported as successful, because we already were on the current version).

Could you please try again: just open the upgrade page and directly try to upgrade to a different branch?
I'll open a different issue to make sure we clear the "success" status when initiating a new upgrade.

@lorerod (Contributor)

lorerod commented Nov 8, 2022

Retested
Environment
Instance: Local using compose files from PR
Browser: Chrome
Client platform: MacOS
App: Webapp
Version: 4.0.0-7859-no-doc-error

Steps to reproduce:

  1. Download the docker compose files from PR
  2. Save them in a folder with different names (cht-core-renamed.yml and cht-couchdb-renamed.yml)
  3. Download the docker compose file of cht-upgrade-service from main branch
  4. Start cht-upgrade-service pointing to the folder created in step 2.
  5. Try to upgrade to a different branch (save-couchdb-creds (~4.0.0))
  6. The upgrade page:
  • Shows the "Error triggering update" error
  • Does not show the "Deployment complete" green message
  • The version doesn't change
Screenshot:

Captura de Pantalla 2022-11-08 a la(s) 14 53 07

  7. The API log displays the error:
cht-upgrade-service log:
2022-11-08 17:52:25 ERROR: None of the docker-compose files or containers were updated: {
  'cht-couchdb.yml': {
    ok: false,
    reason: "Existing installation not found. Use '/install' API to install."
  },
  'cht-core.yml': {
    ok: false,
    reason: "Existing installation not found. Use '/install' API to install."
  },
  'cht-couchdb-clustered.yml': {
    ok: false,
    reason: "Existing installation not found. Use '/install' API to install."
  }
}
2022-11-08 17:52:25 ERROR: If deploying through docker-compose, please make sure that the CHT docker-compose files that you wish to be updated match the naming convention.
2022-11-08 17:52:25 ERROR: If deploying through kubernetes, please make sure the containers you wish to be upgraded match the naming convention.
2022-11-08 17:52:25 INFO: Last upgrade log is already final.
2022-11-08 17:52:25 INFO: Valid Upgrade log tracking file was not found. Not updating.
2022-11-08 17:52:25 ERROR: Error thrown while installing: Error: No containers were updated
    at Object.makeUpgradeRequest (/api/src/services/setup/utils.js:338:11)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async Object.complete (/api/src/services/setup/upgrade-steps.js:121:20)
    at async complete (/api/src/services/setup/upgrade.js:60:20)
    at async safeInstall (/api/src/services/setup/upgrade.js:45:5) {
  [stack]: 'Error: No containers were updated\n' +
    '    at Object.makeUpgradeRequest (/api/src/services/setup/utils.js:338:11)\n' +
    '    at runMicrotasks (<anonymous>)\n' +
    '    at processTicksAndRejections (node:internal/process/task_queues:96:5)\n' +
    '    at async Object.complete (/api/src/services/setup/upgrade-steps.js:121:20)\n' +
    '    at async complete (/api/src/services/setup/upgrade.js:60:20)\n' +
    '    at async safeInstall (/api/src/services/setup/upgrade.js:45:5)',
  [message]: 'No containers were updated'
}
REQ 6ac17551-ecc0-4b38-b7ef-4b58e16fc9a3 172.26.0.1 - GET /api/v2/upgrade HTTP/1.0
2022-11-08 17:52:26 INFO: Last upgrade log is already final.

@dianabarsan (Member, Author)

Thanks so much @lorerod

I'll create the issue about clearing the success status shortly.

@lorerod (Contributor)

lorerod commented Nov 8, 2022

Thank you, @dianabarsan. I will follow up on that issue.

@dianabarsan (Member, Author)

Merged to master.

elvisdorkenoo added a commit that referenced this issue Nov 23, 2022
* add e2e tests to messages tab breadcrumbs


* fix breatfeeding typo for delivery form in default config (#7890)

#7884

* add GH Action to replace placeholders with staging urls  (#7860)

* add GH Action for staging urls per #7848


* trying to fix path...how I love how you have to push to test a 1 line change...

* Clean up readme and template changes in prep for PR

* 2 spaces > 4 spaces per feedback

* make search tokens more unique per feedback

* test committ to trigger replace action

* revert pinning to this branch for testing

* collapse decleration of updateme object per feedback

* un-collapse decleration of updateme object against feedback

* fix typo in branch nane

* revert pin to branch

* collapse more objects per feedback

* unping from branch

* Update .github/actions/update-staging-url-placeholders/README.md

Co-authored-by: Gareth Bowen <gareth@medic.org>

Co-authored-by: Gareth Bowen <gareth@medic.org>

* Change staging server db

#7879

* apply review recommendations

* minor changes

* Log error when no compose files were overwritten on upgrade complete (#7863)

#7859

* Update deep staging link: builds -> builds_4

* add missing opening quote on couch readme (#7881)


Co-authored-by: Gareth Bowen <gareth@medic.org>
Co-authored-by: Ashley <8253488+mrjones-plip@users.noreply.github.com>
Co-authored-by: Diana Barsan <35681649+dianabarsan@users.noreply.github.com>