Start doing the minimum in /status checks #67
Labels: housekeeping (Refactoring, tidying up or other work which supports the project)

Cruikshanks added the housekeeping label on Dec 12, 2022
For reference, I spotted that water-abstraction-reporting and water-abstraction-returns were doing DB checks. They might be in others; I wasn't checking too closely.
Jozzey added a commit to DEFRA/water-abstraction-service that referenced this issue on Aug 22, 2023:
https://eaflood.atlassian.net/browse/WATER-4096 Currently, the `/service-status` endpoint in the UI is getting the versions for the various services from their `/status` endpoints. As these endpoints are going to be updated to just return a static response of `{ "status": "alive" }`, this has been updated to `/health/info`, an endpoint created for each repo a while ago to serve the repo's version. This work is the first task in this issue: DEFRA/water-abstraction-team#67
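As a hedged sketch of what that change implies (the function names and the payload shape are illustrative assumptions, not the actual water-abstraction-ui code), the `/service-status` page would now read each service's version from its `/health/info` endpoint:

```javascript
// Illustrative sketch only: names and payload shape are assumptions,
// not the actual water-abstraction-ui implementation.

// Pure helper: pull the version out of a /health/info payload
function extractVersion (info) {
  return (info && info.version) ? info.version : 'unknown'
}

// Usage with Node 18+'s global fetch(); baseUrl is hypothetical
async function getServiceVersion (baseUrl) {
  const response = await fetch(`${baseUrl}/health/info`)

  return extractVersion(await response.json())
}
```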
Jozzey added a commit to DEFRA/water-abstraction-service that referenced this issue on Aug 22, 2023, with the same commit message.
This was referenced Aug 22, 2023
Jozzey added a commit to DEFRA/water-abstraction-returns that referenced this issue on Aug 22, 2023:
https://eaflood.atlassian.net/browse/WATER-4096

AWS ELBs require an endpoint they can hit to confirm whether an app is running. They are commonly referred to as health checks and are used to determine whether the ELB should route traffic through to an app instance. In our apps, it is the `/status` endpoint.

The endpoint in all our repos currently reads in the `package.json` file to get the app's version number. This information is then used to support the `/service-status` page in the water-abstraction-ui. Some of the repos also include a test query to the DB to confirm it can connect.

Having checks that confirm you can connect to dependent services (databases, other apps etc) is a good thing. But the ELB health checks are made multiple times a second across all instances, and they only care whether an app is up or not. So if, for example, you include querying your DB in `/status`, you're hitting your DB with multiple connections per second, multiplied by the number of server instances you have running. Reading a file from disk each time likewise adds unnecessary load to a service that already has performance and resource usage issues.

We've already added a new `/health/info` endpoint to each repo and we do DB connection checks elsewhere. So, we can reduce the work of our `/status` endpoint across all the repos to the bare minimum: returning a static `{ "status": "alive" }` response.

This issue was originally raised in DEFRA/water-abstraction-team#67
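A minimal sketch of what the slimmed-down endpoint could look like (the route-object shape is hapi-style, and the constant and handler names are illustrative assumptions, not the actual repo code):

```javascript
// Sketch of the bare-minimum /status handler: no package.json read,
// no DB query, just a static payload. Names are illustrative.
const STATUS_RESPONSE = { status: 'alive' }

// The handler always returns the same object, so each ELB health
// check costs nothing beyond the HTTP round trip
function statusHandler () {
  return STATUS_RESPONSE
}

// hapi-style route registration (shape assumed)
const statusRoute = {
  method: 'GET',
  path: '/status',
  handler: statusHandler
}
```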
Jozzey added commits with the same message, each referencing this issue, to:

- DEFRA/water-abstraction-service (Aug 22 and Aug 23, 2023)
- DEFRA/water-abstraction-tactical-idm (Aug 22 and Aug 23, 2023)
- DEFRA/water-abstraction-returns (Aug 23, 2023)
- DEFRA/water-abstraction-permit-repository (Aug 23, 2023)
- DEFRA/water-abstraction-import (Aug 23, 2023)
AWS ELBs require an endpoint they can hit to confirm whether an app is running. These are commonly referred to as health checks and are used to determine whether the ELB should route traffic through to an app instance. In our apps, it is the `/status` endpoint.

The endpoint in all our repos reads in the `package.json` file to get the app's version number. This information is then used to support the `/service-status` page in the water-abstraction-ui. Some of the repos also include a test query to the DB to confirm it can connect.

Having checks that confirm you can connect to dependent services (databases, other apps etc) is a good thing. But the ELB health checks are made multiple times a second across all instances, and they only care whether an app is up or not. So if, for example, you include querying your DB in `/status`, you're hitting your DB with multiple connections per second, multiplied by the number of server instances you have running. Reading a file from disk each time likewise adds unnecessary load to a service that already has performance and resource usage issues.

We've already added a new `/health/info` endpoint to each repo and we do DB connection checks elsewhere. So, we can reduce the work of our `/status` endpoint across all the repos to the bare minimum: returning a static `{ "status": "alive" }` response.

Task: update `/service-status` to use the `/health/info` endpoints
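To make the load argument concrete (the figures below are illustrative assumptions, not measurements from the live service), the extra query rate a DB check in `/status` would add is simply instances multiplied by checks per second:

```javascript
// Illustrative arithmetic only: the figures are assumptions, not
// measurements from the water-abstraction services.
function healthCheckQueriesPerSecond (instances, checksPerInstancePerSecond) {
  return instances * checksPerInstancePerSecond
}

// e.g. 4 app instances each health-checked twice a second would mean
// 8 extra DB queries every second, roughly 690,000 a day
const extraQueriesPerSecond = healthCheckQueriesPerSecond(4, 2)
```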