Docs/add troubleshooting guide draft (#64)
aaperis committed Nov 16, 2023
2 parents a0532d8 + 7327e38 commit 21d9c9d
Showing 3 changed files with 61 additions and 6 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -1 +1,2 @@
site/*
dictionary.dic
5 changes: 5 additions & 0 deletions docs/dictionary/wordlist.txt
@@ -231,3 +231,8 @@ wyenrumyh
yaml
yihkqimti
yml
PREFETCHCOUNT
SetAccessionID
DNS
helpdesk
submitters
61 changes: 55 additions & 6 deletions docs/guides/troubleshooting.md
@@ -1,9 +1,58 @@
# Troubleshooting

TODO:
This guide is a stub and has yet to be finished.
If you have feedback on the content you would like to see, please contact us on
[github](https://github.com/neicnordic/neic-sda)!

Exactly what has happened when a service is acting up can be hard to pinpoint.
In this guide, we aim to give some general tips on how to troubleshoot services and restore them to working order.

## After deployment checklist

After deploying the SDA services in a Federated setup, follow the steps below to ensure that everything is up and running correctly.

### Services running

The first step is to verify that the services are up and running and that the credentials are valid. Make sure that:

- credentials for access to RabbitMQ and Postgres are securely injected into the respective services in the form of secrets
- all the pods/containers are in `Ready`/`Up` status.
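
For a Kubernetes deployment, the readiness check above can be scripted. The following is a minimal sketch: the pod names and the sample output are illustrative stand-ins, and a live check would read from `kubectl get pods -n <namespace> --no-headers` instead of the inline sample.

```shell
# Sample output in the shape of `kubectl get pods --no-headers`;
# pod names and counts here are illustrative stand-ins.
sample='sda-ingest-0 1/1 Running 0 2d
sda-verify-0 0/1 CrashLoopBackOff 7 2d'

# Flag any pod whose ready count is short of its container count,
# or whose status is anything other than Running.
not_ready=$(printf '%s\n' "$sample" |
  awk '{split($2, a, "/"); if (a[1] != a[2] || $3 != "Running") print "NOT READY: " $0}')
printf '%s\n' "$not_ready"
```

The same filter works for `docker ps`-style listings once the column indices are adjusted.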

The next step is to make sure that the remote connections (to CEGA RabbitMQ) are working. Log in to the RabbitMQ admin page and check that:

- the Federation status under the Admin tab is in state `running`
- the Shovel status under the Admin tab is in state `running` for all 5 shovels.
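
The shovel check can likewise be automated through the RabbitMQ management HTTP API (`/api/shovels`; federation links are under `/api/federation-links`). This is a sketch: the host, credentials, and shovel names in the sample response are placeholders, and a live check would fetch the JSON with `curl` as in the comment.

```shell
# Live check (placeholder host and credentials):
#   curl -s -u "$MQ_USER:$MQ_PASS" "https://<mq-host>:15671/api/shovels"
# A trimmed sample response stands in here; shovel names are illustrative.
sample='[{"name":"cega_completion","state":"running"},{"name":"cega_inbox","state":"running"}]'

# Count the entries reporting state "running"; a healthy setup reports all 5.
running=$(printf '%s' "$sample" | grep -o '"state":"running"' | wc -l)
echo "shovels running: $((running))"
```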

## End-to-end testing

NOTE: This guide assumes that a test instance account with Central EGA exists. Make sure that the account is approved and added to the submitters group.

### Upload file(s)

Upload one or more files of different sizes and check that:

- the file(s) exist in the configured `inbox` of the storage backend (e.g. S3 bucket or POSIX path)
- entries for the file(s) exist in the `sda.files` and `sda.file_event_log` database tables
- if the `s3inbox` is used, there is an `uploaded` event for each file in `sda.file_event_log`
- the file(s) appear in the Files listing of the CEGA Submission portal (here for the test instance), which can be accessed via the three-lines menu button.
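
The database checks above can be run from the command line; the following is a sketch assuming `psql` access to the `sda` database. The connection details and file name are placeholders, and the column names (`id`, `file_id`, `submission_file_path`, `started_at`) are assumptions to verify against the deployed schema.

```shell
# The query joins the two tables named above to list a file's event history.
# Column names are assumptions; check them against the deployed schema.
query="SELECT e.event, e.started_at
FROM sda.file_event_log e
JOIN sda.files f ON f.id = e.file_id
WHERE f.submission_file_path LIKE '%testfile.c4gh'
ORDER BY e.started_at;"

# Live check (placeholder connection details):
#   psql -h <db-host> -U <db-user> -d sda -c "$query"
printf '%s\n' "$query"
```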

### Make a test submission

Make a submission with the portal and select the file(s) that were uploaded in the previous step. Once the analysis or runs step (one of the two is required) is finished, the messages for the ingestion of the files should appear in the logs of the `ingest` service. Make sure that:

- the messages arrive for the file(s) included in the submission
- the `ingestion`, `verify` and `finalise` processes start and each sends a message when finished
- the data in `sda.files` is correct
- events are logged in `sda.file_event_log` for each file and service
- the file(s) exist in the configured `archive` storage backend; see `archive_file_path` in the `sda.files` table for the name of the archived file(s)
- the archived file(s) exist in the configured `backup` storage backend
- delete one run in the submitter portal, then add it back again, to make sure the cancel message works as intended.
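
The archive check in the list above can be scripted for a POSIX backend (an S3 backend would use an `s3cmd` or `aws s3 ls` listing instead). In this sketch the archive root and the `archive_file_path` value are stand-ins created locally, so the check itself is self-contained.

```shell
# Stand-in archive layout; in a real check, archive_file_path is read from
# the sda.files table and archive_root is the configured archive backend.
archive_root=$(mktemp -d)
archive_file_path="a1/b2/c3d4e5"
mkdir -p "$archive_root/$(dirname "$archive_file_path")"
echo "encrypted payload" > "$archive_root/$archive_file_path"

# The actual check: does the archived file exist, and is it non-empty?
if [ -s "$archive_root/$archive_file_path" ]; then
  echo "archived file present"
else
  echo "MISSING archived file"
fi
```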

Finally, when all files have been ingested, the submission portal should allow finalising the submission. The submission first needs to be accepted through a helpdesk portal. Once this step is done, make sure that:

- the message for the dataset arrives at the `mapper` service
- the dataset is created in the database and includes the correct files, by checking the `sda.datasets` and `sda.file_dataset` tables
- the dataset has the status `registered` in `sda.dataset_event_log`
- the dataset gets the status `released` in `sda.dataset_event_log`; this might take a while, depending on the release date chosen in the submitter portal.
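
The dataset checks can be sketched the same way. The accession in the comment is a placeholder, the column names (`dataset_id`, `event`, `event_date`) are assumptions to verify against the deployed schema, and a sample result stands in for the query output.

```shell
# Live check (placeholder connection details and accession):
#   psql -h <db-host> -U <db-user> -d sda -c \
#     "SELECT event FROM sda.dataset_event_log WHERE dataset_id = '<accession>' ORDER BY event_date;"
# Sample result: a healthy dataset shows registered followed by released.
events='registered
released'

last=$(printf '%s\n' "$events" | tail -n 1)
echo "latest dataset event: $last"
```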

Once all the submission steps have been verified, we can assume that the pipeline part of the deployment is working properly.
